2305.00379
Image Completion via Dual-path Cooperative Filtering
Given the recent advances with image-generating algorithms, deep image completion methods have made significant progress. However, state-of-the-art methods typically provide poor cross-scene generalization, and generated masked areas often contain blurry artifacts. Predictive filtering is a method for restoring images, which predicts the most effective kernels based on the input scene. Motivated by this approach, we address image completion as a filtering problem. Deep feature-level semantic filtering is introduced to fill in missing information, while preserving local structure and generating visually realistic content. In particular, a Dual-path Cooperative Filtering (DCF) model is proposed, where one path predicts dynamic kernels, and the other path extracts multi-level features by using Fast Fourier Convolution to yield semantically coherent reconstructions. Experiments on three challenging image completion datasets show that our proposed DCF outperforms state-of-the-art methods.
Pourya Shamsolmoali, Masoumeh Zareapoor, Eric Granger
2023-04-30T03:54:53Z
http://arxiv.org/abs/2305.00379v1
# Image Completion via Dual-Path Cooperative Filtering

###### Abstract

Given the recent advances with image-generating algorithms, deep image completion methods have made significant progress. However, state-of-the-art methods typically provide poor cross-scene generalization, and generated masked areas often contain blurry artifacts. Predictive filtering is a method for restoring images, which predicts the most effective kernels based on the input scene. Motivated by this approach, we address image completion as a filtering problem. Deep feature-level semantic filtering is introduced to fill in missing information, while preserving local structure and generating visually realistic content. In particular, a Dual-path Cooperative Filtering (DCF) model is proposed, where one path predicts dynamic kernels, and the other path extracts multi-level features by using Fast Fourier Convolution to yield semantically coherent reconstructions. Experiments on three challenging image completion datasets show that our proposed DCF outperforms state-of-the-art methods.

Pourya Shamsolmoali\({}^{1}\), Masoumeh Zareapoor\({}^{2}\), Eric Granger\({}^{3}\)

\({}^{1}\)Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, China
\({}^{2}\)School of Automation, Shanghai Jiao Tong University, China
\({}^{3}\)Lab. d'imagerie, de vision et d'intelligence artificielle, Dept. of Systems Eng., ETS, Canada

Image Completion, Image Inpainting, Deep Learning.

## 1 Introduction

The objective of image completion (inpainting) is to recover images by reconstructing missing regions. Images with inpainted details must be visually and semantically consistent; robust generation is therefore required of inpainting methods. Generative adversarial networks (GANs) [2, 18] or auto-encoder networks [16, 20, 21] are generally used in current state-of-the-art models [10, 11, 19] to perform image completion. In these models, generative network-based inpainting encodes the input image into a latent space, which is then decoded to generate a new image. The quality of inpainting is entirely dependent on the data and training approach, since the procedure ignores priors (for example, smoothness among nearby pixels or features). It should be noted that, unlike generation tasks, image inpainting has its own unique challenges. First, image inpainting requires that the completed images be clean, high-quality, and natural. These constraints separate image completion from synthesis tasks, which focus only on naturalness. Second, missing regions may appear in different forms, and the backgrounds could be from various scenes. Given these constraints, it is important for an inpainting method to have a strong capacity to generalize across missing regions.

Recent generative networks have made substantial progress in image completion, but they still have a long way to go before they can address the aforementioned problems. For instance, RFRNet [7] uses feature reasoning on the auto-encoder architecture for the task of image inpainting. As shown in Fig. 1, RFRNet produces some artifacts in output images. JPGNet and MISF [5, 8] were proposed to address generative-based inpainting problems [7, 12, 15] by reducing artifacts using image-level predictive filtering. Image-level predictive filtering reconstructs pixels from their neighbors, with filtering kernels computed adaptively based on the inputs. JPGNet is therefore able to retrieve the local structure while eliminating artifacts.
As seen in Fig. 1, JPGNet smooths out artifacts more effectively than RFRNet. However, many details may be lost, and the actual structures are not reconstructed. LaMa [19] is a recent image inpainting approach that uses Fast Fourier Convolution (FFC) [3] inside its ResNet-based LaMa-Fourier model to address the lack of receptive field for producing repeated patterns in the missing areas. Previously, researchers struggled with global self-attention [22] and its computational complexity, and were still unable to recover repeated man-made structures as effectively as LaMa. Nonetheless, as the missing regions get bigger and cross object boundaries, LaMa creates faded structures.

Figure 1: Examples of an image completed with our DCF model compared to baseline methods on the Paris dataset. DCF generates high-fidelity and more realistic images.

In [12], the authors adopt LaMa as the base network and capture various types of missing information by utilizing additional types of masks. They use more damaged images in the training phase to improve robustness; however, such a training strategy is unproductive. Transformer-based approaches [20, 23] have recently attracted considerable interest, even though structures can only be estimated within a low-resolution coarse image and good textures cannot be produced beyond that point. Recent diffusion-based inpainting models [13, 17] have pushed the limits of generative models by using image information to sample the unmasked areas, or by using a score-based formulation to generate unconditional inpainted images; however, these approaches are not efficient in real-world applications.

To address this problem, we introduce a new neural network architecture that is motivated by the adaptability of predictive filtering and uses a large receptive field to produce repeating patterns. In particular, this paper makes two key contributions. First, semantic filtering is introduced to fill in missing image regions by expanding image-level filtering into feature-level filtering. Second, a Dual-path Cooperative Filtering (DCF) model is introduced that integrates two semantically connected networks, a kernel prediction network and a semantic image filtering network, to enhance image details. The semantic filtering network supplies multi-level features to the kernel prediction network, while the kernel prediction network provides dynamic kernels to the semantic filtering network. In addition, for efficient reuse of high-frequency features, FFC [3] residual blocks are utilized in the semantic filtering network to better synthesize the missing regions of an image, leading to improved performance on textures and structures. By linearly integrating neighboring pixels or features, DCF is capable of reconstructing them with a smoothness prior across neighbors. Therefore, DCF utilizes both semantic and pixel-level filling for accurate inpainting. As shown in Fig. 1, the proposed model produces high-fidelity and realistic images. Furthermore, in comparison with existing methods, our technique involves a dual-path network with a dynamic convolutional operation that modifies the convolution parameters based on different inputs, allowing for strong generalization. A comprehensive set of experiments conducted on three challenging benchmark datasets (CelebA-HQ [6], Places2 [24], and Paris StreetView [4]) shows that our proposed method yields better qualitative and quantitative results than state-of-the-art methods.
## 2 Methodology

Predictive filtering is a popular method for restoring images and is often used for image denoising [14]. We define image completion as pixel-wise predictive filtering: \[I_{c}=I_{m}\vartriangle T, \tag{1}\] in which \(I_{c}\in\mathbb{R}^{(H\times W\times 3)}\) represents the complete image and \(I_{m}\in\mathbb{R}^{(H\times W\times 3)}\) denotes the input image with missing regions from the ground truth image \(I_{gr}\in\mathbb{R}^{(H\times W\times 3)}\). The tensor \(T\in\mathbb{R}^{(H\times W\times N^{2})}\) contains \(HW\) kernels for filtering each pixel, and the pixel-wise filtering operation is denoted by the operator \(\vartriangle\). Rather than using image-level filtering alone, we perform dual-path feature-level filtering, which provides more contextual information. Our idea is that, even if a large portion of the image is destroyed, semantic information can be maintained.

To accomplish semantic filtering, we first use an auto-encoder network in which the encoder extracts features of the damaged image \(I_{m}\), and the decoder maps the extracted features to the complete image \(I_{c}\). The encoder can therefore be defined by: \[f_{L}=\rho(I_{m})=\rho_{L}(...\rho_{l}(...\rho_{2}(\rho_{1}(I_{m})))), \tag{2}\] in which \(\rho(.)\) denotes the encoder and \(f_{l}\) represents the feature taken from the \(l^{th}\) layer, \(f_{l}=\rho_{l}(f_{l-1})\); in particular, \(f_{L}\) is the output of the last layer of \(\rho(.)\). In our encoder network, to create remarkable textures and semantic structures within the missing image regions, we adopt Fast Fourier Convolutional Residual Blocks (FFC-Res) [19]. The FFC-Res block shown in Fig. 2 (b) has two FFC layers. The channel-wise Fast Fourier Transform (FFT) [1] is the core of the FFC layer [3], providing an image-wide receptive field. As shown in Fig. 2 (c), the FFC layer divides channels into two branches: a) a local branch, which utilizes standard convolutions to capture spatial information, and b) a global branch, which employs a Spectral Transform module to analyze global structure and capture long-range context.

Figure 2: Overview of the proposed architecture. (a) Our proposed DCF inpainting network with (b) FFC residual block to have a larger receptive field. (c) and (d) show the architecture of the FFC and Spectral Transform layers, respectively.

Outputs of the local and global branches are then combined. Two Fourier Units (FU) are used by the Spectral Transform layer (Fig. 2 (d)) in order to capture both global and semi-global features. The FU on the left captures the global context, while the Local Fourier Unit on the right takes in one-fourth of the channels and focuses on semi-global image information. In an FU, the spatial structure is decomposed into image frequencies using a Real FFT2D operation, a convolution is applied in the frequency domain, and the structure is finally recovered via an Inverse FFT2D operation. Based on the encoder, the network of our decoder is defined as: \[I_{c}=\rho^{-1}(f_{L}), \tag{3}\] in which \(\rho^{-1}(.)\) denotes the decoder. Then, similar to image-level filtering, we perform semantic filtering on the extracted features according to: \[\hat{f}_{l}[r]=\sum_{s\in\mathcal{N}_{\kappa}}T_{\kappa}^{l}[s-r]f_{l}[s], \tag{4}\] in which \(r\) and \(s\) denote pixel coordinates, and \(\mathcal{N}_{\kappa}\) consists of the \(N^{2}\) closest pixels. \(T_{\kappa}^{l}\) signifies the kernel for filtering the \(\kappa^{th}\) element of \(f_{l}\) through its neighbors \(\mathcal{N}_{\kappa}\); to incorporate every element-wise kernel, we collect the \(T_{\kappa}^{l}\) into the matrix \(T_{l}\). Following this, Eq. (2) is modified by substituting \(f_{l}\) with \(\hat{f}_{l}\).
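To make the filtering concrete, the following is a minimal PyTorch sketch of the element-wise filtering of Eqs. (1) and (4), under the assumptions (not specified above) that the kernels are \(3\times 3\) and shared across channels; the function and variable names are ours and only mirror \(f_{l}\) and \(T_{l}\) for illustration.

```python
import torch
import torch.nn.functional as F

def pixelwise_filtering(feats, kernels, ksize=3):
    """Weighted sum over the N^2 neighbours of every pixel, cf. Eq. (4).

    feats:   (B, C, H, W) feature map f_l (or image I_m for image-level filtering)
    kernels: (B, ksize*ksize, H, W) per-pixel kernels T_l from the prediction path
    """
    B, C, H, W = feats.shape
    patches = F.unfold(feats, kernel_size=ksize, padding=ksize // 2)   # (B, C*k*k, H*W)
    patches = patches.view(B, C, ksize * ksize, H, W)
    weights = kernels.view(B, 1, ksize * ksize, H, W)                  # shared over channels
    return (patches * weights).sum(dim=2)                              # (B, C, H, W)

# toy check with random inputs standing in for f_l and a softmax-normalized T_l
feats = torch.randn(1, 64, 32, 32)
kernels = torch.softmax(torch.randn(1, 9, 32, 32), dim=1)
print(pixelwise_filtering(feats, kernels).shape)   # torch.Size([1, 64, 32, 32])
```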
In addition, we use a predictive network to predict the kernels so that they can adapt to different scenes: \[T_{l}=\varphi_{l}(I_{m}), \tag{5}\] in which \(\varphi_{l}(.)\) denotes the predictive network that generates \(T_{l}\). In Fig. 2(a) and Table 2, we illustrate our image completion network, which consists of \(\rho(.)\), \(\rho^{-1}(.)\), and \(\varphi_{l}(.)\). The proposed network is trained using the \(L_{1}\) loss, perceptual loss, adversarial loss, and style loss, similar to predictive filtering.

## 3 Experiments

In this section, the performance of our DCF model is compared to state-of-the-art methods for the image completion task. Experiments are carried out on three datasets, CelebA-HQ [6], Places2 [24], and Paris StreetView [4], at \(256\times 256\) resolution. With all datasets, we use the standard training and testing splits. In both training and testing, we use the diverse irregular masks (20%-40% of the image occupied by holes) provided by PConv [9] as well as regular center masks. The code is provided at _DCF_.

**Performance Measures:** The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and Frechet inception distance (FID) are used as the evaluation metrics.

### Implementation Details

Our proposed model's framework is shown in Table 2.

\begin{table} \begin{tabular}{l|c|c||c|c} \hline \multicolumn{3}{c||}{Feature extracting network} & \multicolumn{2}{c}{Predicting network} \\ \hline Layer & In. & Out/size & In. & Out/size \\ \hline \hline conv(7,3,64) & \(I_{m}\) & \(f_{1}\) / 256 & \(I_{m}\) & \(e_{1}\) / 256 \\ conv(4,64,128) & \(f_{1}\) & \(f_{2}\) / 128 & \(e_{1}\) & \(e_{2}\) / 128 \\ pooling & \(f_{2}\) & \(f_{2}\) / 64 & \(e_{2}\) & \(e_{2}\) / 64 \\ conv(4,128,256) & \(f_{2}\) & \(f_{3}\) / 64 & \([f_{2}^{\prime},e_{2}^{\prime}]\) & \(e_{3}\) / 64 \\ \hline \end{tabular} \end{table}

**Loss functions.** We follow [15] and train the networks using four loss functions, namely the \(L_{1}\) loss (\(\ell_{1}\)), adversarial loss (\(\ell_{A}\)), style loss (\(\ell_{S}\)), and perceptual loss (\(\ell_{P}\)), to obtain images with excellent fidelity at both the image quality and semantic levels. Therefore, we can write the reconstruction loss (\(\ell_{R}\)) as: \[\ell_{R}=\lambda_{1}\ell_{1}+\lambda_{a}\ell_{A}+\lambda_{p}\ell_{P}+\lambda_{s}\ell_{S}, \tag{6}\] in which \(\lambda_{1}=1\), \(\lambda_{a}=\lambda_{p}=0.1\), and \(\lambda_{s}=250\). More details on the loss functions can be found in [15].

**Training setting.** We use Adam as the optimizer with a learning rate of 1e-4 and the standard values for its other hyperparameters. The network is trained for 500k iterations with a batch size of 8. The experiments are conducted on the same machine with two RTX-3090 GPUs.
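As a small illustration of Eq. (6) and the training settings above, the sketch below combines the four loss terms with the stated weights; the loss functions themselves follow [15] and are represented here only as placeholder arguments, and the module used with the optimizer is a stand-in, not the actual DCF network.

```python
import torch

def reconstruction_loss(l_1, l_adv, l_perc, l_style,
                        lam_1=1.0, lam_a=0.1, lam_p=0.1, lam_s=250.0):
    """Eq. (6): l_R = lam_1*l_1 + lam_a*l_A + lam_p*l_P + lam_s*l_S."""
    return lam_1 * l_1 + lam_a * l_adv + lam_p * l_perc + lam_s * l_style

# training setting: Adam with learning rate 1e-4 (other hyperparameters at defaults)
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)   # placeholder module
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```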
### Comparisons to the Baselines

**Qualitative Results.** The proposed DCF model is compared to relevant baselines such as RFRNet [7], JPGNet [5], and LaMa [19]. Fig. 3 and Fig. 4 show the results for the Places2 and CelebA-HQ datasets, respectively. In comparison to JPGNet, our model preserves substantially better recurrent textures, as shown in Fig. 3. Since JPGNet lacks attention-related modules, high-frequency features cannot be successfully utilized due to its limited receptive field. Using FFC modules, our model expands the receptive field and successfully projects source textures onto newly generated structures. Furthermore, our model generates superior object boundaries and structures compared to LaMa. Large missing regions spanning wide pixel ranges prevent LaMa from hallucinating adequate structural information. In contrast, our model exploits the advantages of the coarse-to-fine generator to produce more precise objects with better boundaries. Fig. 4 shows more qualitative evidence. When tested on facial images, RFRNet and LaMa produce faded forehead hair, and these models are not robust enough. The results of our model, nevertheless, have more realistic textures and plausible structures, such as forehead shape and fine-grained hair.

**Quantitative Results.** On the three datasets, we compare our proposed model with other inpainting models. The results shown in Table 2 lead to the following conclusions: 1) Our method outperforms the other approaches in terms of PSNR, SSIM, and FID for most datasets and mask types. Specifically, we achieve 9% higher PSNR than RFRNet on the Places2 dataset's irregular masks. This indicates that our model has advantages over existing methods. 2) We observe similar results when analyzing the FID. On the CelebA-HQ dataset, our method achieves a 2.5% relatively lower FID than LaMa under the center mask. This result indicates our method's remarkable success in perceptual restoration. 3) The consistent advantages over several datasets and mask types illustrate that our model is highly generalizable.

## 4 Conclusion

Dual-path Cooperative Filtering (DCF) was proposed in this paper for high-fidelity image inpainting. For predictive filtering at the image and deep feature levels, a predictive network is proposed. In particular, image-level filtering is used for detail recovery, whereas deep feature-level filtering is used for semantic information completion. Moreover, FFC residual blocks are adopted to recover semantic information, resulting in high-fidelity outputs. The experimental results demonstrate that our model outperforms state-of-the-art inpainting approaches.

#### Acknowledgments

This research was supported in part by NSFC China. The corresponding author is Masoumeh Zareapoor.
\begin{table} \begin{tabular}{l|l|c c|c c|c c} \hline \hline \multirow{3}{*}{} & \multirow{3}{*}{Method} & \multicolumn{3}{c|}{CelebA-HQ} & \multicolumn{3}{c|}{Places2} & \multicolumn{3}{c}{Paris StreetView} \\ \cline{3-8} & & Irregular & Center & Irregular & Center & Irregular & Center \\ \hline \multirow{8}{*}{PSNR\(\uparrow\)} & RFRNet [7] & 26.63 & 21.32 & 22.58 & 18.27 & 23.81 & 19.26 \\ & JPGNet [5] & 25.54 & 22.71 & 23.93 & 19.22 & 24.79 & 20.63 \\ & TFill [23] & 26.84 & 23.65 & 24.32 & 20.49 & 25.46 & 21.85 \\ & LaMa [19] & 27.31 & 24.18 & **25.27** & 21.67 & 25.84 & 22.59 \\ & GLaMa [12] & 28.17 & 25.13 & 25.08 & 21.83 & 26.23 & 22.87 \\ & DCF (ours) & **28.34** & **25.62** & 25.19 & **22.30** & **26.57** & **23.41** \\ \hline \multirow{8}{*}{SSIM\(\uparrow\)} & RFRNet [7] & 0.934 & 0.912 & 0.819 & 0.801 & 0.862 & 0.849 \\ & JPGNet [5] & 0.927 & 0.904 & 0.825 & 0.812 & 0.873 & 0.857 \\ & TFill [23] & 0.933 & 0.907 & 0.826 & 0.814 & 0.870 & 0.857 \\ & LaMa [19] & 0.939 & 0.911 & 0.829 & 0.816 & 0.871 & 0.856 \\ & GLaMa [12] & 0.941 & 0.925 & **0.833** & 0.817 & 0.872 & 0.858 \\ & DCF (ours) & **0.943** & **0.928** & 0.832 & **0.819** & **0.876** & **0.861** \\ \hline \multirow{8}{*}{FID\(\downarrow\)} & RFRNet [7] & 17.07 & 17.83 & 15.56 & 16.47 & 40.23 & 41.08 \\ & JPGNet [5] & 13.92 & 15.71 & 15.14 & 16.23 & 37.61 & 39.24 \\ & TFill [23] & 13.18 & 13.87 & 15.48 & 16.24 & 33.29 & 34.41 \\ & LaMa [19] & 11.28 & 12.95 & 14.73 & 15.46 & 32.30 & 33.26 \\ & GLaMa [12] & 11.21 & 12.91 & 14.70 & 15.35 & 32.12 & 33.07 \\ \cline{2-8} & DCF w.o. Sem-Fil & 14.34 & 15.24 & 17.56 & 18.11 & 42.57 & 44.38 \\ & DCF w.o. FFC & 13.52 & 14.26 & 15.83 & 16.98 & 40.54 & 41.62 \\ & DCF (ours) & **11.13** & **12.63** & **14.52** & **15.09** & **31.96** & **32.85** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study and quantitative comparison of our proposed and state-of-art methods on center and free form masked images from the CelebA-HQ, Places2, and Paris StreetView datasets.
2307.16362
High Sensitivity Beamformed Observations of the Crab Pulsar's Radio Emission
We analyzed four epochs of beamformed EVN data of the Crab Pulsar at 1658.49 MHz. With the high sensitivity resulting from resolving out the Crab Nebula, we are able to detect even the faint high-frequency components in the folded profile. We also detect a total of 65951 giant pulses, which we use to investigate the rates, fluence, phase, and arrival time distributions. We find that for the main pulse component, our giant pulses represent about 80% of the total flux. This suggests we have a nearly complete giant pulse energy distribution, although it is not obvious how the observed distribution could be extended to cover the remaining 20% of the flux without invoking large numbers of faint bursts for every rotation. Looking at the difference in arrival time between subsequent bursts in single rotations, we confirm that the likelihood of finding giant pulses close to each other is increased beyond that expected for randomly occurring bursts - some giant pulses consist of causally related microbursts, with typical separations of $\sim\!30{\rm\;\mu s}$ - but also find evidence that at separations $\gtrsim\!100{\rm\;\mu s}$ the likelihood of finding another giant pulse is suppressed. In addition, our high sensitivity enabled us to detect weak echo features in the brightest pulses (at $\sim\!0.4\%$ of the peak giant pulse flux), which are delayed by up to $\sim\!300{\rm\;\mu s}$.
Rebecca Lin, Marten H. van Kerkwijk
2023-07-31T01:36:55Z
http://arxiv.org/abs/2307.16362v2
# High Sensitivity Beamformed Observations of the Crab Pulsar's Radio Emission

###### Abstract

We analyzed four epochs of beamformed EVN data of the Crab Pulsar at \(1658.49\rm\,MHz\). With the high sensitivity resulting from resolving out the Crab Nebula, we are able to detect even the faint high-frequency components in the folded profile. We also detect a total of 65951 giant pulses, which we use to investigate the rates, fluence, phase, and arrival time distributions. We find that for the main pulse component, our giant pulses represent about 80% of the total flux. This suggests we have a nearly complete giant pulse energy distribution, although it is not obvious how the observed distribution could be extended to cover the remaining 20% of the flux without invoking large numbers of faint bursts for every rotation. Looking at the difference in arrival time between subsequent bursts in single rotations, we confirm that the likelihood of finding giant pulses close to each other is increased beyond that expected for randomly occurring bursts - some giant pulses consist of causally related microbursts, with typical separations of \(\sim 30\rm\ \mu s\) - but also find evidence that at separations \(\gtrsim\!100\rm\ \mu s\) the likelihood of finding another giant pulse is suppressed. In addition, our high sensitivity enabled us to detect weak echo features in the brightest pulses (at \(\sim\!0.4\%\) of the peak giant pulse flux), which are delayed by up to \(\sim\!300\rm\ \mu s\).

Pulsars (1306) -- Radio bursts (1339) -- Very long baseline interferometry (1769)

Rebecca Lin (ORCID 0000-0002-4818-2886) and Marten H. van Kerkwijk (ORCID 0000-0002-4882-0886)
## 1 Introduction

Investigation of the emission from the Crab Pulsar is complicated by propagation effects along the line of sight, especially at lower frequencies, \(\lesssim 2\ \mathrm{GHz}\). While dispersion can be removed using coherent de-dispersion (either during recording, or afterwards with baseband data), scattering effects are difficult to remove. This includes echoes due to propagation in the Crab Nebula itself, which sometimes are bright and obvious (Backer et al., 2000; Lyne et al., 2001), but can also be quite faint (Driessen et al., 2019), making it difficult to disentangle them from microbursts without a good pulse sample in which to look for repeating structure.

Another complication in studying the emission of the Crab Pulsar is the radio-bright nebula in which the pulsar resides. This contributes noise, and hence many previous studies relied on long integrations to observe both the weaker pulse components and echoes in the average profile. But the contribution to the noise can be reduced by resolving the nebula, using large dishes or arrays, such as the VLA, Arecibo, and Westerbork (Moffett & Hankins, 1996; Cordes et al., 2004; Karuppusamy et al., 2010; Lewandowska et al., 2022).

In this paper, we use the European VLBI Network (EVN) to resolve out the Crab Nebula and obtain high-sensitivity data. In Section 2, we describe our observations and data reduction, and in Section 3, we present the resulting pulse profiles and the components that are detectable at our high sensitivity. We turn to an analysis of giant pulses (GPs) in Section 4, investigating their rates, fluence, phase, and arrival time distributions, as well as weak echoes seen in the brightest GPs. We summarize our findings in Section 5.

## 2 Observations and Data Reduction

We analyze observations of the Crab Pulsar taken by the EVN, projects EK036 A-D, at four epochs between 2015 Oct and 2017 May (see Table 1). Throughout these observations, calibrator sources were also observed, resulting in breaks in our data. While many dishes participated in these observations, for our analysis we only use telescope data that had relatively clean signals across the frequency range of \(1594.49-1722.49\ \mathrm{MHz}\) in both circular polarizations. At each single dish, real-sampled data were recorded in either 2-bit MARK 5B or VDIF format1, covering the frequency range in either eight contiguous \(16\ \mathrm{MHz}\) wide bands or four contiguous \(32\ \mathrm{MHz}\) wide bands.

Footnote 1: For specifications of MARK5B and VDIF, see [https://www.haystack.mit.edu/haystack-memo-series/mark-5-memos/](https://www.haystack.mit.edu/haystack-memo-series/mark-5-memos/) and [https://vlbi.org/wp-content/uploads/2019/03/VDIF_specification_Release_1.1.1.pdf](https://vlbi.org/wp-content/uploads/2019/03/VDIF_specification_Release_1.1.1.pdf), respectively.

For these datasets, single-dish data were processed and then combined coherently to form a tied-array beam as described in Lin et al. (2023). The resulting RFI-removed, normalized, de-dispersed (using the dispersion measures (DMs) listed in Table 1), parallactic-angle-corrected, and phased baseband data were squared to form intensity data.
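Since the analysis below relies on coherently de-dispersed baseband data, a minimal, generic sketch of coherent de-dispersion is given here for orientation; it is not the pipeline of Lin et al. (2023), and the dispersion constant, the sideband, and the sign of the chirp are conventions that must be matched to the recording.

```python
import numpy as np

KDM = 4.148808e15  # Hz^2 s pc^-1 cm^3 (common pulsar convention; an assumption here)

def coherent_dedisperse(baseband, dm, f_center_hz, bandwidth_hz, sign=+1):
    """Apply the standard dedispersion chirp to one channel of complex baseband data.

    The transfer-function phase is phi(df) = 2*pi*KDM*dm*df**2 / (f0**2 * (f0 + df)),
    with f0 the channel centre frequency and df the frequency offset within the band.
    """
    spec = np.fft.fft(baseband)
    df = np.fft.fftfreq(baseband.size, d=1.0 / bandwidth_hz)
    phase = 2 * np.pi * KDM * dm * df**2 / (f_center_hz**2 * (f_center_hz + df))
    return np.fft.ifft(spec * np.exp(sign * 1j * phase))

# illustrative call only (the DMs actually used are those listed in Table 1):
# clean = coherent_dedisperse(raw, dm=56.7, f_center_hz=1658.49e6, bandwidth_hz=32e6)
```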
As in Lin et al. (2023), we estimate the system equivalent flux density (SEFD) for the phased EVN array as \((S_{\text{CN}}+\langle S_{\text{tel}}\rangle)/N_{\text{tel}}\approx 140-160\ \mathrm{Jy}\), where \(S_{\text{CN}}\approx 833\ \mathrm{Jy}\) is the SEFD of the Crab Nebula at our observing frequency (Bietenholz et al., 1997), \(\langle S_{\text{tel}}\rangle\simeq 300\ \mathrm{Jy}\) is the average nominal SEFD of the telescopes2, and \(N_{\text{tel}}=7\) or 8 is the number of telescopes used. By combining the single dishes into a synthesized beam, we resolve out the radio-bright Crab Nebula and increase our sensitivity, thus allowing us to investigate the weaker radio emission of the Crab Pulsar.

Footnote 2: [http://old.evlbi.org/cgi-bin/EVNcalc](http://old.evlbi.org/cgi-bin/EVNcalc).

Table 1: Observation and Giant Pulse Log (columns: Observation Code, Date, \(t_{\mathrm{exp}}\) (h), Telescopes used, DM, and Giant Pulses).

## 3 Pulse Profiles

For each of the phased EVN datasets, we create folded pulse profiles using polyco files generated with tempo2 (Hobbs and Edwards, 2012) from the monthly Jodrell Bank Crab Pulsar ephemerides3 (Lyne et al., 1993) and the DMs from Table 1. We averaged over all frequencies and used 512 phase bins, rotating in phase such that the MP is at phase 0. We show the resulting profiles in Figure 1, with each profile scaled to its maximum to ease comparison. With our high sensitivity, we can see all five pulse components expected from the multifrequency overview of Hankins et al. (2015), corresponding to the LFC, MP, IP, HFC1 and HFC2 (with the latter two detected at \(\sim\!1.66\ \mathrm{GHz}\) for the first time).

Footnote 3: [http://www.jb.man.ac.uk/~pulsar/crab.html](http://www.jb.man.ac.uk/~pulsar/crab.html).

We fit the pulse components in the EK036 datasets with five Gaussians to look for possible changes, both between our epochs and relative to the compilation from Hankins et al. (2015). Our fitted parameters are presented in Table 2, together with the values inferred from Hankins et al. (2015). One sees that the results for our four observations are all consistent. At \(1.4\ \mathrm{GHz}\), Lyne et al. (2013) found that the separations between the MP and IP and between the MP and LFC increase at rates of \(0\fdg 5\pm 0\fdg 2\) per century and \(11\arcdeg\pm 2\arcdeg\) per century, respectively. Using these rates, we expect pulse phase changes for the IP and LFC of \(\sim\!0\fdg 008\) and \(\sim\!0\fdg 17\), respectively, which are not detectable within our uncertainties. Comparing with Hankins et al. (2015), we find good agreement in pulse phase for all components (though now we do need to take into account the drift in pulse phase).
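The expected phase changes quoted above follow from simple arithmetic over the roughly 1.55 yr spanned by EK036 A (2015 Oct) to D (2017 May); the sketch below just makes that check explicit (the exact baseline assumed is ours).

```python
# Rates from Lyne et al. (2013): the MP-IP and MP-LFC separations grow by
# 0.5 +/- 0.2 deg and 11 +/- 2 deg per century, respectively.
span_yr = 1.55                    # approximate EK036 A to D time span (assumption)
print(0.5 * span_yr / 100.0)      # ~0.008 deg expected IP phase change
print(11.0 * span_yr / 100.0)     # ~0.17 deg expected LFC phase change
```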
We noticed, however, that while the widths of our LFC, HFC1 and HFC2 are consistent with those given by Hankins et al. (2015), the widths of the MP and IP seem smaller, even if they are still within the nominal, rather large uncertainties of Hankins et al. (2015). Looking in more detail at their Figure 3 with measurements, one sees considerable scatter for the MP and IP, even though those strong, narrow peaks should be the easiest to measure. This might suggest that some profiles were slightly smeared (e.g., because the data were not dedispersed to exactly the right DM, which is known to vary for the Crab Pulsar, or because of changes in the scattering timescale at lower frequencies; see McKee et al., 2018). For a comparison with recent data, we estimated widths from the \(2-4\) and \(4-6\ \mathrm{GHz}\) pulse profiles in Figure 1 of Lewandowska et al. (2022), which were taken using the VLA in D configuration to resolve out the Crab Nebula and thus have high signal-to-noise ratio; we find these are all consistent with ours.

Figure 1: Folded pulse profile of the Crab Pulsar at \(1658.49\ \mathrm{MHz}\) from the EK036 observations in 512 phase bins centered on the MP. At this frequency, 5 components are visible: LFC, MP, IP, HFC1 and HFC2. In the left panel, the profiles are normalized to their peak MP component. As the HFC1 and HFC2 components (indicated by arrows) are very faint, we show the grey region of the left panel zoomed in by a factor of 15 in the right panel, with vertical lines marking the peaks of these components.

At lower frequencies, the pulse profiles often show echo features (e.g., Driessen et al., 2019). At our frequencies, those are expected to be too weak at the delays where they might be seen in the folded pulse profile, and indeed we see none. However, at frequencies like ours, echoes can still be seen in individual pulses. For instance, at \(1.4\;\mathrm{GHz}\), Crossley et al. (2004) saw that individual bright pulses all had an echo delayed by \(\sim\!50\;\mathrm{\mu s}\) (which had no counterpart at \(4.9\;\mathrm{GHz}\)). From aligning GPs before stacking them in our datasets, Lin et al. (2023) also saw hints of echo features within \(\sim\!25\;\mathrm{\mu s}\) of the peaks of GPs in EK036 B and D. In Section 4.6, we confirm echoes in our data using a more careful analysis, finding that for EK036 D faint echoes are visible out to \(\sim\!300\;\mathrm{\mu s}\).

## 4 Giant Pulses

### Search

In Lin et al. (2023), we searched for GPs by flagging peaks above \(8\sigma\) in a \(16\;\mathrm{\mu s}\) wide running average of the intensity time stream. While we reliably found GPs, the long time window meant we could not distinguish between bursts arriving in quick succession within that window. Hence, the previous technique was unsuitable for one of our goals, of measuring arrival time differences between bursts, including between the microbursts that GPs are sometimes composed of. Below, we describe a revised technique, which allows us to more reliably identify multiple bursts (see Figure 2). Unsurprisingly, with our new technique we detected more multiple bursts than we had previously, as can be seen by comparing the numbers listed in Section 6.3 of Lin et al. (2023) with those in Table 3.
\begin{table} \begin{tabular}{l l l l l} \hline \hline Pulse Comp. & Obs./Ref. & Amplitude (\%) & Pulse Phase (deg.) & FWHM (deg.) \\ \hline LFC\(\dots\) & A & 3.6(3) & \(-38.0(3)\) & 7.5(6) \\ & B & 3.35(17) & \(-37.67(19)\) & 7.7(4) \\ & C & 3.7(2) & \(-37.2(3)\) & 7.7(6) \\ & D & 3.9(2) & \(-37.8(2)\) & 8.1(5) \\ & H15 & \(\dots\) & \(-35.78(14)\) & 7.2(12) \\ MP\(\dots\) & A & & & 2.786(11) \\ & B & & & 2.708(7) \\ & C & & & 2.756(11) \\ & D & & & 2.836(9) \\ & H15 & & & 3.9(11) \\ IP\(\dots\) & A & 15.2(4) & 145.38(4) & 3.48(10) \\ & B & 15.2(2) & 145.28(3) & 3.59(7) \\ & C & 15.3(4) & 145.25(4) & 3.46(10) \\ & D & 14.4(3) & 145.28(4) & 3.59(8) \\ & H15 & \(\dots\) & 145.25(4) & 5.4(11) \\ HFC1\(\dots\) & A & 0.58(13) & 203(3) & 28(7) \\ & B & 0.88(9) & 198.4(13) & 25(3) \\ & C & 0.68(12) & 194(3) & 34(7) \\ & D & 0.94(11) & 196.2(15) & 36(5) \\ & H15 & \(\dots\) & 198.2(8) & 25(5) \\ HFC2\(\dots\) & A & 1.5(2) & 259.7(8) & 11.8(19) \\ & B & 1.19(14) & 259.2(7) & 11.7(16) \\ & C & 1.23(19) & 257.7(9) & 12(2) \\ & D & 1.51(15) & 259.8(7) & 14.8(16) \\ & H15 & \(\dots\) & 259.1(4) & 11.6(12) \\ \hline \end{tabular} Note. – Amplitudes and phases are relative to the MP. H15 refers to Hankins et al. (2015), and the corresponding values are from evaluating the fits presented in their Tables 2 and 3 at our central observing frequency of \(1658.49\;\mathrm{MHz}\). The phases for the LFC and IP have been extrapolated to MJD 57607 (midway between EK036 A and D) using \(d\phi/dt\) values from Lyne et al. (2013). Numbers in parentheses are \(1\sigma\) uncertainties in the last digit. \end{table} Table 2: Properties of the Pulse Profile Components.

Figure 2: Sample MP pulse rotations with GPs as detected by our algorithm (see Section 4.1 for details), shown at a time resolution of \(1.25\;\mathrm{\mu s}\). _Top_: Single pulse with scattering tail. _Middle_: Two pulses, each with their own scattering tail. _Bottom_: A profile showing the difficulties inherent in classifying pulses: our algorithm found three pulses, but if another algorithm were to classify this as two or four pulses, that would also seem reasonable.

For every pulsar period in the EK036 datasets, we take \(2.0\;\mathrm{ms}\) snippets of baseband data centered on the MP and IP component phase windows (roughly 2 times the size of the pulse component determined from the folded pulse profile) and create pulse intensity stacks for each component4. We average these stacks across the eight frequency bands and bin over 10 time samples, or \(0.625\;\mu\)s, a value chosen to be large enough for reliable GP detection yet well below the scattering timescale of \(\sim\!5\;\mu\)s during these observations (Lin et al., 2023). To detect GPs, we first subtract the off-pulse region (determined from the \(0.5\;\mathrm{ms}\) region on either side of each pulse stack), then filter with a uniform filter of size 5 (\(3.125\;\mu\)s), and finally record all samples above a detection threshold of \(5\sigma\).

Footnote 4: We only search for GPs inside these windows since Lin et al. (2023) found none outside for the same dataset.

To turn these sets of above-the-noise locations into detections of individual GPs, we use the following three-step process5. First, we connect detections within 8 samples (\(5\;\mu\)s, i.e., of order the scattering time), since those are likely related. Second, we remove detections spanning 4 samples (\(2.5\;\mu\)s) or less, since these are likely spurious. Third, we increase the width of a detection by 4 samples (\(2.5\;\mu\)s) on either side, mostly to ensure that if we integrate over the mask, we will capture most of the flux independent of pulse strength.
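A compact sketch of this detection procedure is given below, using the scipy morphology routines named in footnote 5; the exact structuring-element sizes and the off-pulse noise estimate are our assumptions, chosen to mimic the 8-, 4-, and 4-sample steps described above.

```python
import numpy as np
from scipy.ndimage import (uniform_filter1d, binary_closing,
                           binary_opening, binary_dilation)

def detect_giant_pulses(stack, off_pulse, threshold=5.0):
    """Flag GP samples in a band-averaged, 0.625 us binned intensity stream.

    stack, off_pulse: 1-D intensity arrays for the on-pulse window and the
    surrounding off-pulse region of one rotation.
    """
    sigma = off_pulse.std()
    smoothed = uniform_filter1d(stack - off_pulse.mean(), size=5)  # 3.125 us filter
    mask = smoothed > threshold * sigma                            # 5 sigma threshold
    mask = binary_closing(mask, structure=np.ones(9, bool))        # connect within ~8 samples
    mask = binary_opening(mask, structure=np.ones(5, bool))        # drop runs of <= 4 samples
    mask = binary_dilation(mask, structure=np.ones(9, bool))       # widen by ~4 samples per side
    return mask
```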
With this procedure, the minimum final pulse width is \(8.125\;\mu\)s, slightly larger than the scattering timescale, and we confidently detect pulses above a threshold of \(\sim\!0.15\;\mathrm{kJy}\;\mu\)s. The brightest GP we detect has a fluence of \(\sim\!560\;\mathrm{kJy}\;\mu\)s. With our relatively high initial detection threshold, we do not find any GPs outside our pulse windows, suggesting that we have no false detections in our sample. Nevertheless, as can be seen from the overall pulse statistics in Table 1, we find many GPs, about \(2-3\) per second, or about one for every dozen pulsar rotations.

Footnote 5: Using the binary_closing, binary_opening and binary_dilation functions, respectively, from scipy's multidimensional image processing functions (Virtanen et al., 2020).

In some pulse rotations, we detect more than one distinct GP, where "distinct" means that the pulse is separated by at least \(5\;\mu\)s (roughly the scattering timescale) from another pulse at our detection threshold. Here, we note that whether a GP is detected as single or multiple depends on the detection threshold: a GP classified as a single one at our threshold might be classified as separated at a higher threshold if it has two bright peaks with some flux in between (e.g., because the scattering tail of the first peak overlaps with the start of the next one, or a weaker burst fills in the space in between). This dependence on detection threshold may explain why Bhat et al. (2008) found no pulses wider than \(10\;\mu\)s, as they took a high detection cutoff, of \(3\;\mathrm{kJy}\;\mu\)s. This kind of arbitrariness seems unavoidable given the variety of pulse shapes that we see; it is often a rather subjective decision what to take as a single burst. To give a sense, we show in Figure 2 an example of a pulse rotation with a single burst as well as two examples of rotations with multiple bursts. In Section 4.5, we estimate the fraction of multiple bursts that is causally related from the statistics of pulse separations.

### Rates

With the high sensitivity of the phased EVN array, we detected a total of 65951 GPs over 7.32 hr, implying an average detection rate of \(2.5\;\mathrm{s}^{-1}\). From Table 1, one sees that the rates are not the same for each epoch. Comparable detection rates are seen for both MP and IP GPs in EK036 A and C, but those are about a factor of 2 smaller than the rates for EK036 B and D (which are comparable to each other). Similar changes in detection rate were found for bright pulses by Lundgren et al. (1995) at \(800\;\mathrm{MHz}\), by Bera & Chengalur (2019) at \(1330\;\mathrm{MHz}\), and by Kazantsev et al. (2019) at \(111\;\mathrm{MHz}\). Lundgren et al. (1995) suggest that almost certainly, these are due to changes in the scattering screen, which are known to cause changes in the scattering time on similar timescales and are expected to cause changes in magnification as well. To verify that there are no variations on shorter timescales, we calculated rates at roughly 5 min intervals. As can be seen in Figure 3, we find that in a given epoch, the rates are indeed steady.

Figure 3: GP detection rates in each EK036 observation. Times when the telescope was not observing the Crab Pulsar are shaded grey. The MP (blue) and IP (orange) detection rates appear to scale together and are relatively constant across each observation.
### Fluences

The fluence distribution of the Crab Pulsar's GPs is typically described by power-law approximations to the reverse cumulative distribution, \[N_{\mathrm{GP}}(E>E_{0})=CE_{0}^{\alpha}, \tag{1}\] where \(\alpha\) is the power-law index, \(C\) a proportionality constant, and \(E_{0}\) the GP fluence, such that \(N_{\mathrm{GP}}(E>E_{0})\) is the occurrence rate of GPs above \(E_{0}\). For our data, one sees in Figure 4 that for all observations the distributions indeed appear power-law like at high fluence, with \(\alpha\approx-2.0\) and \(-1.6\) for the MP and IP, respectively. These values are roughly consistent with values found at similar frequencies: e.g., Popov & Stappers (2007) find \(-1.7\) to \(-3.2\) for MP GPs and \(-1.6\) for IP GPs at \(1197\,\mathrm{MHz}\), and Majid et al. (2011) find \(\alpha=-1.9\) for the combined MP and IP distribution at \(1664\,\mathrm{MHz}\). However, as noted by Hankins et al. (2015) already, the power-law indices show large scatter and should be taken as roughly indicative only, showing, e.g., that at higher frequencies, very bright pulses are relatively rare.

Indeed, in our data, like in more sensitive previous studies (e.g., Lundgren et al., 1995; Popov & Stappers, 2007; Bhat et al., 2008; Karuppusamy et al., 2010), the fluence distribution clearly flattens at lower fluences. At the very low end, this is because our detection method misses more pulses, but the changes above \(\sim\!0.2\,\mathrm{kJy}\,\mathrm{\mu s}\) are real. This turnover may at least partially explain why a variety of power-law indices was found previously, as the measured index will depend on what part of the fluence distribution is fit (which will depend also on the magnification by scattering), as well as why for very high fluences, well away from the turnover, the power-law index seems fairly stable (Bera & Chengalur, 2019).

Comparing the distributions for the different epochs, one sees that they are very similar except for a shift left or right in the figure. This confirms that the differences in rates seen between the epochs are due to differences in magnification caused by scintillation (and not due to the Crab Pulsar varying the rate at which pulses are emitted, which would, to first order, shift the distributions up and down).

As the fluence distributions looked roughly parabolic in log-log space, we also show cumulative log-normal distributions in Figure 4, of the form \[N_{\mathrm{GP}}(E>E_{0})=\frac{A}{2}\left[\mathrm{erfc}\left(\frac{\ln E_{0}-\mu}{\sigma\sqrt{2}}\right)\right], \tag{2}\] where \(A\) is a scale factor, \(\mu\) and \(\sigma\) are the mean and standard deviation of \(\ln E_{0}\), and \(\mathrm{erfc}\) is the complementary error function. One sees that these describe the observed cumulative distributions quite well.

Figure 4: Reverse cumulative GP fluence distribution showing the occurrence rates of GPs. For comparison, power-law distributions (solid black lines) and log-normal distributions (dashed black line) are shown, with indices \(\alpha\) and widths \(\sigma\) as listed in the legend.

If the intrinsic distributions were log-normal, it would imply that, especially for the MP, most of the flux is already captured and that the total rate of GPs is not much larger than our detection rate.
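The two forms used in Figure 4 are easy to evaluate directly; the sketch below builds the reverse cumulative rate from a list of fluences and implements Eqs. (1) and (2), with the parameter values to be taken from the fits (the function names are ours).

```python
import numpy as np
from scipy.special import erfc

def reverse_cumulative_rate(fluences, t_obs_s, grid):
    """Occurrence rate N_GP(E > E0) from detected GP fluences over t_obs_s seconds."""
    fluences = np.asarray(fluences)
    return np.array([(fluences > e0).sum() for e0 in grid]) / t_obs_s

def powerlaw_rate(e0, c, alpha):
    """Eq. (1): N_GP(E > E0) = C * E0**alpha."""
    return c * np.asarray(e0)**alpha

def lognormal_rate(e0, a, mu, sigma):
    """Eq. (2): N_GP(E > E0) = (A/2) * erfc((ln E0 - mu) / (sigma*sqrt(2)))."""
    return 0.5 * a * erfc((np.log(e0) - mu) / (sigma * np.sqrt(2)))
```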
For the log-normal distributions shown in Figure 4, for the MP, \(A=2.7\ \mathrm{s}^{-1}\), the mean GP fluence is \(\langle E\rangle=\exp(\mu+\frac{1}{2}\sigma^{2})=1.2\ \mathrm{kJy\,\mu s}\), and only 1.5% of the total flux is below \(0.15\ \mathrm{kJy\,\mu s}\), while for the IP, \(A=1.6\ \mathrm{s}^{-1}\), \(\langle E\rangle=0.24\ \mathrm{kJy\,\mu s}\), and 13% of the flux is below.

We can verify whether our MP GPs account for most of the flux by calculating pulse profiles with and without removing pulse rotations in which GPs are detected. As can be seen in Figure 5, significant flux remains in both MP and IP. For the MP, even though the remaining signal is brighter in epochs B and D, the fraction is lower: about 18% in B and D, in comparison with 23% in A and C. This again can be understood if the larger detection rate is due to an overall magnification: a larger fraction of the pulses - and hence of the total flux - is detected. Our result is similar to (but more constraining than) that of Majid et al. (2011), who showed that at least 54% of the overall pulsed energy flux of the Crab Pulsar is emitted in the form of GPs. But it is in contrast to what is seen by Abbate et al. (2020) for PSR J1823\(-\)3021A, where the detected GPs make up only a small fraction of the integrated pulse emission (4% and 2% for their C1 and C2 components, respectively), and by Geyer et al. (2021) for PSR J0540\(-\)6919, where the detected GPs only make up 7% of the total flux. This might indicate a difference in the emission process. As these authors noted, however, a larger population of undetected GPs may still be hidden below their detection threshold.

For our observations, for both MP and IP, the residual flux is much larger than expected based on the log-normal distribution, thus indicating that the true fluence distribution has more pulses at low fluence (many more for the IP); if additional pulses were emitted also in rotations in which we do not detect them, their typical fluence would be the residual flux integrated over one cycle, which is \(\sim\!25\ \mathrm{Jy\,\mu s}\) for the MP and a little less for the IP. This is well below our detection limit, so consistent in that sense, but from the distributions shown in Figure 4, one would expect a much smaller rate than once per pulse period at \(25\ \mathrm{Jy\,\mu s}\). This might suggest that there are even more but typically fainter bursts (note that it cannot be fainter bursts accompanying the GPs we already detect, since we excluded the full rotations in calculating the residual emission), or that there is some steady underlying emission. It would be worthwhile to test this with more sensitive future observations.

Figure 5: Mean and median MP and IP pulse profiles obtained using all pulse rotations (in blue and orange, respectively) and using only those in which no GPs were detected (green and red, respectively) in \(6.25\ \mathrm{\mu s}\) bins. Note that because the noise in an individual profile is not normally distributed, but rather follows a \(\chi_{k}^{2}\) distribution, the median is slightly below zero in the off-pulse region, by \((1-2/9k)^{3}-1\simeq-6/9k\) of the SEFD of \(\sim\!150\ \mathrm{Jy}\) (Section 2), or \(\sim\!-0.03\ \mathrm{Jy}\), given \(k=3200\) degrees of freedom (complex dedispersed timestream squared, averaged over 2 polarizations, 8 bands, and 100 time bins).
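The small negative off-pulse median quoted in the Figure 5 caption can be checked directly against \(\chi_{k}^{2}\) statistics; the few lines below reproduce the \(\sim\!-0.03\ \mathrm{Jy}\) estimate (the use of scipy for the exact median is ours).

```python
from scipy.stats import chi2

k = 3200                               # degrees of freedom from the Figure 5 caption
exact = chi2.median(k) / k - 1         # median of a unit-mean chi^2_k variable, minus 1
approx = (1 - 2 / (9 * k))**3 - 1      # Wilson-Hilferty approximation, ~ -2/(3k)
print(exact, approx)                   # both ~ -2.1e-4
print(approx * 150)                    # ~ -0.03 Jy for an SEFD of ~150 Jy
```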
### Pulse Phases

Defining the time of arrival of a GP as the time when an increase in flux is first detected, the longitude windows where MP and IP GPs occur have total widths of \(\sim\!680\ \mu\)s and \(\sim\!860\ \mu\)s (or \(\sim\!7\fdg 3\) and \(\sim\!9\fdg 2\)), respectively (averaged over the four epochs). As can be seen in Figure 6, the majority of GPs occur within much narrower windows: the root-mean-square deviations around the mean arrival phases are \(\sim\!100\ \mu\)s and \(\sim\!130\ \mu\)s (or \(\sim\!1\fdg 1\) and \(\sim\!1\fdg 4\)), respectively. The number distribution is roughly Gaussian, with a slightly negative skewness (i.e., a longer tail toward earlier phases and thus a mode towards later phases). This was also observed by Majid et al. (2011) at a similar frequency of \(1664\ \mathrm{MHz}\). In EK036 D, a few MP pulses are detected beyond the range found in the other epochs. As we will discuss in Section 4.6, these "outlier" detections are due to echoes (hence, they are omitted in our determinations of the widths above).

In Figure 6, we also show the flux distributions as a function of pulse phase, including the median flux of the GPs detected in any given phase bin. One sees no obvious variation, i.e., no hint of, e.g., brighter pulses having an intrinsically narrower phase distribution. This suggests that only the probability of seeing a pulse depends on pulse phase.

In our earlier work on these data, where we studied how the pulse spectra and their correlations are affected by scattering (Lin et al., 2023), we concluded that we resolved the regions from which the nanoshots that comprise individual GPs are emitted, and that this is most easily understood if the emitting plasma is ejected highly relativistically, with \(\gamma\simeq 10^{4}\) (as was already suggested by Bij et al., 2021). If so, the emission would be beamed to angles much smaller than the width of the phase windows, and the range of phases over which we observe GPs would reflect the range of angles over which plasma is ejected.

### Arrival Times

Several studies (e.g., Karuppusamy et al., 2010; Majid et al., 2011) have found that GPs in different rotations are not correlated, and that there is no correlation between MP and IP GPs, but that instead the distribution of the time delays between successive GPs follows an exponential distribution, as expected for a Poissonian process. Within a given cycle, though, multiple correlated microbursts can occur (Sallmen et al., 1999; Hankins and Eilek, 2007). With our high sensitivity, we can investigate this in more detail. In Table 3 we show the number of rotations in which we detect multiple MP or IP bursts (i.e., double, triple, etc.), as well as the number expected (listed only where larger than 0) for the case where all events are independent, \[N_{n}=p_{n}N_{r}=\begin{pmatrix}N_{\mathrm{p}}\\ n\end{pmatrix}\left(\frac{1}{N_{r}}\right)^{n}\left(1-\frac{1}{N_{r}}\right)^{N_{\mathrm{p}}-n}N_{r}, \tag{3}\] where \(p_{n}\) is the probability for a given rotation to have \(n\) bursts (assuming a binomial distribution), \(N_{r}\) is the total number of rotations observed, and \(N_{\mathrm{p}}\) is the total number of bursts found (for the numerical values we inserted numbers from Table 1: \(N_{\mathrm{p}}=N_{\mathrm{MP}}\) or \(N_{\mathrm{IP}}\) and \(N_{r}=t_{\mathrm{exp}}/P_{\mathrm{Crab}}\), where \(P_{\mathrm{Crab}}=33.7\ \mathrm{ms}\) is the rotation period of the pulsar).
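Eq. (3) is straightforward to evaluate for each epoch; a minimal sketch (with our function name) is:

```python
from math import comb

def expected_multiples(n_pulses, n_rotations, n):
    """Eq. (3): expected number of rotations with n bursts for independent,
    randomly placed bursts (binomial probability times the number of rotations)."""
    p_n = (comb(n_pulses, n) * (1 / n_rotations)**n
           * (1 - 1 / n_rotations)**(n_pulses - n))
    return p_n * n_rotations

# N_p and N_r = t_exp / P_Crab per epoch come from Table 1; compare the output
# for n = 2, 3, ... with the numbers in parentheses in Table 3.
```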
One sees that we detect significantly more multiples than expected by chance6, i.e., some of the detected pulses are composed of multiple, causally related microbursts.

Footnote 6: In Lin et al. (2023), we wrongly concluded the multiples were consistent with arising by chance. Sadly, we used incorrect estimates of \(N_{n}\).

In principle, one could estimate the number of independent bursts, \(N_{\mathrm{p}}^{\mathrm{ind}}\), in each epoch by subtracting from \(N_{\mathrm{p}}\) the excess pulses from Table 3, but this would not be quite correct since the excess would be relative to estimates made using the total number of observed pulses \(N_{\mathrm{p}}\), not the (lower) number of independent pulses \(N_{\mathrm{p}}^{\mathrm{ind}}\). One could iterate, but an easier, unbiased estimate of \(N_{\mathrm{p}}^{\mathrm{ind}}\) can be made using the observed fraction of rotations in which we do not see any bursts, which should equal \(N_{0}/N_{r}=p_{0}=\left(1-1/N_{r}\right)^{N_{\mathrm{p}}^{\mathrm{ind}}}\). Solving for \(N_{\mathrm{p}}^{\mathrm{ind}}\), we find that \(N_{\mathrm{p}}^{\mathrm{ind}}=fN_{\mathrm{p}}\) with fractions \(f\) that are consistent between all epochs, at \(91.8\pm 0.2\) and \(95.2\pm 0.5\)% for the MP and IP, respectively. Hence, about 8 and 5% of the detected MP and IP pulses, respectively, are extra components. Or, as fractions of independent MP and IP pulses, \((6,1,0.12)\%\) and \((4,0.3,0.0)\%\), respectively, are causally related double, triple, or quadruple microbursts.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Observation & \multicolumn{5}{c}{MP} & \multicolumn{3}{c}{IP} \\ Code & 2 & 3 & 4 & 5 & 6 & 2 & 3 & 4 \\ \hline EK036 A & 1820(599) & 200(12) & 24 & 0 & 0 & 144(17) & 4 & 2 \\ EK036 B & 1431(611) & 170(18) & 22 & 3 & 1 & 237(43) & 16 & 2 \\ EK036 C & 611(213) & 67(4) & 6 & 0 & 0 & 54(7) & 4 & 0 \\ EK036 D & 934(395) & 117(10) & 23 & 6 & 1 & 116(19) & 9 & 0 \\ \hline \end{tabular} Note. – Numbers in parentheses are those expected if bursts occur randomly; for that case, one does not expect to find any rotations with 4 or more MP bursts or 3 or more IP bursts. Note that our GP detection method does not differentiate between microbursts and echoes, which becomes important for a few very bright pulses in EK036 D, for which echoes were present. In addition, we are not able to distinguish microbursts that occur very close together in time. The numbers of detections differ from Lin et al. (2023), as a different, more robust search algorithm is implemented here (see Section 4.1). \end{table} Table 3: Number of Rotations with Multiple Bursts.

To investigate the distributions further, we show histograms of the time delay between pulses in Figure 7. Overdrawn are the expectations for randomly arriving, independent pulses. We constructed these by bootstrapping, where we repeatedly reassign new random pulse cycles to our observed sets of pulses, and then recalculate the time delay distributions. Note that in our bootstraps, we do not randomize pulse phase, so that the observed phase distribution is correctly reflected in the time delays.

Figure 6: MP GP and IP GP fluence and count distributions as a function of pulse phase for each EK036 observation. We used pulse phase bins of 0.1% and fluence bins of 0.1 dex. The light purple line in the fluence panels shows the median for bins with more than 2 detected pulses.
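The bootstrap used for the reference curves in Figure 7 can be sketched as follows; this is a simplified stand-in (our variable names, and a single delay histogram rather than separate within- and between-rotation panels), keeping the observed phases but randomizing the rotation each burst falls in.

```python
import numpy as np

P_CRAB = 33.7e-3  # s, rotation period used in Section 4.5

def bootstrap_delay_histogram(phases, n_rotations, bins, n_boot=1000, seed=0):
    """Average histogram of delays between successive bursts for independent bursts."""
    rng = np.random.default_rng(seed)
    phases = np.asarray(phases)                        # phases in units of the period
    hists = np.zeros((n_boot, len(bins) - 1))
    for i in range(n_boot):
        cycles = rng.integers(0, n_rotations, size=phases.size)
        times = np.sort((cycles + phases) * P_CRAB)
        hists[i] = np.histogram(np.diff(times), bins=bins)[0]
    return hists.mean(axis=0)
```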
One sees that, as a function of pulse cycle (right column panels for MP and IP GPs in Figure 7), the observed histograms follow the expected exponential distribution (although the observed counts are slightly lower than the expected ones because not all pulses are independent, as is implicitly assumed in the bootstraps). For the time delays between pulses that occur in the same cycle (left column panels for MP and IP GPs in Figure 7), the observed distributions are very different from those expected for randomly occurring bursts. One sees a large peak at short delays, representing the excess microbursts from Table 3, following a roughly exponential distribution with a mean time between bursts of \(\sim 30\;\mu\)s or so. Intriguingly, at somewhat larger time differences, there seem to be fewer bursts than expected for independent events. This suggests that while a given detection has an enhanced probability of being in a group of causally related microbursts, the occurrence of a burst also suppresses the likelihood of another, independent, burst being produced in the same rotation. Thus, our results confirm that GPs are often composed of multiple microbursts, and they indicate that another, independent GP is less likely to occur right after. Figure 7: Time delays between successive GPs for the MP (in blue) and IP (in orange) components for each EK036 observation. In the left MP and IP columns, time delays within a pulse rotation are shown with bins of \(10\;\mu\)s and \(20\;\mu\)s for the MP and IP, respectively; the low counts in the first bin reflect the minimum separation of \(8.75\;\mu\)s between detected pulses. In the right MP and IP columns, time delays in pulse rotations are shown with bins of \(1\) rotation and \(4\) rotations for the MP and IP, respectively. The red lines show the average time delay histograms for \(1000\) bootstrap iterations, in which we randomized the rotation in which a pulse was seen (but not the phase, to keep the observed phase distribution). ### Scattering Features In Figure 6, one sees that in EK036 D, several MP GPs were detected at pulse phases quite far from the median phase. To investigate this, we looked at the arrival times of all GPs detected in EK036 D (see left panel of Figure 8). We found that the outliers occurred in two pulse rotations, which turned out to contain the brightest GPs in EK036 D. Looking at the pulse profiles of these brightest GPs, one sees that they are very similar (see right panels of Figure 8). In fact, closer examination reveals that all of the brightest GPs detected in EK036 D show similar pulse profiles. This implies that the pulses far from the median pulse phase arrive late because they are actually weak echoes of the main burst, with amplitudes down to \(\sim 0.4\%\) of the peak flux and delays up to \(\sim 300~{}\mu\)s. In Figure 9, we show singular value decomposition (SVD) approximations of the average MP GP profile for each epoch (for the IP, too few bright pulses were available). This was created from MP GP rotations with peak intensities greater than \(200~{}\mathrm{Jy}\) and seemingly single peaks, aligned using time offsets found by correlation with a reference pulse. To avoid giving too much weight to the brightest pulses, and thus risking that remaining substructure enters the average profile, we normalized each rotation by the intensity at the correlation maximum before doing the SVD.
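The profile construction described above (align bright, single-peaked rotations, normalize at the correlation peak, then take a low-rank SVD approximation) can be sketched as follows. This is an illustrative outline only; the array, the toy data, and the rank-1 choice are assumptions, not the actual pipeline:

```python
import numpy as np

# profiles: (n_rotations, n_time_bins) stack of bright MP GP rotations,
# already aligned by cross-correlation against a reference pulse.
rng = np.random.default_rng(0)
profiles = rng.normal(0.0, 0.01, size=(50, 1024))
profiles[:, 500:520] += 1.0          # toy pulse so the sketch runs end-to-end
peak_bin = 500

# Normalize each rotation by its intensity at the correlation maximum so
# the brightest pulses do not dominate the decomposition.
profiles = profiles / profiles[:, peak_bin][:, None]

# Leading SVD component = common pulse shape; outer product = rank-1 stack.
u, s, vt = np.linalg.svd(profiles, full_matrices=False)
shape = vt[0] * np.sign(vt[0, peak_bin])      # fix the overall sign
rank1_stack = np.outer(u[:, 0] * s[0], vt[0])
```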
One sees that all profiles are fairly sharply peaked, but sit on top of a base, which has the expected asymmetric part extending to later time due to scattering, as well as a more symmetric component, likely resulting from the collective effect of faint microbursts. Comparing the epochs, one sees that for EK036 A-C, the profile dropoff is relatively smooth and becomes undetectable after \(\sim\!200~{}\mu\)s, while in EK036 D, the tail is much longer, extending to \(\sim\!400~{}\mu\)s, and is much more bumpy. Almost certainly, all bumps are echoes, including those at shorter delay in EK036 B (more clearly seen in the linear-scale plots in Lin et al. 2023). Indeed, looking carefully at the stack of profiles in Figure 9, one sees that the echoes in EK036 D drift in time, moving slightly further away from the MP during the observation, with perhaps even a hint that echoes further away from the main bursts drift faster than those closer in. (Note that this stack is not completely linear in time, although given that the GP detection rate is roughly constant throughout, it is not far off.) This change in time is expected for echoes off a structure with changing distance from the line of sight, and indeed has been seen for a very prominent echo by Backer et al. (2000); Lyne et al. (2001). Overall, our observations suggest that echoes are common, as also concluded from daily monitoring at \(600~{}\mathrm{MHz}\) by Serafin-Nadeau et al. (2023, in prep.). Figure 8: _Left_: MP GPs and IP GPs detected in the EK036 D data. The gray shaded regions indicate when the telescope was not observing the Crab Pulsar and the black vertical lines mark our MP GP and IP GP windows. In the inset, we show two pulse rotations containing the brightest GPs “A” and “B”, in red and orange respectively. _Right, Top_: Waterfalls of the two brightest pulses in EK036 D with \(1~{}\mu\)s time resolution and \(1~{}\mathrm{MHz}\) frequency resolution. _Right, Bottom_: Pulse profile of the two brightest pulses in EK036 D with \(1~{}\mu\)s time resolution scaled to the peak of each pulse. Pulses “A” and “B” show similar features and we conclude that during the EK036 D observations, weak echoes were present at large delays. ## 5 Summary of Conclusions The fine time resolution and high sensitivity in our beam-formed EVN data allowed us to confidently detect \(65951\) GPs with fluences above \(\sim 150\ \mathrm{Jy\ \mu s}\) over a short period of \(7.32\ \mathrm{hr}\). Within each of our four observations, we found that the GP detection rates are fairly constant, but that between epochs they differ by a factor of \(\sim\!2\). Similar changes were seen previously, and were suggested by Lundgren et al. (1995) to reflect changes in overall magnification of the scattering screens along the line of sight. The changes in magnification are consistent with the pulse fluence distributions, which are power-law like at high fluence, but with a flattening at lower fluences; the distributions from the different epochs can be shifted to each other with a change in fluence scale. We noted that the fluence distributions are similar to what is expected for log-normal distributions, but found that the residual signals seen in the GP phase windows after removing the GPs we detected were larger than expected if the log-normal distribution continued also below our detection limit.
Nevertheless, it suggests that with only somewhat more sensitive observations, it should be possible to get a fairly complete sampling of all GPs that contribute to the average flux, at least for the MP component. Analyzing the pulse phase distributions, we confirm previous observations showing that the majority of GPs occur within very narrow phase windows. Furthermore, we observe no significant variations in the median flux distributions as a function of pulse phase. This suggests that it is the probability of observing a pulse that depends on pulse phase, not its energy, implying that the angle within which a pulse is emitted is much narrower than the rotational phase window, as expected if the plasma causing them is travelling highly relativistically (Bij et al., 2021; Lin et al., 2023). With our high detection rates, we were able to investigate the distribution of time delays between successive bursts within the same pulse rotation. We detect a larger number than expected if all bursts were due to a Poissonian process, and infer that \(\sim\!5\%\) of bursts come in groups of 2 or 3 causally related microbursts, with a typical separation in time of \(\sim\!30\ \mu\)s. Additionally, our high sensitivity revealed weak echo features for individual bright pulses, which drift slightly but significantly even over our timescales of just a few hours. We infer that echo events are not rare. Given our findings, we believe even more sensitive follow-up studies of the Crab Pulsar would be very useful. This would be possible using more small dishes (spaced sufficiently far apart that the Crab Nebula is well-resolved) and by recording a larger bandwidth. Figure 9: _Line plots_: SVD approximation of the MP pulse profile for all observations. In EK036 B, echoes are seen close to the profile’s peak (see Lin et al., 2023 for more details). The profile for EK036 D shows multiple weak echoes up to \(\sim\!300\ \mu\)s. _Image_: The MP pulse stack for EK036 D, using a logarithmic colour scale to bring out faint features. Each pulse is aligned by correlating with the rotation with the brightest pulse in EK036 D (which appears to be a simple single microburst) and then normalized by the intensity at time \(0\) (the black dashed line). The echoes appear to move out over time, as one can see by comparing the location of the most prominent faint echo with the dashed white vertical line near it (time is increasing both upwards and to the right in this image). ## Acknowledgements We thank the anonymous referee for their comments, which improved the clarity of this manuscript. We thank the Toronto Scintillometry group, and in particular Nikhil Mahajan, for useful discussion on GP statistics. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium (Loken et al., 2010; Ponce et al., 2019). SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. M.H.v.K. is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) via discovery and accelerator grants, and by a Killam Fellowship. The European VLBI Network (EVN) is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. Scientific results from data presented in this publication are derived from the following EVN project codes: EK036 A-D.
astropy (Astropy Collaboration et al., 2013, 2018, 2022), Baseband (Van Kerkwijk et al., 2020), CALC10 (Ryan & Vandenberg, 1980), numpy (Harris et al., 2020), matplotlib (Hunter, 2007), pulsarbat (Mahajan & Lin, 2023), scipy (Virtanen et al., 2020), tempo2 (Hobbs & Edwards, 2012).
2301.07687
Maybe, Maybe Not: A Survey on Uncertainty in Visualization
Understanding and evaluating uncertainty play a key role in decision-making. When a viewer studies a visualization that demands inference, it is necessary that uncertainty is portrayed in it. This paper showcases the importance of representing uncertainty in visualizations. It provides an overview of uncertainty visualization and the challenges authors and viewers face when working with such charts. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, to evaluate how uncertainty impacts them. Next, I investigate the authors' methodologies to process and design uncertainty. Finally, I contribute by exploring future paths for uncertainty visualization.
Krisha Mehta
2022-12-14T00:07:06Z
http://arxiv.org/abs/2301.07687v1
# Maybe, Maybe Not: A Survey on Uncertainty in Visualization ###### Abstract Understanding and evaluating uncertainty play a key role in decision-making. When a viewer studies a visualization that demands inference, it is necessary that uncertainty is portrayed in it. This paper showcases the importance of representing uncertainty in visualizations. It provides an overview of uncertainty visualization and the challenges authors and viewers face when working with such charts. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, to evaluate how uncertainty impacts them. Next, I investigate the authors' methodologies to process and design uncertainty. Finally, I contribute by exploring future paths for uncertainty visualization. ## 1 Introduction With a rise in the complexity and dimensionality of data, analyzing and modeling data becomes more challenging. When most of our decisions are data-driven, it becomes imperative that we know the nature of the data and the patterns it contains. As a result, analyzing the inherent uncertainty in the data is gaining more significance. In various fields, uncertainty can signify different things. For instance, data bias, random or systematic error, and statistical variance are all factors that contribute to data uncertainty. Without understanding the underlying uncertainty in our data, we cannot make accurate predictions. Similarly, to observe the true structure of our data and as well as identify patterns in it, we need to visualize it. Today, we can no longer undermine the significance of uncertainty nor ignore the importance of visualizations for data analysis. As mentioned before, uncertainty is bound to exist whenever there is data. Therefore representation of uncertainty in data visualizations is crucial. Consider the example of hurricane path maps, as shown in Figure 1. The increase in the width of the predicted path with time is not due to an increase in the size of the hurricane. Instead, it is representing the inherent uncertainty in the data. In other words, the visualization indicates that compared to Friday, Sunday's hurricane path is more difficult to predict with any degree of accuracy. Information tends to be withheld from the viewer when one does not portray uncertainty in the visualization. Therefore the viewer might occasionally be ignorant of this exclusion. This breach of trust can have significant consequences for both the author and the viewer. Given this significance, it is reasonable to assume that visualizations frequently include uncertainty. But how often do we encounter charts that represent uncertainty? How frequently do we check for bias in graphs that represent public surveys? As it turns out, not frequently. In a recent study [9], 121 journalism articles, social science surveys, and economic estimates were examined. Out of 449 visualizations created for inference, the study demonstrates that only 14 accurately depict uncertainty. "What's Going on in This Graph?" is a New York Times (NYT) initiative to increase graphical literacy, especially among students. Different categories of charts, such as maps, parts-to-whole, and associations, are published for students to explore and analyze. When I looked into the distribution of these charts, I found that only 6 out of the 136 charts show uncertainty. The question I ask is, do we actually examine uncertainty representations when we come across them in order to make decisions, or do we simply ignore them? 
Does uncertainty offer value or just clutter these visualizations? I try to investigate these questions in this paper. Visualizations are an integral part of newspapers, government bills, and business earnings reports to name a few. The public uses them to gain insights, spot trends, and make decisions. Hence, when we visualize data, it becomes critical to support those visualizations with information about uncertainty. People frequently use visualizations to examine data and make observations. A lack of uncertainty representation could result in incorrect and erroneous interpretations. However, it can be challenging to visualize uncertainty. There are limited standard guidelines or protocols that authors can follow when they create such charts. Given these drawbacks, uncertainty visualization is considered one of the top research problems in data visualization [13]. With the help of a few uncertainty visualization examples, this survey studies how uncertainty contributes to every phase in visualization. Most research in this area focuses on creating charts with uncertainty and how viewers may perceive them. However, uncertainty is also influential in the other parts of the data visualization process, such as during data collection and preprocessing. **The objectives of this paper are as follows:** * Provide an entry point for anyone who wants to learn about uncertainty visualization * Delineate the significance of uncertainty visualizations * Explore how uncertainty influences every phase of the data visualization process Figure 1: An example chart for Matthew showing its five-day forecast track [5] * Understand the challenges authors and viewers face when interacting with it * Discuss the open problems and future research directions in the field This work is divided into the following sections. Section 2 defines uncertainty and describes the relationship between uncertainty and visualization. In Section 3, I classify the data visualization pipeline into four phases, analyzing the involvement of uncertainty in each phase. The classification helps look at each phase individually, focusing on the challenges and bottlenecks authors and viewers face when working with uncertainty visualization. Finally, I study some state-of-the-art methods to visualize uncertainty and discuss future directions for research. I conclude the paper in Section 4. ## 2 Uncertainty and Visualization Visualizations are incredibly important for examining, analyzing, and interpreting data in the era of big data. Visualizations are evidence that a picture really does say a thousand words. They aid viewers in seeing trends, background noise, and outliers. Asking the correct questions can be quite challenging when there is an abundance of data. Through visualizations, viewers can determine what questions the data can help answer. With improvements in hardware, software, and graphics theory, data visualizations are adopted more frequently and widely [26]. Viewers use visualizations to make decisions. However, making decisions and drawing observations by looking at visualizations can be complex due to the statistical variance and uncertainty present in these visualizations. As mentioned previously, uncertainty can have different definitions based on different scenarios [3]. Broadly speaking, uncertainty is classified into two types, aleatory and epistemic. Aleatory uncertainty rises from random fluctuation and unknown outcomes when an experiment is run multiple times in a consistent environment. 
For example, in a drug trial, a participant's blood pressure can vary due to stress and anxiety. There might also be measurement errors in the sphygmomanometer. Aleatory uncertainty can be minimized by controlling individual factors and increasing the number of readings. Epistemic uncertainty, on the other hand, arises from a lack of knowledge, like predicting the outcome of the same experiment in a completely different, unknown environment. For example, predicting the effect of a drug on a new disease. Uncertainty can be measurable, like risks, but can also be unquantified, like bias. While aleatory uncertainty is more widely represented in visualizations [25], both types can be represented with distribution graphs. Uncertainty and visualizations are interwoven, and working with one often requires working with the other. In 1644, Michael Florent van Langren was one of the first researchers to use visualization for statistical analysis [25]. He used a 1D line graph to present the 12 known estimated longitudinal distances between Toledo and Rome, as shown in Figure 2. Instead of using a table to show this data, Langren used this graph to showcase the wide range of variation. Even though all the distances were over-estimated (the actual distance, in longitude, is shown using the arrow), the graph remains classic in demonstrating the power of visualization. The popular Anscombe's quartet [1] is a perfect example of how data with similar statistics might have very different distributions, which is observed when visualized. The quartet consists of four datasets with 11 points having nearly the same mean, sample variance, correlation, linear regression, and coefficient of determination. The four datasets may appear very similar to viewers looking at the data and the descriptive statistics. However, when one visualizes them, the difference in their distributions is very evident, as shown in Figure 3. Looking at data in tabular form may hide insightful observations and can lead to erroneous conclusions. Today, researchers across all domains use extensive libraries such as [12, 19, 22, 4, 11] to analyze data uncertainty. Using visualizations to represent and study uncertainty in data is widely adopted. However, uncertainty in visualizations is often not communicated [9]. One of the earliest instances of uncertainty being presented can be traced back to the 18th century. Joseph Priestley, a British scientist, created "A Chart of Biography" to present the lifespans of famous people, as shown in Figure 4. He used horizontal lines to portray the lifetime of about 2000 people and used dots before or after the lines to communicate uncertainty. Visualizations of uncertainty, however, are not common. Numerous factors influence why authors decide against visualizing uncertainty. Since they do not know all the information about the dataset, viewers may draw inaccurate conclusions in the absence of uncertainty representation. Nevertheless, introducing more uncertainty could also make the audience feel too overwhelmed to pay attention to it. The study of why visualizing uncertainty is rare is still in its early stages. In the section that follows, I go through each of these issues in more detail and look at how uncertainty affects every stage of data visualization. Figure 2: Langren's line graph is one of the first visualizations to present uncertainty. Figure 3: Anscombe's quartet represents four datasets with similar statistics but very different distributions. Figure 4: Priestley's Chart of Biography [21]
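As a rough illustration of the Anscombe's quartet point above, the short sketch below computes the near-identical summary statistics for the four datasets; it assumes the copy of the quartet bundled with the seaborn package (fetched on first use), but any tabulation of the quartet would do:

```python
import seaborn as sns

# Anscombe's quartet: four x/y datasets with nearly identical summary
# statistics but very different structure once plotted.
df = sns.load_dataset("anscombe")

for name, g in df.groupby("dataset"):
    print(
        f"dataset {name}: mean_x={g.x.mean():.2f}, var_x={g.x.var():.2f}, "
        f"mean_y={g.y.mean():.2f}, var_y={g.y.var():.2f}, corr={g.x.corr(g.y):.3f}"
    )

# Plotting, e.g. sns.lmplot(data=df, x="x", y="y", col="dataset"),
# immediately reveals the differences that the summary table hides.
```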
## 3 Uncertainty in Visualization Previous works in the field have attempted to classify the data visualization process differently. [14] considers sampling, modeling, visualization, and decision-making as the primary sources of uncertainty. This paper follows a similar classification. I divide the visualization pipeline into **data collection, preprocessing, visualization and inference** as shown in Figure 5. Pang et al. [18] classify the process into data collection, derivation, and visualization and discuss how uncertainty is introduced in each stage. Under the data collection phase, the paper mainly discusses the uncertainty added due to measurement errors. However, there are other sources, such as bias and sampling error, that the paper fails to describe. I investigate these uncertainties in Section 3.3.1. The authors then discuss the change data undergoes when it is preprocessed. These changes include converting one unit to another, rescaling, and resampling. However, they do not mention other vital issues such as missing data, approximation, and interpolation that I examine in Section 3.3.2. Next, the authors highlight how uncertainty also influences the data visualization stage itself. They mainly focus on radiosity and volume rendering, while this paper delves more into 2D visualizations. Finally, I explore how viewers infer these visualizations and the challenges they face while making a decision from these charts. Uncertainty is presented at every phase of this classification. However, understanding and evaluating uncertainty in each of these phases is unique. Therefore, authors are required to approach these uncertainties based on their type and complexity, understand their abstraction, and then present them in visualizations in a way that is easy to grasp. Given the interdisciplinary nature of visualizations, the format, quantity, and type of data used to create them vary immensely. Different data implies different data collection processes and uncertainties. Uncertainty is intertwined with data acquisition and can arise from random variables and modeling errors [14]. Pang et al. [18] explain how almost all acquired data has statistical variation. Collected data can have errors, bias, and variance. [23] study how bias can be introduced during the process of collecting data. Datasets are prone to various biases that include but are not limited to selection bias, volunteer bias, admission bias, survivor bias, and misclassification bias. It is imperative that datasets resemble the true population as closely as possible. Data can also contain different types of errors, such as coverage error, sampling error, nonresponse error, and measurement error [7]. Missing data points is another common challenge researchers face during data collection. Correcting these errors is not always possible, but they can be mentioned in the visualization to inform the viewer. However, uncertainty is often ignored when authors create visualizations. Other times this uncertainty in data is not communicated to them [9]. For example, when I analyze a piece called "Free Speech" (as shown in Figure 6) published in the What's Going On in This Graph section of the NYT. [16], we can see how information about uncertainty from the data source is not mentioned directly in the graph. The bars of the graph do not sum to 100 percent since they are missing the no-response segment. The article mentions that the margin of error for the sample is +/- 3.1%, but the graph makes no mention of it. 
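For survey charts like this one, the sampling uncertainty can be attached directly to the bars. The sketch below computes the simple random-sampling margin of error at 95% confidence and draws it as error bars; the response shares are hypothetical, and the published +/- 3.1% likely also accounts for the poll's design, so this is only an approximation of what such an annotation could look like:

```python
import numpy as np
import matplotlib.pyplot as plt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from n respondents (simple random sampling)."""
    return z * np.sqrt(p * (1.0 - p) / n)

n = 1507                                      # respondents in the poll
shares = {"Agree": 0.46, "Disagree": 0.41}    # hypothetical response shares
errs = [margin_of_error(p, n) for p in shares.values()]

plt.bar(list(shares.keys()), list(shares.values()), yerr=errs, capsize=4)
plt.ylabel("Share of respondents")
plt.title("Margin of error shown as error bars")
plt.show()
```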
Efforts are being made by researchers to improve the way uncertainty in the data collection phase is captured, processed, and communicated. Athawale et al. [2] propose using statistical summary maps to represent uncertainty in scalar field data caused by data acquisition. ### _Data Preprocessing_ Raw data is imperfect and can consist of noise and error. Once data is collected, it undergoes processing for accuracy and standardization. However, this phase adds uncertainty to the data that may not be immediately evident. For example, fundamental transformations like rounding off values, converting data from one unit to another, rescaling, resampling, and quantizing can add uncertainty [1]. Even though this might seem minor, the impact can be significant. For example, based on whether we take the value of pi as 22/7 (3.14285) or 3.14159, the area of the Sun can vary by a difference of \(239\times 10^{6}\) sq. miles. A significant setback that most datasets suffer from is missing data. Data can have missing values for many reasons, such as instrument malfunction, incomplete observations, and lost data. Missing values leave a gap in the dataset, which makes room for uncertainty. Working with such uncertainty requires the authors to take extra measures during preprocessing. Authors attempt to find close estimates of the missing values to provide the viewers with a complete picture. One way to tackle this problem is by deleting the complete entry that has the missing value. This leads to a loss of data and insights. Another option is to make an educated guess about the missing value. However, this is highly unreliable and often not recommended. Using interpolation, imputation, or other techniques can induce errors [3]. Sometimes, authors choose to encode these estimated values differently in their designs to inform the viewer about the gap in the dataset. However, how authors choose to visualize this encoding becomes very influential in how viewers perceive these graphs. Whether authors highlight, downplay, annotate, or remove the missing values determines how much confidence and credibility the viewer shows in the visualization [24]. Figure 5: The data visualization process divided into four stages to show how uncertainty affects each stage. Figure 6: Free Speech, a graph by the New York Times based on a national poll including 1,507 U.S. residents [16]. ### Visualization Creation Since uncertainty is ingrained in different parts of the data collection process, it is not easy to identify and control it. However, once the data is cleaned and processed, the authors face a new problem. Creating visualizations requires authors to make various decisions on behalf of the viewer. Authors are expected to choose the type of visualization based on data type, which may lead them to choose the scaling, sorting, ordering, and aesthetics [27]. Compelling visualizations are accurate and suggest an understanding and interpretation of data. Hence, it is the author's responsibility to analyze data correctly before creating any visualizations. Midway [15] describes ten design principles authors can follow to create charts. However, none of those principles discuss how uncertainty can be presented. Creating effective visualizations is hard. However, when we add uncertainty representation, the task becomes much more complex [17]. The data visualization community of researchers, designers, journalists, etc., has been reluctant to add uncertainty to their charts. Authors are aware of how significant uncertainty visualization is.
Yet, they choose to exclude uncertainty when they design their charts for various reasons discussed below. #### 3.2.1 Uncertainty is hard to represent Though data is replete with uncertainty, the difficulty lies in determining if it should be represented and how. If the uncertainty has no direct relationship to the goal of the visualization, then it may not be included in the visualization. But this is not a conclusion that authors can quickly draw. The rise in techniques of visualizing uncertainty can make it harder for authors to decide which one to choose from. One of the biggest challenges in visualizing uncertainty is discovering and communicating the relationship and impact that the uncertainty has on the data. Data visualization is often a preferred choice for analysis due to its ability to present high-dimensional data. However, uncertainty also has dimensions, generally classified into scalar, vector, and tensor [20]. While scalar and vector fields of uncertainty are depicted in charts, tensor fields are often avoided. Mapping these dimensions of uncertainty along with the dimensions of data is challenging and often overlooked when creating charts. Instead, authors tend to simplify uncertainty to align with the dimensionality of the data. #### 3.2.2 Uncertainty is hard to calculate and verify Another reason why authors choose to exclude uncertainty from their charts is that calculating uncertainty is complex [9]. It is well known that even mathematicians and statisticians sometimes find it challenging to calculate the error or variance in a dataset. Verifying if the presented uncertainty is correct is challenging. Moreover, if the authors make an error while designing their charts, they end up providing wrong information to the viewers and losing their trust. #### 3.2.3 Viewers may be overwhelmed [9] explains why the inclusion of uncertainty in graphs is not widely adopted. Authors believe that uncertainty can be challenging for the viewers to perceive and understand. As a result, viewers may choose to either look at an alternative graph that does not contain any uncertainty representation or overlook the uncertainty in their graph altogether. #### 3.2.4 Uncertainty can add clutter to the visualization Authors can be unsure of how effective communicating uncertainty is. They also worry about adding more information to an already visually complex visualization. For many authors, the goal of a chart is to express a signal [9] that can be useful to their viewers. This signal tends to present a single point or a single source of truth. Uncertainty tends to challenge that notion by obfuscating the signal. Additionally, expressing the intricacy of uncertainty through a visual abstraction is challenging. The dimensionality of the data also plays a vital role in deciding whether uncertainty should be represented or not. An increase in the dimensionality of data makes it harder for the human visual system to perceive it effectively. Sometimes even two-dimensional charts can be overwhelming for the viewer. In such a case, representing uncertainty adds visual overload [20]. ### Visualization Inference Uncertainty is hard to understand and analyze. When faced with perceiving an uncertain visualization, viewers can get confused or derive inaccurate information from it. One easy method viewers tend to use is to ignore the uncertainty in the graph altogether. Another way is to substitute tricky calculations with easy ones or use heuristics to make decisions. 
However, this may not always give a correct observation. The most common approach to show uncertainty is by using box plots and error bars. Though widely used, viewers may find them challenging to analyze [6]. Sometimes visualizing uncertainty as frequency instead of distribution provide a better understanding. Currently, research is being done to create visualizations that help understand uncertainty more intuitively. For example, hypothetical outcome plots (HOPs) represent uncertainty by animating a finite set of individual draws [10]. This approach expects no prior knowledge of the domain from the viewer. However, using HOPs in physical media might be challenging. Bubble treemaps [8] are another approach for visualizing uncertainty. These circular treemaps encode additional information about uncertainty by allocating additional space for visuals. While uncertainty is still underrepresented in visualizations, more researchers are slowly adding it to their designs. One of the significant setbacks in uncertainty visualizations for authors is calculating uncertainty, while for viewers, it is graphical literacy. Efforts can be taken to increase this literacy through different programs gradually. Furthermore, work should be done to understand what visualization type best suits a given uncertainty type. This relationship can also depend on the type of data being represented and the target audience viewing the graph. For example, it is necessary for graphs published in newspapers and reports to be easily understandable by the public. Hence, studies focusing on visualizing uncertainty with no prior knowledge or information can be very insightful. ## 4 Conclusion Uncertainty visualization is one of the most complex research areas in data visualization today. This work provided an overview of uncertainty visualization and the relationship between uncertainty and visualization. I divided the visualization pipeline into four phases and surveyed papers to study how uncertainty interacts with each phase of the process. The work also investigated why the representation of uncertainty is not widely practiced by the data visualization community and the challenges viewers face when inferring from such a graph. Lastly, I discussed a few state-of-the-art methods to design uncertainty visualization and offered a glance into the interesting future research this field has to offer.
2309.09088
Enhancing GAN-Based Vocoders with Contrastive Learning Under Data-limited Condition
Vocoder models have recently achieved substantial progress in generating authentic audio comparable to human quality while significantly reducing memory requirement and inference time. However, these data-hungry generative models require large-scale audio data for learning good representations. In this paper, we apply contrastive learning methods in training the vocoder to improve the perceptual quality of the vocoder without modifying its architecture or adding more data. We design an auxiliary task with mel-spectrogram contrastive learning to enhance the utterance-level quality of the vocoder model under data-limited conditions. We also extend the task to include waveforms to improve the multi-modality comprehension of the model and address the discriminator overfitting problem. We optimize the additional task simultaneously with GAN training objectives. Our results show that the tasks improve model performance substantially in data-limited settings.
Haoming Guo, Seth Z. Zhao, Jiachen Lian, Gopala Anumanchipalli, Gerald Friedland
2023-09-16T20:04:16Z
http://arxiv.org/abs/2309.09088v2
# Enhancing Gan-Based Vocoders with Contrastive Learning Under Data-Limited Condition ###### Abstract Vocoder models have recently achieved substantial progress in generating authentic audio comparable to human quality while significantly reducing memory requirement and inference time. However, these data-hungry generative models require large-scale audio data for learning good representations. In this paper, we apply contrastive learning methods in training the vocoder to improve the perceptual quality of the vocoder without modifying its architecture or adding more data. We design an auxiliary task with mel-spectrogram contrastive learning to enhance the utterance-level quality of the vocoder model under data-limited conditions. We also extend the task to include waveforms to improve the multi-modality comprehension of the model and address the discriminator overfitting problem. We optimize the additional task simultaneously with GAN training objectives. Our result shows that the tasks improve model performance substantially in data-limited settings. Our analysis based on the result indicates that the proposed design successfully alleviates discriminator overfitting and produces audio of higher fidelity. Haoming Guo, Seth Z. Zhao, Jiachen Lian, Gopala Anumanchipalli, Gerald Friedland University of California, Berkeley + Footnote †: This paper is based on Haoming’s thesis [1] at University of California, Berkeley. **Index Terms**: GAN, self-supervised learning, vocoder ## 1 Introduction Generative Adversarial Networks (GANs) [2] have been widely used in vocoders and have achieved the state-of-the-art in the domain [3, 4, 5]. However, training GAN vocoders still meets two challenges, data insufficiency and discriminator overfitting. In the realm of single-speaker speech synthesis, the limited size of available datasets poses a significant challenge. To enhance the performance of vocoders operating under such constraints, we propose the use of unsupervised learning techniques to extract additional self-supervised signals for training. Self-supervised learning (SSL) methods have demonstrated efficacy in a diverse array of speech domains, including representation learning [6, 7, 8, 9, 10], synthesis [11, 12, 13, 14], and multi-modality [15, 16]. Drawing on the exceptional transfer learning capabilities of SSL, we seek to harness this power in the realm of Vocoder modeling, focusing specifically on the application of contrastive learning. Although contrastive learning has been explored in the context of speech recognition [6], we are unaware of any previous efforts to apply this approach to Vocoder modeling. In this work, our aim is to leverage contrastive learning as an auxiliary task to enhance the vocoding performance of GAN generators under data-limited conditions. The second challenge, discriminator overfitting, is also shown to be crucial, especially on small dataset [17, 18, 19], and the convergence of GAN also critically depends on the quality of discriminators [20]. Contrastive learning on the discriminator has been proved to alleviate this problem in image generation [21], and the method, in general, is also shown to increase model's performance and robustness on vision and language tasks [22, 23, 24, 25]. However, in speech synthesis, a naive approach of mel-spectrogram contrastive learning will only involve the generator, which encodes mel-spectrograms, but not the discriminator, which encodes the waveform. 
Therefore, we propose to extend the training to the discriminator by using a multi-modal contrastive task between mel-spectrograms and waveforms. Our contributions can be summarized as the following. 1. We propose a contrastive learning task with masked mel-spectrograms to improve the performance on limited data. 2. We design a novel contrastive learning task of matching mel-spectrogram to waveforms to regularize the discriminator and improve the perceptual quality of the generator. 3. We implement a framework for integrating contrastive learning into the GAN training pipeline. 4. We provide experimental results and in-depth analysis of the methods' effectiveness compared to the baseline. ## 2 Methods In this section, we first introduce the auxiliary contrastive task that we have designed for the GAN vocoder model. Subsequently, we explicate the details of how we modified the task to train both the generator and the discriminator of the vocoder model. Finally, we illustrate our proposed training framework, which synergizes the contrastive task with GAN objectives. It is worth noting that we have utilized the same model architecture as HiFi-GAN [4]. However, it is pertinent to mention that our method can be applied to other GAN frameworks for vocoders as well. ### Mel-spectrogram Contrastive Learning In our GAN model, the generator takes a mel-spectrogram as input and outputs a raw waveform through a stack of convolutional layers. We use a learnable feed-forward layer to project the features of the convolutional layers onto a latent space \(R^{D}\), where elements of similar semantics are close to each other through contrastive learning. For each anchor in a batch of \(N\) samples, we apply masking on randomly selected intervals in time and frequency to create a positive sample, while all other \((N-1)\) input samples and \((N-1)\) masked samples are used as negative samples. Together, the method results in \(1\) positive pair and \(2(N-1)\) negative pairs in the batch. We then adapt the InfoNCE loss [26] used in CLIP [27] for our loss function as follows: \[\mathcal{L}_{cl}=-\frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{\text{exp}(\tau \mathbf{v}_{i}\cdot\mathbf{v}_{k})}{\sum_{j=1;i\neq j}^{2N}\text{exp}(\tau \mathbf{v}_{i}\cdot\mathbf{v}_{j}))}\right) \tag{1}\] where \(\mathbf{v}_{k}\in R^{D}\) is the masked sample from \(\mathbf{v}_{i}\in R^{D}\) and \(\tau\) is a temperature parameter. This method is shown in Fig. 1. ### Mel-spectrogram Waveform Contrastive Learning In addition to training solely the generator, we propose a novel task that involves contrastive spectrogram-waveform matching. This task serves to train both the generator and the discriminators, promoting rich semantic representation and preventing overfitting of the discriminators to the real or fake classification. The method is illustrated in Fig. 2. For a batch of pairs of mel-spectrograms and waveforms, we assign the labels of the true pairs to be positive and those of the other pairs to be negative, resulting in \(N\) positive pairs and \(N(N-1)\) negative pairs in a batch of \(N\) samples. We use the backbone of the generator to encode the mel-spectrogram and the backbone of the discriminator to encode the waveform. Similar to the method in section 2.1, we use two separate feed-forward layers to project each encoded feature to the same latent dimension \(R^{D}\). 
Then, we compute the modified loss function \[\mathcal{L}_{cl}=-\frac{1}{N}\sum_{i=1}^{N}\left(\log\frac{\text{exp}(\tau\mathbf{v}_{i}\cdot\mathbf{w}_{i})}{\sum_{j=1;i\neq j}^{N}\text{exp}(\tau\mathbf{v}_{i}\cdot\mathbf{w}_{j})}\right) \tag{2}\] where \(\mathbf{w}_{i}\in R^{D}\) is the latent embedding of the waveform corresponding to the \(i\)th mel-spectrogram, \(\mathbf{v}_{i}\in R^{D}\) is the latent embedding of the \(i\)th mel-spectrogram, and \(\tau\) is a temperature parameter. HiFi-GAN contains multiple discriminators, so we calculate a contrastive loss between the mel-spectrogram embedding and each of the waveform embeddings and sum them up. For simplicity, we refer to them as one discriminator in this paper unless otherwise mentioned. Figure 1: **Illustration of Mel-spectrogram Contrastive Learning.** The Mel Encoder is the backbone of the generator. This method only trains the generator in a GAN framework. Figure 2: **Illustration of Mel-spectrogram & Waveform Contrastive Learning.** The Mel Encoder is the backbone of the generator, and the Wave Encoder is the backbone of the discriminator. Therefore, this method trains both the generator and discriminator. ### Multi-tasking Framework To integrate contrastive learning with GAN tasks, we adopt a multi-tasking framework that makes auxiliary tasks a joint optimization objective with the original learning goals [28]. As illustrated in Fig. 3, we create additional heads for training the generator and discriminator with auxiliary tasks. The total loss for training the vocoder model thus becomes: \[\mathcal{L}_{G}=\mathcal{L}_{adv}+\lambda_{fm}\mathcal{L}_{fm}+\lambda_{mel}\mathcal{L}_{mel}+\lambda_{cl}\mathcal{L}_{cl} \tag{3}\] \[\mathcal{L}_{D}=\mathcal{L}_{adv}+\mathcal{I}_{disc}\lambda_{cl}\mathcal{L}_{cl} \tag{4}\] where \(\mathcal{L}_{G}\) is the total loss for the generator and \(\mathcal{L}_{D}\) is the total loss for the discriminator. \(\mathcal{L}_{adv}\) is the adversarial loss, \(\mathcal{L}_{fm}\) is the feature matching loss, and \(\mathcal{L}_{mel}\) is the mel-spectrogram reconstruction loss in the original HiFi-GAN training pipeline. \(\mathcal{L}_{cl}\) can be either of the contrastive losses described in Section 2.1 or 2.2, and \(\mathcal{I}_{disc}\) is an indicator of whether the latter is used. Each loss is weighted with a \(\lambda\) coefficient, which can be set as a hyperparameter. We use a \(\lambda_{fm}\) of 2 and a \(\lambda_{mel}\) of 45 from the HiFi-GAN setting [4], and a \(\lambda_{cl}\) of 1. ## 3 Experiments ### Experimental Setting In this section, we describe the details of our experimental settings, including the dataset, model choice, hyperparameters, and evaluation metrics. #### 3.1.1 Dataset In order to have a fair comparison with other vocoder models, we train the model on the LJSpeech dataset [29], which is also used in other vocoder works like HiFi-GAN [4]. LJSpeech is a public single-speaker dataset with 13100 short English audio clips whose durations span from 1 second to 10 seconds. We use the default data split with 12950 training samples and 150 validation samples.
We use the same preprocessing configurations as HiFi-GAN, including 80 bands of mel-spectrograms as input, an FFT size of 1024, a window size of 1024, and a hop size of 256 for conversion from waveform to mel-spectrograms [4]. #### 3.1.2 Implementation details For the experimental comparison on audio quality, we choose the most powerful HiFi-GAN V1 and the most lightweight HiFi-GAN V3 as the baseline methods, and we use the same model architecture as the backbone to apply the contrastive tasks described in Sections 2.1 and 2.2. Under the multi-tasking framework, we train HiFi-GAN along with the contrastive learning methods with a batch size of 16, an AdamW optimizer, and a learning rate of 0.0002. For the following experiments on the full dataset, all models are trained for 400k steps (about 96 hours) on one Nvidia TITAN RTX GPU. The experiments on 20% of the dataset train for 300k steps (about 72 hours) on the same device, and those on 4% of the dataset train for 200k steps. The model inference time on GPU is about 70 ms for V1 models and 32 ms for V3 models. #### 3.1.3 Evaluation metrics To objectively evaluate our models compared to the baseline, we measure the mean absolute error (MAE) and mel-cepstral distortion (MCD) [30] on mel-spectrograms. On both metrics, lower scores indicate closer alignment with the ground truth. We also include a 5-scale mean opinion score (MOS) on audio quality as a subjective evaluation performed on 50 samples excluded from the training set. \begin{table} \begin{tabular}{l|c c|c} \hline \hline Model & MAE & MCD & MOS (CI) \\ \hline Ground Truth & - & - & 4.32 (\(\pm 0.05\)) \\ \hline HiFi-GAN V1 & **0.111** & **4.203** & **4.21** (\(\pm 0.05\)) \\ + Mel CL & 0.114 & 4.289 & 4.18 (\(\pm 0.06\)) \\ + Mel-Wave CL & 0.113 & 4.228 & 4.20 (\(\pm 0.05\)) \\ \hline HiFi-GAN V3 & **0.203** & 7.786 & 4.10 (\(\pm 0.05\)) \\ + Mel CL & 0.204 & 7.766 & **4.13** (\(\pm 0.07\)) \\ + Mel-Wave CL & **0.203** & **7.723** & 4.09 (\(\pm 0.06\)) \\ \hline \hline \end{tabular} \end{table} Table 1: Objective and subjective evaluation results for models with mel-spectrogram contrastive loss (Mel CL) and mel-spectrogram waveform contrastive loss (Mel-Wave CL). Models are trained on the full training set. CI is the 95% confidence interval of the MOS score. Figure 3: **Illustration of our multi-tasking framework.** GAN-based vocoder models [3, 4] follow an adversarial network (**top**) consisting of a generator that generates raw waveforms from mel-spectrograms and a discriminator that aims to distinguish real from generated waveform samples. To incorporate the auxiliary contrastive learning task, we propose a multi-tasking (**bottom**) framework, in which we set the contrastive task as an additional learning objective along with the original GAN optimization objectives. This framework applies to both contrastive learning methods described in Sections 2.1 and 2.2. ### Results We present the results of models trained on the full data with the multi-tasking framework in Table 1. Below, we refer to Mel CL as the mel-spectrogram contrastive learning of Section 2.1, and to Mel-Wave CL as the mel-spectrogram waveform contrastive learning of Section 2.2. For V1 models, the baseline performs slightly better than the proposed methods, by margins of 0.002 on MAE, 0.025 on MCD, and 0.01 on MOS.
For V3 models, on the objective tests, we observe that the model trained with the mel-spectrogram contrastive loss has comparable performance to the baseline, while the one trained with the mel-spectrogram waveform contrastive loss achieves the highest scores on both metrics. The results show that our proposed methods have at least comparable performance to the baseline HiFi-GAN when training on the full dataset. On the subjective tests, the V3 model with Mel CL achieves the highest MOS score, 0.03 above the V3 baseline. The model with Mel-Wave CL has a MOS score similar to the baseline on the full dataset. Overall, when trained on the full dataset, the proposed methods have limited gains on top of the baseline. To investigate how each model performs under data limitation, we train the three models on 20% of the dataset and evaluate them with the same validation set. We present the results in Table 2. With less data, the baseline HiFi-GAN V3 suffers a significant performance degradation across all metrics, including 0.371 on MCD and 0.22 on MOS. Meanwhile, the V3 model trained with Mel CL experiences an increase of 0.194 on MCD and a drop of 0.18 on MOS. The V3 model trained with Mel-Wave CL has an increase of 0.251 on MCD and a drop of only 0.05 on MOS. This suggests that Mel-Wave CL is the most resistant to data insufficiency. The two proposed methods have comparable scores on the objective evaluation, but the model with Mel-Wave CL obtains a significantly higher score on the subjective test, 0.16 higher than the V3 baseline. The findings align with our hypothesized alleviation of discriminator overfitting by Mel-Wave CL, which is a more severe problem on the small training dataset. Both of the proposed methods perform substantially better than the baseline, by 0.07 and 0.16 on MOS, respectively. A similar trend exists in the HiFi-GAN V1 experiments, where Mel-Wave CL achieves the best scores and the smallest performance drop on all metrics. One slightly surprising finding is that the larger model V1 often experiences a smaller performance drop compared to the smaller model V3 when trained on 20% of the data. Typically, a larger model is expected to be more prone to overfitting when trained on less data, which should lead to a larger performance drop. In this specific case, however, HiFi-GAN V1 has a larger generator but the same discriminator as HiFi-GAN V3 [4], which is our suspected reason for the finding. Overall, the results show the benefits of additional supervision signals from contrastive learning in data-limited situations and the superior performance of Mel-Wave CL on a small dataset. ## 4 Conclusion This paper describes our proposed contrastive learning framework to improve GAN vocoders. Our results show the efficacy of using contrastive learning as an auxiliary task that facilitates vocoder training without adding more data or modifying the model architecture. We demonstrate that the proposed framework is particularly beneficial when training on limited data, by extracting additional supervision signals and reducing discriminator overfitting. For future work, we plan to repeat the experiments on different model architectures and datasets to test our method's generalizability. In particular, we want to test its extension to multi-speaker datasets, another domain where data insufficiency is critical. We will also explore other metrics to evaluate the discriminator overfitting problem more holistically.
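As a rough illustration of the contrastive objectives in Eqs. (1)-(2), the PyTorch-style sketch below computes InfoNCE-type losses over a batch of projected embeddings. It is our minimal reading of the equations, not the authors' released implementation; the function names are made up, the temperature is applied multiplicatively as written in the paper, and, as in CLIP, the positive pair is kept in the denominator:

```python
import torch
import torch.nn.functional as F

def mel_contrastive_loss(v, v_masked, tau=1.0):
    """Eq. (1)-style loss: each mel embedding v_i (N, D) is matched to the
    embedding of its masked copy and contrasted with the other 2(N-1) samples."""
    n = v.size(0)
    z = torch.cat([v, v_masked], dim=0)      # (2N, D)
    logits = tau * (z @ z.t())               # temperature-scaled dot products
    logits.fill_diagonal_(float("-inf"))     # drop the j = i self-similarity
    targets = torch.arange(n, device=v.device) + n   # v_i pairs with v_masked_i
    return F.cross_entropy(logits[:n], targets)

def mel_wave_contrastive_loss(v, w, tau=1.0):
    """Eq. (2)-style loss: mel embedding v_i is matched to the waveform
    embedding w_i of its true pair and contrasted with the other waveforms."""
    logits = tau * (v @ w.t())               # (N, N) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return F.cross_entropy(logits, targets)
```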
2307.16404
Nonvolatile Magneto-Thermal Switching in MgB2
Ongoing research explores thermal switching materials to control heat flow. Specifically, there has been interest in magneto-thermal switching (MTS) materials based on superconductors, which only exhibited switching behavior when a magnetic field was applied. However, a recent report highlighted nonvolatile MTS in commercial Sn-Pb solders, attributed to magnetic flux trapping. In this study, we focused on flux trapping in a type-II superconductor MgB2. Magnetization and thermal conductivity measurements under magnetic fields were conducted on polycrystalline MgB2. We confirmed that magnetic flux was indeed trapped in MgB2 even after demagnetization. Additionally, we observed nonvolatile MTS in MgB2 as well as Sn-Pb solders. These results suggest that the nonvolatile MTS may be a widespread characteristic of superconducting materials with flux trapping.
Hiroto Arima, Yoshikazu Mizuguchi
2023-07-31T04:59:19Z
http://arxiv.org/abs/2307.16404v1
# Nonvolatile Magneto-Thermal Switching in MgB\({}_{2}\) ###### Abstract Ongoing research explores thermal switching materials to control heat flow. Specifically, there has been interest in magneto-thermal switching (MTS) materials based on superconductors, which only exhibited switching behavior when a magnetic field was applied. However, a recent report highlighted nonvolatile MTS in commercial Sn-Pb solders, attributed to magnetic flux trapping. In this study, we focused on flux trapping in a type-II superconductor MgB\({}_{2}\). Magnetization and thermal conductivity measurements under magnetic fields were conducted on polycrystalline MgB\({}_{2}\). We confirmed that magnetic flux was indeed trapped in MgB\({}_{2}\) even after demagnetization. Additionally, we observed nonvolatile MTS in MgB\({}_{2}\) as well as Sn-Pb solders. These results suggest that the nonvolatile MTS may be a widespread characteristic of superconducting materials with flux trapping. The recent advancements in electronic device technology have spurred research into thermal switching materials, which enable control of heat flow through external parameters[1; 2]. Recent progress has been made in the development of thermal switching materials, where the control of thermal conductivity (\(\kappa\)) is achieved through the application of electric[3] and magnetic fields[4; 5]. Among these materials, superconductors have received particular attention in magneto-thermal switching (MTS) research [6; 7]. Here, we introduce an index to assess the effectiveness of MTS known as the MTS ratio (MTSR). The MTSR is calculated as the ratio of the change in \(\kappa\) between the presence and absence of a magnetic field. The MTSR is expressed as [\(\kappa(H)\) - \(\kappa(0\) Oe)] / \(\kappa(0\) Oe). It is widely recognized that, in the normal state, heat is carried by charge carriers, whereas in the superconducting state, heat transport by Cooper pairs is negligible. Consequently, the phase transition from the superconducting state to the normal state results in an increase in \(\kappa\). Recent studies reported MTSR of 650 % for Nb[6] and over 1000 % for high purity 5N-Pb[7]. However, previously reported MTS using superconductors had a limitation, \(\kappa(H)\) returned to its initial value \(\kappa(0\) Oe) when the magnetic field was reduced to zero, indicating that MTS was effective only in the presence of a magnetic field. In the most recent discovery reported in arXiv: 2307.05957 (preprint)[8], a nonvolatile MTS, which retains the altered \(\kappa(H)\) even when the magnetic field is completely removed, has been identified. Surprisingly, this nonvolatile MTS material was discovered in commercially available Sn-Pb solders. The nonvolatile MTSR is defined as [\(\kappa\) (0 Oe, demagnetized) - \(\kappa(0\) Oe, initial)]/\(\kappa\) (0 Oe, initial), and it has been determined that the nonvolatile MTSR of flux-core-free Sn45-Pb55 solder was 150 %. The origin of nonvolatile MTS in Sn-Pb solders is attributed to the presence of magnetic flux trapped in the solder even after the applied magnetic field is removed, resulting in a partial loss of superconducting bulkiness at \(H=0\) Oe. While magnetic flux trapping in Sn-Pb solders is relatively rare due to both Sn and Pb being type-I superconductors, the magnetic flux trap after demagnetization is commonly observed in type-II superconductor samples. 
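The two switching ratios defined above are simple to evaluate; the sketch below uses made-up thermal-conductivity values (not measured data) purely to show the arithmetic:

```python
def mtsr(kappa_h, kappa_0):
    """MTSR = [kappa(H) - kappa(0 Oe)] / kappa(0 Oe)."""
    return (kappa_h - kappa_0) / kappa_0

def nonvolatile_mtsr(kappa_0_demag, kappa_0_initial):
    """Nonvolatile MTSR = [kappa(0 Oe, demagnetized) - kappa(0 Oe, initial)] / kappa(0 Oe, initial)."""
    return (kappa_0_demag - kappa_0_initial) / kappa_0_initial

# Illustrative values only (in W m^-1 K^-1):
print(f"MTSR             = {mtsr(2.0, 0.5):.0%}")               # 300%
print(f"nonvolatile MTSR = {nonvolatile_mtsr(1.25, 0.5):.0%}")  # 150%
```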
In this study, our primary focus is on exploring the occurrence of nonvolatile MTS in type-II superconductors, with particular emphasis on MgB\({}_{2}\), which has been studied for its flux trapping properties[9; 10]. MgB\({}_{2}\) was discovered in 2001 and stands out among intermetallic superconductors for having the highest superconducting transition temperature \(T_{\rm SC}\sim 39\) K under ambient pressure [11]. This compound exhibits a unique characteristic as a multi-gap superconductor, with multiple conduction bands and independent superconducting gaps present on the Fermi surface[12; 13]. Shortly after its discovery, it was observed that grain boundaries in MgB\({}_{2}\) could serve as effective pinning centers, contributing to high critical current density (\(J_{\rm c}\)) in superconducting materials[14; 15; 16; 17]. Consequently, extensive research has been conducted to investigate the relationship between magnetic flux trapping at grain boundaries and \(J_{\rm c}\). Until now, the association between magnetic flux trapping and nonvolatile MTS has been reported solely in Sn-Pb solders. To gain a deeper understanding of this phenomenon, it is essential to explore other materials. MgB\({}_{2}\) presents an appealing platform for investigating nonvolatile MTS due to the existing body of research on flux trapping effects at grain boundaries[9]. While previous studies have conducted thermal conductivity measurements under magnetic fields on MgB\({}_{2}\)[18; 19], there has been no specific focus on nonvolatile MTS. In this study, magnetization measurements and thermal conductivity measurements under magnetic fields were conducted on commercial MgB\({}_{2}\). Notably, nonvolatile MTS was also observed in MgB\({}_{2}\). The polycrystalline MgB\({}_{2}\) used in this experiment was a commercially available powder sample (99%, KOJUNDO). Before the measurements, the powder sample underwent a high-pressure sintering process. In this experiment, high-pressure sintering was performed at relatively low temperatures to suppress grain growth. The specific conditions for this high-pressure sintering were a pressure of 3 GPa and a temperature of 400 \({}^{\circ}\)C, sustained for around 30 minutes. The crystal structure was examined through powder X-ray diffraction employing Cu-K\(\alpha\) radiation and the \(\theta\)-2\(\theta\) method (Miniflex-600 RIGAKU). The Rietveld refinement of the XRD data was performed using the RIETAN-FP package[20]. A scanning electron microscope (SEM, TM3030, Hitachi High-Tech) was used for microstructure observation. The thermal conductivity was measured using a Physical Property Measurement System (PPMS, Quantum Design) equipped with a thermal transport option (TTO). The measurement employed a four-probe steady-state method, incorporating a heater, two thermometers, and a base-temperature terminal. For the thermal conductivity measurements of MgB\({}_{2}\), a cylindrical sample with a diameter of 4.61 mm and a height of 4.10 mm was employed. The magnetization measurements were carried out using a superconducting quantum interference device (SQUID) magnetometry technique, employing the Magnetic Property Measurement System (MPMS3, Quantum Design) in VSM (vibrating sample magnetometry) mode. In this experiment, thermal conductivity measurements were conducted on the high-pressure sintered MgB\({}_{2}\) sample within a week. Subsequently, the sample was crushed, and further analyses, including XRD, magnetization measurements, and SEM imaging, were performed. 
All the experiments were carried out using the same sample batch. Figure 1 illustrates the XRD patterns obtained from the high-pressure sintered MgB\({}_{2}\) sample. In the high-pressure sintered sample, MgB\({}_{4}\) and MgO were detected as impurities, alongside the main MgB\({}_{2}\) peaks. The reliability factor, denoted as \(R_{\rm wp}\), was determined to be \(R_{\rm wp}=3.7\) %, and the goodness-of-fit indicator, represented by \(S\), was calculated as \(S=1.8\). The results of Rietveld refinement indicated that the sample composition consisted of approximately 90 % MgB\({}_{2}\), 5 % MgB\({}_{4}\), and 5 % MgO. The as-purchased MgB\({}_{2}\) powder contained similar amounts of MgB\({}_{4}\) and MgO. The discrepancy with the nominal purity of 99% MgB\({}_{2}\) is likely a result of certain compounds not being accounted for in the chemical analysis. Furthermore, the XRD profile exhibited broadening, implying lattice strain induced by the high-pressure sintering process. Figure 2 shows the SEM image of the high-pressure sintered MgB\({}_{2}\). Numerous granular grains were observed in the structure of the high-pressure sintered MgB\({}_{2}\), with the majority of the grain sizes measuring less than approximately 5 \(\mu\)m. Figure 3 (a) illustrates the temperature dependence of the magnetization \(4\pi M\) measured at 10 Oe under both zero-field-cooling (ZFC) and field-cooling (FC) conditions. The magnetization measurement under ZFC demonstrates a large shielding signal below \(T_{\rm SC}\sim 39\) K. The difference between ZFC and FC measurements is a characteristic behavior commonly observed in type-II superconductors. The temperature dependence of \(4\pi M\) exhibited broadening, which has also been reported in previous studies on high-pressure sintered MgB\({}_{2}\)[17]. The exact cause of this broadening is not yet clear, but the inhomogeneity of the crystals likely plays a role, as suggested by the broad profile observed in the XRD measurement. Figure 3 (b) depicts the temperature dependence of \(4\pi M\) measured at 10 Oe after FC at three different fields: 1000 Oe, 10000 Oe, and 70000 Oe. In all cases, \(4\pi M\) exhibited ferromagnetic-like behavior below \(T_{\rm SC}\), similar to findings previously reported for hydrogen-rich superconductors[21] and Sn-Pb solders[8], implying the presence of trapped magnetic flux at grain boundaries of MgB\({}_{2}\). The value of magnetization at 1.8 K increased as the field increased from 1000 Oe to 10000 Oe, but it did not change further with the application of a higher magnetic field. This suggests that the amount of trapped magnetic flux increases with the applied magnetic field, but there is a threshold beyond which the trapped magnetic flux saturates. To discuss this further, we show the \(4\pi M\)-\(H\) curves at 2.5 K and 4.0 K in Figs. 3(c) and 3(e), respectively. These curves display the distinct shape commonly observed in type-II superconductors, which signifies the presence of flux trapping in the material. Figures 3(d) and 3(f) display the internal magnetic flux density (\(B\)), given by \(B=H+4\pi M\), near 0 Oe at 2.5 K and 4.0 K. The results at 2.5 K and 4.0 K showed similarities: immediately after the zero-field cooling, the initial magnetic flux density of MgB\({}_{2}\) was \(B=0\). However, once a magnetic field had been applied to MgB\({}_{2}\), \(B\) did not return to its initial value when the applied field was brought back to \(H\) = 0, due to magnetic flux trapping. The magnetic flux density trapped at \(H\) = 0 Oe was 500 G for both temperatures. 
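As a worked illustration of how the trapped flux in Figs. 3(d) and 3(f) is read off, the sketch below converts a demagnetization branch of a \(4\pi M\)-\(H\) loop into the internal flux density \(B=H+4\pi M\) and extracts its value near \(H=0\); the loop values used here are synthetic placeholders rather than the measured MgB\({}_{2}\) data.

```python
import numpy as np

def internal_flux_density(H_oe, four_pi_M_g):
    """B = H + 4*pi*M; in CGS units, H in Oe and 4*pi*M in G add directly."""
    return np.asarray(H_oe) + np.asarray(four_pi_M_g)

def trapped_flux_at_zero_field(H_oe, four_pi_M_g):
    """Return B at the point of the demagnetization branch closest to H = 0 Oe."""
    B = internal_flux_density(H_oe, four_pi_M_g)
    return B[np.argmin(np.abs(H_oe))]

# Synthetic demagnetization branch: field swept from +10000 Oe back to 0 Oe.
H = np.linspace(10000, 0, 6)                                            # applied field (Oe)
four_pi_M = np.array([-900.0, -750.0, -600.0, -400.0, -100.0, 500.0])   # illustrative 4*pi*M (G)

print(f"trapped flux density near H = 0 Oe: {trapped_flux_at_zero_field(H, four_pi_M):.0f} G")
```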
Figure 4 (a) depicts the temperature dependence of \(\kappa\) in both a zero magnetic field and a magnetic field of 10000 Oe. In the absence of a magnetic field, \(\kappa\) decreased as the temperature decreased. The observed variation in the slope of \(\kappa\) at approximately 10 K was consistent with previous measurements on polycrystalline MgB\({}_{2}\)[22]. Furthermore, \(\kappa\) at 50 K in this experiment was approximately 3.5 W/Km, which aligns with the order of magnitude reported in previous studies, where values ranged from 5 W/Km[23] to 9 W/Km[22]. It is noted that thermal conductivity is a sensitive indicator of grain boundaries, and therefore, the discrepancy with previous studies is attributed to sample dependence. When a magnetic field of 10000 Oe was applied, a similar trend in \(\kappa\) was observed, but the decrease in \(\kappa\) was suppressed. This can be attributed to the suppression of the superconducting state in MgB\({}_{2}\) under the magnetic field. Figures 4(b) and 4(c) illustrate the magnetic field dependence of \(\kappa\) at 2.5 K and 4 K, respectively. When MgB\({}_{2}\) was zero-field-cooled to 2.5 K, the initial \(\kappa\) in the absence of a magnetic field was 6.9 mW/Km. When a magnetic field was applied, \(\kappa\) increased and reached a value of 14.0 mW/Km at 10000 Oe. As the magnetic field gradually decreased from 10000 Oe, \(\kappa\) showed a decrease. However, the value at 0 Oe deviated from the initial value, indicating nonvolatile MTS. Upon further reduction of the magnetic field, a minimum value of \(\kappa\) was observed, followed by an increase in \(\kappa\). Similar trends were observed when the magnetic field was increased from -10000 Oe. As mentioned earlier, the presence of approximately 500 G of trapped magnetic flux in MgB\({}_{2}\) after demagnetization partially suppressed the superconducting state and prevented \(\kappa\) from returning to its initial value. The nonvolatile MTSR observed in MgB\({}_{2}\) at 2.5 K in this experiment was 18 %, which is smaller than that of flux-core-free Sn45-Pb55 solder[8]. Furthermore, nonvolatile MTS was also observed at 4.0 K, although the nonvolatile MTSR decreased compared to that at 2.5 K, reaching 15 %. The primary discovery of this study is the confirmation of nonvolatile MTS arising from the magnetic flux trapped at the grain boundaries of the type-II superconductor MgB\({}_{2}\). This finding diverges from prior research, which predominantly focused on composites such as Sn-Pb solders. Notably, the phenomenon of flux trapping at grain boundaries has been observed not only in MgB\({}_{2}\) but also in other type-II superconductors, including cuprate superconductors and iron-based superconductors [24]. This suggests that the trapping of flux at grain boundaries is a widespread occurrence in various types of type-II superconducting materials. In this study, the maximum value of the nonvolatile MTSR achieved for MgB\({}_{2}\) remained relatively small at 18 % at 2.5 K. To further enhance the nonvolatile MTSR, potential methods include controlling the grain boundary size to increase the trapped magnetic flux and regulating the thermal conductivity in the normal conducting region. However, further systematic investigations are required in this regard. Recent advancements in machine learning have contributed to the elucidation of heat conduction mechanisms in grain boundaries and nanopolycrystals [25]. 
Given that nonvolatile MTS is a relatively new phenomenon, it is crucial not only to investigate the thermal conductivity under magnetic fields in various materials but also to consider theoretical approaches that utilize machine learning to gain a deeper understanding of nonvolatile MTS. The motivation for this study was derived from the discovery of nonvolatile MTS induced by magnetic flux trapping in Sn-Pb solders. Drawing inspiration from this phenomenon, our research focused on investigating the magnetic field dependence of thermal conductivity in the type-II superconductor MgB\({}_{2}\), a material renowned for its ability to trap magnetic flux at grain boundaries. Through our experiments, we successfully observed nonvolatile MTS in MgB\({}_{2}\) and identified magnetic flux trapping as the underlying mechanism. Moving forward, it is imperative to extend this research to encompass other type-II superconductors with effective pinning centers. Such endeavors will contribute to a deeper understanding of nonvolatile MTS at a fundamental level and facilitate improvements in both the nonvolatile MTSR and the operational temperature range, thereby paving the way for potential engineering applications. ## Acknowledgment We thank O. Miura and K. Uchida for their support with the experiments and for fruitful discussions on the results. This work was partly supported by JST-ERATO (JPMJER2201), TMU Research Project for Emergent Future Society, and Tokyo Government-Advanced Research (H31-1).
2307.16410
HiREN: Towards Higher Supervision Quality for Better Scene Text Image Super-Resolution
Scene text image super-resolution (STISR) is an important pre-processing technique for text recognition from low-resolution scene images. Nowadays, various methods have been proposed to extract text-specific information from high-resolution (HR) images to supervise STISR model training. However, due to uncontrollable factors (e.g. shooting equipment, focus, and environment) in manually photographing HR images, the quality of HR images cannot be guaranteed, which unavoidably impacts STISR performance. Observing the quality issue of HR images, in this paper we propose a novel idea to boost STISR by first enhancing the quality of HR images and then using the enhanced HR images as supervision to do STISR. Concretely, we develop a new STISR framework, called High-Resolution ENhancement (HiREN) that consists of two branches and a quality estimation module. The first branch is developed to recover the low-resolution (LR) images, and the other is an HR quality enhancement branch aiming at generating high-quality (HQ) text images based on the HR images to provide more accurate supervision to the LR images. As the degradation from HQ to HR may be diverse, and there is no pixel-level supervision for HQ image generation, we design a kernel-guided enhancement network to handle various degradation, and exploit the feedback from a recognizer and text-level annotations as weak supervision signal to train the HR enhancement branch. Then, a quality estimation module is employed to evaluate the qualities of HQ images, which are used to suppress the erroneous supervision information by weighting the loss of each image. Extensive experiments on TextZoom show that HiREN can work well with most existing STISR methods and significantly boost their performances.
Minyi Zhao, Yi Xu, Bingjia Li, Jie Wang, Jihong Guan, Shuigeng Zhou
2023-07-31T05:32:57Z
http://arxiv.org/abs/2307.16410v1
# HiREN: Towards Higher Supervision Quality for Better Scene Text Image Super-Resolution ###### Abstract Scene text image super-resolution (STISR) is an important pre-processing technique for text recognition from low-resolution scene images. Nowadays, various methods have been proposed to extract text-specific information from high-resolution (HR) images to supervise STISR model training. However, due to uncontrollable factors (_e.g._ shooting equipment, focus, and environment) in manually photographing HR images, the quality of HR images cannot be guaranteed, which unavoidably impacts STISR performance. Observing the quality issue of HR images, in this paper we propose a novel idea to boost STISR by first enhancing the quality of HR images and then using the enhanced HR images as supervision to do STISR. Concretely, we develop a new STISR framework, called High-Resolution ENhancement (HiREN) that consists of two branches and a quality estimation module. The first branch is developed to recover the low-resolution (LR) images, and the other is an _HR quality enhancement_ branch aiming at generating high-quality (HQ) text images based on the HR images to provide more accurate supervision to the LR images. As the degradation from HQ to HR may be diverse, and there is no pixel-level supervision for HQ image generation, we design a kernel-guided enhancement network to handle various degradation, and exploit the feedback from a recognizer and text-level annotations as weak supervision signal to train the HR enhancement branch. Then, a _quality estimation module_ is employed to evaluate the qualities of HQ images, which are used to suppress the erroneous supervision information by weighting the loss of each image. Extensive experiments on TextZoom show that HiREN can work well with most existing STISR methods and significantly boost their performances. Scene text image super-resolution, scene text recognition, super-resolution, resolution enhancement ## I Introduction Scene text recognition [1, 2] (STR), which aims at recognizing texts from scene images, has wide applications in scene text based image understanding (_e.g._ auto-driving [3], TextVQA [4], Doc-VQA [5], and ViteVQA [6]). Despite the fact that STR has made great progress with the rapid development of deep learning in recent years, the performance of text recognition from low-resolution (LR) text images is still unsatisfactory [7]. Therefore, scene text image super-resolution (STISR) [8, 9, 7] is gaining popularity as a pre-processing technique to recover the missing details in LR images for boosting text recognition performance as well as the visual quality of the scene texts. As shown in Fig. 1(a), recent STISR works usually try to directly capture pixel-level (via \(L1\) or \(L2\) loss) or text-specific information from high-resolution (HR) text images to supervise the training of STISR models. For instance, the gradient profile loss [7] calculates the gradient fields of HR images as ground truth for sharpening the boundaries of the super-resolution (SR) images. PCAN [10] is proposed to learn sequence-dependent features and high-frequency information of the HR images to better reconstruct SR text images. STT [8] exploits character-level attention maps from HR images to assist the recovery. [11] and TG [9] extract stroke-level information from HR images through dedicated networks to provide more fine-grained supervision. 
[12, 13, 14] additionally introduce external modules to extract various text-specific clues to facilitate the recovery and use the supervision from HR images to finetune their modules. Although various techniques that extract information from the HR images have been proposed to improve the recognition accuracy, they all assume that the HR images are completely trustworthy, which is actually not true, due to the uncontrollable factors (e.g. shooting equipment, focus, and environment) in manually photographing the HR images. As shown in Fig. 1(c), the HR images may suffer from blurring (the 1st and 2nd cases) and low contrast (the 3rd case), which unavoidably impacts the performance of STISR. In the worst case, these quality issues may cause the failure of recognition on HR images and lead to wrong supervision information. What is worse, the HR quality problem in the real world is by no means negligible, as the recognition accuracy on HR images can be as low as 72.4% (see Tab. II). Considering the fact that improving the photographing of LR/HR images and eliminating environmental impacts are extremely expensive (if not impossible) in the wild, and applying huge models for extracting more accurate information is also time-consuming and costly, in this paper we propose a novel solution to advance STISR by first enhancing the quality of HR images and then using the enhanced HR images as supervision to perform STISR. To this end, we develop a new, general and easy-to-use STISR framework called **H**igh-**R**esolution **EN**hancement (HiREN) to improve STISR by providing more accurate supervision. In particular, as shown in Fig. 1(b), besides the typical LR recovery branch, HiREN additionally introduces an HR enhancement branch that aims at improving the quality of HR images and a quality estimation (QE) module to conduct a quality-aware supervision. Here, the resulting high-quality (HQ) images, instead of the HR images as in existing works, are used to supervise the LR recovery branch. Since the degradation from HQ to HR is unknown and there is no explicit supervision for HR enhancement, existing STISR approaches are not able to solve the task of HR enhancement. To tackle these problems, on the one hand, we introduce a degradation kernel predictor to generate the degradation kernel and then use this kernel as a clue to enhance various degraded HR images. On the other hand, we exploit the feedback of a scene text recognizer and text-level annotations as weak supervision signals to train the HR enhancement branch. What is more, to suppress the erroneous supervision information, a quality estimation (QE) module is proposed to evaluate the quality of the HQ images through the normalized Levenshtein similarity [15] of the recognized text and the ground truth, and this quality estimate is then used to weight the loss of each HQ image. This design offers our method four advantages: * _General_. Our framework can work with most existing STISR approaches in a plug-and-play manner. * _Easy-to-use_. After training the HR enhancement branch, our method can be plugged online into the training of existing techniques easily. * _Efficient_. HiREN does not introduce additional cost during inference. What is more, HiREN can also be deployed offline by caching all the enhanced HR images. This offline deployment does not introduce any additional training cost. * _High-performance_. Our method can significantly boost the performance of existing methods. 
Contributions of this paper are summarized as follows: * We propose a novel approach for STISR. To the best of our knowledge, this is the first work to consider and exploit the quality of HR images in STISR. That is, different from existing approaches that extract various text-specific information, our work pioneers the exploration of the quality issue of HR images. * We develop a general, efficient and easy-to-use **H**igh-**R**esolution **EN**hancement (HiREN) framework to boost STISR by improving the supervision information from the HR images. * We conduct extensive experiments on TextZoom, which show that HiREN is compatible with most existing STISR methods and can significantly lift their performances. The rest of this paper is organized as follows: Section II surveys related works and highlights the differences between our method and the existing ones; Section III presents our method in detail; Section IV introduces the experimental results of our method and performance comparisons with existing methods; Section V further discusses the quality issues of HR images, error cases and limitations of the proposed method; Section VI concludes the paper while pinpointing some issues for future study. ## II Related Work In this section, we briefly review super-resolution techniques and some typical scene text recognizers. According to whether they exploit text-specific information from HR images, recent STISR methods can be roughly divided into two groups: generic super-resolution approaches and scene text image super-resolution approaches. Fig. 1: Overview of existing STISR approaches and our method, and examples illustrating the quality problem of HR images. (a) The framework of existing STISR methods; (b) The HiREN framework; (c) Some examples of low-quality HR images and their enhanced results (HQ) by our method, as well as the recognized results. For each case, the 1st row shows HR and HQ images, the 2nd row presents the normalized HR and HQ images to highlight their visual differences, and the 3rd row gives the recognized characters: red indicates incorrectly recognized, and black means correctly recognized. ### _Generic Image Super-Resolution_ Generic image super-resolution methods [16, 17, 18, 19] usually recover LR images through pixel information from HR images captured by pixel loss functions. In particular, SRCNN [20] is a three-layer convolutional neural network. [21] and SRResNet [22] adopt generative adversarial networks to generate distinguishable images. [23] employs convolutional layers, transposed convolution and sub-pixel convolution layers to extract and upscale features. RCAN [24] and SAN [25] introduce attention mechanisms to boost the recovery. Nowadays, transformer-structured approaches [26, 27, 28] have been proposed to further advance the task of generic image super-resolution. Nevertheless, these approaches ignore text-specific properties of the scene text images, which leads to low recognition performance when applied to STISR. ### _Scene Text Image Super-Resolution_ Recent approaches focus on extracting various text-specific information from the HR images, which is then utilized to supervise model training. Specifically, [29, 30] calculate text-specific losses to boost performance. [31] proposes a multi-task framework that jointly optimizes recognition and super-resolution branches. [7] introduces TSRN and gradient profile loss to capture sequential information of text images and gradient fields of HR images for sharpening the texts. 
PCAN [10] is proposed to learn sequence-dependent and high-frequency information of the reconstruction. STT [8] makes use of character-level information from HR images extracted by a pre-trained transformer recognizer to conduct a text-focused super-resolution. [32] proposes a content perceptual loss to extract multi-scale text recognition features to conduct a content-aware supervision. TPGSR [12], TATT [13], and C3-STISR [14] extract text-specific clues to guide the super-resolution. In particular, TPGSR is the first method that additionally introduces a scene text recognizer to provide text priors. Then, the extracted priors are fed into the super-resolution network to iteratively benefit the recovery. TATT [13] introduces a transformer-based module, which leverages a global attention mechanism, to exert the semantic guidance of the text prior on the text reconstruction process. C3-STISR [14] is proposed to learn triple clues, including a recognition clue from an STR, a linguistic clue from a language model, and a visual clue from a skeleton painter, to enrich the representation of the text-specific clue. TG [9] and [11] exploit stroke-level information from HR images via a stroke-focused module and a skeleton loss for more fine-grained super-resolution. Compared with generic image super-resolution approaches, these methods greatly advance the recognition accuracy through various text-specific information extraction techniques. Nevertheless, they all assume that HR images are completely trustworthy. As a result, their extracted supervision information may be erroneous, which impacts the STISR performance. Since HiREN applies these methods to implement the LR recovery branch, to elaborate the differences among various super-resolution techniques in this paper, we give a summary of these methods in Tab. I on three major aspects: how their super-resolution blocks and loss functions are designed, and whether they use an iterative super-resolution technique to boost performance. ### _Scene Text Recognition_ Scene text recognition (STR) [33, 1, 2, 34, 35] has made great progress in recent years. Specifically, CRNN [36] takes a CNN and an RNN as the encoder and employs a CTC-based [37] decoder to maximize the probabilities of paths that can reach the ground truth. ASTER [38] introduces a spatial transformer network (STN) [39] to rectify irregular text images. MORAN [40] proposes a multi-object rectification network. [41, 42, 43] propose novel attention mechanisms. AutoSTR [44] searches the backbone via neural architecture search (NAS) [45]. More recently, semantic-aware [46, 43], transformer-based [47], linguistics-aware [48, 49], and efficient [50, 51] approaches have been proposed to further boost the performance. Although these methods are able to handle irregular, occluded, and incomplete text images, they still have difficulty in recognizing low-resolution images. For example, as can be seen in Sec. IV-C, CRNN, MORAN, and ASTER only achieve recognition accuracies of 27.3%, 41.1% and 47.2%, respectively, when directly using LR images as input. What is more, finetuning these recognizers is insufficient to accurately recognize texts from LR images, as reported in [7]. Therefore, a pre-processor is required for recovering the details of low-resolution images. ### _Difference between Our Method and Existing STISR Works_ The motivation of HiREN is totally different from that of existing STISR approaches. 
As described above, existing methods focus on extracting text-specific information from HR images to supervise STISR. On the contrary, HiREN first lifts the quality of HR images, then uses the enhanced images to supervise STISR. This allows HiREN to work with most existing STISR approaches and boost their recognition performances in a general, economic and easy-to-use way. ## III Method Here, we first give an overview of our framework HiREN, then briefly introduce the LR recovery branch. Subsequently, we present the HR enhancement branch and the quality estimation module in detail, followed by the usage of HiREN. ### _Overview_ Given a low-resolution (LR) image \(I_{LR}\in\mathbb{R}^{C\times N}\), where \(C\) is the number of channels of the image, \(N=H\times W\) is the collapsed spatial dimension, and \(H\) and \(W\) are the height and width of image \(I_{LR}\), our aim is to produce a super-resolution (SR) image \(I_{SR}\in\mathbb{R}^{C\times(4\times N)}\) with a magnification factor of \(\times 2\). Fig. 2 shows the architecture of our framework HiREN, which is composed of two major branches and a quality estimation module: the _LR recovery branch_ \(f_{LR}\) that takes \(I_{LR}\) as input to generate a super-resolution image \(I_{SR}=f_{LR}(I_{LR})\) and a corresponding loss \(\mathcal{L}_{o}\); the _HR enhancement branch_ \(f_{HR}\) that takes \(I_{HR}\) as input to generate a high-quality (HQ) image \(I_{HQ}=f_{HR}(I_{HR})\), where \(I_{HQ}\in\mathbb{R}^{C\times(4\times N)}\); and a _quality estimation module_ \(f_{QE}\) that takes \(I_{HQ}\) and \(\mathcal{L}_{o}\) as input to compute a quality-aware loss \(\mathcal{L}_{LR}\) to supervise the LR branch: \[\mathcal{L}_{LR}=f_{QE}(I_{HQ},\mathcal{L}_{o}). \tag{1}\] During inference, \(f_{HR}\) and \(f_{QE}\) are removed. Thus, HiREN does not introduce extra inference cost. Fig. 2: The framework of HiREN. Red lines are valid only during training. ### _LR Recovery Branch_ In HiREN, the LR recovery branch can be one of the existing STISR approaches. As shown in Fig. 2, these methods usually work in the following way: 1) Start with a spatial transformer network (STN) [39], since in the TextZoom dataset [7] the HR-LR pairs are manually cropped and matched by humans, which may incur several pixel-level offsets. 2) Several super-resolution blocks are used to learn sequence-dependent information of text images. 3) A pixel shuffle module is employed to reshape the super-resolved image. 4) Various loss functions serve as \(\mathcal{L}_{o}\) to extract text-specific information from the ground truth (\(I_{HR}\) in existing works, \(I_{HQ}\) in HiREN) to provide the supervision. To elaborate the differences among various LR branches tested in this paper, we give a summary of these methods in Tab. I. As the motivation of HiREN is totally different from that of the existing methods, our method can work with most of them and significantly improve their performances. \begin{table} \begin{tabular}{c|c c c} \hline Method & Super-resolution block & Loss function \(\mathcal{L}_{LR}\) & Iterative \\ \hline SRCNN [20] & SRCNN [20] & MSE & \(\times\) \\ SRResNet [22] & SRResNet [22] & MSE & \(\times\) \\ TSRN [7] & SSB [7] & Gradient profile loss [7] & \(\times\) \\ PCAN [10] & PCA [10] & Edge guidance loss [10] & \(\times\) \\ STT [8] & TBSRN [8] & Text-focused loss [8] & \(\times\) \\ TPGSR [12] & SRN [7] & Gradient profile loss [7] & \(\checkmark\) \\ TG [9] & SSB [7] & Stroke-focused loss [9] & \(\times\) \\ \hline \end{tabular} \end{table} TABLE I: Differences between typical STISR methods from three aspects: super-resolution block, loss function, and whether the method is iterative or not.
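To make the data flow of Eq. (1) concrete before each component is detailed, the following PyTorch-style sketch shows one quality-aware training step of the LR recovery branch; the callables lr_branch, hr_branch, and quality_estimation, as well as the simple per-image MSE used for \(\mathcal{L}_{o}\), are placeholders for illustration and not the released implementation.

```python
import torch

def hiren_training_step(lr_branch, hr_branch, quality_estimation, I_LR, I_HR, gt_texts, optimizer):
    """One quality-aware training step of the LR recovery branch, following Eq. (1)."""
    with torch.no_grad():
        I_HQ = hr_branch(I_HR)            # HR enhancement branch: I_HQ = f_HR(I_HR), kept frozen here

    I_SR = lr_branch(I_LR)                # LR recovery branch: I_SR = f_LR(I_LR)

    # Per-image loss L_o computed against the enhanced HQ images (any STISR loss can be plugged in).
    loss_o = ((I_SR - I_HQ) ** 2).flatten(1).mean(dim=1)          # shape (B,)

    # Quality estimation module: L_LR = f_QE(I_HQ, L_o), a quality-weighted scalar loss.
    loss_lr = quality_estimation(I_HQ, gt_texts, loss_o)

    optimizer.zero_grad()
    loss_lr.backward()
    optimizer.step()
    return loss_lr.item()
```

At test time only lr_branch would be kept, consistent with the statement above that \(f_{HR}\) and \(f_{QE}\) are removed during inference.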
### _HR Enhancement Branch_ #### III-C1 Overall introduction. The enhancement of HR images is a challenging task, where the challenges lie in two aspects that will be detailed in the sequel. Formally, the HR image \(I_{HR}\) and the corresponding HQ image \(I_{HQ}\) we are pursuing are connected by a degradation model as follows: \[I_{HR}=k\otimes I_{HQ}+n, \tag{2}\] where \(\otimes\) denotes the convolution operation, \(k\) is the degradation kernel, and \(n\) is additive noise that follows a Gaussian distribution in real-world applications [52, 53]. Different from the degradation from \(I_{HR}\) to \(I_{LR}\), where the kernel is determined by lens zooming, the degradation \(k\) of \(I_{HQ}\) is unfortunately unknown. As shown in Fig. 1(c), such degradation includes, but is not limited to, blurring (the 1st and 2nd cases) and low contrast (the 3rd case). What is more, we also lack pixel-level supervision information for \(I_{HQ}\). These two challenges make existing STISR methods unable to enhance \(I_{HR}\). To cope with the first challenge, here we adopt blind image deblurring techniques [54, 55, 53, 52] to boost the recovery of \(I_{HR}\). Specifically, as shown in Fig. 2, our HR enhancement branch consists of two components: a _kernel predictor_ \(P\) and a _kernel-guided enhancement network_ \(f_{ke}\). The kernel predictor aims at estimating the degradation kernel \(k\) (_i.e.,_ \(k=P(I_{HR})\), where \(k\in\mathbb{R}^{d}\) and \(d\) is the size of the kernel), while the kernel-guided enhancement network takes the predicted kernel and \(I_{HR}\) as input to conduct a kernel-guided enhancement: \(I_{HQ}=f_{ke}(I_{HR},k)\). The predicted kernel is utilized as a clue to strengthen the model's ability to handle various degradations and boost the recovery of HR images. As for the second challenge, we introduce a pre-trained scene text recognizer \(R\) to provide the supervision for generating more recognizable HQ images. After training the HR enhancement branch \(f_{HR}\), HiREN uses the trained \(f_{HR}\) to generate HQ images, which are exploited for training the LR recovery branch. #### III-C2 The kernel predictor. As shown in Fig. 3, to generate a prediction of the degradation kernel, we first utilize convolution layers to obtain a spatial estimation of the kernel. Then, we employ global average pooling [56] to output the global prediction by evaluating the spatial mean value. Thus, we can get the prediction of the kernel of size \(\mathbb{R}^{d}\) in a simple yet effective way. #### III-C3 The kernel-guided enhancement network. As shown in Fig. 3, our kernel-guided enhancement network is designed in the following way: 1) Start with an input convolution to change the channel number from \(C\) to \(C^{\prime}\). 2) Stack \(N\) modified SRB blocks [7]. Each block consists of two convolution layers and one Bi-directional GRU [57] (BGRU) to handle sequential text images. At this step, we first stretch the predicted kernel \(k\) to pixel shape, then concatenate the pixel kernel with the feature map extracted by the convolution layers along the channel dimension. 3) An output convolution is applied to obtain the final enhanced HQ image \(I_{HQ}\). Fig. 3: The structure of the HR enhancement branch, which consists of two components: (a) the kernel predictor \(P\), and (b) the kernel-guided enhancement network \(f_{ke}\).
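A minimal PyTorch sketch of the two components in Fig. 3 follows; the layer widths, kernel sizes, and the simplification of the SRB blocks to plain convolutions (omitting the BGRU) are assumptions made for brevity, not the exact published architecture.

```python
import torch
import torch.nn as nn

class KernelPredictor(nn.Module):
    """Predict a degradation kernel k in R^d from an HR image via convolutions + global average pooling."""
    def __init__(self, in_ch=3, d=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, d, 3, padding=1),
        )
    def forward(self, I_HR):
        spatial_k = self.body(I_HR)           # spatial estimation of the kernel
        return spatial_k.mean(dim=(2, 3))     # global average pooling -> (B, d)

class KernelGuidedEnhancer(nn.Module):
    """Enhance an HR image conditioned on the predicted kernel (SRB blocks simplified to convolutions)."""
    def __init__(self, in_ch=3, feat=32, d=32, n_blocks=5):
        super().__init__()
        self.head = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(feat + d, feat, 3, padding=1), nn.ReLU(inplace=True),
                          nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(n_blocks)
        ])
        self.tail = nn.Conv2d(feat, in_ch, 3, padding=1)
    def forward(self, I_HR, k):
        x = self.head(I_HR)
        k_map = k[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))  # stretch k to pixel shape
        for block in self.blocks:
            x = block(torch.cat([x, k_map], dim=1)) + x                   # kernel-guided residual update
        return self.tail(x)                                               # enhanced HQ image

# Usage sketch: I_HQ = KernelGuidedEnhancer()(I_HR, KernelPredictor()(I_HR))
```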
#### III-C4 Loss functions. Here, we design the loss functions of the HR enhancement branch \(f_{HR}\). As shown in Fig. 2, there are two loss functions in \(f_{HR}\). The first one is the recognition loss \(\mathcal{L}_{rec}\), which is used to make the enhanced image \(I_{HQ}\) easier to recognize than \(I_{HR}\). It is provided by a pre-trained recognizer \(R\) and the text-level annotation of \(I_{HR}\). Suppose the encoded text-level annotation is \(p_{GT}\in\mathbb{R}^{L\times|\mathcal{A}|}\), where \(L\) is the maximum prediction length of recognizer \(R\), and \(|\mathcal{A}|\) denotes the length of the alphabet \(\mathcal{A}\). Then, the recognition loss can be evaluated by \[\mathcal{L}_{rec}=-\sum_{j=0}^{L}p_{GT}^{j}\log(R(I_{HQ})^{j}), \tag{3}\] which is the cross entropy of \(p_{GT}\) and \(R(I_{HQ})\). Besides the recognition loss, it is essential to keep the style of the enhanced images, which has also been pointed out in a recent work [8]. Though HR images are not trustworthy, pixel information from HR images can help the model enhance the input images rather than totally regenerate them, which would be a much more challenging and uncontrollable task. In HiREN, we use the mean squared error (MSE) as a pixel loss to keep the style unchanged. Formally, we have \[\mathcal{L}_{sty}=\|I_{HQ}-I_{HR}\|_{2}. \tag{4}\] With the recognition loss Eq. (3) and the style loss Eq. (4), the whole loss function of the HR enhancement branch can be written as follows: \[\mathcal{L}_{HR}=\alpha\mathcal{L}_{rec}+\mathcal{L}_{sty}, \tag{5}\] where \(\alpha\) is a hyper-parameter to trade off the two losses. ### _Quality Estimation Module_ Though we can improve the quality of the supervision information with the help of the HR enhancement branch, we cannot guarantee its correctness. Therefore, to suppress wrong supervision information, we design a quality estimation module \(f_{QE}\) to evaluate the qualities of HQ images and weight the losses of HQ images according to their qualities. Let the original loss of the LR branch be \(\mathcal{L}_{o}\in\mathbb{R}^{B}\), where \(B\) denotes the batch size. We adopt the Levenshtein similarity [15] between the \(i\)-th HQ image's recognition result \(pred_{i}\), produced by a recognizer \(R\), and the corresponding ground truth \(gt_{i}\) to measure its quality, and then utilize the quality values of all HQ images to compute the final loss: \[\mathcal{L}_{LR}=\mathcal{L}_{o}[NS(pred_{1},gt_{1}),...,NS(pred_{B},gt_{B})]^{\top}/B, \tag{6}\] where \(NS(\cdot,\cdot)\) denotes the normalized Levenshtein similarity, which has the following two advantages: 1) its value falls between 0 and 1; 2) it has a smooth response and thus can gracefully capture character-level errors [58]. These advantages make it suitable to weight the losses of HQ images.
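The sketch below spells out the HR-branch training losses of Eqs. (3)-(5) and the quality-weighted loss of Eq. (6); the recognizer output format and the hand-rolled normalized Levenshtein similarity are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def levenshtein_similarity(pred: str, gt: str) -> float:
    """Normalized Levenshtein similarity NS: 1 - edit_distance / max(len), in [0, 1]."""
    m, n = len(pred), len(gt)
    if max(m, n) == 0:
        return 1.0
    dist = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, n + 1):
            cur = min(dist[j] + 1, dist[j - 1] + 1, prev + (pred[i - 1] != gt[j - 1]))
            prev, dist[j] = dist[j], cur
    return 1.0 - dist[n] / max(m, n)

def hr_branch_loss(I_HQ, I_HR, char_logits, p_gt, alpha=0.1):
    """L_HR = alpha * L_rec + L_sty (Eqs. (3)-(5)); char_logits and p_gt have shape (B, L, |A|)."""
    l_rec = -(p_gt * F.log_softmax(char_logits, dim=-1)).sum(dim=(1, 2)).mean()  # cross entropy, Eq. (3)
    l_sty = F.mse_loss(I_HQ, I_HR)                                               # style (MSE) loss, Eq. (4)
    return alpha * l_rec + l_sty

def quality_weighted_loss(loss_o, pred_texts, gt_texts):
    """L_LR of Eq. (6): weight each image's loss by the quality of its HQ supervision, then average."""
    weights = torch.tensor([levenshtein_similarity(p, g) for p, g in zip(pred_texts, gt_texts)],
                           device=loss_o.device, dtype=loss_o.dtype)
    return (loss_o * weights).mean()
```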
### _The Usage of HiREN_ In this section, we introduce the usage of HiREN. As mentioned above, there are two ways to deploy it. One way is called "online", which can be easily implemented by plugging the HR enhancement branch into the training procedure of the LR recovery branch. The online installation algorithm of HiREN is given in Alg. 1. As shown in Alg. 1, the first thing we should do is to develop the HR enhancement branch (_i.e.,_ L4\(\sim\)L10). Specifically, given a STISR dataset \(\mathcal{D}\), we first sample HR images and their corresponding text-level annotations from \(\mathcal{D}\) (L5), then generate the enhanced images \(I_{HQ}\) (L6). Finally, the recognition loss and style loss described in Sec. III-C4 are computed to optimize \(f_{HR}\). After that, we plug the developed HR enhancement branch into the training procedure of the LR recovery branch (L11\(\sim\)L16). In particular, after sampling LR and HR images from the dataset \(\mathcal{D}\) (L12), we use the HR enhancement branch to generate the HQ image \(I_{HQ}\) (L13). Finally, the HQ image, rather than the HR image used in typical works, and the SR image are utilized to compute the text-specific loss \(\mathcal{L}_{l}\) to supervise the LR recovery branch (L11\(\sim\)L12). The other way is called "offline", which can be implemented by caching all the enhanced HQ images. As can be checked in Alg. 2, after developing the HR enhancement branch \(f_{HR}\), we sample all the LR-HR image pairs in the old dataset \(\mathcal{D}\). Then, the corresponding HQ images are generated and added to the new dataset \(\mathcal{\hat{D}}\) (L6). When training the LR recovery branch, all we need to do is to sample LR-HQ image pairs to compute the loss \(\mathcal{L}_{o}\) for the optimization of the model. Such an installation does not introduce any additional training cost to the LR recovery branch. It is worth mentioning that the HR enhancement branch is removed during inference. That is, HiREN does not introduce any additional inference cost.
```
1:  Input: Training dataset \(\mathcal{D}\) and the developed HR enhancement branch \(f_{HR}\)
2:  Initialize \(f_{LR}\)
3:  \(\mathcal{\hat{D}}=\emptyset\)
4:  for \(I_{LR},I_{HR}\sim\mathcal{D}\) do
5:      \(I_{HQ}=f_{HR}(I_{HR})\)
6:      Add \((I_{HQ},I_{LR})\) to \(\mathcal{\hat{D}}\)
7:  while \(f_{LR}\) is not converged do
8:      \(I_{HQ},I_{LR}\sim\mathcal{\hat{D}}\)
9:      \(I_{SR}=f_{LR}(I_{LR})\)
10:     Compute \(\mathcal{L}_{o}\) according to \(I_{SR}\) and \(I_{HQ}\)
11:     Optimize \(f_{LR}\) with respect to \(\mathcal{L}_{o}\)
12: return \(f_{LR}\)
```
**Algorithm 2** The offline usage of HiREN. ## IV Performance Evaluation In this section, we first introduce the dataset and metrics used in the experiments and the implementation details. Then, we evaluate HiREN and compare it with several state-of-the-art techniques to show its effectiveness and superiority. Finally, we conduct extensive ablation studies to validate the design of our method. ### _Dataset and Metrics_ Two groups of datasets are evaluated in this paper: the low-resolution scene text dataset TextZoom and regular scene text recognition datasets. #### IV-A1 Low-resolution scene text dataset The **TextZoom**[7] dataset consists of 21,740 LR-HR text image pairs collected by lens zooming of the camera in real-world scenarios. The training set has 17,367 pairs, while the test set is divided into three settings based on the camera focal length: easy (1,619 samples), medium (1,411 samples), and hard (1,343 samples). #### IV-A2 Regular STR datasets These datasets are used to check the generalization power of our model trained on TextZoom when adapted to other datasets. In particular, three regular STR datasets are evaluated in our paper to further check the advantage of HiREN: IC15-352 [8], SVT [59], and SVTP [60]. In what follows, we give brief introductions to these datasets. The **IC15-352** dataset was first introduced in [8]. This dataset consists of 352 low-resolution images collected from the IC15 [61] dataset. Street View Text (**SVT**) [59] is collected from Google Street View. The test set contains 647 images. Many images in SVT severely suffer from noise, blur, and low resolution. 
SVT-Perspective (**SVTP**) [60] is proposed for evaluating the performance of reading perspective texts. Images in SVTP are picked from the side-view images in Google Street View. Many of them are heavily distorted by the non-frontal view angle. This dataset contains 639 images for evaluation. The major metric used in this paper is word-level recognition accuracy, which evaluates the recognition performance of STISR methods. Following the settings of previous works [9], we remove punctuation and convert uppercase letters to lowercase letters for calculating recognition accuracy. Besides, the number of _Floating-point **O**perations_ (FLOPs) is used to evaluate the computational cost of various methods. Following [9, 32], we only report _Peak Signal-to-Noise Ratio_ (PSNR) and _Structural Similarity Index Measure_ (SSIM) [62] as auxiliary metrics to evaluate the fidelity performance because of the quality issue of the HR images. ### _Implementation Details_ All experiments are conducted on 2 NVIDIA Tesla V100 GPUs with 32GB memory. The PyTorch version is 1.8. The HR enhancement branch is trained using the Adam [63] optimizer with a learning rate of 0.0001. The batch size \(B\) is set to 48. The LR recovery branch is trained with the same optimizer and batch size but a higher learning rate of 0.001, which is suggested in [12]. The recognizer \(R\) used in our method is proposed in [8]. The hyper-parameters in HiREN are set as follows: \(\alpha\) is set to 0.1, which is determined through grid search. The number of SRB blocks is set to 5 (_i.e.,_ \(N=5\)) and \(C^{\prime}\) is set to 32, which is the same as in [7]. The size of the kernel \(k\) is set to 32 (_i.e.,_ \(d=32\)), which is similar to that suggested in [52]. Our training and evaluation are based on the following protocol: save the model with the best average accuracy during training with CRNN as the recognizer, and use this model to evaluate the other recognizers (MORAN, ASTER) and the three settings (easy, medium, hard). ### _Performance Improvement on SOTA Approaches_ #### IV-C1 Recognition performance improvement Here, we evaluate our method on **TextZoom**. Since HiREN is a framework that can work with most existing methods, we plug HiREN into the training of several typical super-resolution methods to check the universality and effectiveness of HiREN, including one generic method SRCNN [20], two recently proposed STISR methods TSRN [7] and TG [9], and one iterative, clue-guided STISR method TPGSR [12]. To show that HiREN can support various recognizers, we follow previous works [12, 8, 9] and evaluate the recognition accuracy on three recognizers: CRNN [36], MORAN [40] and ASTER [38]. We re-implement these methods to unify hardware, software, and evaluation protocols for fair comparison. Generally, our results are higher than those in the original papers. For example, with CRNN the averaged accuracy of TG is boosted from 48.9% to 49.6%. All the results are presented in Tab. II. We first check the universality of HiREN. As can be seen in Tab. II, HiREN significantly boosts the recognition performance in almost all the cases, except for one case on TPGSR, which means that HiREN can work well with various existing techniques. As for the performance improvement of HiREN, take a non-iterative method as an example: the state-of-the-art TG [9] achieves 49.6%, 57.6% and 61.2% averaged accuracy respectively with the three recognizers (see the 9th row). 
After equipping it with our method HiREN, the accuracy is lifted to 51.1%, 58.6% and 61.7% (increasing by 1.5%, 1.0%, and 0.5%) respectively (see the 10th row). This demonstrates the effectiveness of our method. Results on more datasets and recognizers are given in the supplementary materials to demonstrate its universality. It is worth mentioning that our HR enhancement branch can also be applied to weakly supervising the enhancement of LR and HR images to lift their recognition accuracies, as shown in the 3rd and 5th rows of Tab. II. This further supports the universality of our technique. The results above show the promising application potential of our method: it not only works with STISR methods, but also pioneers weakly supervised enhancement of LR and HR text images. Furthermore, to better demonstrate the universality of HiREN, we conduct further experiments on more STR datasets and on recently proposed recognizers. We first evaluate our method on three STR datasets, including IC15-352, SVT, and SVTP. We use the STISR models (TSRN, TG, TPGSR, and our technique applied to them) developed on the TextZoom dataset to evaluate these datasets. The experimental results on IC15-352, SVT, and SVTP are given in Tab. III. As shown in Tab. III, HiREN also works well on them and achieves improved performance in almost all the cases. In particular, the performance of TPGSR on the three datasets is lifted from 66.2%, 77.4%, 62.8% to 66.8%, 78.7%, and 63.6%, respectively, which demonstrates the advantage of HiREN. Apart from that, we also give the experimental results on more recently proposed recognizers, including SEED [46] and ABINet [48]. The experimental results are given in Tab. IV. As can be checked in Tab. IV, these recent recognizers still have difficulty in recognizing low-resolution text images. For example, SEED and ABINet can only correctly read 45.8% and 61.0% of LR images, which is far below their performance on HR images (_i.e._, 84.8% and 89.8%). Our method HiREN can also achieve boosted performance on these recognizers in almost all the cases. #### IV-C2 Fidelity improvement We also report the results of fidelity improvement (PSNR and SSIM) on major existing methods in Tab. V. Notice that these fidelity metrics have the following limitations. On the one hand, PSNR and SSIM globally measure the similarity between the SR image and the ground truth image, including both characters and background. With the goal of lifting the recognition ability and readability of the scene text images, STISR should put more emphasis on recovering characters rather than the background [9, 32]. On the other hand, as pointed out in this paper, HR images suffer from various quality issues. 
Ergo, it is inappropriate to measure the pixel similarity against erroneous HR images whose pixels are not trustworthy. Therefore, we only present PSNR and SSIM as auxiliary metrics to roughly draw some conclusions. Notice that existing methods utilize SR-HR image pairs to calculate PSNR and SSIM. However, as mentioned above, the HR images suffer from quality issues. \begin{table} \begin{tabular}{c||c c c c|c c c c|c c c c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{CRNN [36]} & \multicolumn{4}{c|}{MORAN [40]} & \multicolumn{4}{c}{ASTER [38]} \\ \cline{2-13} & Easy & Medium & Hard & Average & Easy & Medium & Hard & Average & Easy & Medium & Hard & Average \\ \hline \hline LR & 37.5\% & 21.4\% & 21.1\% & 27.3\% & 56.2\% & 35.9\% & 28.2\% & 41.1\% & 64.0\% & 42.0\% & 31.7\% & 47.2\% \\ +HiREN & 37.7\% & **27.9\%** & **23.5\%** & **30.2\%** & **57.9\%** & **38.2\%** & **28.7\%** & **42.6\%** & **66.4\%** & **43.4\%** & **32.3\%** & **48.5\%** \\ \hline HR & 76.4\% & 75.1\% & 64.6\% & 72.4\% & **89.0\%** & 83.1\% & 71.1\% & 81.6\% & 93.4\% & 87.0\% & 75.7\% & 85.9\% \\ +HiREN & **77.5\%** & **75.4\%** & **65.0\%** & **72.9\%** & 88.8\% & **83.7\%** & **71.9\%** & **82.0\%** & **93.5\%** & **87.5\%** & **76.2\%** & **86.3\%** \\ \hline \hline SRCNN & 39.8\% & 23.4\% & 21.7\% & 29.0\% & 57.7\% & 36.1\% & 28.5\% & 41.8\% & 65.5\% & 41.9\% & 31.7\% & 47.5\% \\ +HiREN & 41.6\% & **24.0\%** & **23.7\%** & **30.4\%** & **61.1\%** & **38.6\%** & **29.3\%** & **44.0\%** & **67.5\%** & **44.7\%** & **32.8\%** & **49.5\%** \\ \hline TSRN & 52.8\% & 39.8\% & 31.6\% & 42.1\% & 64.5\% & 49.3\% & 36.7\% & 51.1\% & 69.7\% & 54.8\% & 41.3\% & 56.2\% \\ +HiREN & **56.5\%** & **44.1\%** & **32.2\%** & **45.0\%** & **68.5\%** & **52.5\%** & **38.6\%** & **54.2\%** & **73.5\%** & **56.3\%** & **39.2\%** & **57.4\%** \\ \hline TG & 60.5\% & 49.0\% & 37.1\% & 49.6\% & 72.0\% & 57.6\% & 40.0\% & 57.6\% & 76.0\% & 61.4\% & 42.9\% & 61.2\% \\ +HiREN & **62.4\%** & **51.2\%** & **37.5\%** & **51.1\%** & **73.4\%** & **58.4\%** & **41.0\%** & **58.6\%** & **77.5\%** & **61.5\%** & **43.0\%** & 61.7\% \\ \hline TPGSR & 63.1\% & 52.0\% & 38.6\% & 51.8\% & **74.9\%** & 60.5\% & 44.1\% & **60.5\%** & **78.9\%** & 62.7\% & 44.5\% & 62.8\% \\ +HiREN & **63.5\%** & **52.7\%** & **38.8\%** & **52.4\%** & 74.7\% & **60.9\%** & **44.1\%** & **60.5\%** & 78.3\% & **63.5\%** & **45.6\%** & **63.5\%** \\ \hline \end{tabular} \end{table} TABLE II: Performance (recognition accuracy) improvement on TextZoom. \begin{table} \begin{tabular}{c|c c} \hline Method & SEED [46] & ABINet [48] \\ \hline LR & 45.8\% & 61.0\% \\ HR & 84.8\% & 89.8\% \\ \hline TSRN & 56.3\% & **64.0\%** \\ +HiREN & **56.5\%** & 63.8\% \\ \hline TG & 60.7\% & **66.0\%** \\ +HiREN & **60.9\%** & 65.9\% \\ \hline TPGSR & 61.7\% & 67.5\% \\ +HiREN & **62.2\%** & **68.1\%** \\ \hline \end{tabular} \end{table} TABLE IV: Performance of recent recognizers on TextZoom. \begin{table} \begin{tabular}{c||c c c} \hline Method & IC15-352 & SVT & SVTP \\ \hline LR & 49.4\% & 74.8\% & 60.8\% \\ \hline TSRN & 48.9\% & 72.6\% & **61.4\%** \\ +HiREN & **52.3\%** & **74.8\%** & 60.3\% \\ \hline TG & 59.1\% & 74.2\% & 60.2\% \\ +HiREN & **61.7\%** & **76.5\%** & **68.5\%** \\ \hline TPGSR & 66.2\% & 77.4\% & 62.8\% \\ +HiREN & **66.8\%** & **78.7\%** & **63.6\%** \\ \hline \end{tabular} \end{table} TABLE III: Performance comparison on three STR datasets with CRNN as recognizer. 
Hence, we additionally provide the fidelity results of calculating PSNR and SSIM between SR and HQ images. The experimental results are given in Tab. V. As can be seen in Tab. V, 1) A higher PSNR does not mean a higher recognition accuracy. For example, the PSNR of TG in SR-HR is inferior to that of TSRN (_i.e.,_ 21.47 vs. 21.84), but TG performs better on recognition accuracy (_i.e.,_ 49.6% vs. 42.1%). The reason is that TG is a stroke-focused technique, concentrating on recovering fine-grained stroke details rather than whole-image quality, including the background, which matters little for recognition. This is consistent with the results in [9]. 2) Compared with the original models, after applying HiREN the SR-HQ fidelity performance of the new models is boosted in almost all cases. 3) HiREN obtains lower PSNR and SSIM for SR-HR images but improved recognition performance, which again supports the quality issue of HR images. #### IV-C3 Visualization Here, we visualize several examples in Fig. 4 to better demonstrate the performance of our technique. We can see that HiREN can help the existing methods to recover the blurry pixels better (see the 2nd \(\sim\) 6th cases). In particular, a better "ee" in the 2nd and 3rd cases, 'm' in the 4th case, 'f' in the 5th case, and 'e' in the 6th case are obtained by our technique. Besides, in some extremely tough cases where even with the HR images the recognition is hard, HiREN can still achieve better recovery (see the 7th case). These results show the power of HiREN. Fig. 4: Examples of generated images. Here, GT indicates ground truth. We use CRNN as the recognizer. Red/black characters indicate incorrectly/correctly recognized. #### IV-C4 Training and inference cost We have discussed the high performance of our technique above. In this section, we provide the results of training and inference costs to show the efficiency of HiREN. Specifically, we take TG and TPGSR as baselines, add HiREN to them, and count their FLOPs during training and inference. The experimental results are presented in Tab. VI. \begin{table} \begin{tabular}{c||c|c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Metrics} \\ \cline{2-5} & \multicolumn{2}{c|}{SR-HR} & \multicolumn{2}{c|}{SR-HQ} & \multicolumn{2}{c}{Avg} \\ \cline{2-5} & PSNR & SSIM(\(\times 10^{-2}\)) & PSNR & SSIM(\(\times 10^{-2}\)) & Acc \\ \hline \hline LR & 20.35 & 69.61 & 20.73 & 68.76 & 27.3\% \\ \hline TSRN & 21.84 & 76.34 & 21.08 & 74.76 & 42.1\% \\ \hline \(\star\)HiREN & **22.01** & **76.60** & **21.46** & **76.23** & **45.0\%** \\ \hline TG & **21.47** & **73.57** & **20.89** & 72.59 & 49.6\% \\ \(\star\)HiREN & 21.12 & 73.43 & 20.84 & **73.78** & **51.1\%** \\ \hline TPGSR & **22.05** & **76.71** & 21.05 & **76.77** & 51.8\% \\ \(\star\)HiREN & 21.69 & 75.97 & **21.15** & 76.44 & **52.4\%** \\ \hline \end{tabular} \end{table} TABLE V: Fidelity and recognition results on major existing methods. The results are obtained by averaging three settings (easy, medium and hard). \begin{table} \begin{tabular}{c|c c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Metrics} \\ \cline{2-3} & Training cost & Inference cost \\ \hline TG & 19.60 & 0.91 \\ +HiREN(Online) & 20.59 & 0.91 \\ +HiREN(Offline) & 19.60 & 0.91 \\ \hline TPGSR & 7.20 & 7.20 \\ +HiREN(Online) & 8.19 & 7.20 \\ +HiREN(Offline) & 7.20 & 7.20 \\ \hline \end{tabular} \end{table} TABLE VI: The training and inference costs of our method. The cost is measured in FLOPs (G). 
In terms of training cost, we can see that the offline deployment of HiREN does not incur any additional cost. As for the online version, we can see that the additional computational cost caused by HiREN is negligible (_e.g.,_ from 19.60G to 20.59G, only 0.99G). What is more, neither of the two variants introduces any additional inference cost. In conclusion, the offline deployment introduces no extra training or inference cost while still significantly boosting performance. These results validate the efficiency of our method. ### _Ablation Study_ We conduct extensive ablation studies to validate the design of our method. Since our method is designed to enhance HR images during training, the metric used in this section is the recognition accuracy measured by the average accuracy of CRNN on the training set, denoted as \(Acc_{train}\). #### IV-D1 Design of the HR enhancement branch Here, we check the design of the HR enhancement branch. As mentioned above, two techniques are developed to promote the enhancement of HR images: the kernel-guided enhancement network \(f_{ke}\) and the loss \(\mathcal{L}_{HR}\). We conduct experiments to check their effects. The experimental results are presented in Tab. VII. Visualization of the effect of the HR enhancement branch is given in the supplementary materials. _The effect of the HR enhancement branch._ Comparing the results in the 1st and 7th rows of Tab. VII, we can see that the HR enhancement branch lifts the accuracy from 66.9% to 74.1%, which proves the effect of the branch as a whole. _The effect of the kernel-guided enhancement network._ To check the power of the kernel-guided enhancement network, we design a variant that removes the kernel predictor. Comparing the results of the 2nd and 7th rows in Tab. VII, we can see that the variant without the kernel predictor is inferior to that with the kernel predictor (72.7% vs. 74.1%). This demonstrates the effectiveness of the proposed kernel-guided enhancement network. _The design of the loss function._ Here, we check the design of the loss function used in the HR enhancement branch. We first remove the recognition loss \(\mathcal{L}_{rec}\) and the style loss \(\mathcal{L}_{sty}\) separately. As can be seen in the 3rd, 4th, and 7th rows in Tab. VII, compared with the combined one, the performance of using only one single loss is degraded. Next, we check the selection of the style loss. Specifically, we consider three candidates (MSE, Charbonnier and L1) for the style loss function. As can be seen in the 5th, 6th, and 7th rows of Tab. VII, the MSE loss outperforms the Charbonnier loss [64] and the L1 loss. The reason lies in that MSE penalizes large errors and is more tolerant to small errors, which is more suitable for HiREN to enhance the blurry or missing character details and keep the style unchanged [65]. Ergo, MSE is selected as the style loss in HiREN. #### IV-D2 Hyper-parameter study Here, we provide the grid search results of the hyper-parameter \(\alpha\) introduced in HiREN for balancing the two losses. The results are presented in Tab. VIII. As can be seen in Tab. VIII, the best performance is achieved when \(\alpha\)=0.1 and 0.05. #### IV-D3 The effect of the quality estimation module Here, we compare the performance of different models with and without the quality estimation module. As can be seen in Tab. IX, without \(f_{QE}\), all methods are degraded, which demonstrates the effect of the quality estimation module. 
## V Discussion In this section, we discuss some issues to better demonstrate the advantages of HiREN and point out some limitations of the proposed method. ### _Which kind of quality issues do HR images have?_ We conduct a visualization study to demonstrate the quality issues of HR images. As can be checked in Fig. 5, HR images are suffering from including but not limited to low-contrast (1st, 2nd and 6th cases), blurry (3rd and 4th cases) and motion blur (5th case). These unknown degradations obviously threaten the recognition of HR images and subsequently provide erroneous supervision to the recovery of the LR images. ### _How does HiREN lift the quality of supervision information?_ To cope with various quality problems of HR images, HiREN generates HQ images through different strategies. In particular, HiREN makes the texts more prominent to solve low-contrast (e.g. the 1st and 2nd cases in Fig. 5). With respect to the blurry issue, HiREN makes the incorrectly recognized texts more distinguishable (e.g. "e" in the 3rd case and "ri" in the 4th case in Fig. 5). HiREN also tries to reduce the motion blur in the 5th case of Fig. 5. Although in some tough cases, HiREN fails to generate a correct HQ image (e.g. the 6th case in Fig. 5), our quality estimation module weights its loss to a small value to suppress the erroneous supervision information. \begin{table} \begin{tabular}{c|c c c c c} \hline \hline Method & SRCNN & TSRN & TG & TFGSR \\ \hline without \(f_{QE}\) & 30.2\% & 44.2\% & 51.0 & 51.9\% \\ with \(f_{QE}\) & **30.4**\% & **45.0**\% & **51.1** & **52.4**\% \\ \hline \hline \end{tabular} \end{table} TABLE IX: Ablation study on the quality estimation module. The metric is the recognition accuracy of CRNN on the test set of TextZoom. \begin{table} \begin{tabular}{c|c c c|c} \hline \hline \multirow{2}{*}{ID} & \multirow{2}{*}{Kernel-guided} & \multicolumn{3}{c|}{Loss functions} & \multirow{2}{*}{\(Acc_{train}\)} \\ \cline{2-2} \cline{4-5} & & \(\mathcal{L}_{rec}\) & & \(\mathcal{L}_{sty}\) \\ \hline \hline 1 & ✗ & ✗ & ✗ & 66.9 \\ \hline 2 & ✗ & ✓ & MSE & 72.7 \\ 3 & ✓ & ✓ & ✗ & 66.1 \\ 4 & ✓ & ✗ & MSE & 67.4 \\ 5 & ✓ & ✓ & Charb & 67.5 \\ 6 & ✓ & ✓ & L1 & 67.3 \\ 7 & ✓ & ✓ & MSE & 74.1 \\ \hline \hline \end{tabular} \end{table} TABLE VII: The ablation studies of the HR enhancement branch. Here, ✗ means the corresponding module is not applied, and Charbonnier Loss [64]. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline \multirow{2}{*}{Metric} & \multicolumn{6}{c}{\(\alpha\)} \\ \cline{2-7} & 0.5 & 0.2 & 0.1 & 0.05 & 0.025 & 0.01 & 0.005 \\ \hline \(Acc_{train}\) & 73.6 & 73.4 & **74.1** & **74.1** & 72.3 & 72.2 & 71.2 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: The determination of \(\alpha\). The metric is \(Acc_{train}\). ### _Error Analysis_ In this section, we perform an error analysis of HiREN to provide possible research directions for further works. Concretely, we provide some error cases in Fig. 6 to illustrate the limitations of recent works and HiREN. As can be seen in the 1st\(\sim\)2nd cases, recent methods usually rely on a vocabulary [66], which makes the models guess the blurry pixels via the corpus that can be learned from the training dataset. This degrades the models' ability to recover numbers and punctuation. As a result, although HiREN recovers more characters than the original TPGSR, the word-level recovery still fails. 
Besides, as shown in the 3rd case, in some tough cases where the LR and HR images are extremely difficult to read, TPGSR and HiREN also fail to recover the text effectively. This indicates the challenge of STISR. ### _Limitations of HiREN_ On the one hand, HiREN may introduce some noise to the HR images and worsen their quality. However, such noise is very minor compared to the advantage brought by HiREN. Specifically, we find that 9,565 erroneously recognized images in the TextZoom dataset are successfully enhanced by HiREN, which leads to correct recognition results, while only 128 images are deteriorated from correct to wrong. On the other hand, the training of the HR enhancement branch requires the feedback of a scene text recognizer and text-level annotations. This indicates that HiREN still requires some weak supervision. ## VI Conclusion In this paper, we present a novel framework called HiREN to boost STISR performance. Different from existing works, HiREN aims at generating high-quality text images based on high-resolution images to provide more accurate supervision information for STISR. Concretely, recognizing the difficulty of modeling the degradation from HQ to HR and of obtaining supervision information from HR images, we explore degradation kernel-guided super-resolution and use the feedback of a recognizer as well as text-level annotations as weak supervision to train an HR enhancement branch. Moreover, to suppress erroneous supervision information, a novel quality estimation module is designed to evaluate the quality of images, which is used to weight their losses. Extensive experiments demonstrate the universality, high performance and efficiency of HiREN. Our work provides a new solution for the STISR task. In the future, we will explore more advanced models to further advance the proposed technique. On the one hand, we will try to further improve the recovery ability of the HR enhancement branch or address the vocabulary reliance issue. On the other hand, we plan to apply HiREN to self-supervised or unsupervised settings where the recognizer and text-level annotations are not trustworthy or text-level annotations are lacking during training. Last but not least, we will extend the idea of the proposed quality enhancement branch to build a new noisy learning algorithm for STISR.
2304.00044
On The Theory of Ring Afterglows
Synchrotron and inverse Compton emission successfully explain the observed spectra of gamma-ray burst (GRB) afterglows. It is thought that most GRBs are products of extremely relativistic outflows and the afterglow marks the interaction of that ejecta with the surrounding matter. Faster decay of afterglow light curves at late times is indicative of non-spherical geometries and is usually interpreted as evidence for a jet geometry. Recent numerical simulations have shown that ring-like geometries are also permissible for relativistic outflows. We therefore extend the standard theory of afterglow evolution to ring geometries. An analytic prescription for the light curves and spectra produced by relativistic toroidal blast waves is presented. We compare these to their spherical and jet-like counterparts, and show that ring afterglows decay faster than spherical outflows but not as fast as jets.
Marcus DuPont, Andrew MacFadyen, Re'em Sari
2023-03-31T18:02:12Z
http://arxiv.org/abs/2304.00044v1
# On The Theory of Ring Afterglows ###### Abstract Synchrotron and inverse Compton emission successfully explain the observed spectra of gamma-ray burst (GRB) afterglows. It is thought that most GRBs are products of extremely relativistic outflows and the afterglow marks the interaction of that ejecta with the surrounding matter. Faster decay of afterglow light curves at late times is indicative of non-spherical geometries, and are usually interpreted as evidence for jet geometry. Recent numerical simulations have shown that ring-like geometries are also permissible for relativistic outflows. We therefore extend the standard theory of afterglow evolution to ring geometries. An analytic prescription for the light curves and spectra produced by relativistic toroidal blast waves is presented. We compare these to their spherical and jet-like counterparts, and show that ring afterglows decay faster than spherical outflows but not as fast as jets. Gamma-Ray Bursts (629) -- Light curves (918) -- Relativistic Fluid Dynamics (1389) + Footnote †: journal: ApJL 0000-0002-8861-7885]Marcus DuPont 0000-0002-4880-0885]Andrew MacFadyen 0000-0002-0788-0885]Re'em Sari ## 1 Introduction The physics accounting for the variability and wide range in observed luminosities of gamma-ray bursts (GRBs), and the nature of their central engine are topics of deep debate. However, it is widely accepted that the dominant processes responsible for the X-ray, optical, and radio afterglow radiation are the synchrotron and inverse Compton mechanisms operating behind the blast wave that the GRB launches into the surrounding medium. Such radiative processes are expected to be applicable to just about any sufficiently relativistic outflow. This paved the way for the success of using the Blandford & McKee (1976) (BM) solution for modelling GRB afterglows and for distinguishing between isotropic and jet-like asymmetric outflows modelled as BM solutions truncated to within polar angle \(\theta_{0}\)(see Piran, 2004, and references therein). Thus far, only afterglows for spherical and jet-like outflows have been considered and it is generally believed that most GRBs are caused by jetted relativistic outflows. Currently, the key indicators cited as evidence for GRB jets are: (a) the existence of an achromatic break in the afterglow light curve either due to lateral jet spreading (Rhoads, 1999; Sari et al., 1999) or an off-axis viewing of universal structured jets (e.g., Zhang & Meszaros, 2002; Rossi et al., 2002); (b) observed net polarizations that arise from asymmetric, relativistically beamed outflows (e.g., Gruzinov & Waxman, 1999; Sari, 1999; Yonetoku et al., 2011; Mandarakas et al., 2023); (c) extremely large energetics which require the outflow to be sufficiently collimated since the average _isotropic_ energy of \(10^{55}\) erg released by GRBs is much larger than what is physically allowed by a spherical explosion of a massive star (Taylor et al., 2004; Kumar & Zhang, 2015); (d) and measurements of proper motion of the flux centroid (Czerny et al., 1997; Taylor et al., 2004; Mooley et al., 2018). Insofar as shown by the the previous conditions and observations, many GRBs are only constrained to be _asymmetric_ outflows, but we argue they are not necessarily jet-like. This stance is valid since the current GRB afterglow catalogue is quite varied and many of them show breaks which do not fit the jet theory. Recently, it has been shown that relativistic outflows can have ring-like geometries, e.g. 
from the "ellipsar" mechanism (DuPont et al., 2022). Motivated by the result of DuPont et al. (2022), we consider in this Letter the dynamics and observational signatures of expanding relativistic rings, though we remain agnostic about the source and ener gies of said rings. Our work on ring afterglows is motivated by the many time-domain surveys in progress or being planned (Barthelmy et al., 2005; Shappee et al., 2014; Chambers et al., 2016; Kochanek et al., 2017; Ivezic et al., 2019; Bellm et al., 2019), which observe a wide array of astrophysical transients -- outside of just GRBs -- that expanding relativistic rings might help explain. These transients might include X-ray flashes (XRFs), Super Luminous Supernovae (SLSNe), trans-relativistic supernovae, and Fast Blue Optical Transients (FBOTs). Therefore, we are motivated to ask how the afterglow of expanding relativistic rings differs from their spherical and jet-like counterparts. In this Letter, we calculate the light curves and spectra due to expanding relativistic rings. We invoke the same recipes described in Sari et al. (1998) and Sari et al. (1999), which have been successful at modeling many observed GRB afterglows. We derive temporal scalings for the relevant frequencies and spectral flux and comment on their differences from the spherical and jet-like afterglows. This Letter is organized as follows: Section 2 describes the mathematical formalism for the dynamics and synchrotron radiation from the relativistic ring, Section 3 describes the resultant light curves of the ring-like outflows, and Section 4 discusses the relevance of our work. ## 2 Formalism ### Blast wave evolution In the early phase of evolution before the expanding blast wave begins to decelerate, if it is expanding in a medium with density obeying \(\rho=Ar^{-k}\), it has kinetic energy \[E\approx\Gamma^{2}M=\frac{A}{3-k}\Gamma^{2}r^{3-k}\Omega, \tag{1}\] where \(M\) is the swept up mass, \(\Gamma=(1-\beta^{2})^{-1/2}\) is the Lorentz factor of the bulk flow, \(\beta\) is velocity in units of \(c\), \(A\) is the mass-loading parameter, and \(\Omega\) is the solid angle of the blast wave which obeys \[\Omega=\begin{cases}4\pi\sin(\theta_{0})&\text{ring},\\ 8\pi\sin^{2}(\theta_{0}/2)&\text{jets},\\ 4\pi&\text{sphere},\end{cases} \tag{2}\] where \(\theta_{0}\) is the half-opening angle of the blast wave(s) such that \(\Omega\to 4\pi\) as \(\theta_{0}\to\pi/2\). For small opening angles, \(\Omega_{\text{ring}}\approx 4\pi\theta_{0}\), which is a factor \(2/\theta_{0}\) larger than its double-sided jet-like counterpart, making relativistic rings more likely to be observed, as compared to a jet with the same opening angle. An illustration of the asymmetric geometries considered is shown in Figure 1. As evident from Figure 1 and from Equation 2, the solid angle for a ring complements the solid angle of a jet to \(4\pi\) if \(\theta_{\text{ring}}=\pi/2-\theta_{\text{jet}}\). Using conservation of energy, as the relativistic ring slows down such that \(\Gamma\sim\theta_{0}^{-1}\), one finds \(\Gamma\propto r^{-(3-k)}\). This happens after an observer time of: \[t_{\text{b}}\approx[\zeta E_{\text{iso}}/4\pi A]^{1/\zeta}(\theta_{0}+\theta_{ \text{obs}})^{2(1+\zeta)/\zeta}, \tag{3}\] where \(E_{\text{iso}}\) is the isotropic-equivalent energy and \(\zeta\equiv 3-k\). Before this break time, the afterglow from rings and from jets are identical due to a lack of causal connectivity. 
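As a quick numerical illustration of the geometric factor above (and under the small-angle approximation), the short script below evaluates Eq. (2) for a ring and a double-sided jet with the same half-opening angle; the value \(\theta_{0}=0.1\) rad is purely illustrative.

```python
# Numerical illustration of Eq. (2): solid angle subtended by a ring vs. a double-sided
# jet with the same half-opening angle, and the resulting ratio ~ 2/theta_0 quoted above.
import numpy as np

def solid_angle(theta0, geometry):
    if geometry == "ring":
        return 4 * np.pi * np.sin(theta0)
    if geometry == "jet":            # double-sided conical jet
        return 8 * np.pi * np.sin(theta0 / 2) ** 2
    return 4 * np.pi                 # sphere

theta0 = 0.1  # radians, illustrative
ring, jet = solid_angle(theta0, "ring"), solid_angle(theta0, "jet")
print(f"Omega_ring/4pi = {ring / (4 * np.pi):.3f}")   # ~ theta0 -> covers ~10% of the sky
print(f"Omega_jet/4pi  = {jet / (4 * np.pi):.4f}")    # ~ theta0^2/2 -> ~0.5% of the sky
print(f"ratio ring/jet = {ring / jet:.1f}  (~ 2/theta0 = {2 / theta0:.0f})")
```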
The crux of this Letter is that after this break time, light curves from jets and from rings diverge and their distinguishing features are discernible in the current GRB catalogue. We will explore the previous point in later sections. As the blast wave evolves, an observer sees photons at a time \[t=t^{\prime}(1-\vec{\beta}\cdot\hat{n})=t^{\prime}(1-\beta\mu), \tag{4}\] where \(t^{\prime}\) is the time in the emitter frame, \(\hat{n}\) is a unit vector pointing from the observer to the emitting patch, and \(\mu\equiv\cos\theta\). Hereafter, all primed quantities signify values in the emitter frame. Assuming \(\Gamma\gg 1\) and the observer is nearly perfectly oriented with the emitting patch (i.e., \(\mu\approx 1-\theta^{2}/2\)), we have \[t\approx\frac{t^{\prime}}{2\Gamma^{2}}[1+(\Gamma\theta)^{2}]\approx\frac{r}{2 \Gamma^{2}}[1+(\Gamma\theta)^{2}], \tag{5}\] where we have used \(t^{\prime}\approx r\) for sufficiently relativistic flows which lie on the light sphere. Since the radiation is beamed into a typical angle \(1/\Gamma\), the quantity \(\Gamma\theta\) is of order unity, simplifying the observer time to \(t\approx r/\Gamma^{2}\). From this, we arrive at the Lorentz factor as a function of observer time for the ring, \(\Gamma\propto t^{-\zeta/(1+2\zeta)}\). Furthermore, the relativistic ring's radial evolution obeys \(r\propto t^{1/(1+2\zeta)}\) after spreading begins. ### Synchrotron Spectrum In the observer frame, the characteristic peak frequency of the electrons is \[\nu_{m}=\Gamma\gamma_{e}^{2}\frac{3eB^{\prime}}{16m_{e}}\propto\Gamma^{4} \propto t^{-4\zeta/(1+2\zeta)}, \tag{6}\] where \(\gamma_{e}\) is the electron Lorentz factor, \(e\) is elementary charge, \(B^{\prime}\) is the magnetic field in the fluid frame, and \(m_{e}\) is the electron mass. Note that we have used the fact that the magnetic field in the down stream transforms from the usual jump condition \(B^{\prime}=\Gamma\sqrt{32\pi\rho\epsilon_{B}}\), where \(\epsilon_{B}\) is fraction of total energy density due to magnetic fields, and the minimum Lorentz factor of the electrons obeys \(\gamma_{e}\propto\Gamma\). In a time \(t^{\prime}\), the electrons cool at a rate \[\langle P(\gamma_{e})\rangle=\frac{4}{3}\sigma_{T}u^{2}\gamma_{e}^{2}U_{b}. \tag{7}\] In the above equation, \(\sigma_{T}\) is the Thompson cross section, \([u^{\mu}]=\Gamma(1,\vec{\beta})\) is the four-velocity in units where \(c=1\), and \(U_{b}=B^{2}/8\pi\) is the magnetic energy density. By inverting Equation 7, we solve for the cooling Lorentz factor, \[\gamma_{c}=\frac{6\pi m_{e}}{\Gamma t^{\prime}\sigma_{T}B^{2}}=\frac{6\pi m_{ e}}{\Gamma^{3}t\sigma_{T}B^{2}}. \tag{8}\] It then immediately follows that the cooling frequency obeys \[\nu_{c}=\Gamma\gamma_{c}^{2}\frac{3eB^{\prime}}{16m_{e}}\propto\Gamma^{-4}t^{ -2}\propto t^{-2/(1+2\zeta)}. \tag{9}\] The spectral flux from a radiating blast wave is given by \[F_{\nu}=\frac{1+z}{4\pi d_{L}^{2}}\int_{V}\delta^{2}j^{\prime}_{\nu}d^{3}\vec{ x}, \tag{10}\] where \(z\) is redshift, \(d_{L}\) is luminosity distance, \(\delta=1/\Gamma(1-\vec{\beta}\cdot\hat{n})\) is the Doppler beaming factor with respect to the observer, and \(j^{\prime}_{\nu}\) is the frequency-dependent emissivity. At peak emission, the emissivity is independent of \(\Gamma\) and a highly relativistic flow along the line of sight to the observer gives \(\delta=2\Gamma\), so the peak spectral flux has the scaling \[F_{\nu,\rm max}\propto r^{3}\Gamma^{2}\propto t^{(3-2\zeta)/(1+2\zeta)}. 
\tag{11}\] For completeness, we do not assume that all synchrotron photons escape the plasma on their way to the observer, meaning some are self absorbed. Moreover, the self-absorption frequency is a difficult calculation, but by extrapolating from the Granot et al. (1999a) solution we can arrive at the simple scaling, \[\nu_{a}\propto E^{1/5}\propto\Gamma^{2/5}r^{\zeta/5}\propto t^{-\zeta/5(1+2 \zeta)}. \tag{12}\] From this, we now have the necessary ingredients to compute light curves produced by relativistic rings. ## 3 Light curves of relativistic rings With the necessary constraints derived in the previous section, we now turn to explicit light curve calculations. Hereafter, we compute light curves for a constant density medium (i.e., \(\zeta=3\)) to easily compare with the spherical and jet-like geometries derived in Sari et al. (1999). Figure 1: Cartoon illustrations of the two types of asymmetric geometries considered in this Letter. The left shows the conical, jet-like outflow along the poles of the source while the right shows the ring-like outflow in the equatorial plane of the source. The half-opening angle, \(\theta_{0}\), is depicted for both geometries as well. We start with the flux at low enough frequencies, such that some photons are self absorbed. Assuming that the time-averaged source emits at the characteristic \(\nu_{m}\), if \(\nu_{a}\ll\nu_{m}\), then because most of the electrons are emitting at typical synchrotron frequencies much larger than \(\nu_{a}\), the spectral flux is proportional to \(\nu^{2}\) as opposed to \(\nu^{5/2}\)(Katz, 1994). Thus, we have \[F_{\nu<\nu_{a}}\propto\left(\frac{\nu}{\nu_{a}}\right)^{2}\left( \frac{\nu_{a}}{\nu_{m}}\right)^{1/3}F_{\nu,\max}\propto r^{2}\propto\begin{cases} t^{2/7}&\text{ring,}\\ \text{constant}&\text{jet,}\\ t^{1/2}&\text{spherical,}\end{cases} \tag{13}\] \[F_{\nu_{a}<\nu<\nu_{m}}\propto\left(\frac{\nu}{\nu_{m}}\right)^{ 1/3}F_{\nu,\max}\propto r^{3}\Gamma^{2/3}\propto\begin{cases}t^{1/7}&\text{ring,} \\ t^{-1/3}&\text{jet,}\\ t^{1/2}&\text{spherical,}\end{cases} \tag{14}\] for the flux below the self-absorption frequency and the intermediate flux between the self-absorption and characteristic frequency, respectively. This indicates that slopes would rise as long as the evolution were spherical or ring-like, but the slopes are different enough to perfectly distinguish between the two geometries. Moreover, there is a stark contrast from the latter geometries when compared with the \(t^{-1/3}\) decay of the jet once it begins spreading. At high frequencies, the light curves follow \[F_{\nu_{m}<\nu_{c}<\nu}\propto\Gamma^{2}r^{3}\left(\frac{\nu_{c} }{\nu_{m}}\right)^{-(p-1)/2}\left(\frac{\nu}{\nu_{c}}\right)^{-p/2}\propto \begin{cases}t^{-2(3p-1)/7}&\text{ring,}\\ t^{-p}&\text{jet,}\\ t^{-(3p-2)/4}&\text{spherical,}\end{cases} \tag{15}\] \[F_{\nu_{m}<\nu<\nu_{c}}\propto\Gamma^{2}r^{3}\left(\frac{\nu}{ \nu_{m}}\right)^{-(p-1)/2}\propto\begin{cases}t^{-3(2p-1)/7}&\text{ring,}\\ t^{-p}&\text{jet,}\\ t^{-3(p-1)/4}&\text{spherical,}\end{cases} \tag{16}\] for cooling electrons and for non-cooling electrons, respectively. In Equations 15 & 16, \(p\) is the electron distribution power-law index. Here we witness that ring afterglows possess two distinct cooling breaks analogous to whats been calculated for spherical outflows. Furthermore, our calculation evidences a very clear distinction between afterglows produced by relativistic rings and jets. A graphical depiction of this distinction is shown in Figure 2. 
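For reference, the temporal decay indices implied by Eqs. (13)-(16) for a constant-density medium can be tabulated directly; the sketch below does so for an arbitrary electron index \(p\) (the value \(p=2.4\) is only an example).

```python
# Temporal decay indices alpha (F_nu ~ t^-alpha) implied by Eqs. (15)-(16) for a constant
# density medium (zeta = 3), as a function of the electron power-law index p. A simple
# bookkeeping aid for comparing the ring, jet, and spherical post-break light curves.
def decay_indices(p):
    return {
        "nu_m < nu < nu_c": {"ring": 3 * (2 * p - 1) / 7,
                             "jet": p,
                             "sphere": 3 * (p - 1) / 4},
        "nu_m < nu_c < nu": {"ring": 2 * (3 * p - 1) / 7,
                             "jet": p,
                             "sphere": (3 * p - 2) / 4},
    }

for regime, slopes in decay_indices(p=2.4).items():
    print(regime, {k: round(v, 2) for k, v in slopes.items()})
# For p = 2.4 the ring slopes fall between ~1.63 and ~1.77, while a top-hat jet gives alpha = p = 2.4.
```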
We've shown _very_ distinct features such as differences in cooling breaks, and, more importantly, ring afterglows have shallower decay slopes than jets throughout the entirety of their evolution. The consequences of these revelations are discussed in the next section. ## 4 Discussion We have demonstrated that temporal evolution of ring afterglows is clearly distinct from their spherical and jet-like counterparts. While it is likely that classical GRBs are products of very energetic asymmetric flows, the geometry of said outflow is not well constrained. The jet model has been instrumental in its explanations of steep decays as resulting from highly collimated outflows. Yet, there exist observations which cannot be fit using the jet framework. Some light curves -- such as those produced by GRB 030329 (Stanek et al., 2003) or the more recent GRB 221009A (Williams et al., 2023) -- have very shallow breaks, which are hard to reconcile using top-hat jet models. In particular, GRB 221009A was reported by Williams et al. (2023) to favor a broken power-law model in the X-ray with flux decay slopes steepening from \(t^{-1.498\pm 0.004}\) to \(t^{-1.672\pm 0.008}\) with a jet break time of \(t_{b,\rm X-ray}\sim 8\times 10^{4}\,\rm s\). The timing of such steepening might be due to a jet with half-opening angle of \(3.5^{\circ}\)(D'Avanzo et al., 2022). However, the light curve does not steepen beyond the decay index \(\alpha\approx 1.7\) -- where \(F_{\nu}\propto t^{-\alpha}\) -- after the break, and observers cannot match this shallow X-ray decay index with what is predicted using a simple on-axis top-hat jet. For typical values \(p\cong 2.4\), the top-hat jets predict \(\alpha>2\), but rings predict \(1.63<\alpha<1.77\), well within the required range for GRB 221009A. Therefore, one can interpret this GRB as stemming from either a more structured jet configuration or an expanding relativistic ring. The notion of some astrophysical transients being sourced from expanding relativistic rings, rather than jets, have the following implications: (a) the probability of viewing ring afterglows is larger than that of jets by a factor \(2/\theta_{0}\). A blast wave with half-opening angle 0.1 radians, if oriented as a jet, would cover 0.5% of the sky while an expanding ring covers 10%, larger by a factor of 20. As a result, ring geometries, as compared to jet geometries, bring down the required source rates significantly; (b) as demonstrated by DuPont et al. (2022) relativistic rings can be born purely from geometrical and hydrodynamic effects as opposed to the more complex central engines required for producing classical jets; (c) the late-time evolution of the relativistic ring is much more stable than the jet since the spreading of the relativistic ring is effectively one dimensional and is therefore a candidate for light curves with shallower breaks (d) around the time of the ring break, when the emitting patch is no longer locally spherical, the specific intensity (surface brightness) is no longer symmetric about the line of sight to the observer for a general viewing angle (Granot et al., 1999, 2007; van Eerten et al., 2010). This fact can act as useful probe for the underlying dynamics of rings which can be further detailed by direct hydrodynamic simulations in the near future. Detailed analysis of scintillation patterns may be sensitive to the surface brightness distribution (Goodman, 1997) and may help distinguish jets from rings. 
This Letter has considered synchrotron emission, which is more readily observed at radio and optical frequencies. At higher frequencies, inverse Compton emission may dominate. Adding the inverse Compton component could be done in a similar way to Sari and Esin (2001), or its extension by Nakar et al. (2009) if Klein-Nishina corrections are important. In summary, we've considered the dynamics and observational signatures of expanding relativistic rings, which preserve the notion of beaming, can account for shallow breaks as observed in many GRB light curves, and do not require a complex central engine to achieve their geometric configuration. Our investigation is inspired by the work of DuPont et al. (2022), where rings arise naturally, albeit with lower energy than needed to explain cosmological afterglows for the conditions considered in that work. Moreover, while the main focus of this work has been GRBs, we emphasize that the importance of our calculations lies in the unique features presented by the ring geometry, while the energetics can be scaled appropriately and applied to a broader scope of astrophysical transients. Therefore, we suggest that ring-like outflows should be considered when interpreting observations of non-spherical explosions. Figure 2: Pictorial light curves for the spherical, ring-like, and jet-like blast waves, respectively. The left and right panels show the typical light curve behavior at low (\(\sim\) radio) and high (\(\sim\) optical and X-ray) observed frequencies, respectively. The slopes are segmented in time between the break time, \(t_{b}\), and the times when the break frequencies \(\nu_{m}\) and \(\nu_{c}\) cross the observed frequency \(\nu\). The vertical dashed line for \(t_{c}\) is broken at the jet curve in the left panel since \(\nu_{c}\) is constant for that geometry and it therefore has no corresponding \(t_{c}\). In both frequency bands, we show the divergence in flux decay rate once the break time is reached, with the low frequency band showing the clearest separation between the various phases of evolution.
2309.12494
Evidential uncertainty sampling for active learning
Recent studies in active learning, particularly in uncertainty sampling, have focused on the decomposition of model uncertainty into reducible and irreducible uncertainties. In this paper, the aim is to simplify the computational process while eliminating the dependence on observations. Crucially, the inherent uncertainty in the labels is considered, the uncertainty of the oracles. Two strategies are proposed, sampling by Klir uncertainty, which tackles the exploration-exploitation dilemma, and sampling by evidential epistemic uncertainty, which extends the concept of reducible uncertainty within the evidential framework, both using the theory of belief functions. Experimental results in active learning demonstrate that our proposed method can outperform uncertainty sampling.
Arthur Hoarau, Vincent Lemaire, Arnaud Martin, Jean-Christophe Dubois, Yolande Le Gall
2023-09-21T21:26:50Z
http://arxiv.org/abs/2309.12494v2
# Evidential uncertainties on rich labels ###### Abstract Recent research in active learning, and more precisely in uncertainty sampling, has focused on the decomposition of model uncertainty into reducible and irreducible uncertainties. In this paper, we propose to simplify the computational phase and remove the dependence on observations, but more importantly to take into account the uncertainty already present in the labels, _i.e._ the uncertainty of the oracles. Two strategies are proposed, sampling by Klir uncertainty, which addresses the exploration-exploitation problem, and sampling by evidential epistemic uncertainty, which extends the reducible uncertainty to the evidential framework, both using the theory of belief functions. Keywords:Active Learning Uncertainty sampling Belief Functions ## 1 Introduction For reasons of efficiency, cost or energy reduction in machine learning or deep learning, one of the important issues is related to the amount of data and in some cases, to the amount of labelled data. Active learning [19] is a part of machine learning in which the learner can choose which observation to label in order to work with only a fraction of the labeled dataset to reduce the labeling cost. For this purpose, the learner uses a strategy that allows it to select only certain observations that will then be labeled. Among all the proposed strategies in the literature [1, 19] one of the best known is sampling by uncertainty [15]. In uncertainty sampling, the learner selects the instances for which it is most uncertain. The measures used to quantify this uncertainty, such as entropy, are up to now probabilistic. In this paper, we propose to use a broader framework of uncertainty that generalizes probabilities. As proposed in recent papers [10, 11, 18] the uncertainty can be decomposed into two interesting terms: the epistemic and the aleatoric uncertainties. Aleatoric uncertainty arises from the stochastic property of the event and is therefore not reducible, whereas epistemic uncertainty is related to a lack of knowledge and can be reduced. Proposed calculations depend on the model prediction but also on the observations. We suggest in this paper, to get rid of the direct dependence on the observations and to use only the model output for similar results. This representation also addresses the exploration-exploitation problem in active learning, with the possibility of choosing one or the other, or even a compromise as in [2]. The labeling process is often carried out by humans [7, 17]; without making any difference between a label given by someone who has hesitated for a long time and a label given by someone who has no doubt, and therefore uncertainty may already exist in the labels. This information is not taken into account in most models and sampling strategies. In the case of supervised classification, several models are now able to handle these uncertain labels [4, 5, 6, 23]. The main objective, in addition to not being dependent on observations and to address the problem of exploration-exploitation, is to take into account in the sampling, the uncertainty already present in the labels. Given the above, we propose in this paper two uncertainty sampling strategies capable of representing a decomposition of the model uncertainties with regard to the uncertainty already present in the labels. 
The first strategy is based upon two different uncertainties, the discord (how self-conflicting the information is) and non-specificity (how ignorant the information is) in the model output. The second strategy extends the epistemic uncertainty to the evidential framework and to several classes, thus simplifying the computation. The paper is organized as follows; section 2 introduces some important notions of imperfect labeling and the modeling of these richer labels using the theory of belief functions. The usual uncertainty sampling approach [15] is also recalled and section 3 describes the separation between aleatoric and epistemic uncertainties. Section 4 presents the two new proposed strategies, section 5 shows an application on a real world dataset, then section 6 discusses and concludes the article. The experiments performed in this paper are described in supplementary materials, to avoid lengthy explanations, since the purpose of the paper does not lie in this part. Furthermore, uncertainties are mapped on 2D representations but the objective is to later serve active learning. ## 2 Preliminaries In this section, we introduce some general knowledge useful to understand the rest of the paper, starting with rich labels, modeled by the theory of belief functions and ending with the classical approach of sampling by uncertainty. #### 2.0.1 Imperfect labeling - Most of the datasets used for classification consider hard labels, with a binary membership where the observation is either a member of the class or not. In this paper, we refer as rich labels the elements of response provided by a source that may include several degrees of imprecision (_i.e._ "_This might be a cat_", "_I don't know_" or "_I am hesitating between dog and cat, with a slight preference for cat_)". Such datasets, offering uncertainty already present in the labels, exist [22] but are not numerous. These labels are called rich in this paper since they provide more information than hard labels and can be modeled using the theory of belief functions. Theory of belief functions - The theory of belief functions [3; 20], is used in this study to model uncertainty and imprecision for labeling and prediction. Let \(\Omega=\{\omega_{1},\ldots,\omega_{M}\}\) be the frame of discernment for \(M\) exclusive and exhaustive hypotheses. It is assumed that only one element of \(\Omega\) is true (closed-world assumption) [21]. The power set \(2^{\Omega}\) is the set of all subsets of \(\Omega\). A mass function assigns the belief that a source may have about the elements of the power set of \(\Omega\), such that the sum of all masses is equal to 1. \[m:2^{\Omega}\rightarrow[0,1],\sum_{A\in 2^{\Omega}}m(A)=1. \tag{1}\] Each subset \(A\in 2^{\Omega}\) such as \(m(A)>0\) is called a _focal element_ of \(m\). The uncertainty is therefore represented by a mass \(m(A)<1\) on a focal element \(A\) and the imprecision is represented by a non-null mass \(m(A)>0\) on a focal element \(A\) such that \(|A|>1\). A mass function \(m\) is called _categorical mass function_ when it has only one focal element such that \(m(A)=1\). In the case where \(A\) is a set of several elements, the knowledge is certain but imprecise. For \(|A|=1\), the knowledge is certain and precise. On decision level, the pignistic probability \(BetP\)[21] helps decision making on singletons: \[BetP(\omega)=\sum_{A\in 2^{\Omega},\ \omega\in A}\frac{m(A)}{|A|}. 
\tag{2}\] It is also possible to combine several mass functions (beliefs from different sources) into a single body of evidence. If the labels and therefore the masses are not independent, a simple average of the mass functions \(m_{j}\) derived from \(N\) sources can be defined as follows: \[m(A)=\frac{1}{N}\sum_{j=1}^{N}m_{j}(A),\ \ A\in 2^{\Omega}. \tag{3}\] There are other possible combinations that are more common than the mean, many of which are listed in [14]. \(\bullet\)**Example 1:** Let \(\Omega=\{Cat,Dog\}\) be a frame of discernment. An observation labeled "Cat" by a source can be modeled in the framework of belief functions by the mass function \(m_{1}\) such as: \(m_{1}(\{Cat\})=1\) and \(m_{1}(A)=0,\ \forall A\in 2^{\Omega}\backslash\{Cat\}\). \(\bullet\)**Example 2:** An observation labeled "Cat or Dog" by a source can be modeled by the mass function \(m_{2}\) such as: \(m_{2}(\{Cat,Dog\})=1\) and \(m_{2}(A)=0\), \(\forall A\in 2^{\Omega}\backslash\{Cat,Dog\}\). \(\bullet\)**Example 3:** The average mass function \(\bar{m}\) of \(m_{1}\) and \(m_{2}\) is: \(\bar{m}(\{Cat\})=0.5\), \(\bar{m}(\{Cat,Dog\})=0.5\) and \(\bar{m}(A)=0\) for all other subsets \(A\) in \(2^{\Omega}\). Its pignistic probability \(BetP\), used for decision making is: \(BetP(\{Cat\})=0.75\) and \(BetP(\{Dog\})=0.25\). #### 2.1.1 Uncertainty sampling - Active learning iteratively builds a training set by selecting the best instances to label. The principle is, for a given performance or a given budget, to label as few observations as possible. Among all the strategies proposed in the literature [19] one of the best known methods is uncertainty sampling [13], where the function that defines the instances to be labeled maximizes the uncertainty related to the model prediction as described below. Let \(\mathcal{U}\) be the uncertainty to label a new observation \(x\) for a given model and \(\Omega=\{\omega_{1},\ldots,\omega_{M}\}\) the set of the \(M\) possible classes. The uncertainty \(\mathcal{U}\) can be calculated in several ways, a classical approach is to use Shannon's entropy: \[\mathcal{U}(x)=-\sum_{\omega\in\Omega}p(\omega|x)\text{log}[p(\omega|x)], \tag{4}\] with \(p(\omega|x)\) the probability for \(x\) to belong to the class \(\omega\), given by the model. Other uncertainty criteria exist, it is common to use the least confidence measure: \[\mathcal{U}(x)=1-\max_{\omega\in\Omega}[p(\omega|x)]. \tag{5}\] Measuring the uncertainty of a model to predict the class of some observations can be useful to find the areas of uncertainty in a space. Figure 1 represents three two-dimensional datasets, the classes are perfectly separated. Given the model and one of the uncertainty criteria, we can compute the uncertainty of any point in space. For each dataset, the areas of uncertainty of the model are represented, with more red for more uncertainty. It is remarkable that these uncertainty areas can be compared to the decision boundaries of the model. Often, the closer the observation is to the decision boundary, the less confident the model is about its prediction. Uncertainty sampling consists of choosing the observation for which the model is least certain of its prediction. This is one of the basis of active learning, Figure 1: Three 2-class datasets with areas of model uncertainty. however, other methods allow to extract more information about this uncertainty which leads to the decomposition into epistemic and aleatoric uncertainties. 
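A minimal sketch of these preliminaries is given below, assuming mass functions are stored as dictionaries keyed by frozensets (an implementation choice, not the authors'); it reproduces the pignistic transform of Eq. (2), the mean combination of Eq. (3), and the uncertainty criteria of Eqs. (4)-(5) on Example 3.

```python
# Minimal sketch of the preliminaries above: mass functions stored as {frozenset: mass},
# the pignistic transform BetP of Eq. (2), the mean combination of Eq. (3), and the
# entropy / least-confidence uncertainty criteria of Eqs. (4)-(5).
import math

def betp(m):
    """Pignistic probability of each singleton (Eq. 2)."""
    out = {}
    for A, mass in m.items():
        for w in A:
            out[w] = out.get(w, 0.0) + mass / len(A)
    return out

def mean_combination(masses):
    """Average of several mass functions (Eq. 3)."""
    out = {}
    for m in masses:
        for A, mass in m.items():
            out[A] = out.get(A, 0.0) + mass / len(masses)
    return out

def entropy_uncertainty(p):                  # Eq. (4)
    return -sum(v * math.log(v) for v in p.values() if v > 0)

def least_confidence(p):                     # Eq. (5)
    return 1 - max(p.values())

# Example 3 from the text: m1({Cat}) = 1, m2({Cat, Dog}) = 1
m1 = {frozenset({"Cat"}): 1.0}
m2 = {frozenset({"Cat", "Dog"}): 1.0}
m_bar = mean_combination([m1, m2])
print(betp(m_bar))                           # {'Cat': 0.75, 'Dog': 0.25}
print(least_confidence(betp(m_bar)))         # 0.25
```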
## 3 On the interest and limits of epistemic and aleatoric uncertainties for active learning In this section, we introduce additional elements to decompose the uncertainty of the model so it can focus, in active learning, on the observations that will make it rapidly gain in performance. The uncertainty \(\mathcal{U}(x)\) can be separated into two uncertainties [9], one reducible and the other irreducible. The example1 of Figure 2 shows these two types of uncertainties, on 2a the result of a coin toss is uncertain and it is not possible to generate more knowledge to predict that the coin will flip heads or tails, this ignorance is called aleatoric uncertainty. On 2b either heads or tails is written in Finnish, it is an uncertainty that can be resolved by learning this language, it is called epistemic uncertainty. Footnote 1: This example is taken from Eyke Hullermeier’s talk “Representation and Quantification of Uncertainty in Machine Learning” at the LFA2022 conference. In our example the word tails is written in Finnish, the word heads is called “Kruuna”. Being able to model these two uncertainties can help delimit where it is more interesting to provide knowledge and where it is useless. The total uncertainty \(\mathcal{U}(x)\) is often represented as the sum of the epistemic uncertainty \(\mathcal{U}_{e}(x)\) and the aleatoric uncertainty \(\mathcal{U}_{a}(x)\): \(\mathcal{U}(x)=\mathcal{U}_{e}(x)+\mathcal{U}_{a}(x)\). For a two-class problem \(\Omega=\{0,1\}\), it is proposed in [18] to model this uncertainty (here under the [15] formalism) by computing the plausibility \(\pi\) of belonging to each of the two classes with the following formula, according to a probabilistic model \(\theta\): \[\begin{split}\pi(1|x)&=\sup_{\theta\in\Theta}\,\min[ \pi_{\Theta}(\theta),p_{\theta}(1|x)-p_{\theta}(0|x)],\\ \pi(0|x)&=\sup_{\theta\in\Theta}\,\min[\pi_{\Theta} (\theta),p_{\theta}(0|x)-p_{\theta}(1|x)],\end{split} \tag{6}\] with \(\pi_{\Theta}(\theta)\) depending on the likelihood \(L(\theta)\) and the maximum likelihood \(L(\hat{\theta})\): \[\pi_{\Theta}(\theta)=\frac{L(\theta)}{L(\hat{\theta})}. \tag{7}\] Figure 2: Representation of aleatoric and epistemic uncertainties through the tossing of a coin and the word “heads” or “tails” written in Finnish. The epistemic uncertainty is then high when the two classes are very plausible while the aleatoric uncertainty is high when the two classes are implausible: \[\begin{split}\mathcal{U}_{e}(x)&=\min[\pi(1|x),\pi(0|x )],\\ \mathcal{U}_{a}(x)&=1-\max[\pi(1|x),\pi(0|x)].\end{split} \tag{8}\] This calculation depends not only on the prediction of the model but also on the observations. To summarize, the fewer observations there are in a region, or the fewer decision elements there are to strongly predict a class, the higher the plausibility of the two classes, and the more reducible (and thus epistemic) the uncertainty is by adding knowledge. An example is shown in Figure 3, a two-class dataset is shown in (a)a and the areas of model uncertainty are shown in (b)b according to the uncertainty sampling presented in the previous section. An horizontal line can be distinguished where the model uncertainty is highest. However, the sample represented in (a)a, shows that part of the uncertainty can be removed more easily by adding observations. In the same figure, three different datasets show how the sample can evolve by adding observations. 
Whatever the final distribution, the uncertainty on the left is not very reducible, while the uncertainty on the right can be modified by adding knowledge. These two uncertainties can be calculated using equation (8), and are shown in Figure 4. The aleatoric uncertainty, and therefore irreducible, is represented in (a)a and the epistemic uncertainty, reducible, is represented in (b)b. The total uncertainty is then the sum of the two (c)c. The goal here is to use only the epistemic uncertainty, to know the areas where the model can learn new knowledge and where it will have more impact. Figure 3: Sample with areas of uncertainty according to the uncertainty sampling and three possible datasets based on the observations available in (a)a. Using epistemic uncertainty as a sampling strategy is not reductive since it provides similar areas of uncertainty to those used previously, where epistemic and aleatoric uncertainty are indistinguishable. Such information can be useful to find areas of reducible uncertainty, but it is not compatible with richer labels that also contain uncertainty. The way to compute this epistemic uncertainty is also dependent on the observations in addition to the model (_i.e._ the method could be oversimplified as: the model defines its zones of uncertainty, in which we look for the location with the smallest number of observations to define the reducible uncertainty.). Furthermore, the exploration-exploitation problem is not fully addressed. This leads to the next section in which two uncertainty sampling strategies for rich labels are proposed, they are also extended to several classes. ## 4 Richer labels and multiple classes In this section, we propose two uncertainty sampling strategies, with a simplified calculation phase, able to deal with richer labels and no longer directly dependent on the observations but only on the model prediction2. We also propose a natural extension for a number of classes higher than two. The first method uses discord and non-specificity to map uncertainty in order to address the exploration-exploitation problem. The second method extends the epistemic and aleatoric uncertainties to rich labels, also simplifying the computation phase. Footnote 2: The uncertainty is no longer directly dependent on the observations, but the model still is. From there, a label can be uncertain and imprecise, which means that additional information on ignorance is represented. Figure 5 shows how these labels are represented in this document, the darker the dot, the less ignorance the label contains (_e.g. I'm sure this is a dog_), the lighter the dot, the more ignorance it contains (_e.g. I have no idea between dog and cat_). ### Discord and non-specificity: Klir uncertainty In the framework of belief functions, discord and non-specificity are tools that allow to model uncertainty, we propose to use Klir's representation [12] for uncertainty sampling, some bridges can be made with epistemic and aleatoric uncertainty. Figure 4: Areas of uncertainty on (a)a for epistemic and aleatoric uncertainties. #### 3.1.2 Discord is here applied to the output of a model capable of making an uncertain and imprecise prediction3. It represents the amount of conflicting information in the model's prediction and is calculated with the following formula: Footnote 3: The Evidential \(K\)-nearest Neighbors model [5] is considered to illustrate the examples, which may vary depending on the model used. 
\[D(m)=-\sum_{A\subseteq\Omega}\,m(A)\,\log_{2}(BetP(A)), \tag{9}\] with \(m\) a mass function, or the output of the model (see section 2). Figure 6 represents three different cases where the discord varies, from high discordance where labels around the central point (the observation to label) highly disagree 6a, to low discordance where each of the labels is in agreement 6c. #### 3.1.3 Non-Specificity allows to quantify the degree of ignorance of the model, the higher it is, the more imprecise the response of the model, it is calculated with: \[N(m)\equiv\sum_{A\subseteq\Omega}m(A)\,\log_{2}(|A|). \tag{10}\] The same Figure 6 also represents three different cases of non-specificity, in 6d the non-specificity is low as there are relevant sources of information next to the observation to be labelled, in 6e the non-specificity increases the further away the elements are from the observation and in 6f the non-specificity is also high because the nearby sources of information are themselves ignorant. #### 3.1.4 Klir uncertainty is then derived from discord and non-specificity, it is used here for uncertainty sampling by adding the two previous formulas: \[\mathcal{U}_{m}(x)=N(x)+D(x), \tag{11}\] with \(N(x)\) and \(D(x)\) respectively the non-specificity and discord of the model in \(x\). Klir [12] proposes to use the same weight for discord and non-specificity, but in [4] a parameter \(\lambda\in[0,1]\) is introduced and allows to bring more weight to non-specificity (we propose to use it for more exploration) or to discord (for more exploitation): \[\mathcal{U}_{m}(x)=\lambda N(x)+(1-\lambda)D(x). \tag{12}\] Figure 5: Observations on two dimensions with their rich labels, the darker the point, the more certain and precise its label. Note that this uncertainty is naturally extended to \(|\Omega|\geq 2\) classes. This formula has the advantage of identifying the total uncertainty as well as the reducible one, but also of taking into account the uncertainty already present in the labels and of being adjustable for more exploration or exploitation. Figure 7 shows a dataset with two areas of uncertainty (a)a, on the right an area with a lack of data and on the left an area where labels are more ignorant. The uncertainty sampling, using Shannon's entropy (4) or the least confidence measure (5) is not able to see either of these two areas (b)b. The epistemic uncertainty (8) is able to distinguish the uncertainty related to the arrangement of the observations in space (_i.e._ the uncertainty on the right) but not the uncertainty related to the ignorance of the sources (c)c. The proposal of using Klir uncertainty for sampling (discord and non-specificity) allows to represent each of these uncertainties. Figure 8 shows the areas of non-specificity (a)a, of discord (b)b and Klir uncertainty (c)c. Klir uncertainty can then be used for uncertainty sampling in active learning, it is also possible to vary the result for more exploration or more exploitation by modifying \(\lambda\). Figure 9 shows the areas of uncertainty for different values of \(\lambda\), more discord on the left to more non-specificity on the right. Figure 6: Three degrees of discord and three degrees of non-specificity in the center. Figure 7: An imperfectly labeled dataset (a)a with the areas of uncertainty according to uncertainty sampling and epistemic uncertainty. We have proposed here to use Klir's uncertainty in sampling, which allows to represent some unknown uncertainties areas in active learning related to rich labels. 
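Continuing the same representation, the sketch below computes discord (Eq. 9), non-specificity (Eq. 10) and the \(\lambda\)-weighted Klir criterion (Eq. 12) from a model's output mass function; it reuses the `betp()` helper from the previous sketch, reads \(BetP(A)\) as the sum of the pignistic probabilities of the singletons in \(A\), and uses a purely illustrative mass function.

```python
# Sketch of the Klir-style sampling criterion: discord (Eq. 9), non-specificity (Eq. 10),
# and their lambda-weighted sum (Eq. 12), computed from a model's output mass function.
# Reuses the betp() helper from the earlier sketch; the mass function below is illustrative.
import math

def discord(m):
    p = betp(m)
    bet_of = lambda A: sum(p.get(w, 0.0) for w in A)   # BetP extended to subsets
    return -sum(mass * math.log2(bet_of(A)) for A, mass in m.items() if mass > 0)

def non_specificity(m):
    return sum(mass * math.log2(len(A)) for A, mass in m.items())

def klir_uncertainty(m, lam=0.5):
    return lam * non_specificity(m) + (1 - lam) * discord(m)

m = {frozenset({"Cat"}): 0.4, frozenset({"Dog"}): 0.3, frozenset({"Cat", "Dog"}): 0.3}
for lam in (0.1, 0.5, 0.9):          # more exploitation ... more exploration
    print(lam, round(klir_uncertainty(m, lam), 3))
```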
The method is no longer dependent on the observations, but only on the prediction of the model and the exploration-exploitation problem is addressed thanks to the \(\lambda\) parameter. Even though discord may recall aleatoric uncertainty (non-reducible) and non-specificity may recall epistemic uncertainty (reducible). These notions are not quite equivalent. Therefore, in the following section we also propose an extension of epistemic (and aleatoric) uncertainty for rich labels and for several classes. ### Evidential epistemic uncertainty We propose here to extend the notion of epistemic uncertainty to rich labels, by removing the dependence on observations, simplifying the computational phase, and allowing the model to detect new areas of uncertainty. The epistemic uncertainty can be extended to rich labels by using the notion of plausibility within the framework of belief functions. It represents the total evidence that does not support the complementary event for a class \(\omega\) or more generally for an element \(A\in 2^{\Omega}\). The plausibility \(Pl\) defines the belief that could be allocated to \(A\): \[Pl(A)=\sum_{A\cap B\neq\emptyset}m(B). \tag{13}\] Figure 8: Areas of uncertainty corresponding to the dataset (a)a according to the non-specificity, the discord and the total uncertainty defined by Klir. Figure 9: Areas of Klir uncertainty, modifying the amount of non-specificity and discord. With \(\lambda=0.1\), more discord is taken into account, with \(\lambda=0.5\), discord and non-specificity are used as much and with \(\lambda=0.9\), more non-specificity is taken into account. The plausibility being the consistent evidence, the belief function \(Bel\) defines the total evidence directly supporting \(A\): \[Bel(A)=\sum_{B\subseteq A,B\neq\emptyset}m(B). \tag{14}\] We have \(Pl(A)=1-Bel(\bar{A})\). Analogous to equation (8) and for two classes \(\Omega=\{0,1\}\) the epistemic uncertainty is maximal when both classes are highly plausible. The proposed evidential epistemic and aleatoric uncertainties are defined as follows: \[\begin{split}\mathcal{U}_{e}(x)&=\min[Pl(1|x),Pl(0 |x)],\\ \mathcal{U}_{a}(x)&=1-\max[Pl(1|x),Pl(0|x)].\end{split} \tag{15}\] The equation for the aleatoric uncertainty can be rewritten depending on the belief \(Bel\): \[\mathcal{U}_{a}(x)=\min[Bel(1|x),Bel(0|x)]. \tag{16}\] The sum of the epistemic and aleatoric uncertainties is then the total evidential uncertainty: \(\mathcal{U}(x)=\mathcal{U}_{e}(x)+\mathcal{U}_{a}(x)\). 
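The two-class evidential decomposition above admits an equally short sketch, assuming the same dictionary-of-frozensets convention as the previous sketches; the mass function at the end is illustrative.

```python
# Sketch of the two-class evidential decomposition above: plausibility (Eq. 13), belief
# (Eq. 14), and the epistemic / aleatoric split of Eqs. (15)-(16).
def plausibility(m, A):
    return sum(mass for B, mass in m.items() if A & B)

def belief(m, A):
    return sum(mass for B, mass in m.items() if B and B <= A)

def evidential_split(m, classes=("0", "1")):
    pl = {w: plausibility(m, frozenset({w})) for w in classes}
    bel = {w: belief(m, frozenset({w})) for w in classes}
    u_e = min(pl.values())           # Eq. (15): both classes plausible
    u_a = min(bel.values())          # Eq. (16), equal to 1 - max(pl.values())
    return u_e, u_a

m = {frozenset({"0"}): 0.2, frozenset({"1"}): 0.1, frozenset({"0", "1"}): 0.7}
print(evidential_split(m))           # high epistemic term: most mass lies on ignorance
```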
However, when the number of classes exceeds 2 the equation of the epistemic uncertainty cannot be simplified by the minimum plausibility: \[\begin{split}\mathcal{U}_{e}(x)&\neq\min([Pl( \omega|x)|\omega\in\Omega]),\\ \mathcal{U}_{a}(x)&\neq 1-\max([Pl(\omega|x)|\omega\in \Omega]).\end{split} \tag{17}\] It is preferable to first define the uncertainty related to one of the classes \(\omega\), rewritten with the belief \(Bel\) to avoid having to manipulate \(\bar{\omega}\): \[\begin{split}\mathcal{U}_{e}(\omega|x)&=\min[Pl( \omega|x),Pl(\bar{\omega}|x)]\\ &=\min[Pl(\omega|x),1-Bel(\omega|x)].\end{split} \tag{18}\] The evidential extension of the epistemic and aleatoric uncertainties for \(|\Omega|\geq 2\) classes is then: \[\begin{split}\mathcal{U}_{e}(x)&=\sum_{\omega\in \Omega}\min[Pl(\omega|x),1-Bel(\omega|x)],\\ \mathcal{U}_{a}(x)&=\sum_{\omega\in\Omega}\min[Bel( \omega|x),1-Pl(\omega|x)].\end{split} \tag{19}\] The example in Figure 10 shows a dataset of three classes with a zone of ignorance for some labels (between the green and red classes). Probabilistic (4)-(5) and epistemic (8) uncertainties cannot model the imprecision present in the labels, this less complete uncertainty zone is represented in 10b. The previous uncertainty resulting from the sum of the discord and the non-specificity is presented in Figure 11. It manages both exploration 11a and exploitation 11b to give a better representation of the uncertainty 11c. Figure 11: Areas of uncertainty corresponding to the datasets (a)a according to the non-specificity, the discord and the total Klir uncertainty. Figure 12: Areas of uncertainty corresponding to the datasets (a)a according to the evidential epistemic uncertainty for green, red and blue classes. Figure 10: On the left, a sample of a dataset of three classes with an area of ignorance (labeled with imprecision) and on the right areas of uncertainty according to non-evidential uncertainty sampling. Figure 13: Areas of uncertainty for evidential epistemic and aleatoric uncertainties, according to (a)a. The extension of the epistemic uncertainty, also introduced in this paper, is presented in the following experiments. First, the evidential epistemic areas of uncertainties for each of the three classes are presented in Figure 12. Then, the resulting evidential epistemic uncertainty of the model is deducted from equation (19) in Figure 13 along with the evidential aleatoric and total uncertainties. ## 5 Sampling on real world dataset Some datasets have been labeled in an uncertain and imprecise way by users during crowdsourcing campaigns [22]. We therefore have access to really imperfectly labeled datasets with rich labels. Conventional methods for computing model uncertainty do not take into account the degrees of imprecision of these rich labels. The two proposed methods are illustrated on Credal Dog-2, one of these datasets. Figure 14 shows the dataset on the two first components of a Principal Component Analysis. This is a two-class dataset represented in 14a with true classes and in 14b with uncertain and imprecise rich labels given by contributors. Darker dots indicate higher certainty, and vice versa. Figure 16 shows the result of the first proposed method, sampling by Klir uncertainty, on the dataset with rich labels. The non-specificity is presented 15a and can be interpreted as the ignorance zones of the model. 
Discord is also represented 15b and the total uncertainty 15c is the sum of the two, it is this latter information that is used to sample on the model uncertainty. The second proposed method, the extension of epistemic uncertainty, which is a reducible uncertainty applied to evidential reasoning, is presented in Figure 16. The irreducible aleatoric evidential uncertainty 16a is presented along with the reducible epistemic evidential uncertainty 16b. The total uncertainty 16c is the sum of the reducible and irreducible uncertainties. For active learning, it is not the total uncertainty, but the epistemic reducible uncertainty that is used. ## 6 Discussion & Conclusion The calculation of epistemic uncertainty (non-evidential) is demanding, and not necessarily accessible. It is, depending on the observations, necessary to go through several phases of computation, estimation of likelihood, maximum likelihood and optimization. In this paper, we have proposed two new uncertainty sampling strategies and a new way to represent them. With these two proposed methods, the use of Klir uncertainty and the extended evidential epistemic uncertainty, a simple calculation on the output of the model allows to obtain the uncertainties. The objective is to also take into account the uncertainty present in richer labels, which is currently not possible. The first strategy is based on Klir's uncertainty, combining discord (how self-conflicting the information is) and non-specificity (how ignorant the information is) in the model output. The second strategy extends epistemic (reducible) uncertainty to the evidential framework and to several classes, simplifying the computational phase. This simplicity obviously has a counterpart: the model must be able to deliver a mass function, to represent uncertainty and imprecision in the output. Such models exist but are not numerous, among them are the much quoted Evidential \(K\)-Nearest Neighbors [5], Evidential Decision Trees [4, 6], Evidential Random Forest and even Evidential Neural Networks [23]. The proposed methods are compatible with probabilistic models (since a probability is a special mass function) but the full depth of evidence modeling would be lost. The novelty of this work lies in the representation of new information for uncertainty sampling, rather than in performance comparison. The next step is to apply these models to active learning, where the learning model has access to a very limited number of labeled observations, and must choose the most relevant observations to label in order to increase performance. The ability of the model to define these areas of uncertainty, and to categorize these uncertainties, is then relevant information. Figure 16: Areas of evidential epistemic uncertainty corresponding to 14b. Figure 14: Credal Dog-2 dataset, Brittany breed is in green and Beagle in red. Figure 15: Areas of uncertainty corresponding to the dataset 14b according to the non-specificity, the discord and to the total Klir uncertainty.
2309.07927
Kid-Whisper: Towards Bridging the Performance Gap in Automatic Speech Recognition for Children vs. Adults
Recent advancements in Automatic Speech Recognition (ASR) systems, exemplified by Whisper, have demonstrated the potential of these systems to approach human-level performance given sufficient data. However, this progress doesn't readily extend to ASR for children due to the limited availability of suitable child-specific databases and the distinct characteristics of children's speech. A recent study investigated leveraging the My Science Tutor (MyST) children's speech corpus to enhance Whisper's performance in recognizing children's speech. They were able to demonstrate some improvement on a limited testset. This paper builds on these findings by enhancing the utility of the MyST dataset through more efficient data preprocessing. We reduce the Word Error Rate (WER) on the MyST testset from 13.93% to 9.11% with Whisper-Small and from 13.23% to 8.61% with Whisper-Medium, and show that this improvement can be generalized to unseen datasets. We also highlight important challenges towards improving children's ASR performance. The results showcase the viable and efficient integration of Whisper for effective children's speech recognition.
Ahmed Adel Attia, Jing Liu, Wei Ai, Dorottya Demszky, Carol Espy-Wilson
2023-09-12T06:58:18Z
http://arxiv.org/abs/2309.07927v3
Kid-Whisper: Towards Bridging the Performance Gap in Automatic Speech Recognition for Children vs. Adults ###### Abstract Recent advancements in Automatic Speech Recognition (ASR) systems, exemplified by Whisper, have demonstrated the potential of these systems to approach human-level performance given sufficient data. However, this progress doesn't readily extend to ASR for children due to the limited availability of suitable child-specific databases and the distinct characteristics of children's speech. A recent study investigated leveraging the My Science Tutor (MyST) children's speech corpus to enhance Whisper's performance in recognizing children's speech. They were able to demonstrate some improvement on a limited testset. This paper builds on these findings by enhancing the utility of the MyST dataset through more efficient data preprocessing. We reduce the Word Error Rate (WER) on the MyST testset 13.93% to 9.11% with Whisper-Small and from 13.23% to 8.61% with Whisper-Medium and show that this improvement can be generalized to unseen datasets. We also highlight important challenges towards improving children's ASR performance. The results showcase the viable and efficient integration of Whisper for effective children's speech recognition. Ahmed Adel Attia\({}^{1}\), Jing Liu\({}^{1}\), Wei Ai\({}^{1}\), Dorottya Demszky\({}^{2}\), Carol Espy-Wilson\({}^{1}\)\({}^{1}\)University of Maryland College Park, MD, USA \({}^{2}\)Stanford University, CA, USA Whisper, children ASR, My Science Tutor, MyST, CSLU kids, automatic speech recognition ## 1 Introduction Automatic Speech Recognition (ASR) has witnessed a boom in recent years through utilizing huge amounts of transcribed speech scrapped from the internet. Whisper [1] was able to approach human-level accuracy by utilizing 680K hours of speech data. XLS-R [2] pre-trains on 436K hours of untranscribed speech in a self-supervised manner and 65K hours of transcribed speech. These models were able to achieve state-of-the-art (SOTA) results by leveraging huge amounts of data. ASR models still underperform with low-resource languages and tasks. Recent works have attempted to explore how ASR models performance can be improved for low-resource languages [3, 4, 5, 6] but they haven't caught up with high-resource languages. Children ASR is considered a low resource task and previous works have demonstrated the gap between children and adult ASR even in English. The main reason for that has been attributed to inter-speaker variability due to varying developmental rates and intra-speaker variability due to underdeveloped pronunciation skills [7, 8, 9, 10, 11, 12]. Current ASR models trained on adult speech are not capable of learning these variabilities as they are mostly unseen in the training data. Moreover, children's speech databases are limited and difficult to collect and transcribe [13]. In this work, we explore how Whisper can be fine-tuned on children's speech. We chose Whisper because of its massive training data which makes it more likely to generalize to unseen and uncommon speech patterns. Additionally, Whisper has been shown to be noise-robust [14]. We take advantage of the My Science Tutor (MyST) speech corpus [15] which is the largest publicly available children's speech corpus, provided free to academics for research purposes. A recent study [16] has attempted to adapt Whisper to the MyST corpus. 
They found that the quality of audio files as well as transcriptions in the MyST corpus varies, and were able to extract 65 hours of well-transcribed speech from the 197 hours of transcribed speech provided in MyST. We expand upon their work by outlining a more efficient data preprocessing scheme and extracting a total of 179.2 hours, which we show improves the performance and robustness of Whisper. Additionally, we maintain the train/test/development splits provided in the MyST corpus to ensure there's no overlap in speakers between data splits. We demonstrate tangible improvement on the MyST testset, reducing the Word Error Rate (WER) of the Small Whisper model from 13.93% to 9.11% and that of the Medium model from 13.23% to 8.61%. This also leads to improving to WER on the spontaneous part of the CSLU Kids dataset from 32.00% to 27.16% with the Small model, and from 37.04% to 16.53% with the Medium model without explicitly including this dataset in the training set. We begin by giving a quick overview of Whisper in Section 2, followed by a description of the datasets used and our proposed preprocessing scheme in Section 3. We follow that by showcasing our experiments and training parameters in Section 4. Results and further discussion are in Section 5. We end with a conclusion outlining plans for future research in Section 6, and acknowledgments in Section 7. ## 2 Model Description Whisper is a family of ASR models with varying sizes, namely, Tiny, Base, Small, Medium, and Large. Models from Tiny to Medium have an English-only variant and a multilingual variant. The training data for Whisper includes 438K hours of English-to-English transcription, 117K hours covering 96 languages not including English, and 125K hours of speech spoken in different languages, transcribed in English. To filter out low-quality transcription, the training set was passed through an initial model, and files with a high WER were flagged and manually inspected to remove automatically transcribed and mistranscribed files. This substantial amount of training data helped Whisper achieve near human-level transcription, especially in English, with their Large model achieving a WER of 2.82 on the Librispeech clean test set. ## 3 Dataset Description and Processing We mainly focus on the MyST corpus in this study. However, we also discuss how well the results on MyST can be generalized beyond this corpus. For that purpose, we use the CSLU kids database [17]. Additionally, we study how finetuning affects the performance on adult speech by testing our models on the test-clean subset of Librispeech. In this section, we describe each corpus. ### My Science Tutor Dataset The MyST corpus is the largest publicly available children's speech corpus. It consists of 393 hours of conversational children's speech, recorded from virtual tutoring sessions in physics, geography, biology, and other topics. The corpus spans 1,371 third, fourth, and fifth-grade students although age and gender information for each student are not available. Around 197 hours of the dataset were transcribed, although the quality of transcriptions varies. To the best of our knowledge, the MyST corpus was not included in Whisper's training set. Upon manual inspection, some transcriptions were assigned to the wrong files completely. Provided Transcription: Um, the wires are like a pathway energy goes through it into the motor and makes it work. Actual Transcription: Um, because it's metal, and metal I think has energy. 
Other files appear to have been automatically transcribed with a lower-quality transcriber. Provided Transcription: No, I don't hearing even a candle burns. Actual Transcription: No, I don't hear anything when the candle burns. Additionally, some files have poor audio quality, with the children speaking too close to the microphone, which resulted in a high level of distortion in the audio files. To identify these files, we follow a similar technique as in [1], by passing the entire dataset through Whisper-Large and flagging files with WER larger than 50%. Additionally, one and two-word files were removed altogether, because they lacked the context to distinguish between homophones, like "to", "too" and "two". All files with no speech activity, i.e. files labeled as \(<\)DISCARD\(>\) or \(<\)NO_SIGNAL\(>\) or \(<\)SILENCE\(>\), were also removed from the dataset. Table 1 shows the effect of different filtering steps on total dataset duration and WER. According to our results, around 5 hours of the training data is either mistranscribed or has low audio quality and is responsible for increasing the WER on the training data by about 3%. Similar results can be inferred about the test and development sets. Additionally, short files which accounted for only 4 hours of the training data increased the WER by more than 7%. We will publish the list of flagged files on GitHub and link to it in the camera-ready manuscript. Files longer than 30 seconds in the training and development sets were also removed. That is because Whisper processes files in 30-second chunks, and any files longer than 30 seconds are truncated. However, it is not possible to accurately truncate the transcriptions without any timestamps present, so these files are unsuitable for loss calculation. Additionally, the majority of the files in the MyST corpus were too short, with the average file length in the training data being 8 seconds. That would mean that training batches are mostly padding, leading to inefficient training. To remedy this, files within a single recording session were concatenated to be close to but not longer than 30 seconds while maintaining the context of the conversation within the recording session. Our filtering technique removes 17.8 hours from the entire dataset, which leaves us with 179.2 hours of well-transcribed speech in total. We maintain the train/development/test split provided in the MyST database to avoid any overlap in speakers between the splits. We ended up with 132.5 hours in the training data, 20.9 in the development data, and 25.8 in the test data. The text of the transcriptions was all upper case which destabilized the training. Consequently, all the text was mapped to be lowercase and further normalized using WhisperNormalizer1 Python package, which mapped tokens like "you're" to a standard "you are", as well as mapping all digit numbers to be spelled out. This ensured that only actual mistranscriptions would be penalized. This also reduces the \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Filtration Method** & **Train** & **Test** & **Development** \\ \hline **No Filteration** & 29.5 (145) & 26.2 (28.1) & 26.2 (25.5) \\ \hline **Removing Files w. WER \(>\) 50\%** & 26.8 (140) & 22.3 (26.7) & 22.3 (25.5) \\ \hline **Removing Files w. WER \(>\) 50\%** & 19.2 (132.5) & 14.2 (25.6) & 12.8 (21) \\ or **w. 
Less Than 3 Words** & & & \\ \hline \end{tabular} \end{table} Table 1: WER of Whisper-Large-v1 transcriptions of all three data splits of the MyST corpus before and after different levels of filtration (Duration of splits in hours). diversity in transcription quality, which was noted to harm the performance, unlike diversity in audio quality[1]. When contrasted with [18], their filtration method of removing all files longer than 20 seconds and shorter than 10 seconds yielded only 65 hours in total, which they partitioned into a 55-hour training set and a 10-hour test set and no development set. Their sets also suffered from overlapping speakers between the train and test sets. By sticking to the splits provided by MyST, our splits share no overlapping speakers, and have almost 3x the amount of data. To ensure fair comparison, the speech from all speakers in [16]'s test-set was removed from our training set, leaving us with a 125.7-hour training set. ### CSLU Kids The CSLU Kids speech corpus contains spontaneous and prompted speech from 1100 children between Kindergarten and Grade 10, with approximately 100 children per grade. In the scripted subset of the dataset, each child was prompted to read from a list of 319 scripts, that can either be simple words, sentences, or digit strings. Each utterance of spontaneous speech begins with a recitation of the alphabet followed by one minute of unprompted speech. The spontaneous speech in the CLSU corpus is distinct from the MyST corpus in that it is unstructured. Instead of talking about a particular topic, children were only given an open prompt like "Tell me about your favorite movie." [17]. Below is a sample such of transcription. ...usually just lay down on my bed, for now i don't like to i don't know, uh football okay first they are like standing on the ground and then they run and then they mm and if the girl pass the whole field you get a six points uh think it's twenty four i don't know think yeah they catch block and uh one uh the quarter back throws and the runners run uh it's blue uh and it has a big big big electric train set uh i have a workshop... The majority of the recordings in the spontaneous section of the CLSU corpus were longer than 30 seconds, and are thus unsuitable for training. Instead, we use the scripted portion of the CSLU corpus to help the model adapt to the channel differences between MyST and CSLU recordings, but still consider the spontaneous section as an out of sample testset. The transcriptions were of a high enough quality and filtering was not necessary, but they were all normalized to ensure a standard style of transcription. Files in the scripted portion of the dataset were shuffled and split into train, development, and test sets with an 80/10/10 split. The training set was 35 hours long, and the development and test sets were both around 4.8 hours long. Short files were combined to be close to 30 seconds as we did with the MyST corpus. ### Librespecch: test-clean The test-clean subset of the Librespecch corpus was used to test the ASR model's performance on Adult speech. It contains about 5.4 hours of speech read from Audiobooks from the LibriVox project. Since Librespecch was not used for training, we didn't combine the files, and we also didn't filter out any transcriptions to allow for reproducible and contrastable results. All transcriptions were normalized. ## 4 Training Details and Hyperparameters We followed the Huggingface Whipser finetuning tutorial 2. 
Our training scripts are available on Github 3, and we will link to the checkpoints on Huggingface in the camera-ready manuscript. Our evaluation script, which calculates the WER, was adapted from a code snippet by OpenAI4. All models were trained on Nvidia A6000 50GB GPU. Footnote 2: [https://huggingface.co/blog/fine-tune-whisper](https://huggingface.co/blog/fine-tune-whisper) Footnote 3: [https://github.com/ahmedadelphia/whisperKids](https://github.com/ahmedadelphia/whisperKids) Footnote 4: [https://github.com/openai/whisper/discussions/654](https://github.com/openai/whisper/discussions/654) For the Small models, we used a learning rate of \(1\times 10^{-5}\), batch size of 64, and 1 gradient accumulation step. For the Medium models, we used a learning rate of \(1\times 10^{-5}\), batch size of 32, and 1 gradient accumulation step. All models were finetuned until convergence and the best checkpoints were used for evaluation. ## 5 Results and Discussion ### Whisper Zero-shot Models Table 3 shows the WER for different Whisper models without any finetuning. Looking at these results, the gap between children and adult speech becomes immediately clear. The WER for the scripted part of CSLU Kids is between 6 and 10 times that of Librispeech, and the WER for MyST is between 3 and 5 times. In general, English models perform better than multilingual models, with the exception of the Medium model. That could be because the Medium model is big enough to benefit from seeing more data in different languages. The bigger the model, the better the performance, with the exception of Large-V1 being slightly worse than Medium. In fact, the performance seems to saturate beyond Medium and the difference in performance between Medium and Large-V2 is negligible. We note that the zero-shot WER reported here is smaller than that reported in [16]. We attribute this to the fact that they used a different normalizer than the one Whisper was trained with, which we validated by inspecting their datasets which are publicly accessible on Huggingface Based on these results, we finetune the Small and Medium models, both the English and multilingual variants. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Dataset** & **Training** & **Development** & **Testing** & **Filterted2** & **Age Group** \\ & **Duration** & **Duration** & **Duration** & **Filtration** & **Filtered2** & **Age Group** \\ \hline **MyST** & 125 & 20.9 & 25.8 & ✓ & 8-11 Years \\ **CSLU Kids - Scripted** & 35 & 4.8 & 4.8 & X & 6-11 Years \\ **Librespecch- testclean** & 0 & 0 & 5.4 & X & Adult \\ \hline \end{tabular} \end{table} Table 2: Summary of the Datasets Used. Durations in hours. ### Finetuned Whisper Models In this section, we showcase the performance of our finetuned models and contrast them with the models from [16], whose models are publicly available on Huggingface. We report the best-performing variants here. We tested all models on testsets from four corpora, listed in Table 2. Looking at the results in Table 4, it is clear that Whisper can and does improve its performance on the MyST dataset as well as CSLU, proving that transformer ASR models have the capacity to improve their performance on children's speech. We establish strong SOTA performance of 8.61% for the MyST testset. To the best of our knowledge, our best performance on the CSLU scripted dataset of 1.97% beats the current SOTA of 5.52% [19]. 
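As an illustration of the evaluation step, the snippet below is a minimal sketch of how a WER can be computed after text normalization, assuming the openai-whisper and jiwer packages (which are not necessarily the exact tools behind our released scripts). It reuses the mistranscription example quoted in Section 3.1 and the 50% threshold used there for flagging files.

```python
import jiwer  # assumed available: pip install jiwer
from whisper.normalizers import EnglishTextNormalizer  # from the openai-whisper package

normalizer = EnglishTextNormalizer()

def normalized_wer(reference: str, hypothesis: str) -> float:
    """WER after applying the same English text normalizer to both sides."""
    return jiwer.wer(normalizer(reference), normalizer(hypothesis))

# Mistranscription example quoted in Section 3.1: the provided transcription
# versus what is actually said in the recording.
provided = "No, I don't hearing even a candle burns."
actual = "No, I don't hear anything when the candle burns."

wer = normalized_wer(actual, provided)
print(f"normalized WER = {wer:.2%}")

# In the preprocessing of Section 3.1, a file is flagged when the provided
# transcription and a Whisper-Large pass disagree by more than 50% WER.
print("flagged:", wer > 0.5)
```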
We also show improvement on unseen datasets, since both our models trained on just MyST, or a combination of MyST and CSLU scripted data show improvement on CSLU spontaneous speech without any training on speech from this dataset. Our best performing model on the CSLU spontaneous dataset scores 16.53% WER, which is about half the WER of zeroshot Whisper. Additionally, our models "forget" less about the adult speech than the baseline, with our models seeing a degradation of only about 1%. Medium models outperformed Small models, and generalized better to unseen datasets. The English-only variant of the Small model showed significant improvement over the multilingual variant in seen and unseen datasets. The Medium multilingual variant performed slightly better on the MyST dataset when finetuned exclusively on it, but the English-only variant generalized better to unseen data. Multilingual models in both sizes had higher WER for Librispeech. Looking at the results for the scripted portion of the CSLU corpus, it is clear that the lack of context in these script harm the performance of the models that weren't trained on speech from this dataset. However, the performance improved significantly when speech from this dataset was included in the training data, mainly because of the lack of variability on the scripts, unlike the more diverse MyST or CSLU spontaneous datasets. We also attribute the gap in performance between the MyST and CSLU spontaneous datasets to the fact that speech in the MyST corpus is more structured than the CSLU spontaneous dataset. This shows that one of the reasons behind the gap in performance between adult and children's ASR is that the decoder in Whisper, which acts as an audio-condtional language model, is not well adapted to the variablility found in children's speech, where they can suddenly change topic several times in a short period. ## 6 Conclusions and Future Work In this paper, we outlined how Whisper, a SOTA ASR system can be finetuned on children's speech using MyST, the largest publically available conversational children's speech corpus. We showcased a way to filter out mistranscribed files from the corpus and established a strong baseline for children's speech recognition. Our finetuning reduced the WER by 4 to 5% and reduced the gap between adult and children's speech. We also outlined some of the challenges that faces children ASR, namely the fact that audio-conditional language models are not well adapted to the variability in children's speech. In the future, we will explore the noise robustness of Whisper. Specifically we will look at babble noise and other typical classroom nonspeech sounds and how they can affect performance, and how to improve such robustness in children's ASR. We will also explore whether these models are biased towards a certain gender, racial group or age group. The authors of [20] developed grade-specific ASR models, and proposed grouping different age groups separately, instead of under the umbrella term "children speech". Their suggested grouping was kindergarten; 1st grade; 2nd and 3rd grade; and 4th grade and above, and they noted that it is possible to achieve adult-like performance with the latter group. We aim to expand upon their work in the future, exploring whether their results can be replicated with large transformer ASR models and whether such bias against youger children can be mitigated. 
## 7 Acknowledgments The authors of this paper thank Wayne Ward for sharing his experience with Whisper, MyST and other children databases. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Training Data**} & \multirow{2}{*}{**MyST**} & **CSLU Kids** & **CSLU Kids** & **Librespeech** \\ & & & **Scripted** & **Spontaneous** & **testclean** \\ \hline \multicolumn{5}{|c|}{**Small**} \\ \hline **ML - Zeroshot** & - & 14.06 & 25.15 & 36.36 & 3.39 \\ **EN - Zeroshot** & - & 13.93 & 21.31 & 32.00 & **3.05** \\ \hline **EN - [16]** & MyST55H & 13.23 & 31.26 & 28.63 & 5.40 \\ **ML** & MyST & 11.80 & 55.51 & 28.53 & 6.23 \\ **ML** & MyST + CSLU & 12.11 & 2.74 & 32.72 & 7.97 \\ **EN** & MyST & **9.11** & 33.85 & 28.47 & **4.18** \\ **EN** & MyST + CSLU & 9.21 & **2.59** & **27.16** & 4.74 \\ \hline \multicolumn{5}{|c|}{**Medium**} \\ \hline **ML - Zeroshot** & - & 13.23 & 18.57 & 31.85 & 3.02 \\ **EN - Zeroshot** & - & 12.90 & 18.62 & 37.04 & **2.76** \\ \hline **EN - [16]** & MyST55H & 14.40 & 28.31 & 26.76 & 8.66 \\ **ML** & MyST & **8.61** & 30.10 & 24.26 & 5.32 \\ **ML** & MyST + CSLU & 8.99 & **1.97** & 20.28 & 4.28 \\ **EN** & MyST & 8.91 & 47.94 & 25.56 & 3.95 \\ **EN** & MyST + CSLU & 8.85 & 2.38 & **16.53** & **3.52** \\ \hline \end{tabular} \end{table} Table 4: WER on different test sets for different Whisper Models. EN stands for English-only model and ML stands for multilingual model. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**MyST**} & **CSLU Kids** & **CSLU Kids** & **Librespeech** \\ & & **Scripted** & **Spontaneous** & **testclean** \\ \hline **Tiny** & 21.16 & 74.98 & 57.01 & 7.49 \\ **Tiny.en** & 18.34 & 61.04 & 45.29 & 5.59 \\ **Base** & 18.54 & 40.20 & 43.71 & 4.98 \\ **Base.en** & 15.57 & 33.18 & 38.57 & 4.15 \\ **Small** & 14.06 & 25.15 & 36.36 & 3.39 \\ **Small.en** & 13.93 & 21.31 & 32.00 & 3.05 \\ **Medium** & 12.90 & 18.62 & 37.04 & 2.76 \\ **Medium.en** & 13.23 & 18.57 & 31.85 & 3.02 \\ **Large-V1** & 14.15 & 21.50 & 45.18 & 2.98 \\ **Large-V2** & 12.80 & 17.22 & 29.39 & 2.82 \\ \hline \end{tabular} \end{table} Table 3: Zero-shot WER on different test sets for different Whisper Models Without Finetuning.
2309.00090
Benford's Law under Zeckendorf expansion
In the literature, Benford's Law is considered for base-b expansions where b>1 is an integer. In this paper, we investigate the distribution of leading "digits" of a sequence of positive integers under other expansions such as Zeckendorf expansion, and declare what Benford's Law should be under generalized Zeckendorf expansion.
Sungkon Chang, Steven J. Miller
2023-08-31T19:16:07Z
http://arxiv.org/abs/2309.00090v1
# Benford's Law under Zeckendorf expansion ###### Abstract In the literature, Benford's Law is considered for base-\(b\) expansions where \(b>1\) is an integer. In this paper, we investigate the distribution of leading "digits" of a sequence of positive integers under other expansions such as Zeckendorf expansion, and declare what Benford's Law should be under generalized Zeckendorf expansion. ## 1 Introduction Introduced in [2, 18] is a probability distribution of the leading decimal digits of a sequence of positive integers, known as _Benford's Law_, and the exponential sequences such as \(\{3^{n}\}\) are standard examples of sequences that satisfy Benford's Law. Given \(d\in\{1,2,3,\ldots,9\}\), the probability of having the leading digit \(d\) in the decimal expansion of \(3^{n}\) is \(\log_{10}\frac{d+1}{d}\), and this distribution is Benford's Law. In fact, given a block \(B\) of digits of any length, the probability of having the leading block \(B\) in the decimal expansion of \(3^{n}\) is given by a similar logarithmic formula as well, and this is known as _strong Benford's Law;_ see Example 1.9. It is indeed a special property that a sequence has convergent proportions for each leading digit. For example, the proportion of odd integers \(2n-1\leq M\) with leading digit \(d\) oscillates, and does not converge as \(M\to\infty\); see Section 4.10. In the literature, Benford's Law is considered for base-\(b\) expansions where \(b>1\) is an integer. For example, the probabilities of the binary expansions of integer powers of \(3\) having the leading binary digits \(100_{2}\) and \(101_{2}\) are \(\log_{2}\frac{2^{2}+1}{2^{2}}\) and \(\log_{2}\frac{2^{2}+2}{2^{2}+1}\), respectively; for later reference, we may rewrite the values as follows: \[\log_{2}\frac{1+2^{-2}}{1}\approx 0.322,\quad\log_{2}\frac{1+2^{-1}}{1+2^{-2}} \approx 0.264. \tag{1}\] In this paper, we shall consider the distribution of leading "digits" of a sequence of positive integers under other expansions such as Zeckendorf expansion [19]. For example, let \(\{F_{n}\}_{n=1}^{\infty}\) for \(n\geq 1\) be the shifted Fibonacci sequence, i.e., \(F_{n+2}=F_{n+1}+F_{n}\) for all \(n\in\mathbb{N}\) and \(F_{1}=1\) and \(F_{2}=2\), and consider two Zeckendorf expansions: \(3^{5}=F_{12}+F_{5}+F_{2}\) and \(3^{8}=F_{18}+F_{16}+F_{14}+F_{11}+F_{7}+F_{5}\). Similar to the way the binary expansions are denoted, we may write \[3^{5}=100000010010_{F},\quad 3^{8}=101010010001010000_{F}\] where \(1\)'s are inserted at the \(k\)th place from the right if \(F_{k}\) is used in the expansions. **Definition 1.1**.: Let \(A=\{0,1\}\). Given \(\{s,n\}\subset\mathbb{N}\), let \(n=\sum_{k=1}^{M}a_{k}F_{M-k+1}\) be the Zeckendorf expansion of \(n\) (where \(a_{1}=1\)). We define \(\mathrm{LB}_{s}(n):=(a_{1},\ldots,a_{s})\in A^{s}\) if \(M\geq s\); otherwise, \(\mathrm{LB}_{s}(n)\) is undefined. The tuple \(\mathrm{LB}_{s}(n)\) is called _the leading block of \(n\) with length \(s\) under Zeckendorf expansion_. For example, \(\mathrm{LB}_{3}(3^{5})=(1,0,0)\), \(\mathrm{LB}_{3}(3^{8})=(1,0,1)\), and \(\mathrm{LB}_{6}(3^{8})=(1,0,1,0,1,0)\). Since \(\mathrm{LB}_{2}(n)=(1,0)\) for all integers \(n\geq 2\), it is only meaningful to consider the first three or more Zeckendorf digits. We prove Theorem 1.3 in this note. 
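The leading blocks of Definition 1.1 are easy to compute with the greedy algorithm behind Zeckendorf's Theorem. The following short Python sketch is an illustration only (not part of any proof); it reproduces the examples \(\mathrm{LB}_{3}(3^{5})=(1,0,0)\), \(\mathrm{LB}_{3}(3^{8})=(1,0,1)\), and \(\mathrm{LB}_{6}(3^{8})=(1,0,1,0,1,0)\).

```python
def zeckendorf_digits(n):
    """Zeckendorf coefficients (a_1, ..., a_M) of n, most significant first,
    with respect to F_1 = 1, F_2 = 2, F_{k+2} = F_{k+1} + F_k (greedy algorithm)."""
    assert n >= 1
    F = [1, 2]
    while F[-1] + F[-2] <= n:
        F.append(F[-1] + F[-2])
    while F[-1] > n:          # ensure the leading coefficient a_1 equals 1
        F.pop()
    digits = []
    for f in reversed(F):
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    return tuple(digits)

def leading_block(n, s):
    """LB_s(n): the first s Zeckendorf digits of n, or None if undefined."""
    d = zeckendorf_digits(n)
    return d[:s] if len(d) >= s else None

print(leading_block(3**5, 3))  # (1, 0, 0)
print(leading_block(3**8, 3))  # (1, 0, 1)
print(leading_block(3**8, 6))  # (1, 0, 1, 0, 1, 0)
```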
**Definition 1.2**.: Given a conditional statement \(P(n)\) where \(n\in\mathbb{N}\), and a subset \(A\) of \(\mathbb{N}\), let us define \[\mathrm{Prob}\left\{\,n\in A:P(n)\text{ is true}\,\right\}:=\lim_{n\to \infty}\frac{\#\{k\in A:P(k)\text{ is true},\ k\leq n\}}{\#\{k\in A:k\leq n\}}.\] For example, if \(A=\{n\in\mathbb{N}:n\equiv 2\mod 3\}\), then \(\mathrm{Prob}\left\{\,n\in A:n\equiv 1\mod 5\,\right\}=\frac{1}{5}\). If \(A\) is finite, the limit always exists. Let \(\phi\) be the Golden ratio. The following is an analogue of Benford's Law under binary expansion demonstrated in (1). **Theorem 1.3**.: _Let \(a>1\) be an integer._ \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{3}(a^{n})=(1,0, 0)\,\right\} =\,\log_{\phi}(1+\phi^{-2})\approx.672,\] \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{3}(a^{n})=(1,0, 1)\,\right\} =\,\log_{\phi}\frac{\phi}{1+\phi^{-2}}\approx.328.\] In particular, they exist! Although the probabilities are different from the binary cases, the structure of the log expressions in Theorem 1.3 is quite similar to that of the binary expansions in (1), i.e., the denominators of the quotients express the leading digits in power expansions with respect to their bases. The exponential sequences \((a^{n})_{n=1}^{\infty}\) where \(a>1\) is an integer are standard sequences that satisfy Benford's Law under base-\(b\) expansion. Motivated from these standard examples, we define Benford's Law under Zeckendorf expansion to be the above distribution of the leading blocks \((1,0,0)\) and \((1,0,1)\) under Zeckendorf expansion; see Definition 3.6. The exponential sequences \(\{a^{n}\}_{n=1}^{\infty}\) are standard sequences for so-called _strong Benford's Law under base-\(b\) expansion_ as well; see Example 1.9. We introduce below the probability of the leading Zeckendorf digits of \(a^{n}\) with arbitrary length, which is a generalization of Theorem 1.3; this result is rewritten in Theorem 3.8 with more compact notation. **Definition 1.4**.: Let \(A=\{0,1\}\), and let \(s\geq 2\) be an integer. Let \(\mathbf{b}=(b_{1},b_{2},\ldots,b_{s})\in A^{s}\) such that \(b_{1}=1\) and \(b_{k}b_{k+1}=0\) for all \(1\leq k\leq s-1\). We define \(\widetilde{\mathbf{b}}\) to be a tuple \((\widetilde{b}_{1},\ldots,\widetilde{b}_{s})\in A^{s}\) as follows. If \(1+\sum_{k=1}^{s}b_{k}F_{s-k+1}<F_{s+1}\), then \(\widetilde{b}_{k}\) for \(1\leq k\leq s\) are defined to be integers in \(A\) such that \(1+\sum_{k=1}^{s}b_{k}F_{s-k+1}=\sum_{k=1}^{s}\widetilde{b}_{k}F_{s-k+1}\) and \(\widetilde{b}_{k}\widetilde{b}_{k+1}=0\) for all \(1\leq k\leq s-1\). If \(1+\sum_{k=1}^{s}b_{k}F_{s-k+1}=F_{s+1}\), then \(\widetilde{b}_{1}:=\widetilde{b}_{2}:=1\), and \(\widetilde{b}_{k}:=0\) for all \(3\leq k\leq s\). For the case of \(1+\sum_{k=1}^{s}b_{k}F_{s-k+1}<F_{s+1}\), the existence of the tuple \(\widetilde{\mathbf{b}}\) is guaranteed by Zeckendorf's Theorem. **Theorem 1.5**.: _Let \(a>1\) and \(s\geq 2\) be integers. Let \(\mathbf{b}\) and \(\widetilde{\mathbf{b}}\) be tuples defined in Definition 1.4. 
Then,_ \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{s}(a^{n})=\mathbf{b}\,\right\} =\,\log_{\phi}\frac{\sum_{k=1}^{s}\widetilde{b}_{k}\phi^{-(k-1)}}{\sum_{k=1}^{ s}b_{k}\phi^{-(k-1)}}.\] For example, \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{6}(a^{n})=(1,0,0,0,1,0)\right\} =\,\log_{\phi}\frac{1+\phi^{-3}}{1+\phi^{-4}}\approx 0.157\] \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{6}(a^{n})=(1,0,1,0,1,0)\right\} =\,\log_{\phi}\frac{1+\phi^{-1}}{1+\phi^{-2}+\phi^{-4}}\] \[=\,\log_{\phi}\frac{\phi}{1+\phi^{-2}+\phi^{-4}}\approx 0.119.\] As in Benford's Law under Zeckendorf expansion, we define the probability distributions described in Theorem 3.8 to be _strong Benford's Law under Zeckendorf expansion_; see Definition 3.9. Exponential sequences are standard examples for Benford's Laws, but some exponential sequences do not satisfy Benford's Law under some base-\(b\) expansion. Let us demonstrate examples under Zeckendorf expansion. Let \(\{G_{n}\}_{n=1}^{\infty}\) be the sequence given by \(G_{k}=F_{2k}+F_{k}\) for \(k\in\mathbb{N}\). Then, given an integer \(s>1\), the \(s\) leading Zeckendorf digits of \(G_{k}\) is \(100\cdots 00_{F}\) as \(k\to\infty\) since the gap \(2k-k=k\) between the indices of \(F_{2k}\) and \(F_{n}\) approaches \(\infty\). Thus, \(\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{s}(G_{n})=(1,0,0,\ldots,0) \right\}=1\) for all \(s\in\mathbb{N}\), and the probabilities of other digits of length \(s\) are all (asymptotically) \(0\). Similar probability distributions occur for the Lucas sequence \(\{K_{n}\}_{n=1}^{\infty}\) given by \(K_{k+2}=K_{k+1}+K_{k}\) for \(k\in\mathbb{N}\) and \((K_{1},K_{2})=(2,1)\). Given \(s\in\mathbb{N}\), the probabilities of having leading Zeckendorf digits of length \(s\) are entirely concentrated on one particular string of digits. For example, \(\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{10}(K_{n})=(1,0,0,0,1,0,0,0,1,0)\right\}=1\), and the probabilities of having other digits of length \(10\) is all (asymptotically) \(0\); see Example 5.10 for full answers. Generalized Zeckendorf expansions are introduced in [10, 17]. In Section 6, we prove Theorem 6.9 on the probability of the leading digits of \(a^{n}\) with arbitrary length under generalized Zeckendorf expansion, and define these probability distributions to be strong Benford's Law under generalized Zeckendorf expansion; see Definition 6.10. As in the concept of _absolute normal numbers_[12], we introduce in Definition 6.15 the notion of _absolute Benford's Law_, which is the property of satisfying strong Benford's Law under all generalized Zeckendorf expansions. For example, the sequence given by \(K_{n}=\left\lfloor\frac{\phi}{\sqrt{5}}(\frac{89}{55})^{n}\right\rfloor\) for \(n\in\mathbb{N}\) satisfies strong Benford's Law under all generalized Zeckendorf expansions; see Example 6.18. Its first fifteen values are listed below: \[(1,1,3,4,8,12,21,34,55,89,144,233,377,610,988).\] They are nearly equal to the Fibonacci terms as \(\frac{89}{55}\) is the \(10\)th convergent of the continued fraction of \(\phi\). The differences amplify as we look at higher terms, and even under Zeckendorf expansion, this sequence satisfies strong Benford's Law. It is also natural to consider sequences that have different distributions, and in this note we investigate other distributions of leading digits under generalized Zeckendorf expansions as well. In the following paragraphs, we shall explain this approach using base-\(10\) expansion. 
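Before turning to base-10, we note that Theorem 1.3 can be checked numerically. The sketch below reuses the helper functions from the previous code sketch, tallies the empirical frequencies of \(\mathrm{LB}_{3}(2^{n})\) for \(n\leq 2000\), and prints them next to the predicted values \(\log_{\phi}(1+\phi^{-2})\approx 0.672\) and \(\log_{\phi}\frac{\phi}{1+\phi^{-2}}\approx 0.328\); the cut-off \(N=2000\) is an arbitrary choice for illustration.

```python
import math
from collections import Counter

PHI = (1 + math.sqrt(5)) / 2

def lb3_frequencies(a, N):
    """Empirical frequencies of LB_3(a^n) over n = 1..N (uses leading_block above)."""
    counts = Counter(b for n in range(1, N + 1)
                     if (b := leading_block(a**n, 3)) is not None)
    total = sum(counts.values())
    return {block: count / total for block, count in counts.items()}

print(lb3_frequencies(2, 2000))
print(math.log(1 + PHI**-2, PHI),          # predicted frequency of (1, 0, 0)
      math.log(PHI / (1 + PHI**-2), PHI))  # predicted frequency of (1, 0, 1)
```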
The results for other expansions are introduced in Section 5 and 6. Strong Benford's Law for the sequence \(\{3^{n}\}_{n=1}^{\infty}\) under decimal expansion follows from the equidistribution of the fractional part of \(\log_{10}(3^{n})\) on the interval \((0,1)\). We realized that the function \(\log_{10}(x)\) is merely a tool for calculating the leading digits, and that other distributions of leading digits naturally emerge as we modified the function \(\log_{10}(x)\). We noticed that the frequency of leading digits converges when a continuation of the sequence \(\{10^{n-1}\}_{n=1}^{\infty}\) has convergent behavior over the intervals \([n,n+1]\), and we phrase it more precisely below. **Definition 1.6**.: Let \(\{H_{n}\}_{n=1}^{\infty}\) be an increasing sequence of positive integers. A continuous function \(h:[1,\infty)\to\mathbb{R}\) is called a _uniform continuation of \(\{H_{n}\}_{n=1}^{\infty}\)_ if \(h(n)=H_{n}\) for all \(n\in\mathbb{N}\), and the following sequence of functions \(h_{n}:[0,1]\to[0,1]\) uniformly converges to an increasing (continuous) function: \[h_{n}(p)=\frac{h(n+p)-h(n)}{h(n+1)-h(n)}.\] If \(h\) is a uniform continuation of \(\{H_{n}\}_{n=1}^{\infty}\), let \(h_{\infty}:[0,1]\to[0,1]\) denote the increasing continuous function given by \(h_{\infty}(p)=\lim_{n\to\infty}h_{n}(p)\). Theorem 1.8 below is a version specialized for decimal expansion. The proof of this theorem is similar to, and much simpler than the proof of Theorem 5.6 for Zeckendorf expansion, and we leave it to the reader. **Definition 1.7**.: If \(\alpha\in\mathbb{R}\), we denote the fractional part of \(\alpha\) by \(\operatorname{frc}(\alpha)\). Given a sequence \(\{K_{n}\}_{n=1}^{\infty}\) of real numbers, we say, \(\operatorname{frc}(K_{n})\)_is equidistributed_ if \(\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{frc}(K_{n})\leq \beta\,\big{\}}=\beta\) for all \(\beta\in[0,1]\). For example, consider the sequence \(\{\operatorname{frc}(n\pi)\}_{n=1}^{\infty}\) where \(\pi\approx 3.14\) is the irrational number. Then, by Weyl's Equidistribution Theorem, \(\operatorname{frc}(n\pi)\) is equidistributed on the interval \([0,1]\). The sequence \((\sin^{2}(n))_{n=1}^{\infty}\) is an example of sequences that have \(\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\sin^{2}(n)\leq\beta\,\big{\}}\) defined for each \(\beta\in[0,1]\), and the probability is \(\frac{1}{\pi}\cos^{-1}(1-2\beta)\). Thus, it is not equidistributed on \([0,1]\). **Theorem 1.8**.: _Let \(h:[1,\infty)\to\mathbb{R}\) be a uniform continuation of the sequence \(\{10^{k-1}\}_{n=1}^{\infty}\). Then, there is a sequence \(\{K_{n}\}_{n=1}^{\infty}\) of positive integers approaching \(\infty\) (see Theorem 6.19 for the description of \(K_{n}\)) such that \(\operatorname{frc}\big{(}h^{-1}(K_{n})\big{)}\) is equidistributed._ _Let \(\{K_{n}\}_{n=1}^{\infty}\) be a sequence of positive integers approaching \(\infty\) such that \(\operatorname{frc}\big{(}h^{-1}(K_{n})\big{)}\) is equidistributed. Let \(d\) be a positive integer of \(s\) decimal digits. Then, the probability of the \(s\) leading decimal digits of \(K_{n}\) being \(d\) is equal to_ \[{h_{\infty}}^{-1}\left(\frac{(d+1)-10^{s-1}}{9\cdot 10^{s-1}}\right)-{h_{ \infty}}^{-1}\left(\frac{d-10^{s-1}}{9\cdot 10^{s-1}}\right).\] **Example 1.9**.: Let \(h:[1,\infty)\to\mathbb{R}\) be the function given by \(h(x)=10^{x-1}\). Then, \(h\) is a uniform continuation of the sequence \(\{10^{n-1}\}\), and \(h_{\infty}(p)=\frac{1}{9}(10^{p}-1)\). 
By Theorem 6.19, the sequence \(\{K_{n}\}_{n=1}^{\infty}\) with the equidistribution property is given by \(K_{n}=\lfloor 10^{n+\operatorname{frc}(n\pi)}\rfloor\), but there are simpler sequences such as \(\{3^{n}\}_{n=1}^{\infty}\) that have the property. By Theorem 1.8, the probability of the \(s\) leading decimal digits of \(K_{n}\) being \(d\) is equal to \[\log_{10}\frac{d+1}{10^{s-1}}-\log_{10}\frac{d}{10^{s-1}}\,=\,\log_{10}\left(1 +\frac{1}{d}\right)\] where \(d\in\mathbb{N}\) has \(s\) decimal digits. This distribution is known as strong Benford's Law under base-10 expansion, and we may say that strong Benford's Law under base-10 expansion arises from the logarithmic continuation of \(\{10^{n-1}\}_{n=1}^{\infty}\). For this reason, we call \(h(x)\,a\)_Benford continuation of the base-10 sequence_. **Example 1.10**.: Let \(h:[1,\infty)\to\mathbb{R}\) be the function whose graph is the union of the line segments from \((n,10^{n-1})\) to \((n+1,10^{n})\) for all \(n\in\mathbb{N}\). Let \(\{K_{n}\}_{n=1}^{\infty}\) be the sequence given by \(K_{n}=\left\lfloor 10^{n+\log_{10}(9\operatorname{frc}(n\pi)+1)}\right\rfloor\) as described in Theorem 6.19. Then, the fractional part \(\operatorname{frc}\left(h^{-1}(K_{n})\right)\) is equidistributed. The limit function \(h_{\infty}\) defined in Theorem 1.8 is given by \(h_{\infty}(p)=p\) for \(p\in[0,1]\), and given a decimal expansion \(d\) of length \(s\), the probability of the \(s\) leading decimal digits of \(K_{n}\) being \(d\) is (uniformly) equal to \(1/(9\cdot 10^{s-1})\) by Theorem 1.8. The first ten values of \(K_{n}\) are \[(22,354,4823,60973,737166,8646003,99203371,219467105,\,3469004940,47433388230).\] For example, if we look at many more terms of \(K\), then the first two digits \(22\) of \(K_{1}\) will occur as leading digits with probability \(1/90\approx 0.011\), and the probability for the digits \(99\) is also \(1/90\). As in constructing a normal number, it's tricky to construct a sequence of positive integers with this property, and prove that it has the property. Let us note here that the \(s\) leading decimal digits of the sequence \(\{n\}_{n=1}^{\infty}\) has frequency close to \(1/(9\cdot 10^{s-1})\), but it oscillates and does not converge as more terms are considered; see Theorem 4.10 for a version under Zeckendorf expansion. In Example 5.4, we demonstrate the "line-segment" continuation of the Fibonacci sequence. In Example 5.7, we use a more refined "line segment continuation", and demonstrate a uniform continuation that generates the distribution of leading blocks that satisfies strong Benford's Law up to the 4th digits, but does not satisfy the law for the leading blocks of length \(>4\). Theorem 1.8 suggests that given a uniform continuation \(h\) of the sequence \(\{10^{n-1}\}_{n=1}^{\infty}\), we associate certain distributions of leading digits, coming from the equidistribution property. It's natural to consider the converse that given a sequence \(\{K_{n}\}_{n=1}^{\infty}\) with "continuous distribution of leading digits" of arbitrary length, we associate a certain uniform continuation of \(\{10^{n-1}\}_{n=1}^{\infty}\). Theorem 1.11 below is a version for base-10 expansion. In Section 5, we introduce results on this topic for the Fibonacci sequence \(\{F_{n}\}_{n=1}^{\infty}\). The proof of Theorem 1.11 is similar to, and simpler than Theorem 5.18 for the Fibonacci expansion, and leave it to the reader. 
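The construction in Example 1.10 is straightforward to reproduce numerically. The short sketch below computes the first ten values of \(K_{n}=\left\lfloor 10^{n+\log_{10}(9\operatorname{frc}(n\pi)+1)}\right\rfloor\) using the identity \(10^{n+\log_{10}y}=y\cdot 10^{n}\); double-precision floating-point arithmetic suffices at this scale, and the output agrees with the values listed above.

```python
import math

def frc(x):
    """Fractional part of x."""
    return x - math.floor(x)

# K_n = floor(10 ** (n + log10(9 * frc(n * pi) + 1)))
#     = floor((9 * frc(n * pi) + 1) * 10 ** n), which is friendlier numerically.
K = [math.floor((9 * frc(n * math.pi) + 1) * 10**n) for n in range(1, 11)]
print(K)
# Expected, from Example 1.10:
# [22, 354, 4823, 60973, 737166, 8646003, 99203371, 219467105, 3469004940, 47433388230]
```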
**Theorem 1.11**.: _Let \(\{K_{n}\}_{n=1}^{\infty}\) be a sequence of positive integers approaching \(\infty\). Let \(h_{K}^{*}:[0,1]\to[0,1]\) be the function given by \(h_{K}^{*}(0)=0\), \(h_{K}^{*}(1)=1\), and_ \[h_{K}^{*}(\tfrac{1}{9}(\beta-1))\,=\,\lim_{s\to\infty}\operatorname{Prob}\big{\{} \,n\in\mathbb{N}:\text{The s leading decimal digits of $K_{n}$ is $\leq\left\lfloor 10^{s-1}\beta\right\rfloor$}\,\big{\}} \tag{2}\] _where \(\beta\) varies over the real numbers in the interval \([1,10)\) and we assume that the RHS of (2) is defined for all \(\beta\in[1,10)\). If \(h_{K}^{*}\) is an increasing continuous function, then there is a uniform continuation \(h\) of the sequence \(\{10^{n-1}\}_{n=1}^{\infty}\) such that \({h_{\infty}}^{-1}=h_{K}^{*}\), and \(\operatorname{frc}\big{(}h^{-1}(K_{n})\big{)}\) is equidistributed._ The remainder of this paper is organized as follows. In Section 2, the notations for sequences and coefficient functions are introduced. In Section 3, the distribution of leading blocks of exponential sequences under Zeckendorf expansion is introduced, and Benford's Law and strong Benford's Law under Zeckendorf expansion are declared. Introduced in Section 4 are the method of calculating the distribution results introduced in Section 3, and also the distribution results for monomial sequences \(\{n^{a}\}_{n=1}^{\infty}\). In Section 5, we introduce a general approach to the distributions of leading blocks under Zeckendorf expansion that are different from that of Benford's Law. The approach establishes the correspondence between the continuations of the Fibonacci sequences and the distributions of leading blocks under Zeckendorf expansion. In Section 6, we introduce definitions and results that generalize the contents of Sections 3, 4, and 5 for generalized Zeckendorf expansions. The absolute Benford's Law mentioned earlier in this section is properly introduced in Section 6 as well. In Section 7, the Benford behavior introduced in Theorem 1.14 is generalized for the setting of two generalized Zeckendorf expansions. ## 2 Notation and definitions **Notation 2.1**.: Let \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\), and let \(\Omega_{n}:=\{k\in\mathbb{N}:k\leq n\}\). For simpler notation, let us use a capital letter for a sequence of numbers, and use the infinite tuple notation for listing its values, e.g., \(Q=(2,4,6,8,\ldots)\). We use the usual subscript notation for individual values, e.g., \(Q_{3}=6\). **Definition 2.2**.: Tuples \((c_{1},c_{2},\ldots,c_{t})\in\mathbb{N}_{0}^{t}\) where \(t\in\mathbb{N}\) are called _coefficient functions of length_ \(t\) if \(c_{1}>0\). If \(\epsilon\) is a coefficient function of length \(t\), we denote the \(k\)th entry by \(\epsilon(k)\) (if \(k\leq t\)), and its length \(t\) by \(\operatorname{len}(\epsilon)\). For a coefficient function \(\epsilon\), let \(\epsilon*Q\) denote \(\sum_{k=1}^{t}\epsilon(k)Q_{t-k+1}\) where \(t=\operatorname{len}(\epsilon)\), and let \(\epsilon\cdot Q\) denote \(\sum_{k=1}^{t}\epsilon(k)Q_{k}\). If \(\epsilon=(4,1,6,2)\) and \(Q\) is a sequence, then \(\epsilon*Q=4Q_{4}+Q_{3}+6Q_{2}+2Q_{1}\), and \(\epsilon\cdot Q=4Q_{1}+Q_{2}+6Q_{3}+2Q_{4}\). ## 3 Benford's Law for Zeckendorf expansions Let \(a\) and \(b\) be two integers \(>1\) such that \(\gcd(a,b)=1\).
The sequence \(K\) be the sequence given by \(K_{n}=a^{n}\) is a standard example of sequences that satisfy Benford's Law under base-\(b\) expansion. We shall declare the behavior of the leading digits of the Zeckendorf expansion of \(a^{n}\) to be Benford's Law under Zeckendorf expansion. Let us begin with formulating Zeckendorf's Theorem in terms of coefficient functions. **Definition 3.1**.: Let \(\mathscr{F}\) be the set of coefficient functions \(\epsilon\) such that \(\epsilon(k)\leq 1\) for all \(k\leq\operatorname{len}(\epsilon)\), and \(\epsilon(k)\epsilon(k+1)=0\) all \(k\leq\operatorname{len}(\epsilon)-1\). Let \(F\) be the shifted Fibonacci sequence such that \(F_{n+2}=F_{n+1}+F_{n}\) for all \(n\in\mathbb{N}\) and \((F_{1},F_{2})=(1,2)\). Let \(\phi\) be the golden ratio, let \(\omega:=\phi^{-1}\), and let \(\widehat{F}=(1,\omega,\omega^{2},\ldots)\) be the sequence given by \(\widehat{F}_{n}=\omega^{n-1}\). Recall the product notation from Definition 2.2. **Theorem 3.2** ([19], Zeckendorf's Theorem).: _For each positive integer \(n\), there is a unique coefficient function \(\epsilon\in\mathscr{F}\) such that \(n=\epsilon*F\)._ Recall the example \(3^{5}=F_{12}+F_{5}+F_{2}\). If \(\epsilon=(1,0,0,0,0,0,0,1,0,0,1,0)\), then \(\epsilon\in\mathscr{F}\) and \(3^{5}=c*F\). **Definition 3.3**.: The expression \(n=c*F\) where \(n\in\mathbb{N}\) and \(\epsilon\in\mathscr{F}\) is called _the \(\mathscr{F}\)-expansion of \(n\)_ or _the Zeckendorf expansion of \(n\)_. ### Benford's Law If \(\epsilon\in\mathscr{F}\) and \(\operatorname{len}(\epsilon)\geq 2\), then \((\epsilon(1),\epsilon(2))=(1,0)\) is always the case, and hence, the probability of having \((\epsilon(1),\epsilon(2))=(1,0)\) is \(1\). For the purpose of demonstration, we consider the first three entries of \(\epsilon\). To denote arbitrarily many _leading blocks of coefficient functions_, which are defined in Definition 3.4 below, we shall use the boldface font and subscripts, e.g., \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\), and in particular, \(\mathbf{b}_{k}\) for \(k=1,2\) are not numbers, but tuples. The reader must not be confused with the entries of a sequence \(Q\), e.g., \(Q_{1}\) and \(Q_{2}\), which are numbers, and we use the regular font for sequences. **Definition 3.4**.: A coefficient function of length \(s\) is also called _a leading block of length \(s\)_ in the context of investigating the frequency of leading blocks, and it is denoted with boldface fonts, e.g. \(\mathbf{b}=(1,0,0,1)\in\mathscr{F}\), \(\mathbf{b}(3)=0\), and \(\mathbf{b}(4)=1\). Let \(\mathscr{F}_{3}:=\{\mathbf{b}_{1},\mathbf{b}_{2}\}\) where \(\mathbf{b}_{1}=(1,0,0)\), \(\mathbf{b}_{2}=(1,0,1)\) are leading blocks of length \(3\), and the set is called _the set of leading blocks of length \(3\) under \(\mathscr{F}\)-expansion_. If \(\mathbf{b}\in\mathscr{F}_{3}\) and \(\mathbf{b}=\mathbf{b}_{1}\), then define \(\widetilde{\mathbf{b}}:=\mathbf{b}_{2}\), and and if \(\mathbf{b}\in\mathscr{F}_{3}\) and \(\mathbf{b}=\mathbf{b}_{2}\), then define \(\widetilde{\mathbf{b}}:=(1,1,0)\). The block \(\widetilde{\mathbf{b}}=(1,1,0)\) is not a member of \(\mathscr{F}\), and hence, does not occur as the leading block of an \(\mathscr{F}\)-expansion, but it's convenient to use for Theorem 3.5, where we rely on the equality \(\widetilde{\mathbf{b}}\cdot(1,\omega^{1},\omega^{2})=\phi\); see Definitions 2.2 and 3.1. 
The block \(\widetilde{\mathbf{b}}\) makes the statements of Definition 3.6 below more aesthetic, and the principle of defining an exclusive block such as \((1,1,0)\) for other generalized Zeckendorf expansions will be explained in Definition 3.7 and Section 6. The following is a special version of Corollary 4.7, and it is Theorem 1.3 written in terms of the dot product and blocks. Recall the notation \(\operatorname{LB}_{s}\) from Definition 1.1, the set \(\mathscr{F}_{3}\) from Definition 3.4, the sequence \(\widehat{F}\) from Definition 3.1, and the dot product from Definition 2.2. **Theorem 3.5**.: _Let \(K\) be a sequence given by \(K_{n}=a^{n}\) where \(a>1\) is an integer. Then, given \(\mathbf{b}\in\mathscr{F}_{3}\),_ \[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(K_{n})= \mathbf{b}\,\right\}\;=\;\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat {F}}{\mathbf{b}\cdot\widehat{F}}.\] Motivated from the distribution of these standard sequences, we introduce the following definition. **Definition 3.6**.: A sequence \(K\) of positive integers is said to _satisfy \(\mathscr{F}\)-Benford's Law_ or _satisfy Benford's Law under \(\mathscr{F}\)-expansion_ if given \(\mathbf{b}\in\mathscr{F}_{3}\), \[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,\colon\operatorname{LB}_{3}(K_{n })=\mathbf{b}\,\big{\}}\,=\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot \widehat{F}}{\mathbf{b}\cdot\widehat{F}}.\] Let us demonstrate how the structure of the formulas in Definition 3.6 compares with the one for base-10 expansion. Consider the two leading blocks \(\mathbf{c}_{1}=(2,1,2)\) and \(\mathbf{c}_{2}=(2,1,3)\) for base-10 expansion. Let \(b=10\). Then, strong Benford's Law for decimal expansion requires the probability of having the leading block \(\mathbf{c}_{1}\) to be \(\log_{10}\frac{213}{212}\), which is equal to \[\log_{b}\frac{\mathbf{c}_{2}\cdot(1,b^{-1},b^{-2})}{\mathbf{c}_{1}\cdot(1,b^ {-1},b^{-2})}\,=\,\log_{b}\frac{b^{2}\mathbf{c}_{2}\cdot(1,b^{-1},b^{-2})}{b^ {2}\mathbf{c}_{1}\cdot(1,b^{-1},b^{-2})}\,=\,\log_{b}\frac{\mathbf{c}_{2}\cdot (b^{2},b,1)}{\mathbf{c}_{1}\cdot(b^{2},b,1)}\,=\,\log_{10}\frac{213}{212}.\] The first expression in terms of the negative powers of \(b\) is analogous to the ones in Definition 3.6. ### Strong Benford's Law Under base-\(b\) expansion, a sequence \(K\) is said to satisfy strong Benford's Law if the probability of the first \(M\) leading digits of \(K_{n}\) satisfies a certain logarithmic distribution, and exponential sequences \(\{a^{n}\}_{n=1}^{\infty}\) where \(a>1\) is an integer are standard examples that satisfy strong Benford's Law under base-\(b\) expansion. In Corollary 4.7, we calculate the distribution of leading blocks of arbitrary length of the Zeckendorf expansions of exponential sequence \(\{a^{n}\}_{n=1}^{\infty}\). We declare this distribution to be _strong Benford's Law under Zeckendorf expansion_. We state the formal definition below. Recall the convolution \(*\) from Definition 2.2. **Definition 3.7**.: Given an integer \(s\geq 2\), let \(\mathscr{F}_{s}:=\{\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{\ell}\}\) be the finite set of the leading blocks of length \(s\) occurring in the \(\mathscr{F}\)-expansions of the positive integers such that \(1+\mathbf{b}_{k}*F=\mathbf{b}_{k+1}*F\) for all \(k\leq\ell-1\). The leading block \(\mathbf{b}_{\ell}\) is called _the largest leading block of length \(s\) under \(\mathscr{F}\)-expansion_. 
If \(s\) is even, then let \(\mathbf{b}_{\ell+1}:=(1,0,1,0,\ldots,1,0,1,1)\), and if \(s\) is odd, then it is \(\mathbf{b}_{\ell+1}:=(1,0,1,0,\ldots,1,1,0)\). If \(\mathbf{b}=\mathbf{b}_{k}\in\mathscr{F}_{s}\), then we denote \(\mathbf{b}_{k+1}\) by \(\widetilde{\mathbf{b}}\). Notice that the existence of \(\widetilde{\mathbf{b}}\) defined above is guaranteed by Zeckendorf's Theorem. Let us demonstrate examples of \(\mathbf{b}\) and \(\widetilde{\mathbf{b}}\). Let \(\mathbf{b}=(1,0,0,0,1,0)\in\mathscr{F}_{6}\). Then, \(\widetilde{\mathbf{b}}=(1,0,0,1,0,0)\in\mathscr{F}_{6}\), and \(1+\mathbf{b}*F=\widetilde{\mathbf{b}}*F\). If we list the coefficient functions in \(\mathscr{F}_{6}\) with respect to the lexicographical order, then \(\widetilde{\mathbf{b}}\) is the immediate successor of \(\mathbf{b}\) if \(\mathbf{b}\neq(1,0,1,0,1,0)\). For each case of \(s\) being even or odd, the largest leading block \(\mathbf{b}\) of length \(s\) satisfies \(1+\mathbf{b}*F=\widetilde{\mathbf{b}}*F\). If \(\mathbf{b}^{\prime}=(1,0,1,0,1,0)\), then \(\widetilde{\mathbf{b}}^{\prime}=(1,0,1,0,1,1)\), and below we shall demonstrate that the equality \(\widetilde{\mathbf{b}}^{\prime}\cdot\widehat{F}=\sum_{k=0}^{2}\omega^{2k}+\omega^ {5}=\phi\) makes the sum of the probabilities in Theorem 3.8 and Definition 3.9 be 1. Let us compare this setup with the case of base-10 expansion. Let \(\mathbf{c}=(4,5,6,7,8,9)\) be the leading block of length 6 for base-10 expansion, and let the sequence \(H\) given by \(H_{n}=10^{n-1}\) be the "base" sequence. Then, \(1+\mathbf{c}*H=\widetilde{\mathbf{c}}*H\) where \(\widetilde{\mathbf{c}}=(4,5,6,7,9,0)\). If we list all the coefficient functions of length 6, with respect to the lexicographical order, that are legal for base-10 expansion, then \(\widetilde{\mathbf{c}}\) is the immediate successor of \(\mathbf{c}\). If \(\mathbf{c}^{\prime}=(9,10,9,9,9,9)\), then we let \(\widetilde{\mathbf{c}}^{\prime}=(9,10,0,0,0,0)\), and \(\sum_{n=1}^{6}\widetilde{\mathbf{c}}^{\prime}(n)10^{n-1}=1+\mathbf{c}^{\prime }*H=10^{6}\). If strong Benford's Law under base-10 expansion is satisfied, the probability of having the leading block \(\mathbf{c}^{\prime}\) under base-10 expansion is \[\log_{10}\frac{\widetilde{\mathbf{c}}^{\prime}*H}{\mathbf{c}^{\prime}*H}\,=\, \log_{10}\frac{\widetilde{\mathbf{c}}^{\prime}\cdot\widehat{H}}{\mathbf{c}^{ \prime}\cdot\widehat{H}}\,=\,1-\log_{10}\mathbf{c}^{\prime}\cdot\widehat{H}\] where \(\widehat{H}\) is the sequence given by \(\widehat{H}_{n}=10^{-(n-1)}\). Recall the sequence \(\widehat{F}\) from Definition 3.1. **Theorem 3.8**.: _Let \(K\) be a sequence of positive integers given by \(K_{n}=ab^{n}(1+o(1))\) where a and \(b\) are positive real numbers such that \(\log_{\phi}b\) is irrational. Then, given \(\mathbf{b}\in\mathscr{F}_{s}\) where \(s\geq 2\),_ \[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\big{\}}\,=\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat {F}}{\mathbf{b}\cdot\widehat{F}}.\] Proof.: It follows immediately from Corollary 4.7. Let us demonstrate below that the probabilities add up to 1 for \(s=6\), but the argument is sufficiently general to be extended for all cases of \(s\). Let \(\mathscr{F}_{6}=(\mathbf{b}_{1},\ldots,\mathbf{b}_{\ell})\) such that \(\mathbf{b}_{k+1}=\widetilde{\mathbf{b}}_{k}\) for all \(1\leq k\leq\ell\). Then, \(\mathbf{b}_{1}=(1,0,0,0,0,0)\) and \(\mathbf{b}_{\ell}=(1,0,1,0,1,0)\). 
Then, \(\mathbf{b}_{\ell+1}=(1,1,0,0,0,0)\), and \[\sum_{k=1}^{\ell}\log_{\phi}\frac{\widetilde{\mathbf{b}}_{k}\cdot\widehat{F}} {\mathbf{b}_{k}\cdot\widehat{F}}\,=\,\sum_{k=1}^{\ell}\log_{\phi}(\mathbf{b}_ {k+1}\cdot\widehat{F})-\log_{\phi}(\mathbf{b}_{k}\cdot\widehat{F})\,=\,\log_{ \phi}(\mathbf{b}_{\ell+1}\cdot\widehat{F})-\log_{\phi}1\,=\,1.\] **Definition 3.9**.: Let \(K\) be a sequence of positive integers approaching \(\infty\). Then, \(K\) is said to _satisfy strong Benford's Law under \(\mathscr{F}\)-expansion_ if given \(\mathbf{b}\in\mathscr{F}_{s}\) where \(s\geq 2\), \[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\big{\}}\,=\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat{ F}}{\mathbf{b}\cdot\widehat{F}}.\] **Example 3.10**.: Let \(K\) be a sequence satisfying strong Benford's Law under \(\mathscr{F}\)-expansion, e.g., \(\{2^{n}\}_{n=1}^{\infty}\); see Theorem 3.8. Let \(\mathbf{b}=(1,0,0,0,1,0)\), so \(\widetilde{\mathbf{b}}=(1,0,0,1,0,0)\). Then, \[\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{6}(K_{n})= \mathbf{b}\,\big{\}}\,=\,\log_{\phi}\frac{1+\omega^{3}}{1+\omega^{4}}\approx 0.157.\] Calculations Notice that \(\log_{b}(x)\) makes it convenient to calculate the distribution of the leading digits of exponential sequences \(\{a^{n}\}_{n=1}^{\infty}\) under base-\(b\) expansion where \(b>1\) is an integer. In this section, we introduce an analogue of \(\log_{b}(x)\) for Zeckendorf expansion in Section 4.1, and use it for various calculations. As mentioned in the introduction, these functions are merely a tool for calculating the leading digits, and in Section 5, we consider other continuations, and demonstrate their connections to different distributions of leading digits. ### An analytic continuation of the Fibonacci sequence Below we introduce an analytic continuation of the Fibonacci sequence. **Definition 4.1**.: Let \(\alpha=\frac{\phi}{\sqrt{5}}\), and define \(\mathfrak{F}:\mathbb{R}\to\mathbb{R}\) be the function given by \[\mathfrak{F}(x)=\alpha(\phi^{x}+\phi^{-x}\cos(\pi x)\phi^{-2}).\] We call the function \(a\)_Benford continuation of the Fibonacci sequence_. Notice that \(F_{n}=\frac{1}{\sqrt{5}}(\phi^{n+1}-(-1/\phi)^{n+1})=\frac{\phi}{\sqrt{5}}( \phi^{n}+(-1)^{n}\phi^{-(n+2)})\). Thus, \(\mathfrak{F}\) is a real analytic continuation of \(F_{n}\), so \(\mathfrak{F}(n)=F_{n}\) for all \(n\in\mathbb{N}\). It is an increasing function on \([1,\infty)\). Let \(\mathfrak{F}^{-1}\) denote the inverse function of \(\mathfrak{F}:[1,\infty)\to\mathbb{R}\). Comparing it with the case of base-10 expansion, we find that \(10^{x-1}\) is an analytic continuation of the sequence \(\{10^{n-1}\}_{n=1}^{\infty}\), and its inverse is \(1+\log_{10}(x)\), which is the main object for the equidistribution for Benford's Law under base-10 expansion. The equidistribution property described in Theorem 4.5 is associated with strong Benford's Law under \(\mathscr{F}\)-expansion, and the name of the function is due to this connection. **Lemma 4.2**.: _For real numbers \(x\geq 1\), we have \(\mathfrak{F}(x)=\alpha\phi^{x}+O(\phi^{-x})\), and_ \[\mathfrak{F}^{-1}(x)\;=\;\log_{\phi}(x)-\log_{\phi}(\alpha)+O(1/x^{2}).\] Proof.: Let \(y=\alpha\phi^{x}+\alpha\phi^{-x}\cos(\pi x)\phi^{-2}\) and let \(w=\alpha\phi^{-x}\cos(\pi x)\phi^{-2}=O(\phi^{-x})\). Since \(y=\alpha\phi^{x}+o(1)\), we have \(w=O(1/y)\). 
Then, \(y=\alpha\phi^{x}+w\) implies \[x \;=\;\log_{\phi}(y-w)-\log_{\phi}\alpha\;=\;\log_{\phi}(y)-\log _{\phi}\alpha+\log_{\phi}(1-w/y)\] \[\;=\;\log_{\phi}(y)-\log_{\phi}\alpha+O(|w|/y)\;=\;\log_{\phi}(y )-\log_{\phi}\alpha+O(1/y^{2}).\] ### Equidistribution Recall the set \(\mathscr{F}_{s}\) of leading blocks from Definition 3.7. In this section, having a leading block \(\mathbf{b}\in\mathscr{F}_{s}\) is interpreted in terms of the fractional part of the values of \(\widetilde{\mathfrak{F}}^{-1}\). **Definition 4.3**.: Given \(\epsilon\in\mathbb{N}_{0}^{t}\) and an integer \(s\leq t\), let \(\epsilon|s:=(\epsilon(1),\ldots,\epsilon(s))\). Recall \(\widehat{F}\) from Definition 3.1 and the product notation from Definition 2.2. **Lemma 4.4**.: _Let \(K\) be a sequence of positive real numbers approaching \(\infty\), and let \(s\) be an integer \(\geq 2\). Let \(\mathbf{b}\in\mathscr{F}_{s}\), and let \(A_{\mathbf{b}}:=\{n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\}\). Then, there are real numbers \(\gamma_{n}=o(1)\) and \(\widetilde{\gamma}_{n}=o(1)\) such that \(n\in A_{\mathbf{b}}\) if and only if_ \[\log_{\phi}\mathbf{b}\cdot\widehat{F}+\gamma_{n}\;\leq\;\operatorname{frc} \bigl{(}\widetilde{\mathfrak{F}}^{-1}(K_{n})\bigr{)}\;<\;\log_{\phi}\widetilde {\mathbf{b}}\cdot\widehat{F}+\widetilde{\gamma}_{n} \tag{3}\] _where \(\widetilde{\gamma}_{n}=0\) if \(\mathbf{b}\) is the largest block of length \(s\)._ Proof.: Suppose that \(n\in\mathbb{N}\) is sufficiently large, so that \(\mathbf{b}^{\prime}:=\operatorname{LB}_{s}(K_{n})\) exists. By Zeckendorf's Theorem, there is \(\mu\in\mathscr{F}\) such that \(K_{n}=\mu*F\), so \(m:=\operatorname{len}(\mu)\geq s\), and \(\mathbf{b}^{\prime}=\mu|s\). There are \(\epsilon\in\mathscr{F}\) of length \(m\) and a coefficient function \(\check{\epsilon}\) of length \(m\) such that \(\epsilon|s=\mathbf{b}^{\prime}\), \(\check{\epsilon}|s=\widetilde{\mathbf{b}}^{\prime}\), \(\epsilon(k)=\check{\epsilon}(k)=0\) for all \(k>s\), so \(\epsilon*F\leq K_{n}<\check{\epsilon}*F\). Recall \(\alpha\) from Definition 4.1. Then, \[\epsilon*F\;=\;\alpha\sum_{k=1}^{s}\epsilon(k)\phi^{m-k+1}+O(1)\;=\;\alpha\phi ^{m}(1+o(1))\sum_{k=1}^{s}\epsilon(k)\omega^{k-1}\;=\;\alpha\phi^{m}(1+o(1)) \,\mathbf{b}^{\prime}\cdot\widehat{F}.\] By Lemma 4.2, \[\widetilde{\mathfrak{F}}^{-1}(\epsilon*F)\;=\;m+\log_{\phi}(\mathbf{b}^{ \prime}\cdot\widehat{F})+\gamma_{n},\quad\gamma_{n}\;=\;o(1).\] Similarly, we have \(\widetilde{\mathfrak{F}}^{-1}(\check{\epsilon}*F)=m+\log_{\phi}(\widetilde {\mathbf{b}}^{\prime}\cdot\widehat{F})+\widetilde{\gamma}_{n}\) where \(\widetilde{\gamma}_{n}=o(1)\). If \(\mathbf{b}^{\prime}\) is the largest block of length \(s\), then \(\check{\epsilon}*F=F_{m+1}\), and hence, \(\widetilde{\mathfrak{F}}^{-1}(\check{\epsilon}*F)=m+1\), which implies \(\widetilde{\gamma}_{n}=0\). In general, \(\check{\epsilon}*F\leq F_{m+1}\), so \(\widetilde{\mathfrak{F}}^{-1}(\check{\epsilon}*F)\leq m+1\). 
Thus, if \(n\in A_{\mathbf{b}}\), then \(\mathbf{b}^{\prime}=\mathbf{b}\), and \[\epsilon*F\leq K_{n}\;<\;\check{\epsilon}*F\Rightarrow\mathfrak{F}^{-1}(\epsilon*F)\leq\mathfrak{F}^{-1}(K_{n})\;<\;\mathfrak{F}^{-1}(\check{\epsilon}*F)\] \[\qquad\Rightarrow\log_{\phi}\mathbf{b}\cdot\widehat{F}+\gamma_{n}\;\leq\;\operatorname{frc}\bigl(\mathfrak{F}^{-1}(K_{n})\bigr)\;<\;\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}+\widetilde{\gamma}_{n}.\] There is no difficulty in reversing this argument, and we leave the proof of the converse to the reader.

**Theorem 4.5**.: _Let \(K\) be an increasing sequence of positive integers such that \(\operatorname{frc}\bigl(\mathfrak{F}^{-1}(K_{n})\bigr)\) is equidistributed. Then, \(K\) satisfies strong Benford's Law under the \(\mathscr{F}\)-expansion._

Proof.: Notice that \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\,\right\}\) where \(s\geq 2\) is equal to the probability of \(n\) satisfying (3). Let \(t\in\mathbb{N}\). Then, there is an integer \(M_{t}\) such that \(\left|\gamma_{n}\right|\) and \(\left|\widetilde{\gamma}_{n}\right|\) are \(<1/t\) for all \(n\geq M_{t}\). Thus, by Lemma 4.4, \[\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{s}(K_{k})=\mathbf{b}\,\right\}+o(1)\,\leq\,\operatorname{Prob}\left\{\,k\in\Omega_{n}:\log_{\phi}\mathbf{b}\cdot\widehat{F}-\tfrac{1}{t}\,\leq\,\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{k})\right)\,<\,\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}+\tfrac{1}{t}\right\}+o(1)\] \[\Rightarrow\,\limsup_{n}\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{s}(K_{k})=\mathbf{b}\,\right\}\,\leq\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat{F}}{\mathbf{b}\cdot\widehat{F}}+\frac{2}{t},\] \[\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{s}(K_{k})=\mathbf{b}\,\right\}+o(1)\,\geq\,\operatorname{Prob}\left\{\,k\in\Omega_{n}:\log_{\phi}\mathbf{b}\cdot\widehat{F}+\tfrac{1}{t}\leq\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{k})\right)\,<\,\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}-\tfrac{1}{t}\,\right\}+o(1)\] \[\Rightarrow\,\liminf_{n}\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{s}(K_{k})=\mathbf{b}\,\right\}\,\geq\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat{F}}{\mathbf{b}\cdot\widehat{F}}-\frac{2}{t}.\] Since \(\liminf\) and \(\limsup\) are independent of \(t\), this proves that \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\,\right\}=\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat{F}}{\mathbf{b}\cdot\widehat{F}}\).

The converse of Theorem 4.5 is true as well, i.e., if \(K\) satisfies strong Benford's Law under \(\mathscr{F}\)-expansion, then \(\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{n})\right)\) is equidistributed. We shall prove it in Section 5. The following lemma is useful, and it is probably known.

**Lemma 4.6**.: _Let \(h:\mathbb{N}\to\mathbb{R}\) be a function such that \(\operatorname{frc}(h(n))\) is equidistributed, and let \(E:\mathbb{N}\to\mathbb{R}\) be a function such that \(E(n)\to 0\) as \(n\to\infty\). Then, \(\operatorname{frc}(h(n)+E(n))\) is equidistributed._

**Corollary 4.7**.: _Let \(K\) be a sequence of positive integers given by \(K_{n}=ab^{n}(1+o(1))\) where \(a\) and \(b\) are positive real numbers such that \(\log_{\phi}b\) is irrational.
Then, \(\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{n})\right)\) is equidistributed, and hence, given \(\mathbf{b}\in\mathcal{F}_{s}\) where \(s\geq 2\),_ \[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\right\}\,=\,\log_{\phi}\frac{\widetilde{\mathbf{b}}\cdot\widehat {F}}{\mathbf{b}\cdot\widehat{F}}.\] Proof.: By Lemma 4.2, \[\widehat{\mathfrak{F}}^{-1}(K_{n})\,=\,n\log_{\phi}b-\log_{\phi}(a/\alpha)+ \log_{\phi}(1+o(1))+o(1).\] Since \(\log_{\phi}b\) is irrational, by Weyl's Equidistribution Theorem, \(\operatorname{frc}\left(n\log_{\phi}b\right)\) is equidistributed, and by the lemma, \(\operatorname{frc}\left(n\log_{\phi}b+o(1)\right)\) is equidistributed. Shifting it by a constant \(-\log_{\phi}(a/\alpha)\) does not change the equidistribution property, and this concludes the proof. For example, if \(K\) is a sequence given by \(K_{n}=\sum_{k=1}^{N}a_{k}\,b_{k}^{\,n}\) where \(a_{k},b_{k}\in\mathbb{Z}\), \(a_{1}>0\), and \(b_{1}>|b_{k}|\) for all \(k\geq 2\), then \(K_{n}=a_{1}b_{1}^{n}(1+o(1))\), and \(\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{n})\right)\) is equidistributed. Many increasing sequences \(K\) of positive integers given by a linear recurrence with constant positive integer coefficients satisfy \(K_{n}=ab^{n}(1+o(1))\) where \(\log_{\phi}(b)\) is irrational, and hence, \(\operatorname{frc}\left(\widehat{\mathfrak{F}}^{-1}(K_{n})\right)\) is equidistributed. ### The leading blocks of integer powers Let \(a\) be a positive integer, and let \(K\) be the sequence given by \(K_{n}=n^{a}\). Then, \(K\) does not satisfy Benford's Law under the base-10 expansion, but it has a close relationship with Benford's Law [14]. In this section, we show that both statements are true under \(\mathscr{F}\)-expansion as well. Recall \(\Omega_{n}\) from Notation 2.1 and \(\mathscr{F}_{3}\) from Definition 3.4, and let \(\mathbf{b}_{1}:=(1,0,0)\in\mathscr{F}_{3}\). We also introduce the oscillating behavior of \(\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{3}(K_{k})= \mathbf{b}_{1}\,\right\}\) as \(n\to\infty\), and hence, \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(K_{n})= \mathbf{b}_{1}\,\right\}\) does not exist. **Example 4.8**.: Let \(K\) be the sequence given by \(K_{n}=n\), and let \(t>0\) be a large integer. Given a sufficiently large positive random integer \(n<F_{t+1}\), let \(n=\mu*F\) be the \(\mathscr{F}\)-expansion, and \(M:=\operatorname{len}(\mu)\). Notice that \(\operatorname{LB}_{3}(n)=\mathbf{b}_{1}\) if and only if \(n=F_{M}+m\) where \(0\leq m<F_{M-2}\). Thus, there are \(F_{M-2}\) integers \(n\) in \([1,F_{t+1})\) such that \(F_{M}\leq n<F_{M+1}\) and \(\operatorname{LB}_{3}(n)=\mathbf{b}_{1}\). Thus, \[\operatorname{Prob}\left\{\,n\in\Omega_{F_{t+1}}:\operatorname{LB}_{3}(n)= \mathbf{b}_{1}\,\right\}\,=\,\left(\frac{1}{F_{t+1}}\sum_{M=3}^{t}F_{M-2} \right)+o(1)=\left(\frac{1}{F_{t+1}}\sum_{M=3}^{t}\alpha\phi^{M-2}+o(1)\right) +o(1)\\ =\,\frac{1}{\alpha\phi^{t+1}+o(1)}\frac{\alpha\phi^{t-1}}{\phi-1} +o(1)\,=\,\frac{1}{\phi^{2}(\phi-1)}+o(1)\,=\,\phi-1+o(1)\] as function of \(t\). 
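The count above is easy to confirm numerically. The following Python sketch (an illustration of ours, not part of the argument; the helpers `fibs` and `leading_block` and the choice \(t=26\) are ad hoc) tallies the integers with leading block \((1,0,0)\) below the cutoff \(F_{t+1}\) used above, and also below \(F_{t}+F_{t-2}\), where the proportion is largest; the two printed values anticipate the \(\liminf\) and \(\limsup\) quoted next.

```python
# Illustrative sketch (not part of the paper's argument): empirical frequency of
# the leading block (1,0,0) among the positive integers below two cutoffs.
def fibs(limit):
    F = [1, 2]                      # the paper's convention: F_1 = 1, F_2 = 2
    while F[-1] < limit:
        F.append(F[-1] + F[-2])
    return F

def leading_block(n, s, F):
    """First s Zeckendorf coefficients of n (greedy expansion), or None if the expansion is shorter than s."""
    coeffs = []
    for f in reversed([f for f in F if f <= n]):
        if f <= n:
            coeffs.append(1)
            n -= f
        else:
            coeffs.append(0)
    return tuple(coeffs[:s]) if len(coeffs) >= s else None

F, t = fibs(10**6), 26
for cutoff in (F[t], F[t - 1] + F[t - 3]):   # F_{t+1} and F_t + F_{t-2} in the 0-based list
    hits = sum(leading_block(n, 3, F) == (1, 0, 0) for n in range(1, cutoff))
    print(cutoff, round(hits / (cutoff - 1), 4))
# The printed proportions are close to phi - 1 ~ 0.618 and (phi+1)/(phi+2) ~ 0.724.
```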
However, by Theorem 4.10, we have \[\limsup_{n}\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{3}(k)=\mathbf{b}_{1}\,\right\}\,=\,\frac{\phi+1}{\phi+2}\approx.724,\] \[\liminf_{n}\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{3}(k)=\mathbf{b}_{1}\,\right\}\,=\,\phi-1\approx.618.\] Thus, \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(n)=\mathbf{b}_{1}\,\right\}\) does not exist.

Recall \(\mathfrak{F}\) from Definition 4.1, and its inverse \(\mathfrak{F}^{-1}\). We use the function \(\mathfrak{F}\) to more generally handle the distribution of the leading blocks of \(\{n^{a}\}_{n=1}^{\infty}\) with any length. Given a positive integer \(m\), let \(A_{m}=\{n\in\mathbb{N}:n<F_{m}^{1/a}\}\).

**Lemma 4.9**.: _If \(\beta\in[0,1]\), then_ \[\operatorname{Prob}\left\{\,n\in A_{m}:\operatorname{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)\leq\beta\,\right\}\,=\,\frac{\phi^{\beta/a}-1}{\phi^{1/a}-1}+O(m\phi^{-m/a}).\]

Proof.: Let \(m\in\mathbb{N}\), and let \(n\in A^{\prime}_{m+1}:=A_{m+1}-A_{m}\), so that \(F_{m}\leq n^{a}<F_{m+1}\) and \(m\leq\mathfrak{F}^{-1}(n^{a})<m+1\). Thus, given a real number \(\beta\in[0,1]\), \[\#\big\{\,n\in A^{\prime}_{m+1}:\operatorname{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)\leq\beta\,\big\}\,=\,\#\big\{\,n\in A^{\prime}_{m+1}:m\leq\mathfrak{F}^{-1}(n^{a})\leq m+\beta\,\big\}\,=\,\mathfrak{F}(m+\beta)^{1/a}-\mathfrak{F}(m)^{1/a}+O(1)\,=\,\alpha^{1/a}\phi^{(m+\beta)/a}-\alpha^{1/a}\phi^{m/a}+O(1).\] Applying the same count with \(m\) replaced by \(M\) and summing over \(1\leq M\leq m\), we obtain \[\#\big\{\,n\in A_{m+1}:\operatorname{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)\leq\beta\,\big\}\,=\,\alpha^{1/a}\phi^{(m+\beta)/a}\gamma-\alpha^{1/a}\phi^{m/a}\gamma+O(m),\quad\gamma=\frac{\phi^{1/a}}{\phi^{1/a}-1}.\] This proves that \[\mathrm{Prob}\left\{\,n\in A_{m+1}:\mathrm{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)\leq\beta\,\right\}\,=\,\frac{\alpha^{1/a}\phi^{(m+\beta)/a}\gamma-\alpha^{1/a}\phi^{m/a}\gamma+O(m)}{F_{m+1}^{1/a}+O(1)}\,=\,\frac{\phi^{\beta/a}\gamma-\gamma+O(m\phi^{-m/a})}{\phi^{1/a}+O(\phi^{-m/a})}\,=\,\frac{\phi^{\beta/a}-1}{\phi^{1/a}-1}+O(m\phi^{-m/a}).\]

Recall from Lemma 4.4 that \[\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{LB}_{3}(n^{a})=\mathbf{b}_{1}\,\right\}=\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)\leq\delta_{1}+o(1)\,\right\}\] where \(\delta_{1}:=\log_{\phi}\frac{\widetilde{\mathbf{b}}_{1}\cdot\widehat{F}}{\mathbf{b}_{1}\cdot\widehat{F}}\). Thus, as \(m\to\infty\), by Lemma 4.9, \[\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{LB}_{3}(n^{a})=\mathbf{b}_{1}\,\right\}\,\to\,\frac{\phi^{\delta_{1}/a}-1}{\phi^{1/a}-1}\,=\,\frac{(1+\omega^{2})^{1/a}-1}{\phi^{1/a}-1}\] where \(\omega=\phi^{-1}\). Let us show that \[\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{LB}_{3}(n^{a})=\mathbf{b}_{1}\,\right\}\,\not\to\,\delta_{1}\] as \(m\to\infty\). We claim that the ratio \(\frac{(1+\omega^{2})^{1/a}-1}{\phi^{1/a}-1}\) is not equal to \(\delta_{1}=\log_{\phi}(1+\omega^{2})\). Since \(a\in\mathbb{N}\), the ratio is an algebraic number. However, by the Gelfond–Schneider Theorem, \(\log_{\phi}(1+\omega^{2})\) is a transcendental number. Thus, \(K\) does not satisfy Benford's Law under the \(\mathscr{F}\)-expansion.
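As a numerical sanity check (ours, for illustration only; the helper `lb3` and the choice \(m=40\) are ad hoc), one can tabulate the leading blocks of \(n^{2}\) over \(A_{m}\) and compare the empirical frequency with the two constants just discussed.

```python
# Illustrative sketch (not from the paper): for a = 2, the proportion of n in
# A_m = {n : n < F_m^(1/2)} with LB_3(n^2) = (1,0,0), compared with the limit
# computed above and with the strong-Benford value.
import math

phi = (1 + math.sqrt(5)) / 2
F = [1, 2]                          # the paper's convention: F_1 = 1, F_2 = 2
while F[-1] < 10**9:
    F.append(F[-1] + F[-2])

def lb3(n):
    """First three Zeckendorf coefficients of n (greedy), or None if too short."""
    coeffs, rem = [], n
    for f in reversed([f for f in F if f <= n]):
        if f <= rem:
            coeffs.append(1); rem -= f
        else:
            coeffs.append(0)
    return tuple(coeffs[:3]) if len(coeffs) >= 3 else None

m = 40
bound = int(F[m - 1] ** 0.5)        # F[m-1] is F_m in the paper's indexing
hits = sum(lb3(n * n) == (1, 0, 0) for n in range(1, bound + 1))
print(round(hits / bound, 4))                                         # empirical frequency
print(round((math.sqrt(1 + phi**-2) - 1) / (math.sqrt(phi) - 1), 4))  # ~ 0.645, the limit above
print(round(math.log(1 + phi**-2, phi), 4))                           # ~ 0.672, the Benford value
```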
However, as noted in [14] for base-\(b\) expansions, we have \[\lim_{a\to\infty}\lim_{m\to\infty}\mathrm{Prob}\left\{\,n\in A_{m}:\mathrm{LB}_{3}(n^{a})=\mathbf{b}_{1}\,\right\}\,=\,\lim_{a\to\infty}\frac{\phi^{\delta_{1}/a}-1}{\phi^{1/a}-1}\,=\,\delta_{1}\,=\,\log_{\phi}(1+\omega^{2}).\] Even though the leading blocks of \(K_{n}\) do not satisfy Benford's Law under \(\mathscr{F}\)-expansion, the limiting behavior of high-power sequences, along the cutoffs \(A_{m}\), resembles Benford's Law.

Recall \(\Omega_{n}\) from Definition 2.1. Let us use Lemma 4.9 to prove that \(\mathrm{Prob}\left\{\,k\in\Omega_{n}:\mathrm{frc}\left(\mathfrak{F}^{-1}(K_{k})\right)\leq\beta\,\right\}\) oscillates, and does not converge.

**Theorem 4.10**.: _Let \(\beta\) be a real number in \([0,1]\), and let \(r:=(\phi^{\beta/a}-1)/(\phi^{1/a}-1)\). Given an integer \(n>1\), let \(\mathfrak{F}^{-1}(n^{a})=m+p\) where \(p=\mathrm{frc}\left(\mathfrak{F}^{-1}(n^{a})\right)\) and \(m\in\mathbb{N}\). Then,_ \[P_{n}:=\mathrm{Prob}\left\{\,k\in\Omega_{n}:\mathrm{frc}\left(\mathfrak{F}^{-1}(K_{k})\right)\leq\beta\,\right\}\,=\,\begin{cases}\frac{r+\phi^{p/a}-1}{\phi^{p/a}}+O(m\phi^{-m/a})&\text{if }0\leq p\leq\beta\\ \frac{r+\phi^{\beta/a}-1}{\phi^{p/a}}+O(m\phi^{-m/a})&\text{if }\beta<p<1\end{cases}.\] _In particular,_ \[\limsup P_{n}=r\phi^{1/a-\beta/a}=\beta+O(1/a),\quad\text{and}\quad\liminf P_{n}=r=\beta+O(1/a).\]

Proof.: Let \(m\) be a sufficiently large positive integer, and let \(n\in A_{m+1}-A_{m}\). Let \(n=\mathfrak{F}(m+p)^{1/a}\) for \(p\in[0,1)\).
By Lemma 4.4, if \(D:=\{n\in\mathbb{N}:\operatorname{LB}_{6}(K_{n})=\mathbf{b}\}\), then for \(n\in D\), \[\log_{\phi}(1+\omega^{4})+o(1)\;<\;\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{n})\right)\;<\;\log_{\phi}(1+\omega^{3})+o(1)\] where the upper and lower bounds are functions of \(n\in D\). Let \(\beta=\log_{\phi}(1+\omega^{4})\) and \(\widetilde{\beta}=\log_{\phi}(1+\omega^{3})\). Recall \(\Omega_{n}\) from Definition 2.1. Then, \[\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{LB}_{6}(K_{k})=\mathbf{b}\,\right\}=\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{k})\right)<\widetilde{\beta}\,\right\}\;-\;\operatorname{Prob}\left\{\,k\in\Omega_{n}:\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{k})\right)<\beta\,\right\}\;+\;o(1).\] Let \(r=(\phi^{\beta/2}-1)/(\phi^{1/2}-1)\) and \(\widetilde{r}=(\phi^{\widetilde{\beta}/2}-1)/(\phi^{1/2}-1)\), and let \(n=\mathfrak{F}(m+p)^{1/2}\) where \(p=\operatorname{frc}\big(\mathfrak{F}^{-1}(n^{2})\big)\in[0,1)\). Then, by Theorem 4.10, we have \[\operatorname{Prob}\big\{\,k\in\Omega_{n}:\operatorname{LB}_{6}(K_{k})=\mathbf{b}\,\big\}\;=\;\begin{cases}\frac{\widetilde{r}+\phi^{p/2}-1}{\phi^{p/2}}-\frac{r+\phi^{p/2}-1}{\phi^{p/2}}+o(1)&\text{ if }p\leq\beta,\\ \frac{\widetilde{r}+\phi^{p/2}-1}{\phi^{p/2}}-\frac{r+\phi^{\beta/2}-1}{\phi^{p/2}}+o(1)&\text{ if }\beta<p\leq\widetilde{\beta},\\ \frac{\widetilde{r}+\phi^{\widetilde{\beta}/2}-1}{\phi^{p/2}}-\frac{r+\phi^{\beta/2}-1}{\phi^{p/2}}+o(1)&\text{ if }p>\widetilde{\beta}.\end{cases}\] \[\Rightarrow\limsup_{n}\operatorname{Prob}\big\{\,k\in\Omega_{n}:\operatorname{LB}_{6}(K_{k})=\mathbf{b}\,\big\}\;=\;\frac{\widetilde{r}+\phi^{\widetilde{\beta}/2}-1}{\phi^{\widetilde{\beta}/2}}-\frac{r+\phi^{\beta/2}-1}{\phi^{\widetilde{\beta}/2}}\approx 0.1737\] \[\liminf_{n}\operatorname{Prob}\big\{\,k\in\Omega_{n}:\operatorname{LB}_{6}(K_{k})=\mathbf{b}\,\big\}\;=\;\frac{\widetilde{r}+\phi^{\beta/2}-1}{\phi^{\beta/2}}-\frac{r+\phi^{\beta/2}-1}{\phi^{\beta/2}}\approx 0.1419.\]

## 5 Other continuations

Reflecting upon Lemma 4.4 and Theorem 4.5, we realized that we could consider different continuations of the Fibonacci sequence \(F\), and ask which sequence satisfies the equidistribution property, and which distributions its leading blocks follow. Let us demonstrate the idea in Example 5.4. The claims in this example can be proved using Theorem 5.6. Recall the Benford continuation \(\mathfrak{F}\) from Definition 4.1.

**Definition 5.1**.: Given \(n\in\mathbb{N}\), let \(\mathfrak{F}_{n}:[0,1]\to[0,1]\) be the increasing function given by \[\mathfrak{F}_{n}(p)\,:=\,\frac{\mathfrak{F}(n+p)-\mathfrak{F}(n)}{\mathfrak{F}(n+1)-\mathfrak{F}(n)}=\frac{\mathfrak{F}(n+p)-\mathfrak{F}(n)}{F_{n-1}}\;=\;\phi(\phi^{p}-1)+o(1)\] where \(F_{0}:=1\). Let \(\mathfrak{F}_{\infty}:[0,1]\to[0,1]\) be the increasing function given by \(\mathfrak{F}_{\infty}(p)=\phi(\phi^{p}-1)\).

Recall uniform continuations of sequences from Definition 1.6.

**Lemma 5.2**.: _The function \(\mathfrak{F}\) is a uniform continuation of \(F\)._

Proof.: Notice that \(\mathfrak{F}_{n}(p)=\phi(\phi^{p}-1)+\gamma(n,p)\) with \(\big|\gamma(n,p)\big|<C/\phi^{n}\), where \(C\) is independent of \(p\) and \(n\). Thus, it uniformly converges to \(\phi(\phi^{p}-1)\).

**Lemma 5.3**.: _Let \(p\in[0,1]\) be a real number.
Then, \(\mathfrak{F}(n+\mathfrak{F}_{n}{}^{-1}(p))=F_{n}+(F_{n+1}-F_{n})p\)._

Proof.: Let \(p^{\prime}=\mathfrak{F}_{n}{}^{-1}(p)\). Then, \(\mathfrak{F}_{n}(p^{\prime})=p\), and hence, \(\frac{\mathfrak{F}(n+p^{\prime})-\mathfrak{F}(n)}{F_{n+1}-F_{n}}=p\). The assertion follows from the last equality.

**Example 5.4**.: Let \(f:[1,\infty)\to\mathbb{R}\) be the increasing continuous function whose graph is the union of the line segments from \((n,F_{n})\) to \((n+1,F_{n+1})\) for \(n\in\mathbb{N}\). Then, \(f_{\infty}(p)=p\) for all \(p\in[0,1]\). Let \(K\) be the sequence given by \(K_{n}=\big\lfloor\mathfrak{F}(n+\mathfrak{F}_{n}{}^{-1}(\operatorname{frc}(n\pi)))\big\rfloor\). Then, by Lemma 5.3, \[f^{-1}\big(\mathfrak{F}(n+\mathfrak{F}_{n}{}^{-1}(\operatorname{frc}(n\pi)))\big)=n+\operatorname{frc}(n\pi)\;\Rightarrow\;\operatorname{frc}\big(f^{-1}(K_{n})\big)=\operatorname{frc}(n\pi)+o(1),\] which is equidistributed. Recall \(\mathscr{F}_{s}\) from Definition 3.7 where \(s\geq 2\), and let \(\mathbf{b}\in\mathscr{F}_{s}\). Recall \(\widehat{F}\) from Definition 3.1 and the product notation from Definition 2.2. Then, by Theorem 5.6, \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{s}(K_{n})=\mathbf{b}\,\right\}\;=\;\phi(\widetilde{\mathbf{b}}\cdot\widehat{F}-\mathbf{b}\cdot\widehat{F})\;=\;\phi^{-s+2}(\widetilde{\mathbf{b}}\ast\overline{F}-\mathbf{b}\ast\overline{F})\] where \(\overline{F}\) is the sequence given by \(\overline{F}_{n}=\phi^{n-1}\). If \(\mathbf{b}(s)=0\), then \(\omega^{s-2}(\widetilde{\mathbf{b}}\ast\overline{F}-\mathbf{b}\ast\overline{F})=\omega^{s-2}\), and if \(\mathbf{b}(s)=1\), then \(\omega^{s-2}(\widetilde{\mathbf{b}}\ast\overline{F}-\mathbf{b}\ast\overline{F})=\omega^{s-1}\). For example, if \(s=6\), then \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{6}(K_{n})\;=\;(1,0,0,1,0,1)\right\}\;=\;\omega^{5}\] \[\mathrm{Prob}\left\{\,n\in\mathbb{N}:\mathrm{LB}_{6}(K_{n})\;=\;(1,0,1,0,1,0)\right\}\;=\;\omega^{4}.\] This is nearly a uniform distribution. Let us show that the probabilities add up to \(1\). Notice that \(\#\mathscr{F}_{s}=F_{s-1}\), \(\#\{\mathbf{b}\in\mathscr{F}_{s}:\mathbf{b}(s)=0\}=F_{s-2}\), and \(\#\{\mathbf{b}\in\mathscr{F}_{s}:\mathbf{b}(s)=1\}=F_{s-3}\). Then, by Binet's Formula, the following sum is equal to \(1\): \[\sum_{\mathbf{b}\in\mathscr{F}_{s}}\omega^{s-2}(\widetilde{\mathbf{b}}\ast\overline{F}-\mathbf{b}\ast\overline{F})\;=\;\frac{F_{s-2}}{\phi^{s-2}}+\frac{F_{s-3}}{\phi^{s-1}}=1.\] By Lemma 5.3, we have \(K_{n}=\big\lfloor F_{n}+(F_{n+1}-F_{n})\mathrm{frc}(n\pi)\big\rfloor\) for \(n\in\mathbb{N}\), and the following are the first ten values of \(K_{n}\): \[(1,2,3,6,11,19,33,36,64,111).\]

Let us introduce and prove the main results on continuations.

**Lemma 5.5**.: _Let \(f\) be a uniform continuation of \(F\), and let \(K\) be a sequence of positive real numbers approaching \(\infty\). Then, \(\mathrm{frc}\left(f^{-1}(\left\lfloor K_{n}\right\rfloor)\right)=\mathrm{frc}\left(f^{-1}(K_{n})\right)+o(1)\)._

Proof.: Let \(n\in\mathbb{N}\). Then, \(F_{m}\leq\left\lfloor K_{n}\right\rfloor\leq K_{n}<F_{m+1}\) for \(m\in\mathbb{N}\) depending on \(n\). Let \(K_{n}=f(m+p)\) and \(\left\lfloor K_{n}\right\rfloor=f(m+p^{\prime})\) where \(p,p^{\prime}\in[0,1]\) are real numbers, which depend on \(n\). Then, \(F_{m}+f_{m}(p^{\prime})(F_{m+1}-F_{m})+O(1)=F_{m}+f_{m}(p)(F_{m+1}-F_{m})\), and hence, \(f_{m}(p^{\prime})+o(1)=f_{m}(p)\).
Thus, \[f^{-1}(K_{n})\;=\;m+p=m+{f_{m}}^{-1}\left(f_{m}(p^{\prime})+o(1)\right)\;=\;m+{f_{m}}^{-1}\left(f_{\infty}(p^{\prime})+o(1)\right).\] By the uniform convergence, \[=\;m+{f_{\infty}}^{-1}\left(f_{\infty}(p^{\prime})+o(1)\right)+o(1)\;=\;m+{f_{\infty}}^{-1}\left(f_{\infty}(p^{\prime})\right)+o(1)\;=\;m+p^{\prime}+o(1).\] Therefore, \(\mathrm{frc}\left(f^{-1}(K_{n})\right)=\mathrm{frc}\left(f^{-1}(\left\lfloor K_{n}\right\rfloor)\right)+o(1)\).

**Theorem 5.6**.: _Let \(f:[1,\infty)\rightarrow\mathbb{R}\) be a uniform continuation of \(F\). Then there is a sequence \(K\) of positive integers approaching \(\infty\), e.g., \(K_{n}=\left\lfloor\mathfrak{F}\left(n+\mathfrak{F}_{n}^{-1}\circ f_{n}(\mathrm{frc}(n\pi))\right)\right\rfloor\), such that \(\mathrm{frc}\left(f^{-1}(K_{n})\right)\) is equidistributed._

_Let \(K\) be a sequence of positive integers approaching \(\infty\) such that \(\operatorname{frc}\bigl(f^{-1}(K_{n})\bigr)\) is equidistributed. Let \(\mathbf{b}\in\mathscr{F}_{s}\) where \(s\geq 2\). Then,_ \[\operatorname{Prob}\big\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\,\big\}\,=\,{f_{\infty}}^{-1}\circ\mathfrak{F}_{\infty}(\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F})-{f_{\infty}}^{-1}\circ\mathfrak{F}_{\infty}(\log_{\phi}\mathbf{b}\cdot\widehat{F})\,=\,{f_{\infty}}^{-1}\bigl(\phi(\widetilde{\mathbf{b}}\cdot\widehat{F}-1)\bigr)-{f_{\infty}}^{-1}\bigl(\phi(\mathbf{b}\cdot\widehat{F}-1)\bigr)\,. \tag{4}\]

Proof.: Let \(x\geq 1\) be a real number, and let \(F_{n}\leq x<F_{n+1}\) for \(n\in\mathbb{N}\). Since \(\mathfrak{F}\) and \(f\) are increasing continuations of \(F\), there are two unique real numbers \(p\) and \(p^{\prime}\) in \([0,1]\) such that \(x=\mathfrak{F}(n+p)=f(n+p^{\prime})\). We claim that \[f^{-1}(x)=n+{f_{n}}^{-1}(\mathfrak{F}_{n}(p)), \tag{5}\] and \(\mathfrak{F}^{-1}(x)=n+\mathfrak{F}_{n}^{-1}(f_{n}(p^{\prime}))\). To prove the claim, note \[\mathfrak{F}(n+p)=f(n+p^{\prime})\,\Rightarrow\,F_{n}+\mathfrak{F}_{n}(p)(F_{n+1}-F_{n})\,=\,F_{n}+f_{n}(p^{\prime})(F_{n+1}-F_{n})\,\Rightarrow\,p^{\prime}={f_{n}}^{-1}(\mathfrak{F}_{n}(p)),\ p=\mathfrak{F}_{n}^{-1}(f_{n}(p^{\prime})).\] Then \(f(n+p^{\prime})=x\) and \(\mathfrak{F}(n+p)=x\) imply the claim.

Let \(\overline{K}\) and \(K\) be the sequences given by \(\overline{K}_{n}=\mathfrak{F}\bigl(n+\mathfrak{F}_{n}^{-1}\circ f_{n}(\operatorname{frc}(n\pi))\bigr)\) and \(K_{n}=\left\lfloor\overline{K}_{n}\right\rfloor\). Given \(n\in\mathbb{N}\), let \(p_{n}=\mathfrak{F}_{n}^{-1}\circ f_{n}(\operatorname{frc}(n\pi))\). Then, \[f^{-1}(\overline{K}_{n})\,=\,n+{f_{n}}^{-1}\bigl(\mathfrak{F}_{n}(p_{n})\bigr)\,=\,n+\operatorname{frc}(n\pi)\,.\] Thus, \(\operatorname{frc}\Bigl(f^{-1}(\overline{K}_{n})\Bigr)\) is equidistributed. If we further assume that \(f\) is a uniform continuation, then, by Lemmas 4.6 and 5.5, \(\operatorname{frc}\Bigl(f^{-1}(\left\lfloor\overline{K}_{n}\right\rfloor)\Bigr)=\operatorname{frc}\bigl(f^{-1}(K_{n})\bigr)\) is equidistributed as well.

Let \(K\) be a sequence of positive integers approaching \(\infty\) such that \(\operatorname{frc}\bigl(f^{-1}(K_{n})\bigr)\) is equidistributed. Let \(\mathbf{b}\in\mathscr{F}_{s}\), and let \(A_{\mathbf{b}}:=\{n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\}\). Let \(n\in A_{\mathbf{b}}\), and \(F_{m}\leq K_{n}<F_{m+1}\) for \(m\in\mathbb{N}\) depending on \(n\).
Let \(K_{n}=\mathfrak{F}(m+p)=f(m+p^{\prime})\) where \(p\) and \(p^{\prime}\) are real numbers in \([0,1]\) depending on \(n\). Then, by Lemma 4.4, \[\log_{\phi}\mathbf{b}\cdot\widehat{F}+o(1)\,<\,\operatorname{frc}\bigl(\mathfrak{F}^{-1}(K_{n})\bigr)\,<\,\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}+o(1)\] \[\Rightarrow\log_{\phi}\mathbf{b}\cdot\widehat{F}+o(1)\,<\,\operatorname{frc}\bigl(m+\mathfrak{F}_{m}^{-1}(f_{m}(p^{\prime}))\bigr)\,<\,\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}+o(1)\] \[\Rightarrow{f_{m}}^{-1}\circ\mathfrak{F}_{m}(\log_{\phi}\mathbf{b}\cdot\widehat{F}+o(1))\,<\,p^{\prime}\,<\,{f_{m}}^{-1}\circ\mathfrak{F}_{m}(\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F}+o(1))\] \[\Rightarrow{f_{\infty}}^{-1}\circ\mathfrak{F}_{\infty}(\log_{\phi}\mathbf{b}\cdot\widehat{F})+o(1)\,<\,\operatorname{frc}\bigl(f^{-1}(K_{n})\bigr)\,<\,{f_{\infty}}^{-1}\circ\mathfrak{F}_{\infty}(\log_{\phi}\widetilde{\mathbf{b}}\cdot\widehat{F})+o(1).\] Since \(\operatorname{frc}\bigl(f^{-1}(K_{n})\bigr)\) is equidistributed, the above inequalities imply the assertion (4).

Let us demonstrate a continuation, for which the distribution of leading blocks of length \(4\) coincides with that of strong Benford's Law, but the distribution does not coincide for higher length blocks.

**Example 5.7**.: Consider \(\mathscr{F}_{4}=\{\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3}\}\), i.e., \[\mathbf{b}_{1}=(1,0,0,0),\ \mathbf{b}_{2}=(1,0,0,1),\ \mathbf{b}_{3}=(1,0,1,0).\] Let \(p_{k}=\log_{\phi}(\mathbf{b}_{k}\cdot\widehat{F})<1\) for \(k=1,2,3\), and let \(p_{0}=0\) and \(p_{4}=1\). For each \(n\in\mathbb{N}\), define \(f_{n}:[0,1]\to[0,1]\) to be the function whose graph is the union of line segments from \((p_{k},\mathfrak{F}_{\infty}(p_{k}))\) to \((p_{k+1},\mathfrak{F}_{\infty}(p_{k+1}))\) for \(k=0,1,2,3\). Notice that \(f_{n}\) is defined independently of \(n\), and that it defines a uniform continuation \(f:[1,\infty)\to[1,\infty)\) such that \(f_{\infty}=f_{n}\) for all \(n\in\mathbb{N}\) as follows: Given \(x\in[1,\infty)\), find \(n\in\mathbb{N}\) such that \(n\leq x<n+1\), and define \(f(x)=F_{n}+f_{n}(x-n)(F_{n+1}-F_{n})\). Note that \(f_{\infty}(p_{k})=\mathfrak{F}_{\infty}(p_{k})\), i.e., \({f_{\infty}}^{-1}(\mathfrak{F}_{\infty}(p_{k}))=p_{k}\) for \(k=0,1,2,3\). By Theorem 5.6, if \(\operatorname{frc}\left(f^{-1}(K_{n})\right)\) is equidistributed, we have \[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{4}(K_{n})=\mathbf{b}_{k}\,\right\}\ =\ p_{k+1}-p_{k}\ =\ \log_{\phi}\frac{\widetilde{\mathbf{b}}_{k}\cdot\widehat{F}}{\mathbf{b}_{k}\cdot\widehat{F}}\] where \(\widetilde{\mathbf{b}}_{3}=(1,0,1,1)\) as defined in Definition 3.7. However, the leading blocks of length \(>4\) do not satisfy Benford's Law under \(\mathscr{F}\)-expansion.

The following is an example where \(f_{\infty}\) is analytic.

**Example 5.8**.: Let \(f:[1,\infty)\to\mathbb{R}\) be the function given by \(f(n+p)=F_{n}+(F_{n+1}-F_{n})p^{2}\) where \(n\in\mathbb{N}\) and \(p\in[0,1)\). Then, \(f_{\infty}(p)=p^{2}\). Let \(K\) be the sequence given by \(K_{n}=\left\lfloor\mathfrak{F}(n+\mathfrak{F}_{n}{}^{-1}(p^{2}))\right\rfloor\) where \(p=\operatorname{frc}(n\pi)\), and let \(\mathbf{b}\in\mathscr{F}_{s}\).
Then, by Theorem 5.6, \[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\,\right\}=\sqrt{\phi(\widetilde{\mathbf{b}}\cdot\widehat{F}-1)}-\sqrt{\phi(\mathbf{b}\cdot\widehat{F}-1)}.\]

### Converse

Let's consider the converse of Theorem 5.6, i.e., given a sequence \(K\) of positive integers approaching \(\infty\), let us construct a uniform continuation \(f\), if possible, such that \(\operatorname{frc}\left(f^{-1}(K_{n})\right)\) is equidistributed. Recall the set \(\mathscr{F}_{s}\) from Definition 3.7.

**Definition 5.9**.: A sequence \(K\) of positive integers approaching \(\infty\) is said to have _strong leading block distribution under \(\mathscr{F}\)-expansion_ if \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\,\right\}\) exists for each integer \(s\geq 2\) and each \(\mathbf{b}\in\mathscr{F}_{s}\).

**Example 5.10**.: Let \(K\) be the Lucas sequence, i.e., \(K=(2,1,3,4,\ldots)\) and \(K_{n+2}=K_{n+1}+K_{n}\). Recall that \(F_{n}=\frac{1}{10}(5+\sqrt{5})\phi^{n}(1+o(1))\) and \(K_{n}=\frac{1}{2}(\sqrt{5}-1)\phi^{n}(1+o(1))\), and let \(\alpha=\frac{1}{10}(5+\sqrt{5})\) and \(a=\frac{1}{2}(\sqrt{5}-1)\). Then, by Lemma 4.2, \[\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{n})\right)=-\log_{\phi}(a/\alpha)+o(1)\approx.328+o(1).\] By Lemma 4.4, the leading block of \(K_{n}\) being \(\mathbf{b}_{1}=(1,0,0)\) is determined by whether \(0\leq\operatorname{frc}\left(\mathfrak{F}^{-1}(K_{n})\right)<\log_{\phi}(1+\omega^{2})\approx.67\). Thus, \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(K_{n})=\mathbf{b}_{1}\,\right\}=1\), and \(\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{3}(K_{n})=\mathbf{b}_{2}\,\right\}=0\). In fact, the sequence \(K\) has strong leading block distribution. Recall \(\widehat{F}\) from Definition 3.1, and let us claim that \(\mathbf{b}\cdot\widehat{F}\neq\frac{\alpha}{a}=\frac{1}{10}(5+3\sqrt{5})\) for all \(s\in\mathbb{N}\) and \(\mathbf{b}\in\mathscr{F}_{s}\). Notice that \[\frac{\alpha}{a}-1=\sum_{k=1}^{\infty}\omega^{4k}. \tag{6}\] The equality (6) is called _the Zeckendorf expansion of a real number in \((0,1)\)_ since it is a power series expansion in \(\omega\) where no consecutive powers are used; a formal definition is given in Definition 5.11 below. By the uniqueness of Zeckendorf expansions of the real numbers in \((0,1)\), the above infinite sum in (6) is not equal to any finite sum \(\mathbf{b}\cdot\widehat{F}-1\) where \(\mathbf{b}\in\mathscr{F}_{s}\); see Theorem 5.13. Let \(s\) be an integer \(\geq 2\), and let \(\mathscr{F}_{s}=\{\mathbf{b}_{1},\ldots,\mathbf{b}_{\ell}\}\). Then, there is \(k\in\mathbb{N}\) such that \(\mathbf{b}_{k}\cdot\widehat{F}\;<\;\frac{\alpha}{a}\;<\;\mathbf{b}_{k+1}\cdot\widehat{F}.\) This implies that \[\log_{\phi}(\mathbf{b}_{k}\cdot\widehat{F})\;<\;\log_{\phi}(\tfrac{\alpha}{a})\;<\;\log_{\phi}(\mathbf{b}_{k+1}\cdot\widehat{F}).\] Since \(\operatorname{frc}\big(\mathfrak{F}^{-1}(K_{n})\big)=\log_{\phi}(\alpha/a)+o(1)\) for all \(n\in\mathbb{N}\), by Lemma 4.4, we have \(\operatorname{Prob}\big\{\,n\in\mathbb{N}\colon\operatorname{LB}_{s}(K_{n})=\mathbf{b}_{k}\,\big\}=1\). For example, consider the case of \(s=9\), and notice that \(\omega^{4}+\omega^{8}<\frac{\alpha}{a}-1<\omega^{4}+\omega^{7}\) by (6).
Then, we have \(\mathbf{b}\cdot\widehat{F}<\frac{\alpha}{a}<\widetilde{\mathbf{b}}\cdot\widehat{F}\) where \[\mathbf{b}=(1,0,0,0,1,0,0,0,1)\;\;\text{and}\;\;\widetilde{\mathbf{b}}=(1,0,0,0,1,0,0,1,0),\] and the probability of having the leading block \(\mathbf{b}\) in the values of the Lucas sequence is \(1\). Recall uniform continuations from Definition 1.6. Since the distribution of the leading blocks of the Lucas sequence \(K\) is concentrated on one particular block in \(\mathscr{F}_{s}\) for each \(s\), there does not exist a uniform continuation \(f\), described in Theorem 5.6, whose equidistribution is associated with the leading block distributions of the Lucas sequence \(K\). For a uniform continuation to exist, the values of the leading block distributions must be put together into a continuous function, and below we formulate the requirement more precisely.

**Definition 5.11**.: Let \(\mathbf{I}\) denote the interval \((0,1)\) of real numbers. An infinite tuple \(\mu\in\prod_{k=1}^{\infty}\mathbb{N}_{0}\) is called a _Zeckendorf expression for \(\mathbf{I}\)_ if \(\mu(k)\leq 1\), \(\mu(k)\mu(k+1)=0\), and for all \(j\in\mathbb{N}_{0}\), the sequence \(\{\mu(j+n)\}_{n=1}^{\infty}\) is not equal to the sequence \(\{(1+(-1)^{n+1})/2\}_{n=1}^{\infty}=(1,0,1,0,\ldots)\). Let \(\mathscr{F}^{*}\) be the set of Zeckendorf expressions for \(\mathbf{I}\). Given \(s\in\mathbb{N}\) and \(\mu\in\mathscr{F}^{*}\), let \(\mu|s:=(\mu(1),\ldots,\mu(s))\). Given \(s\in\mathbb{N}\) and \(\{\mu,\tau\}\subset\mathscr{F}^{*}\), we declare \(\mu|s<\tau|s\) if \(\mu|s\cdot\widehat{F}<\tau|s\cdot\widehat{F}\), which coincides with the lexicographical order on \(\mathscr{F}\).

**Notation 5.12**.: Given a sequence \(Q\) of real numbers, and \(\mu\in\prod_{k=1}^{\infty}\mathbb{N}_{0}\), we define \(\mu\cdot Q:=\sum_{k=1}^{\infty}\mu(k)Q_{k}\), which may or may not be a convergent series.

**Theorem 5.13** ([10], Zeckendorf Theorem for \(\mathbf{I}\)).: _Given a real number \(\beta\in\mathbf{I}\), there is a unique \(\mu\in\mathscr{F}^{*}\) such that \(\beta=\sum_{k=1}^{\infty}\mu(k)\omega^{k}=(\mu\cdot\widehat{F})\omega\)._

For the uniqueness of \(\mu\) in the theorem, we require the infinite tuples such as \((0,1,0,1,0,\ldots)\) not to be members of \(\mathscr{F}^{*}\) since \(\sum_{k=1}^{\infty}\omega^{2k}=\omega\), which is analogous to \(0.0999\ldots=0.1\) in decimal expansion.

**Proposition 5.14** ([10]).: _Let \(\{\mu,\tau\}\subset\mathscr{F}^{*}\). Then, \(\mu\cdot\widehat{F}<\tau\cdot\widehat{F}\) if and only if \(\mu|s<\tau|s\) for some \(s\in\mathbb{N}\)._

Given a sequence with strong leading block distribution, we shall construct a function on \(\mathbf{I}\) in Definition 5.16 below, and it is well-defined by Lemma 5.15.

**Lemma 5.15**.: _Given a real number \(\beta\in\mathbf{I}\), there is a unique \(\mu\in\mathscr{F}^{*}\) such that \(\mu(1)=1\) and \(\phi(\mu\cdot\widehat{F}-1)=\beta\)._

Proof.: Let \(\widehat{F}^{*}\) be the sequence defined by \(\widehat{F}^{*}_{n}=\omega^{n}\). Given a real number \(\beta\in\mathbf{I}\), we have \(0<\omega+\beta\omega^{2}<1\). By Theorem 5.13, there is \(\mu\in\mathscr{F}^{*}\) such that \((\mu\cdot\widehat{F})\omega=\mu\cdot\widehat{F}^{*}=\omega+\beta\omega^{2}\), which implies \(\phi(\mu\cdot\widehat{F}-1)=\beta\). We claim that \(\mu(1)=1\).
If \(\mu(1)=0\), then by Proposition 5.14, \(\omega+\beta\omega^{2}=\mu\cdot\widehat{F}^{*}=(0,\ldots)\cdot\widehat{F}^{*}< \omega=(1,0,0,\ldots)\cdot\widehat{F}^{*}\), which implies a false statement \(\beta\omega^{2}<0\). Thus, \(\mu(1)=1\). Recall from Definition 5.11 the definition of inequalities on tuples. **Definition 5.16**.: Let \(K\) be a sequence of positive integers with strong leading block distribution under \(\mathcal{F}\)-expansion such that given \(\mu\in\mathcal{F}^{*}\) and an integer \(s\geq 2\) such that \(\mu(1)=1\), the following limit exists: \[\lim_{s\to\infty}\operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\,\mathrm{LB}_{ s}(K_{n})\leq\mu|s\,\big{\}} \tag{7}\] where \(\mu|s\) is identified in \(\mathcal{F}_{s}\). Let \(f^{*}_{K}:[0,1]\to[0,1]\) be the function given by \(f^{*}_{K}(0)=0\), \(f^{*}_{K}(1)=1\), and \(f^{*}_{K}(\phi(\mu\cdot\widehat{F}-1))\) is equal to the value in (7). If \(f^{*}_{K}\) is continuous and increasing, then \(K\) is said to _have continuous leading block distribution under \(\mathcal{F}\)-expansion_. **Lemma 5.17**.: _Let \(K\) be a sequence with continuous leading block distribution under \(\mathcal{F}\)-expansion, and let \(f^{*}_{K}\) be the function defined in Definition 5.16. Let \(\mu\in\mathcal{F}^{*}\) such that there is \(t\in\mathbb{N}\) such that \(\mu(1)=1\) and \(\mu(k)=0\) for all \(k>t\). Then, \(f^{*}_{K}(\phi(\mu|t\cdot\widehat{F}-1))\ \leq\ \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\, \mathrm{LB}_{t}(K_{n})\leq\mu|t\,\big{\}}\)._ Proof.: Notice that if \(s>t\), then \[\{n\in\mathbb{N}\,:\,\mathrm{LB}_{s}(K_{n})\leq\mu|s\}\subset\{n \in\mathbb{N}\,:\,\mathrm{LB}_{t}(K_{n})\leq\mu|t\}\] \[\Rightarrow \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\,\mathrm{LB}_{s}(K _{n})\leq\mu|s\,\big{\}}\ \leq\ \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\,\mathrm{LB}_{t}(K_{n})\leq\mu|t \,\big{\}}\] \[\lim_{s\to\infty}\operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,: \,\mathrm{LB}_{s}(K_{n})\leq\mu|s\,\big{\}}=f^{*}_{K}(\phi(\mu\cdot\widehat{F} -1))\ \leq\ \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\,\mathrm{LB}_{t}(K_{n})\leq\mu|t \,\big{\}}\] Since \(\mu|t\cdot\widehat{F}=\mu\cdot\widehat{F}\), \[\Rightarrow f^{*}_{K}(\phi(\mu|t\cdot\widehat{F}-1))\ \leq\ \operatorname{Prob}\big{\{}\,n\in\mathbb{N}\,:\, \mathrm{LB}_{t}(K_{n})\leq\mu|t\,\big{\}}\,.\] Recall uniform continuations from Definition 1.6. **Theorem 5.18**.: _Let \(K\) be a sequence with continuous leading block distribution under \(\mathscr{F}\)-expansion. Let \(f_{K}^{*}\) be the function defined in Definition 5.16. Then, there is a uniform continuation \(f\) of \(F\) such that \({f_{\infty}}^{-1}=f_{K}^{*}\) and \(\operatorname{\mathrm{frc}}\big{(}f^{-1}(K_{n})\big{)}\) is equidistributed._ Proof.: Let \(f:[1,\infty)\to\mathbb{R}\) be the function given by \(f(x)=F_{n}+(F_{n+1}-F_{n})(f_{K}^{*})^{-1}(p)\) where \(x=n+p\) and \(p=\operatorname{\mathrm{frc}}(x)\). Then, \(f\) is a uniform continuation of \(F_{n}\) since \((f_{K}^{*})^{-1}\) is independent of \(n\). Then, \({f_{\infty}}=(f_{K}^{*})^{-1}\), i.e., \({f_{\infty}}^{-1}=f_{K}^{*}\). Let \(\beta\in(0,1)\) be a real number, and below we show that \(\operatorname{\mathrm{Prob}}\big{\{}\,n\in\mathbb{N}:\operatorname{\mathrm{ frc}}\big{(}f^{-1}(K_{n})\big{)}\leq\beta\,\big{\}}\) exists, and it is equal to \(\beta\). Recall \(\mathfrak{F}\) from Definition 4.1 and \(\mathfrak{F}_{n}\) from Definition 5.1. Let \(n\in\mathbb{N}\), and let \(m\in\mathbb{N}\) such that \(F_{m}\leq K_{n}<F_{m+1}\). 
Then, \(K_{n}=f(m+p_{n}^{\prime})=\mathfrak{F}(m+p_{n})\) where \(p_{n},p_{n}^{\prime}\in[0,1]\), i.e., \(f_{\infty}(p_{n}^{\prime})=\mathfrak{F}_{m}(p_{n})\). By Theorem 5.13 and Lemma 5.15, there is a unique \(\mu\in\mathscr{F}^{*}\) such that \(f_{\infty}(\beta)=\phi(\mu\cdot\widehat{F}-1)\) and \(\mu(1)=1\). Recall \(\mathfrak{F}_{\infty}\) from Definition 5.1. Notice that \[\operatorname{frc}\big(f^{-1}(K_{n})\big)\,=\,p_{n}^{\prime}\,\leq\,\beta\,\Rightarrow\,{f_{\infty}}^{-1}(\mathfrak{F}_{m}(p_{n}))\,\leq\,\beta\,\Rightarrow\,p_{n}\,\leq\,{\mathfrak{F}_{m}}^{-1}(f_{\infty}(\beta))\] \[\Rightarrow\operatorname{frc}\big(\mathfrak{F}^{-1}(K_{n})\big)\,\leq\,{\mathfrak{F}_{m}}^{-1}(f_{\infty}(\beta))\,=\,{\mathfrak{F}_{\infty}}^{-1}(f_{\infty}(\beta))+o(1)\,=\,\log_{\phi}(\mu\cdot\widehat{F})+o(1).\] Fix an integer \(t\geq 2\). By Proposition 5.14, we have \(\mu\cdot\widehat{F}=\mu|t\cdot\widehat{F}+\gamma_{t}<\widetilde{\mu|t}\cdot\widehat{F}\) where \(\gamma_{t}\geq 0\) and \(\widetilde{\mu|t}\in\mathscr{F}_{t}\) is as defined in Definition 3.7. Since \(\log_{\phi}(\widetilde{\mu|t}\cdot\widehat{F})-\log_{\phi}(\mu\cdot\widehat{F})>0\), there is \(M_{t}\in\mathbb{N}\) such that for all \(n\geq M_{t}\), \[\operatorname{frc}\big(\mathfrak{F}^{-1}(K_{n})\big)\,\leq\,\log_{\phi}(\mu\cdot\widehat{F})+o(1)\,<\,\log_{\phi}(\widetilde{\mu|t}\cdot\widehat{F}).\] By Lemma 4.4, this implies \(\operatorname{LB}_{t}(K_{n})\leq\mu|t\). Recall \(\Omega_{n}=\{k\in\mathbb{N}:k\leq n\}\); \[\operatorname{Prob}\big\{\,k\in\Omega_{n}:\operatorname{frc}\big(f^{-1}(K_{k})\big)\leq\beta\,\big\}+o(1)\,\leq\,\operatorname{Prob}\big\{\,k\in\Omega_{n}:\operatorname{LB}_{t}(K_{k})\leq\mu|t\,\big\}+o(1)\] \[\Rightarrow\,\limsup_{n}\operatorname{Prob}\big\{\,k\in\Omega_{n}:\operatorname{frc}\big(f^{-1}(K_{k})\big)\leq\beta\,\big\}\,\leq\,\operatorname{Prob}\big\{\,n\in\mathbb{N}:\operatorname{LB}_{t}(K_{n})\leq\mu|t\,\big\}.\] Let us work on the \(\liminf\) of the probability. Since \(\beta\neq 0\), there is \(t_{0}>1\) such that \(\mu(t_{0})>0\). Thus, if \(t>t_{0}\) is sufficiently large, then there are at least two entries \(1\) in \(\mu|t\), and \(\mu|t\) has more entries after the second entry of \(1\) from the left. Recall the product \(*\) from Definition 2.2. This choice of \(t\) allows us to have the unique coefficient functions \(\tilde{\mu}\) and \(\widehat{\mu}\) in \(\mathscr{F}_{t}\) such that \(1+\tilde{\mu}*F=\widehat{\mu}*F\) and \(1+\widehat{\mu}*F=\mu|t*F\).
Then, by Lemma 4.4, \[\operatorname{\mathrm{LB}}_{t}(K_{n})\,{\leq}\,\tilde{\mu}\, \Rightarrow\,\operatorname{\mathrm{frc}}\big{(}\mathfrak{F}^{-1}(K_{n})\big{)} \,<\,{\log_{\phi}(\widehat{\mu}\cdot\widehat{F})}+o(1)\] \[\Rightarrow \,p_{n}\,<\,{\mathfrak{F}_{m}}^{-1}(\phi(\widehat{\mu}\cdot \widehat{F}-1))+o(1)\] \[\Rightarrow \,{\mathfrak{F}_{m}}(p_{n})\,=\,f_{\infty}(p_{n}^{\prime})\,<\, {\phi}(\widehat{\mu}\cdot\widehat{F}-1)+o(1)\] \[\Rightarrow \,{p_{n}^{\prime}}\,=\,\operatorname{\mathrm{frc}}\big{(}f^{-1}(K _{n})\big{)}\,<\,{f_{\infty}}^{-1}(\phi(\widehat{\mu}\cdot\widehat{F}-1))+o(1)\] \[\,<\,{f_{\infty}}^{-1}(\phi(\mu|t\cdot\widehat{F}-1))\quad\text{by Proposition \ref{prop:1},}\] \[\,\leq\,{f_{\infty}}^{-1}(\phi(\mu\cdot\widehat{F}-1))\,=\,\beta\] \[\Rightarrow \,\operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}: \operatorname{\mathrm{LB}}_{t}(K_{k})\,{\leq}\,\tilde{\mu}\,\big{\}}+o(1)\, \leq\,\operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}:\operatorname{ \mathrm{frc}}\big{(}f^{-1}(K_{k})\big{)}\leq\beta\,\big{\}}+o(1)\] \[\Rightarrow \,\operatorname{\mathrm{Prob}}\big{\{}\,n\in\mathbb{N}: \operatorname{\mathrm{LB}}_{t}(K_{n})\,{\leq}\,\tilde{\mu}\,\big{\}}\,\leq\, \liminf_{n}\,\operatorname{\mathrm{Prob}}\big{\{}\,k\in\Omega_{n}: \operatorname{\mathrm{frc}}\big{(}f^{-1}(K_{k})\big{)}\leq\beta\,\big{\}}\] By Lemma 5.17, \[{f_{\infty}}^{-1}(\phi(\hat{\mu}\cdot\widehat{F}-1))\,\leq\,\liminf_{n}\,\, \operatorname{Prob}\big{\{}\,k\in\Omega_{n}:\operatorname{frc}\big{(}f^{-1}(K_{k })\big{)}\leq\beta\,\big{\}}\,.\] It is given that \(\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{t}(K_{n})\leq \mu|t\,\big{\}}\to{f_{\infty}}^{-1}(\phi(\mu\cdot\widehat{F}-1))\) as \(t\to\infty\). Let us calculate the other bound; \[2+\tilde{\mu}*F\,=\,\mu|t*F\,\,\Rightarrow\,\,\,2+\sum_{k=1}^{t }\tilde{\mu}(k)F_{t-k+1}=\sum_{k=1}^{t}\mu(k)F_{t-k+1}\] \[\,\,\Rightarrow\,\,\,2+\sum_{k=1}^{t}\tilde{\mu}(k)\Big{(} \alpha\phi^{t-k+1}+O(\phi^{-t+k-1})\Big{)}\,=\,\,\sum_{k=1}^{t}\mu(k)\Big{(} \alpha\phi^{t-k+1}+O(\phi^{-t+k-1})\Big{)}\] \[\,\,\Rightarrow\,\,O(1)+\alpha\sum_{k=1}^{t}\tilde{\mu}(k)\phi^ {t-k+1}\,=\,\,\alpha\sum_{k=1}^{t}\mu(k)\phi^{t-k+1}\] \[\,\,\Rightarrow\,\,O(\phi^{-t})+\sum_{k=1}^{t}\tilde{\mu}(k) \omega^{k-1}\,=\,\,\sum_{k=1}^{t}\mu(k)\omega^{k-1}\] \[\,\,\Rightarrow\,\,{o(1)}+\tilde{\mu}\cdot\widehat{F}\,=\,\,\mu |t\cdot\widehat{F}\,\,\Rightarrow\,\,\tilde{\mu}\cdot\widehat{F}\to\mu\cdot \widehat{F}\] \[\,\,\Rightarrow\,{f_{\infty}}^{-1}(\phi(\hat{\mu}\cdot\widehat{ F}-1))\to{f_{\infty}}^{-1}(\phi(\mu\cdot\widehat{F}-1))\,=\,\,\beta.\] It is clear that if \(f\) is a uniform continuation of \(F\), and \(K\) is a sequence of positive integers approaching \(\infty\) such that \(\operatorname{frc}\big{(}f^{-1}(K_{n})\big{)}\) is equidistributed, then, by Lemma 4.4, \(K\) has continuous leading block distribution under \(\mathscr{F}\)-expansion. Therefore, we have the following. **Theorem 5.19**.: _Let \(K\) be a sequence of positive integers approaching \(\infty\). Then, \(K\) has continuous leading block distribution under \(\mathscr{F}\)-expansion if and only if there is a uniform continuation \(f\) of \(F\) such that \(\operatorname{frc}\big{(}f^{-1}(K_{n})\big{)}\) is equidistributed._ ## 6 Benford's Law under generalized Zeckendorf expansion The contents in Sections 3, 4, and 5 are for Zeckendorf expansion, but the arguments of the proofs apply to the setup for generalized Zeckendorf expansion without difficulties. 
In this section, we introduce definitions and results for generalized Zeckendorf expansion without proofs, but only refer to the corresponding theorems for Zeckendorf expansion proved in the earlier sections. ### Generalized Zeckendorf expansion Let us review the generalized Zeckendorf expansion. Recall \(\mathbb{N}_{0}\) from Definition 2.1 **Definition 6.1**.: Given a tuple \(L=(a_{1},a_{2},\ldots,a_{N})\in\mathbb{N}_{0}^{N}\) where \(N\geq 2\) and \(a_{1}>0\), let \(\Theta\) be the following infinite tuple in \(\prod_{k=1}^{\infty}\mathbb{N}_{0}\): \[(a_{1},a_{2},\ldots,a_{N-1},a_{N},a_{1},a_{2},\ldots,a_{N-1},a_{N},\ldots)\] where the finite tuple \((a_{1},a_{2},\ldots,a_{N-1},a_{N})\) repeats. Let \(\Theta(k)\) denote the \(k\)th entry of \(\Theta\), and let \(\Theta|s=(\Theta(1),\ldots,\Theta(s))\) for \(s\in\mathbb{N}\). Recall len from Definition 2.2. Let \(\mathscr{H}^{\circ}\) be the recursively-defined set of tuples \(\epsilon\) with arbitrary finite length such that \(\epsilon\in\mathscr{H}^{\circ}\) if and only if there is smallest \(s\in\mathbb{N}_{0}\) such that \(\epsilon|s=\Theta|s\), \(\epsilon(s+1)<\Theta(s+1)\), and \((\epsilon(s+2),\ldots,\epsilon(n))\in\mathscr{H}^{\circ}\) where \(n=\operatorname{len}(\epsilon)\) and \(s\) is allowed to be \(\operatorname{len}(\epsilon)\). Let \(\mathscr{H}:=\{\epsilon\in\mathscr{H}^{\circ}:\epsilon(1)>0\}\). The set \(\mathscr{H}\) is called a _periodic Zeckendorf collection of coefficient functions for positive integers_, and \(L\) is called _a principal maximal block of the periodic Zeckendorf collection \(\mathscr{H}\)_. Notice that if \(L=(1,0,1,0)\) is a principal maximal block of the periodic Zeckendorf collection \(\mathscr{H}\), then \(L^{\prime}=(1,0)\) is a principal maximal block of \(\mathscr{H}\) as well. For this reason, the indefinite article was used in the statement of the definition of principal maximal blocks. **Example 6.2**.: Let \(\mathscr{H}\) be the (periodic) Zeckendorf collection determined by the principal maximal block \(L=(3,2,1)\). Then, \(\Theta=(3,2,1,3,2,1,\ldots)\), and \((0)\) and \((3,2,1)\) are members of \(\mathscr{H}^{\circ}\). For \((0)\in\mathscr{H}^{\circ}\), we set \(s=0\) in Definition 6.1, and for \((3,2,1)\in\mathscr{H}^{\circ}\), we set \(s=3\). Let \(\epsilon=(3,2,0)\) and \(\mu=(3,1,3,2,0)\). For \(\epsilon\), if \(s=2\), by the definition, we have \(\epsilon\in\mathscr{H}\). For \(\mu\), if \(s=1\), then \(\mu|1=\Theta|1\), \(\mu(2)<\Theta(2)\), and \((\mu(3),\ldots,\mu(5))=\epsilon\in\mathscr{H}^{\circ}\). Listed below are more examples of members of \(\mathscr{H}\): \[(3,2,1,3,2,1),\,(3,0,0,3),\,(1,2,3,1,0,3),\,(1,2,3,1,1,0).\] Recall the product notation from Definition 2.2 **Definition 6.3**.: Let \(\mathscr{H}\) be a set of coefficient functions, and let \(H\) be an increasing sequence of positive integers. If given \(n\in\mathbb{N}\), there is a unique \(\epsilon\in\mathscr{H}\) such that \(\epsilon*H=n\), then \(H\) is called a _fundamental sequence_ of \(\mathscr{H}\), and the expression \(\epsilon*H\) is called an \(\mathscr{H}\)-_expansion_. If \(\mathscr{H}\) is a periodic Zeckendorf collection for positive integers, then, by Theorem 6.4 below, there is a unique fundamental sequence of \(\mathscr{H}\). **Theorem 6.4** ([10, 17]).: _Let \(\mathscr{H}\) be a periodic Zeckendorf collection, and let \(L=(a_{1},\ldots,a_{N})\) be its principal maximal block. 
Then, there is a unique fundamental sequence \(H\) of \(\mathscr{H}\), and it is given by the following recursion:_ \[H_{n+N}\,=\,a_{1}H_{n+N-1}+\cdots+a_{N-1}H_{n+1}+(1+a_{N})H_{n} \ \ \text{for all}\ n\in\mathbb{N},\ \text{and} \tag{8}\] \[H_{n}\,=\,1+\sum_{k=1}^{n-1}a_{k}H_{n-k}\ \ \text{for all}\ \ 1\leq n\leq N+1.\] If \(L=(1,0)\), then its periodic Zeckendorf collection is \(\mathcal{F}\) defined in Definition 3.1, and its fundamental sequence is the Fibonacci sequence. If \(L=(9,9)\), then the fundamental sequence \(H\) is given by \(H_{n}=10^{n-1}\), and \(\epsilon*H\) for \(\epsilon\in\mathcal{H}\) are base-10 expansions. **Definition 6.5**.: Let \(L=(a_{1},\ldots,a_{N})\) be the list defined in Definition 6.1. Let \(\psi=\psi_{\mathcal{H}}=\psi_{L}\) be the dominant real zero of the polynomial \(g=g_{\mathcal{H}}=g_{L}(x):=x^{N}-\sum_{k=1}^{N-1}a_{k}x^{N-k}-(1+a_{N})\), and \(\theta:=\psi^{-1}\). Let \(\widehat{H}\) be the sequence given by \(\widehat{H}_{n}=\theta^{n-1}\). By (8), the sequence \(\widehat{H}\) in Definition 6.5 satisfies \[\widehat{H}_{n}\,=\,a_{1}\widehat{H}_{n+1}+\cdots+a_{N-1}\widehat{H}_{n+N-1}+ (1+a_{N})\widehat{H}_{n+N}\quad\text{for all }n\in\mathbb{N}. \tag{9}\] The following proposition is proved in [10, Lemma 43] and [16, Lemma 2.1]. **Proposition 6.6**.: _Let \(L=(a_{1},\ldots,a_{N})\) be the list defined in Definition 6.1, and let \(g=x^{N}-\sum_{k=1}^{N-1}a_{k}x^{N-k}-(1+a_{N})\) be the polynomial. Then, \(g\) has one and only one positive real zero \(\psi\), it is a simple zero, and there are no other complex zeros \(z\) such that \(|z|\geq\psi\)._ **Theorem 6.7**.: _Let \(\mathcal{H}\) be a periodic Zeckendorf collection with a principal maximal block \(L=(a_{1},\ldots,a_{N})\), and let \(H\) be the fundamental sequence of \(\mathcal{H}\). Then \(H_{n}=\delta\psi^{n}+O(\psi^{rn})\) for \(n\in\mathbb{N}\) where \(\delta\) and \(r\) are positive (real) constants, \(r<1\), and \(\psi\) is the dominant zero defined in Definition 6.5._ Proof.: Let \(g\) be the characteristic polynomial of degree \(N\) defined in Definition 6.5, and let \(\{\lambda_{1},\ldots,\lambda_{m}\}\) be the set of \(m\) distinct (complex) zeros of \(g\) where \(m\leq N\) and \(\lambda_{1}=\psi\). Then, by Proposition 6.6, we have \(|\lambda_{k}|<\psi\) for \(2\leq k\leq m\). Since \(\psi\) is a simple zero, by the generalized Binet's formula [15], there are polynomials \(h_{k}\) for \(2\leq k\leq m\) and a constant \(\delta\) such that \(H_{n}=\delta\psi^{n}+\sum_{k=2}^{m}h_{k}(n)\lambda_{k}^{n}\) for \(n\in\mathbb{N}\). Thus, there is a positive real number \(r<1\) such that \(H_{n}=\delta\psi^{n}+O(\psi^{rn})\) for \(n\in\mathbb{N}\). Notice that \(\lim_{n\to\infty}H_{n}/\psi^{n}=\delta\), and let us show that \(\delta\) is a positive real number, and in particular, it is non-zero. By [11, Theorem 5.1], \[\delta\,=\,\lim_{n\to\infty}\frac{H_{n}}{\psi^{n}}\,=\,\frac{1}{\psi g^{\prime}( \psi)}\sum_{k=1}^{N}\frac{H_{k}}{(k-1)!}\left[\frac{d^{k-1}}{dx^{k-1}}\frac{g( x)}{x-\psi}\,\right]_{x=0}. \tag{10}\] By the product rule, we have \[\left[\frac{d^{k-1}}{dx^{k-1}}\frac{g(x)}{x-\psi}\,\right]_{x=0}\,=\,\left[ \sum_{j=0}^{k-1}\binom{k-1}{j}g^{(j)}(x)(x-\psi)^{-1-j}\prod_{t=1}^{j}(-t) \right]_{x=0}.\] Notice that if \(1\leq j\leq N-1\), then \(g^{(j)}(0)=-a_{N-j}j!\leq 0\), and if \(g(0)=-(1+a_{N})<0\). The inequality \((-\psi)^{-1-j}\prod_{t=1}^{j}(-t)<0\) for all \(0\leq j\leq k-1\) follows immediately from considering the cases of \(j\) being even or odd. 
Thus, the summands in (10) are non-negative, and some are positive. This concludes the proof of \(\delta\) being a positive real number. For the remainder of the paper, let \(\mathcal{H}\), \(H\), and \(\psi\) be as defined in Definition 6.1. ### Strong Benford's Law Let us begin with definitions related to leading blocks under \(\mathcal{H}\)-expansion. **Definition 6.8**.: Let \(n=\epsilon*H\) for \(n\in\mathbb{N}\) and \(\epsilon\in\mathcal{H}\). If \(s\leq\operatorname{len}(\epsilon)\), then \((\epsilon(1),\ldots,\epsilon(s))\in\mathcal{H}\) is called _the leading block of \(n\) with length \(s\) under \(\mathcal{H}\)-expansion_. Recall that \(N=\operatorname{len}(L)\). If \(N\leq s\leq\operatorname{len}(\epsilon)\), let \(\operatorname{LB}_{s}^{\mathcal{H}}(n)\), or simply \(\operatorname{LB}_{s}(n)\) if the context is clear, denote the leading block of length \(s\), and if \(s\leq\operatorname{len}(\epsilon)\) and \(s<N\), then let \(\operatorname{LB}_{s}^{\mathcal{H}}(n)\) or simply \(\operatorname{LB}_{s}(n)\) denote \((\epsilon(1),\ldots,\epsilon(s),0,\ldots,0)\in\mathbb{N}_{0}^{N}\). If \(s>\operatorname{len}(\epsilon)\), \(\operatorname{LB}_{s}(n)\) is declared to be undefined. Recall the product \(*\) from Definition 2.2. Given an integer \(s\geq N\), let \(\mathcal{H}_{s}:=\{\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{\ell}\}\) be the finite set of the leading blocks of length \(s\) occurring in the \(\mathcal{H}\)-expansions of \(\mathbb{N}\) such that \(1+\mathbf{b}_{k}*H=\mathbf{b}_{k+1}*H\) for all \(k\leq\ell-1\). Recall the truncation notation from Definition 4.3. If \(1\leq s<N\), then let \(\mathcal{H}_{s}:=\{\mathbf{b}_{1},\mathbf{b}_{2},\ldots,\mathbf{b}_{\ell}\}\) be the finite set of the leading blocks of length \(N\) occurring in the \(\mathcal{H}\)-expansions of \(\mathbb{N}\) such that \(\mathbf{b}_{k}(j)=0\) for all \(1\leq k\leq\ell\) and \(j>s\) and \(1+\mathbf{b}_{k}|s*H=\mathbf{b}_{k+1}|s*H\) for all \(k\leq\ell-1\). The leading block \(\mathbf{b}_{\ell}\) is called _the largest leading block in \(\mathcal{H}_{s}\)_. The exclusive block \(\mathbf{b}_{\ell+1}\) is a coefficient function of length \(s\) defined as follows. If \(s\geq N\), \(s\equiv p\pmod{N}\), and \(0\leq p<N\), then \[\mathbf{b}_{\ell+1}:=(a_{1},\ldots,a_{N-1},a_{N},\ldots,a_{1},\ldots,a_{N-1},1 +a_{N},c_{1},\ldots,c_{p})\] where \(c_{k}=0\) for all \(k\). If \(1\leq s<N\), then \(\mathbf{b}_{\ell+1}:=(a_{1},\ldots,a_{N-1},1+a_{N})\). If \(\mathbf{b}\) is a leading block \(\mathbf{b}_{k}\in\mathcal{H}_{s}\), then we denote \(\mathbf{b}_{k+1}\) by \(\widetilde{\mathbf{b}}\). If \(s<N\), then the leading blocks \(\mathbf{b}\) in \(\mathcal{H}_{s}\) has lengths \(N\) with \(N-s\) last entries of \(0\), and this case is treated as above in order to make \(\mathbf{b}\) and \(\widetilde{\mathbf{b}}\) in the statement and proof of Lemma 4.4 fit into the case of periodic Zeckendorf collections; see Lemma 6.13. By [10, Definition 2 & Lemma 3] and Theorem 6.4, the subscript numbering of \(\mathbf{b}_{k}\in\mathcal{H}_{s}\) for \(1\leq k\leq\ell\) coincides with the lexicographical order on the coefficient functions. If \(\mathbf{b}\) is the largest leading block in \(\mathcal{H}_{s}\) where \(s\geq N\), then \[\mathbf{b}=(\ldots,a_{1},\ldots,a_{N},a_{1},\ldots,a_{p})\text{ if }s\equiv p \pmod{N}\text{ and }0\leq p<N\text{,}\] and \(1+\mathbf{b}*H=\widetilde{\mathbf{b}}*H=(\ldots,a_{1},\ldots,1+a_{N},0,\ldots,0 )*H=H_{s+1}\) where the last \(p\) entries of \(\widetilde{\mathbf{b}}\) are zeros. 
If \(s\equiv 0\pmod{N}\) and \(\mathbf{b}\) is the largest leading block in \(\mathcal{H}_{s}\), then \[\widetilde{\mathbf{b}}=(a_{1},\ldots,a_{N-1},a_{N},\ldots,a_{1},\ldots,a_{N-1},1+a_{N}).\] If \(s<N\) and \(\mathbf{b}\) is the largest leading block in \(\mathcal{H}_{s}\), then \(\widetilde{\mathbf{b}}=(a_{1},\ldots,a_{N-1},1+a_{N})\). Recall \(\widehat{H}\) from Definition 6.5. For all cases, if \(\mathbf{b}\) is the largest leading block in \(\mathcal{F}_{s}\), then \(\widetilde{\mathbf{b}}\cdot\widehat{H}=\psi\). The proof of Theorem 6.9 below follows immediately from Lemma 6.12 and Theorem 6.14. **Theorem 6.9**.: _Let \(K\) be a sequence of positive integers such that \(K_{n}=ab^{n}(1+o(1))\) where \(a\) and \(b\) are positive real numbers such that \(\log_{\psi}b\) is irrational. Then, given \(\mathbf{b}\in\mathcal{H}_{s}\),_ \[\operatorname{Prob}\left\{\,n\in\mathbb{N}\,\colon\operatorname{LB}_{s}(K_{n })=\mathbf{b}\,\right\}\;=\;\log_{\psi}\frac{\widetilde{\mathbf{b}}\cdot\widehat {H}}{\mathbf{b}\cdot\widehat{H}}.\] Motivated from the leading block distributions of the exponential sequences considered in Theorem 6.9, we declare strong Benford's Law under \(\mathscr{H}\)-expansion as follows. **Definition 6.10**.: A sequence \(K\) of positive integers is said to _satisfy strong Benford's Law under \(\mathscr{H}\)-expansion_ if given \(\mathbf{b}\in\mathscr{H}_{s}\), \[\operatorname{Prob}\left\{\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})= \mathbf{b}\,\right\}\;=\;\log_{\psi}\frac{\widetilde{\mathbf{b}}\cdot \widehat{H}}{\mathbf{b}\cdot\widehat{H}}.\] ### Benford continuation of \(H\) We used a real analytic continuation of the Fibonacci sequence for Zeckendorf expansion, but as demonstrated in the earlier sections, the leading block distributions are determined by its limit \(\mathfrak{F}_{\infty}\). Thus, rather than using a real analytic continuation of \(H\), we may use the limit version directly, which is far more convenient. By Theorem 6.7, \(H_{n}=\delta\psi^{n}+O(\psi^{rn})=\delta\psi^{n}(1+o(1))\) where \(\delta\) and \(r<1\) are positive real constants, and we define the following: **Definition 6.11**.: Let \(\mathfrak{H}:[1,\infty)\to\mathbb{R}\) be the function given by \[\mathfrak{H}(x)=H_{n}+(H_{n+1}-H_{n})\frac{\psi^{p}-1}{\psi-1}\] where \(x=n+p\) and \(p=\operatorname{frc}(x)\), and it is called _a Benford continuation of \(H\)_. Recall Definition 1.6. Then, \(\mathfrak{H}\) is a uniform continuation of \(H\), and \(\mathfrak{H}_{\infty}(p)=\frac{\psi^{p}-1}{\psi-1}\) for all \(p\in[0,1]\). We leave the proof of the following to the reader. **Lemma 6.12**.: _For real numbers \(x\in[1,\infty)\), we have \(\mathfrak{H}(x)=\delta\psi^{x}(1+o(1))\), and \(\mathfrak{H}^{-1}(x)=\log_{\psi}(x)-\log_{\psi}\delta+o(1)\)._ Recall \(\mathscr{H}_{s}\) from Definition 6.8 and \(\widehat{H}\) from Definition 6.5. **Lemma 6.13**.: _Let \(K\) be a sequence of positive real numbers approaching \(\infty\). Let \(\mathbf{b}\in\mathscr{H}_{s}\), and let \(A_{\mathbf{b}}:=\{n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})=\mathbf{b}\}\). 
Then, there are real numbers \(\gamma_{n}=o(1)\) and \(\widetilde{\gamma}_{n}=o(1)\) such that \(n\in A_{\mathbf{b}}\) if and only if_ \[\log_{\psi}\mathbf{b}\cdot\widehat{H}+\gamma_{n}\;\leq\;\operatorname{frc} \left(\mathfrak{H}^{-1}(K_{n})\right)\;<\;\log_{\psi}\widetilde{\mathbf{b}} \cdot\widehat{H}+\widetilde{\gamma}_{n}, \tag{11}\] _where \(\widetilde{\gamma}_{n}=0\) when \(\mathbf{b}\) is the largest leading block of length \(s\)._ There is no difficulty in applying the arguments of the proof of Lemma 4.4 to Lemma 6.13, and we leave the proof to the reader. Recall Definition 6.10. **Theorem 6.14**.: _Let \(K\) be an increasing sequence of positive integers such that \(\operatorname{frc}\left(\mathfrak{H}^{-1}(K_{n})\right)\) is equidistributed. Then, \(K\) satisfies strong Benford's Law under the \(\mathscr{H}\)-expansion._ There is no difficulty in applying the arguments of the proof of Theorem 4.5 to Theorem 6.14, and we leave the proof to the reader. ### Absolute Benford's Law Introduced in [10] is a full generalization of Zeckendorf expressions, which is based on the very principle of how Zeckendorf expressions are constructed in terms of lexicographical order. In this most general sense, the collection \(\mathcal{H}\) in Definition 6.1 is called a periodic Zeckendorf collection of coefficient functions. We believe that a property concerning all periodic Zeckendorf collections may be noteworthy, and as in the notion of normal numbers, we introduce the following definition. **Definition 6.15**.: A sequence \(K\) of positive integers is said to _satisfy absolute Benford's Law_ if \(K\) satisfies strong \(\mathcal{H}\)-Benford's Law for each periodic Zeckendorf collection \(\mathcal{H}\). Recall the Lucas sequence \(K=(2,1,3,4,\ldots)\) from Example 5.10. It satisfies strong Benford's Law under all base-\(b\) expansions, but it does not satisfy strong Benford's Law under Zeckendorf expansion. Thus, the Lucas sequence does not satisfy absolute Benford's Law. **Theorem 6.16**.: _Let \(\gamma\) be a positive real number such that \(\gamma\) is not equal to \(\psi^{r}\) for any \(r\in\mathbb{Q}\) and any dominant real zero \(\psi\) of \(g_{\mathcal{H}}\) where \(\mathcal{H}\) is as defined in Definition 6.5. Let \(K\) be the sequence given by \(K_{n}=\left\lfloor\gamma^{n}\right\rfloor\). Then, \(K\) satisfies absolute Benford's Law._ Proof.: Let \(H\) and \(\psi\) be as defined in Definitions 6.3 and 6.5, and let \(\mathfrak{H}\) be the Benford continuation defined in Definition 6.11. Note that \(\psi\) is algebraic. Notice that \(\left\lfloor\gamma^{n}\right\rfloor=\gamma^{n+o(1)}\), and \(\log_{\psi}(\gamma)\) is irrational. Thus, by Lemma 6.12, \[\mathfrak{H}^{-1}(K_{n})\,=\,(n+o(1))\log_{\psi}(\gamma)-\log_{\psi}(\delta)+ o(1)\,=\,n\log_{\psi}(\gamma)-\log_{\psi}(\delta)+o(1).\] By Weyl's Equidistribution Theorem, \[\Rightarrow\,\operatorname{Prob}\left\{\,n\in\mathbb{N}\,\colon\operatorname{ frc}\left(\mathfrak{H}^{-1}(K_{n})\right)\leq\beta\,\right\}\,=\,\operatorname{Prob} \left\{\,n\in\mathbb{N}\,\colon\operatorname{frc}\left(n\log_{\psi}(\gamma) \right)\leq\beta\,\right\}\,=\,\beta.\] By Theorem 6.14, \(K\) satisfies Benford's Law under \(\mathcal{H}\)-expansion. **Corollary 6.17**.: _Let \(\gamma>1\) be a real number that is not an algebraic integer. 
Then, the sequence \(K\) given by \(K_{n}=\left\lfloor\gamma^{n}\right\rfloor\) satisfies absolute Benford's Law._ Proof.: The dominant real zero \(\psi\) defined in Definition 6.5 is an algebraic integer, and so is \(\psi^{r}\) for all \(r\in\mathbb{Q}\). Thus, if \(\gamma\in\mathbb{R}\) is not an algebraic integer, then by Theorem 6.16, \(K\) satisfies absolute Benford's Law. **Example 6.18**.: Let \(K\) be the sequence given by \(K_{n}=\left\lfloor\frac{\phi}{\sqrt{5}}(\frac{89}{55})^{n}\right\rfloor\), which is considered in the introduction. Since \(\frac{89}{55}\) is not an algebraic integer, by Corollary 6.17, the sequence \(K\) satisfies absolute Benford's Law. ### Other Continuations Recall Definition 1.6, and that \(H\) is the fundamental sequence of \(\mathcal{H}\) defined in Definition 6.3. As in Section 5, we relate other continuations of \(H\) to the distributions of leading blocks under \(\mathcal{H}\)-expansion. Recall the Benford continuation \(\mathfrak{H}\) from Definition 6.11, uniform continuations \(h\) and \(h_{\infty}\) from Definition 1.6, and the definition of \(\widetilde{\mathbf{b}}\) from Definition 6.8. **Theorem 6.19**.: _Let \(h:[1,\infty)\to\mathbb{R}\) be a uniform continuation of \(H\). Then, there is a sequence \(K\) of positive integers approaching \(\infty\), e.g., \(K_{n}=\big\lfloor\mathfrak{H}\big(n+\mathfrak{H}_{n}{}^{-1}\circ h_{n}(\mathrm{frc}(n\pi))\big)\big\rfloor\), such that \(\mathrm{frc}\left(h^{-1}(K_{n})\right)\) is equidistributed._ _Let \(K\) be a sequence of positive integers approaching \(\infty\) such that \(\mathrm{frc}\left(h^{-1}(K_{n})\right)\) is equidistributed. Let \(\mathbf{b}\in\mathcal{H}_{s}\). Then,_ \[\mathrm{Prob}\big{\{}\,n\in\mathbb{N}:\mathrm{LB}_{s}(K_{n})=\mathbf{b}\,\big{\}} =\,h_{\infty}{}^{-1}\circ\mathfrak{H}_{\infty}(\log_{\psi}\widetilde{\mathbf{b}}\cdot\widehat{H})-h_{\infty}{}^{-1}\circ\mathfrak{H}_{\infty}(\log_{\psi}\mathbf{b}\cdot\widehat{H})\] \[=\,h_{\infty}{}^{-1}\Bigg{(}\frac{\widetilde{\mathbf{b}}\cdot\widehat{H}-1}{\psi-1}\Bigg{)}-h_{\infty}{}^{-1}\Bigg{(}\frac{\mathbf{b}\cdot\widehat{H}-1}{\psi-1}\Bigg{)}.\] There is no difficulty in applying the arguments of the proof of Theorem 5.6 to Theorem 6.19, and we leave the proof to the reader. Recall that \(\mathbf{I}=(0,1)\). As in Definition 5.11, we introduce expressions for \(\mathbf{I}\) that are associated with \(\mathcal{H}\). Recall also the infinite tuple \(\Theta\), \(\theta\), and \(\widehat{H}\), from Definitions 6.1 and 6.5. **Definition 6.20**.: An infinite tuple \(\mu\in\prod_{k=1}^{\infty}\mathbb{N}_{0}\) is called an \(\mathcal{H}\)-_expression for \(\mathbf{I}\)_ if there is a smallest \(i\in\mathbb{N}\) such that \(\mu(i)>0\), \((\mu(i),\ldots,\mu(k))\in\mathcal{H}\) for all \(k\geq i\), and for all \(j\in\mathbb{N}_{0}\), the sequence \(\{\mu(j+n)\}_{n=1}^{\infty}\) is not equal to the sequence \(\{\Theta(n)\}_{n=1}^{\infty}\). Let \(\mathcal{H}^{*}\) be the set of \(\mathcal{H}\)-expressions for \(\mathbf{I}\). Given \(s\in\mathbb{N}\) and \(\{\mu,\tau\}\subset\mathcal{H}^{*}\), we declare \(\mu|s<\tau|s\) if \(\mu|s\cdot\widehat{H}<\tau|s\cdot\widehat{H}\), which coincides with the lexicographical order on \(\mathbb{N}_{0}^{s}\). We define \(\mu\cdot\widehat{H}:=\sum_{k=1}^{\infty}\mu(k)\theta^{k-1}\), which is a convergent series. Theorem 6.21 and Proposition 6.22 below are proved in [10].
**Theorem 6.21** (Zeckendorf Theorem for \(\mathbf{I}\)).: _Given a real number \(\beta\in\mathbf{I}\), there is a unique \(\mu\in\mathcal{H}^{*}\) such that \(\beta=\sum_{k=1}^{\infty}\mu(k)\theta^{k}=(\mu\cdot\widehat{H})\theta\)._ **Proposition 6.22**.: _Let \(\{\mu,\tau\}\subset\mathcal{H}^{*}\). Then, \(\mu\cdot\widehat{H}<\tau\cdot\widehat{H}\) if and only if \(\mu|s<\tau|s\) for some \(s\in\mathbb{N}\)._ By Theorem 6.21, Proposition 6.22 and (9), the function from \(\{\mu\in\mathcal{H}^{*}:\mu(1)=1\}\) to \([0,1)\) given by the following is bijective: \[\mu\mapsto\frac{\mu\cdot\widehat{H}-1}{\psi-1},\] and hence, \(h_{K}^{*}\) defined in Definition 6.23 is well-defined. **Definition 6.23**.: Let \(K\) be a sequence of positive integers approaching \(\infty\) such that, for each \(\mu\in\mathscr{H}^{*}\) with \(\mu(1)=1\), the following limit exists: \[\lim_{s\to\infty}\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{s}(K_{n})\leq\mu|s\,\big{\}}\,. \tag{12}\] Let \(h_{K}^{*}:[0,1]\to[0,1]\) be the function given by \(h_{K}^{*}(0)=0\), \(h_{K}^{*}(1)=1\), and \(h_{K}^{*}\left(\frac{\mu\cdot\hat{H}-1}{\psi-1}\right)\) is equal to the value in (12). If \(h_{K}^{*}\) is continuous and increasing, then \(K\) is said to _have continuous leading block distribution under \(\mathscr{H}\)-expansion_. **Theorem 6.24**.: _Let \(K\) be a sequence with continuous leading block distribution under \(\mathscr{H}\)-expansion. Let \(h_{K}^{*}\) be the function defined in Definition 6.23. Then, there is a uniform continuation \(h\) of \(H_{n}\) such that \(h_{\infty}{}^{-1}=h_{K}^{*}\) and \(\operatorname{frc}\big{(}h^{-1}(K_{n})\big{)}\) is equidistributed._ There is no difficulty in applying the arguments of the proof of Theorem 5.18 to Theorem 6.24, and we leave the proof to the reader. ## 7 Benford behavior within expansions As mentioned in the introduction, Benford's Law under base-\(b\) expansion arises within Zeckendorf expansions; let us review this result, which is available in [4]. Let \(\mathscr{K}\) be a periodic Zeckendorf collection defined in Definition 6.1, and let \(K\) be the fundamental sequence of \(\mathscr{K}\), defined in Definition 6.3. Let \(S\) be an infinite subset of \(\{K_{n}:n\in\mathbb{N}\}\) such that \(q(S):=\operatorname{Prob}\big{\{}\,n\in\mathbb{N}:K_{n}\in S\,\big{\}}\) exists. Recall the product \(*\) from Definition 2.2. For a randomly selected integer \(n\in[1,K_{t+1})\), let \(\mu*K\) be the \(\mathscr{K}\)-expansion of \(n\), let \(M=\operatorname{len}(\mu)\), and define \[P_{t}(n):=\frac{\sum_{k=1}^{M}\mu(k)\chi_{S}(K_{k})}{\sum_{k=1}^{M}\mu(k)} \tag{13}\] where \(\chi_{S}\) is the characteristic function on \(\{K_{k}:k\in\mathbb{N}\}\), i.e., \(\chi_{S}(K_{k})=1\) if \(K_{k}\in S\) and \(\chi_{S}(K_{k})=0\) otherwise. It is proved in [3] that, given a real number \(\epsilon>0\), the probability of \(n\in[1,K_{t+1})\) such that \(|P_{t}(n)-q(S)|<\epsilon\) is equal to \(1+o(1)\) as a function of \(t\). For Benford behavior, we let \(S\) be the set of \(K_{n}\) that have a fixed leading decimal digit \(d\). Then, \(q(S)=\log_{10}(1+\frac{1}{d})\), and the probability of having a summand \(K_{n}\) with leading digit \(d\) within the \(\mathscr{K}\)-expansion is nearly \(q(S)\) most of the time. This result immediately applies to our setup. Let \(\mathscr{H}\) and \(H\) be as defined in Definition 6.1, with \(\mathscr{H}\) and \(H\) different from \(\mathscr{K}\) and \(K\).
For example, let \(\mathscr{H}\) be the base-\(b\) expressions, and let \(\mathscr{K}\) be the Zeckendorf expressions. Then, \(H\) is the sequence given by \(H_{n}=b^{n-1}\) and \(K=F\) is the Fibonacci sequence. Recall from Definition 6.8 that \(\mathscr{H}_{s}\) is a set of leading blocks under \(\mathscr{H}\)-expansion, and that \(\operatorname{LB}_{s}^{\mathscr{H}}(n)\) denotes the leading block of \(n\) in \(\mathscr{H}_{s}\) under \(\mathscr{H}\)-expansion. By Corollary 4.7, the sequence \(K\) satisfies (strong) Benford's Law under \(\mathscr{H}\)-expansion, i.e., \[\operatorname{Prob}\Big{\{}\,n\in\mathbb{N}:\operatorname{LB}_{s}^{\mathscr{H}}(K_{n})=\mathbf{b}\,\Big{\}}\,=\,\log_{\psi}\frac{\tilde{\mathbf{b}}\cdot\widehat{H}}{\mathbf{b}\cdot\widehat{H}}\] where \(\mathbf{b}\in\mathscr{H}_{s}\) and \(\psi=b\), and this is Benford's Law under base-\(b\) expansion. The case considered in the introduction is that \(\mathscr{H}\) is the Zeckendorf expansion and \(\mathscr{K}\) is the binary expansion. The following is a corollary of [4, Theorem 1.1]. Recall Definition 6.5. **Theorem 7.1**.: _Let \(\mathscr{H}\) and \(H\) be as defined in Definition 6.1, and let \(K\) be the fundamental sequence of a periodic Zeckendorf collection \(\mathscr{K}\) such that \(\psi^{r}_{\mathscr{H}}\neq\psi_{\mathscr{K}}\) for all \(r\in\mathbb{Q}\), where \(\psi_{\mathscr{H}}\) and \(\psi_{\mathscr{K}}\) are the dominant real zeros of \(g_{\mathscr{H}}\) and \(g_{\mathscr{K}}\), respectively. Given \(\mathbf{b}\in\mathscr{H}_{s}\), let \(S_{\mathbf{b}}:=\{K_{n}:\mathrm{LB}_{s}^{\mathscr{H}}(K_{n})=\mathbf{b},\ n\in\mathbb{N}\}\). For a randomly selected integer \(n\in[1,K_{t+1})\), let \(P_{t}(n)\) be the proportion defined in (13) with respect to \(S=S_{\mathbf{b}}\). Then, given a real number \(\epsilon>0\), the probability of \(n\in[1,K_{t+1})\) such that_ \[\left|P_{t}(n)-\log_{\psi_{\mathscr{H}}}\frac{\widetilde{\mathbf{b}}\cdot\widehat{H}}{\mathbf{b}\cdot\widehat{H}}\right|\,<\,\epsilon\] _is equal to \(1+o(1)\) as a function of \(t\)._ ## 8 Future work Instead of the leading digit, one can look at the distribution of the digit in the second, third, or generally any location. For a sequence that is strong Benford, the further to the right we move in location, the more uniform is the distribution of digits. A natural question is to ask whether or not a similar phenomenon happens with Zeckendorf decompositions, especially as there is a natural furthest to the right one can move. We can also look at signed Zeckendorf decompositions. Alpert [1] proved that every integer can be written uniquely as a sum of Fibonacci numbers and their additive inverses, where, if two consecutive summands have the same sign, then their indices differ by at least \(4\), and if they are of opposite sign, then their indices differ by at least \(3\). We now have more possibilities for the leading block, and one can ask about the various probabilities. More generally, one can consider the \(f\)-decompositions introduced in [13], or the non-periodic Zeckendorf collections introduced in [10]. Additionally, one can explore sequences where there is no longer a unique decomposition, see for example [5, 6, 7, 8, 9], and ask what the distribution of possible leading blocks is. There are many ways we can formulate this question.
We could look at all legal decompositions; we could look at what happens for specific numbers; or we could look at what happens for specific types of decompositions, such as those arising from the greedy algorithm or those that use the fewest or most summands.
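As a concrete companion to the constructions of Section 6, the following minimal Python sketch (our own illustration, with all names chosen here) builds the fundamental sequence from the recursion (8), reproduces the Fibonacci and base-ten examples, and empirically tabulates the length-3 leading-block frequencies predicted by Theorem 6.9 in the Zeckendorf case \(L=(1,0)\). The test sequence \(K_{n}=\lfloor(3/2)^{n}\rfloor\) is our choice; \(\gamma=3/2\) is not an algebraic integer, so Corollary 6.17 applies, and the greedy expansion used below is specific to the Zeckendorf case.

```python
from math import log, sqrt

def fundamental_sequence(L, count):
    # H_1, H_2, ... for a principal maximal block L = (a_1, ..., a_N), via the recursion (8)
    N = len(L)
    H = []
    for n in range(1, count + 1):
        if n <= N + 1:
            Hn = 1 + sum(L[k - 1] * H[n - k - 1] for k in range(1, n))
        else:
            Hn = (sum(L[k - 1] * H[n - k - 1] for k in range(1, N))
                  + (1 + L[N - 1]) * H[n - N - 1])
        H.append(Hn)
    return H

print(fundamental_sequence((1, 0), 8))   # Fibonacci: [1, 2, 3, 5, 8, 13, 21, 34]
print(fundamental_sequence((9, 9), 5))   # base ten:  [1, 10, 100, 1000, 10000]

def zeckendorf_digits(n, H):
    # Greedy Zeckendorf expansion of a positive integer n, most significant coefficient first
    k = max(i for i, h in enumerate(H) if h <= n)
    digits, r = [], n
    while k >= 0:
        if H[k] <= r:
            digits.append(1)
            r -= H[k]
        else:
            digits.append(0)
        k -= 1
    return digits

# Leading blocks of length 3 in the Zeckendorf case are (1,0,0) and (1,0,1);
# Theorem 6.9 predicts probabilities log_phi(1 + phi^(-2)) and 1 - log_phi(1 + phi^(-2)).
phi = (1 + sqrt(5)) / 2
H = fundamental_sequence((1, 0), 800)
counts = {}
for n in range(1, 701):
    Kn = 3 ** n // 2 ** n                 # exact floor of (3/2)^n
    d = zeckendorf_digits(Kn, H)
    if len(d) >= 3:
        block = tuple(d[:3])
        counts[block] = counts.get(block, 0) + 1
total = sum(counts.values())
print({b: round(c / total, 3) for b, c in counts.items()})   # roughly {(1,0,0): 0.67, (1,0,1): 0.33}
print("predicted:", round(log(1 + phi ** -2, phi), 3))       # 0.672
```

With several hundred terms the empirical frequencies should land near the predicted \(\log_{\phi}(1+\phi^{-2})\approx 0.672\) for the block \((1,0,0)\) and \(\approx 0.328\) for \((1,0,1)\), although the convergence guaranteed by Theorem 6.9 is only asymptotic.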
2306.00173
Discovering Love numbers through resonance excitation during extreme mass ratio inspirals
General Relativity predicts that black holes do not possess an internal structure and consequently cannot be excited. This leads to a specific prediction about the waveform of gravitational waves, which they emit during a binary black hole inspiral and to the vanishing of their Love numbers. However, if astrophysical black holes do possess an internal structure, their Love numbers would no longer vanish, and they could be excited during an inspiral by the transfer of orbital energy. This would affect the orbital period and lead to an observable imprint on the emitted gravitational waves waveform. The effect is enhanced if one of the binary companions is resonantly excited. We discuss the conditions for resonant excitation of a hypothetical internal structure of black holes and calculate the phase change of the gravitational waves waveform that is induced due to such resonant excitation during intermediate- and extreme-mass-ratio inspirals. We then relate the phase change to the electric quadrupolar Love number of the larger companion, which is resonantly excited by its smaller companion. We discuss the statistical error on measuring the Love number by LISA and show that, because of this phase change, the statistical error is small even for small values of the Love number. Our results provide a strong indication that the Love number could be detected by LISA with remarkable accuracy, much higher than what can be achieved via tidal deformation effects. Our results further indicate that resonant excitation of the central black hole during an extreme- or intermediate-mass-ratio inspirals is the most promising effect for putting bounds on, or detecting, non-vanishing tidal Love numbers of black holes.
Shani Avitan, Ram Brustein, Yotam Sherf
2023-05-31T20:42:13Z
http://arxiv.org/abs/2306.00173v1
# Discovering Love numbers through resonance excitation during extreme mass ratio inspirals ###### Abstract General Relativity predicts that black holes do not possess an internal structure and consequently cannot be excited. This leads to a specific prediction about the waveform of gravitational waves which they emit during a binary black hole inspiral and to the vanishing of their Love numbers. However, if astrophysical black holes do possess an internal structure, their Love numbers would no longer vanish, and they could be excited during an inspiral by the transfer of orbital energy. This would affect the orbital period and lead to an observable imprint on the emitted gravitational waves waveform. The effect is enhanced if one of the binary companions is resonantly excited. We discuss the conditions for resonant excitation of a hypothetical internal structure of black holes and calculate the phase change of the gravitational waves waveform that is induced due to such resonant excitation during intermediate- and extreme-mass-ratio inspirals. We then relate the phase change to the electric quadrupolar Love number of the larger companion, which is resonantly excited by its smaller companion. We discuss the statistical error on measuring the Love number by LISA and show that, because of this phase change, the statistical error is small even for small values of the Love number. Our results provide a strong indication that the Love number could be detected by LISA with remarkable accuracy, much higher than what can be achieved via tidal deformation effects. Our results further indicate that resonant excitation of the central black hole during an extreme- or intermediate-mass-ratio inspirals is the most promising effect for putting bounds on, or detecting, non-vanishing tidal Love numbers of black holes. Department of Physics, Ben-Gurion University, Beer-Sheva 84105, Israel [email protected], [email protected], [email protected] Introduction General Relativity predicts that black holes (BHs) do not possess an internal structure. They are "bald" and can be characterize solely by their mass and angular momentum [1]. Coalescing BHs radiate gravitational waves (GWs) which are being detected by the LIGO and VIRGO observatories since September 2015 [2]. The calculational efforts for improving the accuracy of the general relativistic (GR) predictions for the emitted GW waveform, could hopefully provide an opportunity for testing the baldness of BHs. Particularly, the inclusion of tidal interactions may allow us to probe the hypothetical interior structure of the binary companions and quantitatively test the predictions of GR [3, 4, 5, 6, 7, 8]. In spite of the increasing precision of ground-based detectors, their limited frequency band enables observations of only a few cycles in the inspiral phase of a binary-BH (BBH) coallesence event for a limited range of masses. The LISA space detector [9, 10], whose design sensitivity is maximal in the mHZ region, is expected to be able to detect and track many BBH coalescence events from the early stages of the inspiral through the merger to the late post-merger ringdown. In GR, the interior of BHs is vacuum, except for a possibly singular core. But is this their true description? A common expectation is that quantum effects at the Planck length scale, or at a somewhat larger length scale, such as the string length scale, will be sufficient to resolve all singularities. 
However, there are strong indications to the contrary when it comes to the resolution of BH singularities. First, a seemingly necessary condition for evading the singularity theorems [11, 12] and the closely related "Buchdahl-like" bounds [13, 14, 15, 16] is that the geometry is sourced by matter that has the maximal negative radial pressure permitted by causality, \(p_{r}=-\rho\), all the way to the surface of the star [17]. Furthermore, if one also considers the emitted Hawking radiation from such a quantum-regularized BH, one finds an untenable violation of energy conservation: When the scale of resolution is parametrically smaller than that of the Schwarzschild radius, the emitted energy of Hawking particles will greatly exceed the original mass of the collapsing matter [18, 19]. Thus, the tentative conclusion that we will adopt in our following discussion, is that deviations from GR must extend throughout the object's interior, that is, horizon-scale deviations from GR. The Love numbers of an object encode its response to an external tidal field. These numbers could provide some limited information about the mass distribution and the compactness of the object. The Love numbers determine the possible corrections to the GW signal due to tidal interactions of the companion in a binary system. The quadrupolar Love number \(k_{2}\) identically vanishes for GR BHs in four spacetime dimensions [20, 21, 22, 23, 24, 25, 26], making it a key observable. Measuring non-zero values will indicate a deviation from the GR predictions [27, 28, 29, 30, 31, 32]. If indeed horizon scale deviations from the GR predictions occur, then the expectation is that the Love numbers will be small, but not extremely small, suppressed only by some additional pertrubative parameter that quantifies the strength of the deviations. The reason for such expectation is that the Love numbers are normalized such that they are order unity if all the dimensional scales are of order of their radius [29, 30]. Previous studies have primarily focused on measuring the Love numbers using tidal deformability, which constitutes a subleading correction to the emitted GW waveform and enters at 5PN order compared to the dominant point-particle term. Tidal-deformability effects are more pronounced at the late inspiral phase. This makes the measurement of the Love number more challenging, since other finite-size effects are also of similar magnitude, requiring the construction of more accurate GW waveforms and detectors with better sensitivity. [33, 34, 35, 36, 3]. For GR BHs the inspiral evolution is dominated by the point-particle GW emission. For BHs which posses an internal structure, an interesting different effect can dominate the evolution if the orbital frequency becomes comparable to a characteristic frequency of some internal mode. In this case, this mode is resonantly excited, resulting in a rapid energy transfer from the orbit to the internal mode. The loss of orbital energy effectively advances the inspiral evolution, bringing the companions to a closer and faster orbit. The abrupt energy transfer changes significantly the emitted GW waveform compared to the point particle waveform since it leads to an instantaneous phase jump and a secondary subleading accumulated dephasing due to the differences in orbital velocities. Such resonant energy transfer can only be realized when the internal modes are non-relativistic. 
The reason is that the Keplerian orbital frequency is much smaller than the relativistic frequency \(c/R\) (\(c\) is the speed of light and \(R\) is the radius of the compact object) when the two objects are far from each other. Tidal resonant excitations were first discussed in the context of ordinary polytropic stars [37], then, much later, for binaries with at least one of the companions being a neutron star [38, 39, 40]. In these cases, the effect was related to the tidal Love numbers [51, 52, 53, 54, 55, 56]. However, as already mentioned, since the corrections enter the GWs waveform at 5PN order, the effect becomes significant during the late inspiral phase, where additional effects are also significant, making it difficult to detect the Love number with high confidence. More recent studies related to BH mimickers [57, 58, 59, 60], treat the BBH as if they were horizonless ultra-compact objects (UCOs). In [57, 59, 60], the tidal field was exciting some additional spacetime modes of a hypothetical spacetime structure outside the UCO. In [59], the resulting phase shift due to the resonant excitation of these additional spacetime modes was related to the tidal Love numbers, and the detectability of the quadrupolar Love number \(k_{2}\) using observations of ground-based GW detectors and the proposed Einstein telescope was discussed. In [58], the detectability prospects of the resonance effects were discussed, but without connecting the effect to the tidal Love numbers. In this study, no evidence for resonance was found in the observations of the first two runs of Advanced LIGO and Advanced Virgo. Here, in contrast to previous studies, we discuss the tidal excitation of hypothetical non-relativistic internal modes of the BH, relate the resulting phase shift to the Love numbers and discuss the possible detectability of \(k_{2}\) in LISA observations of IMRIs and EMRIs. We find that this is, by far, the most promising way to put bounds on, or detect Love numbers of astrophysical BHs. In the following, contrary to GR, we assume that astrophysical BHs do have an internal structure that supports non-relativistic fluid modes. We keep the calculations as model-independent as possible by expressing the model-dependent quantities through the dimensionless tidal Love number. We follow the discussion in [29, 59], to relate the resonance phase shift of the excited modes to the quadrupolar tidal Love number \(k_{2}\), and their relation to internal modes frequencies of quantum black holes [29, 30, 31, 61] and recent frozen star model results [62, 63, 64]. We estimate the statistical error in the measurement of \(k_{2}\) through resonance excitations during the inspiral of slowly rotating EMRIs and IMRIs, using the design noise spectrum of LISA [9, 65]. We find that the statistical error is small even for small values of the Love number, providing a strong indication that the Love number could be detected with impressive accuracy. We end with an explicit comparison between the detection prospects of the Love numbers with tidal deformability and tidal resonance, and conclude that resonance excitations are the most promising effect for detecting the Love numbers. ## 2 Tidal-Resonance interaction Here, we examine the tidal interaction in a binary system, focusing on the central object that is subjected to the weak periodic tidal force exerted by the smaller companion. 
Following the ideas presented in [38, 42, 54, 56] and more recently in [30], we describe the response of the object to the tidal force from the perspective of an asymptotic observer. The idea is that the object possesses a set of non-relativistic fluid modes which are driven by the tidal force and can therefore be described as a collection of driven harmonic oscillators. The spectrum of the interior fluid modes depends on the radial number \(n\) and the angular numbers \(l,m\), so their frequencies \(\omega_{nlm}\) depend on the three numbers. Here, we are particularly interested in the dominant effect which is due to the excitation of the \(n=1\) mode by the quadrupolar tidal field, so we focus on the case \(l=m=2\) [42]. As for the other modes: the spherically symmetric static \(m=0\) mode cannot generate pressure gradients that are needed for resonance excitation and therefore is not relevant to our discussion. The \(m=1\) mode can be resonantly excited in the case that the spin-orbit configuration is misaligned [42, 44]. Here, we restrict our attention to spin vectors that are aligned with the orbital angular momentum. The mode corresponding to \(n,\ l,\ m=1,\ 2,\ 2\) is non-relativistic, meaning that, as for neutron stars, \(\omega_{122}\) is parametrically smaller than \(c/R\). The orbital frequency, which sets the frequency of the driving tidal force, is determined by Kepler's law. It follows, as explained in the Introduction, that as the smaller object gets closer to the central object, the orbital and internal frequencies can match. When the frequency of one of the interior modes of the central, larger object matches the orbital frequency of the companion, it is resonantly excited and efficiently absorbs energy from the orbital motion. The instantaneous energy absorption increases the orbital velocity and shortens the inspiral duration, thus leading to a phase difference in the emitted GW waveform, when compared to the emitted waveform in the absence of a resonance. To calculate the dephasing of the GW waveform, we adopt the derivation in [39, 43], resulting in the following phase evolution, \[\begin{cases}\Phi(t)=\Phi_{PP}(t)&t<t_{R}\\ \Phi(t)=\Phi_{PP}(t+\Delta t)-\Delta\Phi&t>t_{R}+\Delta t,\end{cases} \tag{1}\] where \(\Phi_{PP}(t)\) is the point particle phase, \(t_{R}\) is the time at which the resonance starts, \(\Delta t\) is the resonance duration and \(\Delta\Phi\) is the instantaneous resonance phase difference, which in general depends on the object's properties as demonstrated below. The point particle phase \(\Phi_{PP}\) is, by definition, independent of the object's composition. In particular, it has the same value for a GR black hole and one endowed with an internal excitation spectrum, such as the objects we are discussing. Then, assuming that the resonance duration is short compared to the inspiral duration and under adiabatic evolution, we arrive at the frequency domain resonance phase [39, 43], \[\Phi(f)\ =\ \Phi(f)_{PP}+\Theta(f-f_{R})\left(\frac{f}{f_{R}}-1\right)\Delta\Phi_{Res}\, \tag{2}\] where \(f_{R}\) is the internal mode frequency which satisfies the resonance condition \(2\pi f_{R}=m\Omega\), \(\Omega\) being the orbital angular velocity. Resonance corrections to the phase, \(\Delta\Phi_{Res}\), are composed of two terms: a dominant term that enters at 2.5PN order higher than the leading order point-particle term and a subleading 4PN-higher contribution.
The dominant contribution, which is frequency independent and proportional to \(\Delta\Phi\), originates from the instantaneous energy absorption during resonance. The subleading term, which is proportional to the frequency, is a secular effect that increases towards the late stages of the inspiral. ### The phase shift Fluid perturbations of compact objects are described by the displacement vector \(\xi^{i}\) of a fluid element from its unperturbed position, which is given by the orthonormal base decomposition, \[\xi^{i}\ =\ \sum_{n}a_{n}\xi^{i}_{n}, \tag{3}\] with \(\xi_{n}\) being the normal displacement vectors and \(a_{n}\) the dimensionless displacement amplitudes.1 In the presence of tidal forces, the fluid modes satisfy the damped-driven harmonic oscillator equation [38, 44], Footnote 1: We use relativistic units \(G,c=1\). \[\ddot{a}_{nlm}+2\gamma_{n}\dot{a}_{nlm}+\omega_{n}^{2}a_{nlm}\ =\ \mathcal{F}(t)_{nlm}, \tag{4}\] where \(\gamma_{n}=-\text{Im}\ \omega_{n}\) is the damping rate of the mode. The source of the damping and its precise magnitude are irrelevant for the resulting resonant excitation and dephasing. So, \(\gamma\) can be neglected altogether (see below). The external periodic force \(\mathcal{F}(t)_{nlm}\) that excites the \(n\)th interior fluid mode is given by \[\mathcal{F}(t)_{nlm}\ =\ N_{lm}\frac{\mathcal{E}_{l}Q_{nl}}{MR^{2}}e^{-im\phi(t)}\, \tag{5}\] where \(M\) and \(R\) are the mass and radius of the central object. The order unity factor \(N_{lm}\) is proportional to the Wigner function and is specified below. The tidal field of the \(l\) mode is denoted by \(\mathcal{E}_{l}\), which for \(l=2,m=\pm 2\) satisfies \(\mathcal{E}_{ij}x^{i}x^{j}=\mathcal{E}r^{2}Y_{2\pm 2}\). The mass moment of the quadrupolar \(n\)th mode \(Q_{n}\) is given by the overlap integral [39], \[Q_{n}=-\int d^{3}r\delta\rho_{n}r^{2}\, \tag{6}\] where \(\delta\rho_{n}\) is the corresponding energy density perturbation. Next, we aim to find the instantaneous phase shift \(\Delta\Phi\) and the corresponding phase evolution in Eq. (1). We start by solving Eq. (4) for the amplitudes \(a_{n}\), which at resonance are given by [44], \[a_{n}(t)\ =\ \left(\frac{\pi}{m\ddot{\phi}}\right)^{1/2}\frac{\mathcal{F}(t)_{nlm}}{\gamma_{nl}-i\omega_{nl}}e^{-i\omega_{nl}t}, \tag{7}\] where \(\ddot{\phi}\) denotes the rate of change of the orbital frequency at resonance. The energy transferred to the mode \(nlm\) during the resonance is a sum of kinetic and potential terms [38, 44], \[E_{nlm}(t)\ =\ \left(\frac{1}{2}\dot{a}_{nlm}(t)^{2}+\frac{1}{2}\omega_{nl}^{2}a_{nlm}^{2}(t)\right)MR^{2}. \tag{8}\] The total energy absorbed by the mode, neglecting \(\gamma_{nl}\), is given by \[\Delta E_{nlm}\ =\ \ N_{lm}^{2}\frac{\pi}{4m\ddot{\phi}}\frac{(\mathcal{E}_{l}Q_{nl})^{2}}{MR^{2}}. \tag{9}\] The resonance excitations lead to a phase shift, since the orbital energy decreases as it excites the interior modes. Accordingly, the orbital velocity increases and the inspiral duration decreases by a time \(\Delta t\). To estimate \(\Delta t\), we follow [43]. The energy absorbed by the central object decreases the energy of the orbit by the same amount. In the absence of resonance, such a decrease in energy could only occur through the emission of GWs, and the time \(\Delta t\) that it would take the orbit to emit GWs with this energy is determined by the equality \(\dot{E}_{GW}\Delta t=\Delta E_{nlm}\).
The rate of GW emission \(\dot{E}_{GW}\) is, to a very good approximation, the same rate as in the absence of resonance, which to leading order is given by \(\dot{E}_{GW}=\frac{32}{5}(\mathcal{M}_{c}\ \Omega)^{10/3}\), with \(\mathcal{M}_{c}\) being the chirp mass. The resulting phase shift \(\Delta\Phi=m\Omega\Delta t\) is the following, \[\Delta\Phi_{nlm}\ =\ m\Omega\frac{\Delta E_{nlm}}{\dot{E}_{GW}}=\frac{5}{32} \ m\Omega\ \frac{\Delta E_{nlm}}{(\mathcal{M}_{c}\ \Omega)^{10/3}}. \tag{10}\] For IMRIs or EMRIs \(\mathcal{M}_{c}\approx M\) and \(\dot{E}_{GW}\sim v^{10}\). Using Eq. (9), we may calculate the phase shift induced by the leading order quadrupolar mode \(l=m=2\)[39, 59], \[\Delta\Phi_{n22}\ =\ \frac{25\pi}{1024q(1+q)}\frac{1}{R_{1}^{5}}\frac{|Q_{n2}|^{2} }{M_{1}\omega_{22}^{2}R_{1}^{2}}=\frac{25\pi}{2048q(1+q)}\frac{1}{R_{1}^{5}} \frac{|Q_{n2}|^{2}}{\Delta E^{int}}, \tag{11}\] where we used that \(N_{22}=\sqrt{3/32}\). Here \(q=M_{2}/M_{1}\) is the mass ratio and \(\Delta E^{int}=\frac{1}{2}M_{1}\omega_{22}^{2}R_{1}^{2}\) is the internal energy of oscillations which is related to the energy stored in the \(n\)th mode by \(\Delta E^{int}=\sum\limits_{n}\Delta E_{n22}\), [54]. We wish to justify our estimate of \(\Delta t\) using only \(\dot{E}_{GW}\) and neglecting other dissipation effects. In general, the time difference \(\Delta t\) should include all types of dissipation channels, mainly the dominant dissipation due to tidal friction and the subleading tidal deformation. However, the rate of work of tidal friction is given by [66, 67]\(\dot{E}_{TF}=\frac{1}{2}Q_{ij}\dot{\mathcal{E}}^{ij}\sim k_{2}v^{15}\nu/M\), where \(\nu\) is the kinematic viscosity giving rise to viscous dissipation. In [67], it is demonstrated that, under reasonable assumptions, the contribution of viscous dissipation is negligibly small compared to the leading order GW emission and, therefore, can be ignored. For example, for cold Neutron stars, considered to be highly viscous \(\nu/M\approx 10^{-7}\), whereas for BHs \(\nu/M=1\)[68]. During the inspiral, when the orbital velocity is non-relativistic the ratio of the different emission rates scales as \(\dot{E}_{TF}/\dot{E}_{GW}\sim v^{5}\ll 1\), which shows that the internal dissipation effects can indeed be neglected. ## 3 Fluid-origin Love numbers Here we follow [29, 30] to determine the relationship between the Love number and the spectrum of internal fluid modes. We focus on the static tidal Love number, ignoring dissipative effects. Following [30] (see also [54, 56]), we wish to find the static response of the object to an external tidal field. At low frequencies, away from resonance, the amplitude in Eq. (7) reduces to \[a_{n}=\frac{\mathcal{E}Q_{n}}{M\omega_{n}^{2}R^{2}}. \tag{12}\] Then, using the definition of the Love number, \(k_{2}R^{5}=3Q/(2\mathcal{E})\), we apply the normal mode decomposition identities \(Q=\sum_{n}a_{n}Q_{n}\), and \(k_{2}=\sum_{n}a_{n}k_{2n}\), where the n\(th\) mode Love number, which is associated with the n\(th\) mode quadrupolar moment, is given by \[k_{2n}R^{5}=\frac{3Q_{n}}{2\mathcal{E}}. \tag{13}\] when substituting the explicit form of \(a_{n}\) from Eq. (12), the Love number becomes \[k_{2}\ =\ \sum_{n}\frac{3}{2R^{5}}\frac{Q_{n}^{2}}{M\omega_{n}^{2}R^{2}}. \tag{14}\] We now approximate \(k_{2}\) by the first term in the sum in Eq. (14) relying on a physically motivated assumption. The sum in Eq. (14) is dominated by the fundamental \(n=1\) mode. 
The justification is that the number of nodes in the overlap integral in Eq. (6) increases as \(n\) increases. It follows that the contribution of \(Q_{n}\) decreases as \(n\) increases. Using the \(l=2\)-mode excitation energy \(\Delta E_{n}^{int}=\frac{1}{2}M\omega_{n2}^{2}R^{2}\), the sum in Eq. (14) can be approximated as \[k_{2}\ \simeq\ \frac{3}{4R^{5}}\frac{Q_{1}^{2}}{\Delta E_{1}^{int}}. \tag{15}\] We now observe that a similar expression to the one in Eq. (15), appears in Eq. (11) which determines the phase shift \(\Delta\Phi_{122}\). This allows to express \(\Delta\Phi_{122}\) in terms of \(k_{2}\), \[\Delta\Phi_{Res}\ =\ \frac{25\pi}{1536}\frac{k_{2}}{q(1+q)}. \tag{16}\] We are interested in the case of small mass ratios, \(q\lesssim 1/1000\) and a small but not extremely small \(k_{2}\), \(k_{2}\lesssim 1/10\). Then we can parameterize the resonance dephasing by \[\Delta\Phi_{Res}\ \simeq\ 5\times\left(\frac{k_{2}}{10^{-1}}\right)\left( \frac{q}{10^{-3}}\right)^{-1}. \tag{17}\] The resonance-induced dephasing is governed by the dimensionless tidal Love number and the companion's mass ratio. Generally, the detection threshold for the instantaneous phase jump requires \(\Delta\Phi_{Res}\gtrsim 1\)[69]. Thus, for typical values of Love numbers \(k_{2}\lesssim 10^{-1}\), it is more likely to observe resonances for moderate to extreme mass-ratio binaries \(10^{-3}\leq q\leq 10^{-5}\). We can also express \(k_{2}\) in terms of the frequency \(\omega_{12}\equiv\omega_{2}\) of the \(n=1\), \(l=2\) mode. At resonance, from Eq. (6), \(Q\sim\Delta E^{int}\), where \(\Delta E^{int}=\frac{1}{2}M\omega_{2}^{2}R^{2}\) is the energy of the oscillating star at resonance. Thus, on dimensional grounds, we get \(Q\sim\Delta E^{int}R^{2}\). For example, for a constant energy density perturbation \(Q=\frac{3}{5}\Delta E^{int}R^{2}\), while typical non-constant energy density profiles result in a numerical prefactor \(\lesssim 1\)[56] (see also [29]). Substituting the expressions for \(Q\) and \(\Delta E^{int}\), we arrive at our final result for the Love number \[k_{2}\ \simeq\ \mathcal{N}\omega_{2}^{2}R^{2}\, \tag{18}\] where \(\mathcal{N}\) is an order unity dimensionless number that depends on the object's energy density profile and contains the numerical factors in the definition of the Love number [29]. We will use Eq. (18) to determine the detectability of \(k_{2}\) in the next section. Remarkably, in [29], it is shown that the gravitational polarizability of objects which possess a discrete spectrum of quantum mechanical energy levels is similar to that of classical stars. This follows from the fact that the wavelength of the oscillation is of order of the star radius. We shall refer these objects as "quantum black holes" (QBHs) to mean the quantum state that corresponds to a classical BH. The idea is justified on the grounds of the Bohr correspondence principle, where at macroscopic excitations, expectation values are replaced by classical observables. Therefore, an excited quantum macroscopic object can be treated as a semi-classical oscillating fluid-like object that satisfy Eq. (4). Using standard time-independent quantum perturbation theory, the Love number of QBHs is given by [29, 30] \[k_{2}\simeq\frac{3}{4R^{5}}\frac{|\langle\Psi_{0}|\hat{Q}|n=1,l=2\rangle|^{2}} {\Delta E_{1}^{int}}. \tag{19}\] where \(\Psi_{0}\) is the QBH ground state, \(\hat{Q}\) is the mass moment operator that obeys the no-hair theorem; \(\langle\Psi_{0}|\hat{Q}|\Psi_{0}\rangle=0\). The definition of Eq. 
(15) is restored by applying the Bohr correspondence principle and replacing expectations values with classical observables, \(\langle\Psi_{0}|\hat{Q}|n,l=2\rangle\leftrightarrow Q_{n}\). In this form, Eq. (19) can be treated in a similar way to the classical treatment of Eqs. (15),(18), which eventually recovers the result \(k_{2}\simeq{\cal N}\omega_{2}^{2}R^{2}\). The result is valid for any object of radius \(R\), quantum or classical, which has a quadrupole internal mode whose non-relativistic frequency is \(\omega_{2}\). ## 4 Detectability In this section, using the Fisher method, we give a quantitative estimation of the statistical error in measuring the Love number. We discuss the prospects for detection of a non-vanishing Love number with the future space LISA detector and demonstrate that during the inspiral, it is more likely to detect the Love number with resonances rather than tidal deformability. We evaluate the detectability of the Love numbers through resonant excitations with the planned space telescope LISA, which according to [9], could track and observe moderate to extreme mass-ratio binaries from the early stages of the inspiral and up to the merger with high SNR. Before addressing the precise statistical analysis, we wish to emphasize that for most of the range of the binary masses and spins and for Love numbers \(k_{2}\lesssim 10^{-1}\), the leading order 2.5PN resonance phase term is comparable to the other effects entering at 2.5PN, such as the PP 2.5PN term and the leading order tidal heating term. For smaller values of \(q\), the resonance phase term becomes significant. Since it is established that LISA can detect the other 2.5PN effects, we expect that LISA could be able to detect the Love numbers with high confidence. To evaluate the statistical error, we employ the Fisher information method. Assuming a signal \(s(t)=h(t,\theta^{i})+n\), with the uncorrelated noise \(n\), a model signal \(h(t,\theta^{i})\) with model parameters \(\theta^{i}\). For high SNR events, the posterior distribution takes the form \[p(\theta^{i}|s)\propto e^{-\frac{1}{2}\Delta\theta^{i}\Delta\theta^{j}\Gamma_{ ij}}. \tag{20}\] where \(\Gamma_{ij}\) is the Fisher matrix defined as \[\Gamma_{ij}\ =\ \left(\frac{\partial h}{\partial\theta^{i}}\Big{|}\frac{ \partial h}{\partial\theta^{j}}\right). \tag{21}\] with the inner product defined by \((h_{1}|h_{2})=4\text{Re}\int_{f_{min}}^{f_{max}}\frac{\tilde{h}_{1}(f)\tilde{h}_{ 2}^{*}(f)}{S_{n}(f)}df\), and \(S_{n}(f)\) is LISA's design noise spectral density. We choose \(f_{max}=f_{ISCO}(\chi)\), where \(f_{ISCO}\) is the orbital frequency at the innermost stable circular orbit(ISCO) and \(f_{min}=10^{-5}\text{Hz}\) as the lowest frequency in the LISA frequency band. The model parameters are \(\theta^{i}=(\ln\mathcal{A},\ln\mathcal{M}_{c},\eta,\Phi_{c},t_{c},\chi_{1}, \chi_{2},k_{2})\), where \(\mathcal{A}\) is the amplitude, \(\mathcal{M}_{c}\) is the chirp mass, \(\eta\) is the symmetric mass-ratio, \(\Phi_{c}\) and \(t_{c}\) are the phase and time at coalescence, \(\chi_{i}\) are the companions spin parameter and \(k_{2}\) is the Love number given in Eq. (18). 
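To make the inner product and the Fisher matrix of Eqs. (20)-(21) concrete, the following deliberately simplified numerical sketch (our own toy construction) uses only the Newtonian chirp phase plus the resonance step of Eq. (2), a flat placeholder noise level rather than the LISA sensitivity curve, and a reduced parameter set \((\ln\mathcal{A},t_{c},\Phi_{c},\Delta\Phi_{Res})\) instead of the full \(\theta^{i}\); the printed number is therefore only illustrative of how a statistical error is read off from \(\Gamma^{-1}\).

```python
import numpy as np

Msun = 4.925e-6                         # solar mass in seconds (G = c = 1)
Mc   = 1.0e6 * Msun                     # illustrative chirp mass only
fR, dPhi = 2.0e-3, 5.0                  # assumed resonance frequency [Hz] and phase jump [rad]

f  = np.linspace(1.0e-4, 4.0e-3, 4000)  # frequency grid [Hz]
df = f[1] - f[0]
Sn = np.full_like(f, 1.0e-40)           # flat placeholder PSD, not the LISA curve

# Toy signal: h ~ f^(-7/6) exp(i psi), Newtonian chirp phase plus the resonance
# step of Eq. (2); t_c and Phi_c are set to zero at the fiducial point.
psi = 3.0 / (128.0 * (np.pi * Mc * f) ** (5.0 / 3.0))
psi = psi + np.where(f > fR, (f / fR - 1.0) * dPhi, 0.0)
h = f ** (-7.0 / 6.0) * np.exp(1j * psi)

def inner(a, b):
    # (a|b) = 4 Re int a b* / S_n df, discretised on the uniform grid
    return 4.0 * np.real(np.sum(a * np.conj(b) / Sn)) * df

h = h * (100.0 / np.sqrt(inner(h, h)))  # normalise the toy signal to SNR = 100

# With h ~ exp(i(2 pi f t_c - Phi_c + ...)), derivatives with respect to
# (ln A, t_c, Phi_c, dPhi) are simple factors times h, so Gamma is a weighted Gram matrix.
dh = [h,
      2j * np.pi * f * h,
      -1j * h,
      1j * np.where(f > fR, f / fR - 1.0, 0.0) * h]
Gamma = np.array([[inner(a, b) for b in dh] for a in dh])
cov = np.linalg.inv(Gamma)
print("toy sigma_dPhi:", np.sqrt(cov[3, 3]))
```

Using analytic derivatives keeps the toy Fisher matrix well conditioned; the full analysis instead varies the complete TaylorF2-3PN parameter set listed above against the LISA noise spectral density.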
The statistical error in measuring \(k_{2}\) is related to the Fisher matrix, \[\sigma_{k_{2}}\ =\ \sqrt{\langle(\Delta k_{2})^{2}\rangle}\ =\ \sqrt{(\Gamma^{-1})_{k_{2}k_{2}}} \tag{22}\] We consider quasi-circular orbits and employ the analytical frequency domain post-Newtonian approximation TaylorF2, which accurately describes the binary evolution of the inspiral up to the ISCO [70, 71, 72]. The frequency domain GW waveform describing the binary inspiral is of the form \(\tilde{h}(f,\theta_{i})\ =\ \mathcal{A}e^{i\Phi}\), where \(\Phi\) is the phase evolution in Eq. (2). From Eq. (18), for \(q\ll 1\), the instantaneous phase shift at resonance becomes \[\Delta\Phi_{Res}\ \approx\ \mathcal{N}\ \frac{\omega_{2}^{2}R^{2}}{20q}. \tag{23}\] In our analysis we included correction terms up to 3PN order and neglected the higher order tidal deformability terms that depend on the Love number and enter at 5PN and 6PN order (See Sec. 4.1). Additionally, since our model is valid only until the ISCO, the frequency range \(\omega_{2}>\omega_{ISCO}\) is not included in our analysis. Consequently, it is beneficial to parameterize the oscillation frequencies in terms of the ISCO frequency \(\omega_{2}=\alpha\omega_{ISCO}\), where \(0<\alpha\leq 1\), and \(\omega_{ISCO}(\chi)\) is spin-dependent. This also means that resonance at the ISCO sets the maximal value of the Love number that can be detected, \(k_{2}^{max}=\mathcal{N}\omega_{ISCO}^{2}R^{2}\). We consider moderate to extreme mass-ratio binaries with \(q=[10^{-3},10^{-4},10^{-5}]\), where the central object mass is \(M_{1}=10^{6}M_{\odot}\), and small to moderate Kerr spin parameters \(\chi^{i}=[0,0.1,0.2,0.3,0.4,0.5]\), at a luminosity distance \(D_{l}=2\)Gpc. We also average over the sky location parameters [72]. We assume equal spins \(\chi_{1}=\chi_{2}\) that are aligned with the orbital angular velocity vector. For the model-dependent order unity coefficient \(\mathcal{N}\), we use the estimation derived in [29], and consider \(\mathcal{N}\in[0.1,1]\). In Fig. 1, the purple region shows the analytical Love-resonance-spin relation described in Eq. (18) that is determined by our model, where a given Love number corresponds to a specific resonance frequency and a spin parameter. This region describes the parameter space accessible to our model and is independent of the detector properties. In our analysis, the largest accessible \(k_{2}\) is reached for \(\mathcal{N}=1\), \(\alpha=1\) and \(\chi=0.5\), resulting in \(k_{2}^{max}\approx 0.159\); larger values are inaccessible to our model. The gray region is the parameter space region that our model cannot describe.

Figure 1: The solid blue lines correspond to a potential measurement of \(k_{2}\) for a given mass ratio \(q\), with relative error \(\sigma_{k_{2}}/k_{2}=1/3\). The region above each solid line corresponds to a potential measurement of \(k_{2}\) with a relative error smaller than \(1/3\). As anticipated by Eq. (16), for a smaller mass ratio, the error on measuring a specific \(k_{2}\) is smaller and it is possible to measure smaller values of \(k_{2}\). The purple region describes the parameter space accessible to our model for values of the spin parameter between \(0\) and \(0.5\), taking into account the Love-resonance-spin relation: \(k_{2}\propto\omega_{ISCO}(\chi)R(\chi)\), such that a given Love number corresponds to a specific resonance frequency and spin parameter. The gray region describes the parameter space which is not accessible to our model for these values of the spin parameters.
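For orientation, the size of the resonance dephasing and the quoted maximal Love number can be reproduced in a few lines. The first part below evaluates Eq. (16) directly; the second part estimates \(k_{2}^{max}=\mathcal{N}\omega_{2}^{2}R^{2}\) at the ISCO by reading the resonance condition as \(\omega_{2}=2\Omega_{ISCO}\) and taking \(R=r_{+}\), the outer horizon radius; these two identifications are our reading of the setup rather than something stated explicitly above.

```python
import numpy as np

def dphi_res(k2, q):
    # Resonance phase jump of Eq. (16): (25*pi/1536) * k2 / (q*(1+q))
    return 25.0 * np.pi / 1536.0 * k2 / (q * (1.0 + q))

print(dphi_res(0.1, 1e-3))    # ~5.1 rad, the estimate quoted in Eq. (17)
print(dphi_res(0.1, 1e-5))    # ~511 rad
print(dphi_res(0.01, 1e-4))   # ~5.1 rad

def kerr_isco(chi, M=1.0):
    # Prograde ISCO radius and orbital angular frequency, plus the outer horizon radius
    z1 = 1 + (1 - chi**2) ** (1 / 3) * ((1 + chi) ** (1 / 3) + (1 - chi) ** (1 / 3))
    z2 = np.sqrt(3 * chi**2 + z1**2)
    r_isco = M * (3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2)))
    omega_orb = np.sqrt(M) / (r_isco**1.5 + chi * M**1.5)
    r_plus = M * (1 + np.sqrt(1 - chi**2))
    return omega_orb, r_plus

for chi in (0.0, 0.5):
    om, rp = kerr_isco(chi)
    print(chi, (2 * om)**2 * rp**2)   # ~0.074 for chi = 0, ~0.16 for chi = 0.5 (with N = 1)
```

With \(\mathcal{N}=1\) the second part gives roughly \(0.16\) at \(\chi=0.5\), close to the \(k_{2}^{max}\approx 0.159\) quoted above, with the small difference attributable to the conventions assumed here.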
### Comparison to Tidal-deformability We now turn to estimate the relative magnitude of the resonance phase shift effects compared to the magnitude of tidal deformation effects on the phase evolution. To leading PN order, the tidal deformability contribution to the phase for \(q\ll 1\) takes the form \(\Phi_{TD}(f)\sim k_{2}v^{5}/q\), where \(v=(\pi Mf)^{1/3}\) is the orbital velocity. The accumulated phase throughout the inspiral is given by \[\Delta\Phi_{TD}\ =\ \int_{f_{min}}^{f_{ISCO}}f\frac{d^{2}\Phi_{TD}(f)}{df^{2}}df \sim\frac{k_{2}}{q}v_{ISCO}^{5}. \tag{24}\] For a central object mass of \(M_{1}=10^{6}M_{\odot}\) and small to moderate spin parameters, we find \(v_{ISCO}^{5}\sim 0.01\). Comparing to the instantaneous resonance phase jump of Eq. (11), \(\Delta\Phi_{TD}/\Delta\Phi_{Res}\sim v_{ISCO}^{5}\). Therefore, we would expect to have a larger error in the measurement of the Love number relying on tidal deformability. We calculated the statistical error in measuring the Love number through tidal deformability and compared it to a measurement via resonance effects and found that the previous estimate is indeed correct. We repeated the statistical evaluation performed above, excluding resonance effects and including the leading tidal deformation terms entering the phase at 5PN and 6PN order [73, 74, 35]. The results of the calculation of the ratio of the relative errors in measuring the Love numbers, denoted by \(\sigma_{k_{2}}^{Res}/\sigma_{k_{2}}^{TD}\), for different spin parameters \(0\leq\chi\leq 0.5\), are presented in Fig. 2.

Figure 2: The figure displays the relative statistical errors in measuring \(k_{2}\) with resonance excitations and tidal deformation. For a given \(k_{2}\) and \(\chi\) we calculate \(\sigma_{k_{2}}^{\rm res}\) without tidal deformation effects and \(\sigma_{k_{2}}^{TD}\) without resonances. The results show a preference for detecting \(k_{2}\) with resonance effects. The preference is more apparent for a smaller mass ratio \(q\). The colored regions enclosed by the solid and the dashed lines mark the additional parameter space that resonances can probe compared to tidal deformation.

## 5 Summary and conclusion The future measurement of GWs produced during BBH inspirals by the planned GW detector LISA will present an unprecedented opportunity to test GR. Hypothetical tidal interactions between the inspiraling objects would affect the waveform of the emitted GWs in a way that could only be possible if astrophysical BHs were actually ultra-compact objects possessing an internal structure rather than the structureless objects predicted by GR. We discussed how the resonant excitation of the hypothetical non-relativistic interior modes of astrophysical BHs changes the phase of the emitted GW waveform when compared to the phase predicted by GR. The non-relativistic nature of the modes was crucial to the possibility of resonantly exciting them, because in this case they could be excited when the two objects are still far apart. In this case, the resonance occurs a long time before the ISCO is reached and leads to a significant dephasing. We find that regardless of the specific details of the primary's interior composition, the phase shift is governed by a single intrinsic quantity - the dimensionless tidal Love number \(k_{2}\). We evaluated the statistical error in measuring the Love number \(k_{2}\) by LISA using the resonance effect.
We concluded that the smallness of the resulting statistical error indicates that \(k_{2}\) could actually be detected by LISA with impressive accuracy by observing intermediate and extreme mass-ratio inspirals. We compared the statistical error for detection of the Love number relying on tidal deformation effects with the error when using resonance effects and concluded that prospects of measuring \(k_{2}\) using resonance effects are much better. The results reveal additional sensitivity-enhancement factors whose origin is the Love-resonance-spin relation. First, the statistical error in measuring the Love number reduces for BHs with higher spin, because for such BHs, the inspiral duration is longer. Second, the statistical error in measuring the Love number reduces if the inspiral includes a range of higher orbital velocities, which could lead to excitation of higher internal frequencies, which, in turn, correspond to the BH having a larger Love number. Our conclusion is that the effects of resonant excitation of astrophysical BHs during intermediate and extreme mass-ratio inspirals provide the best opportunity for putting bounds on, or detecting, the tidal Love number of astrophysical BHs and thus providing evidence of physics beyond GR. Nevertheless, we stress that the results of our statistical analysis should be viewed as preliminary estimates for the detection prospects. A comprehensive statistical treatment requires more accurate waveform modeling and should consider LISA's ability to track and discriminate several EMRIs simultaneously [9]. Our analysis is based on a general theoretical framework which only requires the existence of a set of non-relativistic internal modes, and does not require specifying the detailed properties of the central object. The entire dependence on the interior composition is parameterized in terms of the dimensionless tidal Love numbers. Therefore our results can be applied to a wide range of ultra-compact objects or BHs mimickers. ## Acknowledgments We thank Vitor Cardoso, Tanja Hinderer and Ely Kovetz for useful discussions. The research is supported by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant "Holography and the Swampland" and by VATAT (Israel planning and budgeting committee) grant for supporting theoretical high energy physics.
2305.00423
Tropical mirror for toric surfaces
We describe the tropical mirror for complex toric surfaces. In particular we provide an explicit expression for the mirror states and show that they can be written in enumerative form. Their holomorphic germs give an explicit form of good section for Landau-Ginzburg-Saito theory. We use an explicit form of holomorphic germs to derive the divisor relation for tropical Gromov-Witten invariants. We interpret the deformation of the theory by a point observable as a blow up of a point on the toric surface. We describe the implication of such interpretation for the tropical Gromov-Witten invariants.
Andrey Losev, Vyacheslav Lysov
2023-04-30T08:23:00Z
http://arxiv.org/abs/2305.00423v2
# Tropical mirror for toric surfaces ###### Abstract We describe the tropical mirror for complex toric surfaces. In particular we provide an explicit expression for the mirror states and show that they can be written in enumerative form. Their holomorphic germs give an explicit form of good section for Landau-Ginzburg-Saito theory. We use an explicit form of holomorphic germs to derive the divisor relation for tropical Gromov-Witten invariants. We interpret the deformation of the theory by a point observable as a blow up of a point on the toric surface. We describe the implication of such interpretation for the tropical Gromov-Witten invariants. ###### Contents * 1 Introduction * 2 Geometry of toric surfaces * 2.1 Projective toric surface * 2.2 Rays and stars * 2.3 Intersection of rays and stars * 3 Tropical mirror for toric surfaces * 3.1 Mirror relation * 3.2 Mirror states and holomorphic germs * 3.3 Tropical good section * 3.4 Mirror state for point observable * 3.5 Mirror state for star-observable * 3.6 Mirror for tropical curve observable * 4 Divisor relation * 4.1 Divisor relation for Gromov-Witten invariants * 4.2 Tropical divisor relation from LGS * 5 Mirror for selected toric surfaces * 5.1 \(\mathbb{P}^{2}\) * 5.2 \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) * 5.3 Blow up of a point on \(\mathbb{P}^{2}\) * 6 Recursion for point observables * 6.1 Recursion for point observables on \(\mathbb{P}^{2}\) * 6.2 Enumerative description of recursion * 6.3 Double deformation and contact terms * 6.4 Conclusion and open questions ## 1 Introduction Tropical mirror symmetry has all features of the mirror symmetry while providing a much simpler description for most of them. In particular, holomorphic curves become graphs and topological string theory becomes topological quantum mechanics. In our paper we argue that the same level of simplification holds for the mirror of the evaluation observables. The conventional mirror symmetry [1] focuses on the superpotential and the choice of special coordinates on its space of deformations. The choice of special coordinates is encoded as a solution to a certain dynamical system (starting from pioneering work by K. Saito [2]), which can be phrased as flatness and torsionless condition for some connection. The Christoffel symbols for this connection can be encoded as a contact terms determined by K. Saito's good section. Using this method, in order to evaluate the \(n\)-point invariant we need to differentiate the 3-point invariant, given by the residue formula, \(n-3\) times with respect to the special coordinates. In our approach to tropical mirror we focus on observables rather than the superpotential. The contact terms naturally emerge as distinguished deformation of the mirror states in topological quantum mechanics. Such distinguished deformations for polynomial superpotentials were constructed in [3], for a holomorphic germination of harmonic form states. Hence, we can immediately describe the tropical good section for Landau-Ginzburg-Saito theory. Moreover, we can directly evaluate the correlation functions using the mirror states for the evaluation observables. Given various simplifications of the mirror map in the tropical approach we can expect that the mirror states could also have an explicit description. In our work [4, 5] we provided an integral representation for the mirror states. Moreover, for the case of \(\mathbb{P}^{1}\) the integrals evaluate into the indicator functions. 
However, the simplicity of the answers might be the feature of simplest example, hence we evaluated the mirror states for the observables on a 2-dimensional toric surfaces. In this paper we show that the mirror states can be written using the indicator functions on cones, which are standard objects in toric (algebraic) geometry [6, 7]. Moreover, we showed that the sum over the indicator functions can be rewritten as a weighted sum over intersection points of particular graphs. Similar sums were introduced by Mikhalkin [8, 9, 10] to define the intersection number for tropical curves. Given an explicit form of the holomorphic germs we can use the Landau-Ginzburg-Saito theory to check one of the universal relations for the Gromov-Witten invariants: the divisor relation. In present paper we derive the divisor relation from the recursion formula for the correlation functions in Landau-Ginzburg-Saito theory. In particular, we use our expression for the holomorphic germs of the hyperplane observables to show that they change moduli of the superpotential, while preserving the topology of the toric space. An explicit form of holomorphic germs allows us to give an explicit form of the tropical good section for the toric surfaces. Note that already for polynomial superpotentials with more than one variable it is possible to have more than one good sections, so it might be hard to choose one, relevant for the mirror symmetry. Our construction of the good section uses the holomorphic germs of the mirror states. The last but not least application of the mirror states in explicit form allows us to describe a (novel?) relation between the Gromov-Witten invariants on \(\mathbb{P}^{2}\) and the \(Bl_{0}(\mathbb{P}^{2})\). We call it the "cutting corners" relation. The relation is similar to the divisor relation. The \((n+1)\)-point function with a point observable on \(\mathbb{P}^{2}\) is related to the \(n\)-point function on \(Bl_{0}(\mathbb{P}^{2})\). The structure of our paper is as follows: In section 2 we briefly review the relevant information on the geometry of smooth complex toric surfaces. In section 3 we briefly review the tropical mirror map and describe the mirror states and holomorphic germs of the observables on toric surface. In section 4 we derive the divisor relation from the recursion formula in Landau-Ginzburg-Saito theory and our explicit expression for the holomorphic germ of the hypersurface observable. In section 5 we describe the mirror for the several simples toric surfaces \(\mathbb{P}^{2},\mathbb{P}^{1}\times\mathbb{P}^{1}\) and \(Bl_{0}(\mathbb{P}^{2})\). In section 6 we present the cutting corners procedure for the \(\mathbb{P}^{2}\) and formulate the related open questions and conjectures. ## 2 Geometry of toric surfaces In this section we will briefly review the geometry for 2-dimensional toric varieties, equivalently complex toric surfaces. ### Projective toric surface Toric surface \(X\) is a compactification of \(\mathbb{C}^{*2}\). We can represent \(\mathbb{C}^{*2}=\mathbb{R}^{2}\times\mathbb{T}^{2}\) in the form of the radial part \(\mathbb{R}^{2}\), equipped with standard coordinates \(r^{i},\ i=1,2\) and angular part, 2-dimensional torus \(\mathbb{T}^{2}=S^{1}\times S^{1}\), with standard angular coordinates \(\phi^{i}\). Equivalently, we can say that the \(\mathbb{C}^{*2}\) is a trivial 2-dimensional toric fibration over \(\mathbb{R}^{2}\). We describe the compactification of \(\mathbb{C}^{*2}\) using the fibration data. 
* The radial part \(\mathbb{R}^{2}\) is compactified by _convex rational polytope_. We will describe a polytope by a collection of _supporting hyperplanes_. * Each hyperplane is given in terms of the inside-pointing 2-dimensional (normal) vector with components \(b^{i},i=1,2\). For rational polytope each vector has integer components i.e. \(b^{i}\in\mathbb{Z}\). For toric space \(X\) we will denote the set of corresponding vectors by \(B_{X}\). * In order to get a compactification of a complex manifold, we require that one of the circles \(S^{1}\subset\mathbb{T}^{2}\) inside the toric fibration shrinks to zero when we approach each of the compactifying hypersurfaces. The choice of a circle is given by a class in \(\pi_{1}(\mathbb{T}^{2})\) defined by a normal vector \(\vec{b}\) of the hyperplane. In toric geometry [6, 7] the collection of normal vectors \(B_{X}\) define a _fan_ of \(X\), hence we will adopt this notation for \(B_{X}\). Let us order vectors \(\vec{b}\in B_{X}\) in counterclockwise order on \(\mathbb{R}^{2}\). The consecutive pairs form cones of a fan for \(X\). A 2-dimensional cone, formed by pair of vectors \(\vec{b}_{1}\) and \(\vec{b}_{2}\) is \[\text{Cone}(\vec{b}_{1},\vec{b}_{2})=\{\vec{b}_{1}\;t_{1}+\vec{b}_{2}\;t_{2} \mid t_{1},t_{2}\in\mathbb{R}^{\geq 0}\}\subset\mathbb{R}^{2}. \tag{2.1}\] We will restrict our consideration to the smooth toric surfaces. The fan for a smooth toric surface consists of smooth cones. Smoothness of \(\text{Cone}(\vec{b},\vec{c})\) requires that the generating vectors form a basis in \(\mathbb{Z}^{2}\), what is equivalent to \[\det(\vec{b},\vec{c})=\pm 1. \tag{2.2}\] It is convenient to introduce a cross product for two vectors, so that \[\det(\vec{b},\vec{c})=b^{1}c^{2}-b^{2}c^{1}=\vec{b}\times\vec{c}. \tag{2.3}\] Note that the sign of the cross product \(\vec{b}\times\vec{c}\) is determined by the relative orientation of the two vectors. The sign is positive if we can rotate (angle less than \(\pi\)) from \(\vec{b}\) to \(\vec{c}\) in counterclockwise direction and negative otherwise. ### Rays and stars By construction a genus-0 smooth tropical curve is an embedding of a 3-valent tree into \(\mathbb{R}^{2}\) by a piece-wise linear map. For more details see Mikhalkin [8, 9, 10]. The leaves of a tree map to infinite rays along the normal vectors of compactifying polytope. Moreover, each tropical curve requires a balance condition: the sum of all vectors on the leaves equals to zero. Below there are four examples of tropical curves, drawn in corresponding compactifying polytopes. For any tropical curve we can construct its (maximally) degenerate version by shrinking the images of all internal edges of a tree to zero size. The resulting tropical curve will have a star shape. Below we provide the results of shrinking procedure for the four curves above. **Definition**: Given a point \(\vec{\rho}\) and a vector \(\vec{l}\) we define a ray \(R_{l,\rho}\), starting at \(\rho\) and directed along \(\vec{l}\), i.e. \[R_{l,\rho}=\vec{\rho}+\mathbb{R}^{+}\vec{l}=\left\{(\rho^{1}+t\;l^{1},\rho^{2 }+t\;l^{2})\in\mathbb{R}^{2}\;|\;t\in\mathbb{R}^{+}\right\}. 
\tag{2.4}\] A ray \(R_{l,\rho}\) describes a holomorphic disc with Poincare-dual form \[\gamma_{R_{l,\rho}}=\frac{1}{(2\pi)^{2}}\int_{S^{1}}\int_{0}^{\infty}\;\delta^ {2}(\vec{r}-\vec{\rho}-\vec{l}t)(dr^{1}-l^{1}dt)(d\phi^{1}-l^{1}d\varphi)(dr^{ 2}-l^{2}dt)(d\phi^{2}-l^{2}d\varphi), \tag{2.5}\] which can be simplified into \[\gamma_{R_{l,\rho}}=\frac{1}{2\pi}(\vec{l}\times d\vec{r})(\vec{l}\times d \vec{\phi})\int_{0}^{\infty}dt\;\delta^{2}(\vec{r}-\vec{\rho}-\vec{l}\;t). \tag{2.6}\] **Definition**: A _star_\(S_{\rho}\) on complex toric surface \(X\) is the union of rays from common end point \(\vec{l}\) from \(\vec{\rho}\) \[S_{\rho}=\bigcup_{\vec{l}\in S_{\rho}}R_{l,\rho}\;\;\;, \tag{2.7}\] such that each vector of s star \(\vec{l}=-\vec{b}\) for some \(\vec{b}\in B_{X}\) and the sum of all vectors equals to zero. \[\sum_{\vec{l}\in S_{\rho}}\vec{l}=0. \tag{2.8}\] The equality (2.8) is known as the _balancing condition_. Note that there could be multiple rays in the same direction as depicted in examples above. The Poincare-dual of a star is a sum of the Poincare-duals of all its rays, i.e. \[\gamma_{S_{\rho}}=\sum_{\vec{l}\in S_{\rho}}\gamma_{R_{l,\rho}}\;. \tag{2.9}\] ### Intersection of rays and stars Pair of rays \(R_{l,\rho}\) and \(R_{n,0}\) on a plane \(\mathbb{R}^{2}\) may intersect at most at one point. We can express the number of intersection points using Poincare-duals for the rays \[\begin{split} R_{l,\rho}\cdot_{\mathbb{R}}R_{n,0}& =\int_{\mathbb{R}^{2}}(\vec{l}\times d\vec{r})\int_{0}^{\infty}dt_ {1}\;\delta(\vec{r}-\vec{\rho}-\vec{l}\;t_{1})\wedge(\vec{n}\times d\vec{r}) \int_{0}^{\infty}dt_{2}\;\delta(\vec{r}-\vec{n}\;t_{2})\\ &=(\vec{l}\times\vec{n})\int_{(\mathbb{R}^{+})^{2}}dt_{1}\;dt_{2} \;\delta(\vec{\rho}+\vec{l}\;t_{2}-\vec{n}\;t_{1})\;=\frac{(\vec{l}\times\vec {n})}{|\vec{l}\times\vec{n}|}\chi_{-\vec{l},\vec{n}}(\vec{\rho}\;)\;,\end{split} \tag{2.10}\] where we introduced an indicator function, which equals to one inside the cone and to zero outside \[\chi_{\vec{l}_{1},\vec{l}_{2}}(\vec{\rho}\;)=\left\{\begin{array}{ll}1,& \rho\in\text{Cone}(\vec{l_{1}},\vec{l_{2}})\\ 0,&\rho\notin\text{Cone}(\vec{l_{1}},\vec{l_{2}}).\end{array}\right. \tag{2.11}\] The denominator in (2.10) is due to the Jacobian for the change of variables in the integral representation of the indicator function \[\chi_{\vec{l}_{1},\vec{l}_{2}}(\vec{r})=\int_{\text{Cone}(\vec{l}_{1},\vec{l}_ {2})}d^{2}\vec{s}\;\delta(\vec{r}-\vec{s})=|\vec{l}_{1}\times\vec{l}_{2}|\; \int_{0}^{\infty}\int_{0}^{\infty}dt_{1}dt_{2}\;\delta(\vec{r}-\vec{l}_{1}\;t _{1}-\vec{l}_{2}\;t_{2}). \tag{2.12}\] The sign factor for the intersection number (2.10) is common feature for the intersection of real cycles in real spaces. Our formula (2.10) tells us that the question of intersection for two rays is the same as a question whether \(\vec{\rho}\) belongs to a cone \(\text{Cone}(\vec{n},-\vec{l})\). Below we present the graphical proof of the relation (2.10). 
(Figures omitted: the graphical proof of relation (2.10) and the picture of the vectors \(\vec{l}_{+},\vec{l}_{-}\) used in the argument below.)

The intersection number \(S_{\rho}\cdot S^{\prime}_{0}\) of two stars is obtained by pairing their Poincare-duals (2.9), i.e. by summing the pairwise ray intersections (2.10), and it does not depend on the position \(\vec{\rho}\) of the first star. From the picture we see that all vectors \(\vec{l}_{+}\) are related to \(\vec{l}^{\prime}_{0}\) by a counterclockwise rotation, while all \(\vec{l}_{-}\) by a clockwise rotation, hence \[\vec{l}_{+}\times\vec{l}^{\prime}_{0}>0,\ \ \vec{l}_{-}\times\vec{l}^{\prime}_{0}<0\ . \tag{2.16}\] We can rewrite the difference of intersection numbers \[S_{\rho}\cdot S^{\prime}_{0}-S_{\rho^{\prime}}\cdot S^{\prime}_{0}=\sum_{\vec{l}_{+}}\vec{l}_{+}\times\vec{l}^{\prime}_{0}-\sum_{\vec{l}_{-}}-(\vec{l}_{-}\times\vec{l}^{\prime}_{0})=\sum_{\vec{l}\in S}(\vec{l}\times\vec{l}^{\prime}_{0})=0. \tag{2.17}\] The last equality is due to the balancing condition (2.8) for the star \(S\). \(\blacksquare\) We can use relation (2.10) to rewrite the sum (2.14) over indicator functions on cones as a sum over intersection points of the corresponding rays, which gives an enumerative description of the intersection number. The intersection number \(S_{\rho}\cdot S^{\prime}_{0}\) becomes the weighted sum over the intersection points \(p\in S_{\rho}\cap S^{\prime}_{0}\) of pairs of corresponding rays. The weight at an intersection point \(p\) equals the absolute value of the cross product of the direction vectors \(\vec{l}_{p}\) and \(\vec{l}^{\prime}_{p}\) of the two rays intersecting at \(p\), i.e. \[S_{\rho}\cdot S^{\prime}_{0}=\sum_{p\;\in\;S_{\rho}\cap S^{\prime}_{0}}|\vec{l}_{p}\times\vec{l}^{\prime}_{p}|. \tag{2.18}\] **Example**: On the picture below we present the intersection of two stars. There are three intersection points; we zoomed into the circled region around one of them and labeled the vectors of the two rays intersecting at that point. The absolute value of the wedge product of the two rays at the circled point equals one, and the same is true for the remaining two points. Hence we conclude that the intersection number of the two stars equals 3. **Remark**: The enumerative expression for the intersection of stars naturally extends to the intersection of two tropical curves. For more details see Mikhalkin [8, 9, 10]. **Remark**: We can refine the self-intersection of a tropical curve \(\vec{\Gamma}\) from being defined only on cohomology classes to the representative. The self-intersection number of a curve \(\vec{\Gamma}\) is the weighted union of the vertex points \(V(\vec{\Gamma})\).
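The enumerative formula (2.18) is easy to implement directly. The following is a minimal numerical sketch (plain Python; the function names are ours, not from the paper) that counts the weighted intersection points of two stars by checking, for each pair of rays, whether the system \(\vec{\rho}+t\,\vec{l}=s\,\vec{l}^{\,\prime}\) admits a solution with \(t,s\geq 0\), which is the content of (2.10).

```python
def cross(a, b):
    # 2d cross product a x b, cf. (2.3)
    return a[0] * b[1] - a[1] * b[0]

def rays_intersect(l, rho, lp):
    """True if the ray rho + t*l (t >= 0) meets the ray s*lp (s >= 0), cf. (2.10)."""
    d = cross(l, lp)
    if d == 0:
        return False             # parallel rays: the weight |l x lp| vanishes anyway
    t = cross(lp, rho) / d       # Cramer's rule for rho + t*l = s*lp
    s = cross(l, rho) / d
    return t >= 0 and s >= 0

def star_intersection(S, rho, Sp):
    """Weighted count (2.18) of intersection points of the star S at rho with the star Sp at the origin."""
    return sum(abs(cross(l, lp))
               for l in S for lp in Sp
               if rays_intersect(l, rho, lp))

# Fubini-Study star of P^2 (rays along minus the fan vectors, cf. section 5.1)
fubini_study = [(-1, 0), (0, -1), (1, 1)]
print(star_intersection(fubini_study, (0.5, 0.2), fubini_study))   # 1, the self-intersection in (5.12)
```

For a generic offset \(\vec{\rho}\), the same routine applied to the vertical and horizontal two-ray stars of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) used in section 5.2 returns \(0\), \(0\) and \(1\) for the three pairings, matching (5.28).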
## 3 Tropical mirror for toric surfaces In this section we will adopt the construction of the tropical mirror from [4, 5] to toric surfaces. In particular, we will describe the mirror superpotential, mirror states, holomorphic germs and tropical good section. ### Mirror relation The mirror of the complex toric surface \(X\) is a non-compact 2-dimensional Calabi-Yau \(X^{\vee}=\mathbb{C}^{*2}\) with holomorphic superpotential. We will used the toric representation \(\mathbb{C}^{*2}=\mathbb{R}^{2}\times\mathbb{T}^{2}\) with radial coordinates \(r^{j}\) and angular (holomorphic) coordinates \(Y_{j}\). The holomorphic top form in these coordinates \[\Omega=dY_{1}\wedge dY_{2}. \tag{3.1}\] The mirror superpotential is \[W_{X}=\sum_{\vec{b}\in B_{X}}q_{\vec{b}}\ e^{i\langle\vec{b},\vec{Y}\rangle}. \tag{3.2}\] where we used the pairing \[\langle\vec{b},Y\rangle=b^{1}Y_{1}+b^{2}Y_{2}. \tag{3.3}\] The form (3.1) is invariant under \(SL(2,\mathbb{Z})\), the linear transformations with determinant equal to one and integer coefficients. Let us arrange vectors of the fan \(B_{X}\) in a counter-clockwise order and label them \(\vec{b}_{1},\vec{b}_{2},...\). A smooth projective toric variety is a collection of smooth cones \(\text{Cone}(\vec{b}_{k},\vec{b}_{k+1})\), i.e. cones with \(|\vec{b}_{k}\times\vec{b}_{k+1}|=1\). Hence, we can use an \(SL(2,\mathbb{Z})\)-rotation to rotate the pair of vectors \(\vec{b}_{1},\vec{b}_{2}\) to the standard basis of \(\mathbb{Z}^{2}\), i.e. \[\vec{b}_{1}\rightarrow(1,0),\ \ \ \vec{b}_{2}\rightarrow(0,1),\ \ \vec{b}_{k} \rightarrow\vec{b}_{k}^{\prime},\ \ k>2. \tag{3.4}\] The superpotential in new basis becomes \[W_{X}=q_{\vec{b}_{1}}\ e^{iY_{1}}+q_{\vec{b}_{2}}\ e^{iY_{2}}+\sum_{k>2}q_{ \vec{b}_{k}}\ e^{i\langle\vec{b}_{k}^{\prime},Y\rangle}. \tag{3.5}\] The holomorphic top form (3.1) is also invariant under constant shifts of \(Y\)-variables, hence we can use \[Y_{1}\to Y_{1}-i\ln q_{\vec{b}_{1}},\ \ \ Y_{2}\to Y_{2}-i\ln q_{\vec{b}_{2}} \tag{3.6}\] to simplify the superpotential into \[W_{X}=e^{iY_{1}}+e^{iY_{2}}+\sum_{k>2}q^{\prime}_{\vec{b}_{k}}\ e^{i(\vec{U}_{k },Y)}. \tag{3.7}\] The new toric moduli \[q^{\prime}_{\vec{b}_{k}}=\frac{q_{\vec{b}_{k}}}{q_{\vec{b}_{1}}^{b_{1}}\cdot q _{\vec{b}_{2}}^{b_{2}}} \tag{3.8}\] refine Kahler moduli of \(X\). If we formally set all Kahler moduli to zero we arrive into superpotential for the non-compact toric variety \(\mathbb{C}^{2}\). Hence, the superpotential in the form (3.7) describes toric variety \(X\) as a compactification of \(\mathbb{C}^{2}\). ### Mirror states and holomorphic germs **Definition**: The Jacobi ring for superpotential \(W\) is \[J_{W}=R_{\mathbb{C}^{*2}}/I_{W}, \tag{3.9}\] where \(R_{\mathbb{C}^{*2}}\) is the ring of holomorphic functions on \(\mathbb{C}^{*2}\). In our coordinates \(R_{\mathbb{C}^{*2}}\) is the ring of periodic functions of \(Y\). The \(I_{W}\) is the ideal generated by the partial derivatives of \(W\) \[I_{W}=\left\{\frac{\partial W}{\partial Y_{j}}\right\}. \tag{3.10}\] Let us consider a graded vector space of Landau-Ginzburg-Saito theory \[V_{LGS}=R_{\mathbb{C}^{*2}}\otimes\mathbb{C}[\psi_{\Phi}^{i}] \tag{3.11}\] for parity-odd variables \(\psi_{\Phi}^{i}\). On \(V_{LGS}\) there is a pair of graded-commuting differentials \[\mathbf{Q}_{W}=\frac{\partial W}{\partial Y_{j}}\frac{\partial}{\partial\psi_ {\Phi}^{j}},\ \ \ \mathbf{G}_{-}=\frac{\partial}{\partial Y_{j}}\frac{\partial}{\partial\psi_{ \Phi}^{j}}. 
\tag{3.12}\] The mirror map for observables is the map from the de Rahm cohomology of toric space \(X\) to \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology, i.e \[\Phi:H^{*}_{dR}(X)\to H^{*}({\bf Q}_{W}+z{\bf G}_{-}):\gamma\mapsto\Phi_{\gamma}. \tag{3.13}\] The mirror map is constructed in the following way: We turn an observable \(\gamma\) into A-type HTQM state \(\Psi_{\gamma}\), then construct the corresponding mirror state \(\Psi^{X}_{\gamma}\) and take its holomorphic germ \(\Phi_{\gamma}\). Let us introduce the notation \(\Psi^{\vec{b}}\) for dressing of a state \(\Psi\) by a single compactifying divisor, labeled by \(\vec{b}\), i.e. \[\Psi^{\vec{b}}=2\pi KG_{-}\mu_{2}(\Psi_{\vec{b}},\Psi)=2\pi\int e^{-tH}dt\;G_{+ }G_{-}\mu_{2}(\Psi_{\vec{b}},\Psi). \tag{3.14}\] The double dressing by vectors \(\vec{b}_{1},\vec{b}_{2}\) in these notations is \[\Psi^{\vec{b}_{1},\vec{b_{2}}}=2\pi KG_{-}\mu_{2}(\Psi_{\vec{b}_{2}},\Psi^{\vec {b}_{1}}). \tag{3.15}\] The mirror state on the toric surface is given by \[\Psi^{X}_{\gamma}=\Psi_{\omega}+\sum_{\vec{b}_{1}\in B_{X}}\Psi^{\vec{b}_{1}}_ {\gamma}+\sum_{\vec{b}_{1},\vec{b}_{2}\in B_{X}}\Psi^{\vec{b}_{1},\vec{b}_{2}} _{\gamma}. \tag{3.16}\] The holomorphic germ \(\Phi_{\gamma}\) for a mirror state \(\Psi^{X}_{\gamma}\) is lowest component in \(\psi\)-expansion evaluated at \(\vec{r}=0\), i.e \[\Phi_{\gamma}=\Psi^{X}_{\gamma}\Big{|}_{\psi=0,r=0}. \tag{3.17}\] ### Tropical good section The construction of Jacobi ring comes with canonical projection \(\pi_{W}:R_{{\mathbb{C}}^{*2}}\to J_{W}\). Given a pair of homolorphic functions \(\Phi_{1}\) and \(\Phi_{2}\) we can project their product \(\Phi_{1}\Phi_{2}\) to the class \(\pi_{W}(\Phi_{1}\Phi_{2})\) in Jacobi ring \(J_{W}\). The section (which inverts \(\pi_{W}\)) \(S_{W}:J_{W}\to R_{{\mathbb{C}}^{*2}}\) turns this class into holomorphic function \(S_{W}\;\pi_{W}(\Phi_{1}\Phi_{2})\). The difference \[\Phi_{1}\Phi_{2}-S_{W}\;\pi_{W}(\Phi_{1}\Phi_{2}) \tag{3.18}\] is trivial in Jacobi ring. An isomorphism between the \(J_{W}\) and \(H^{*}({\bf Q}_{W})\) means that there exists a map \({\bf\Sigma}_{W}:R_{{\mathbb{C}}^{*2}}\to V_{LGS}\) such that \[\Phi_{1}\Phi_{2}-S_{W}\pi_{W}(\Phi_{1}\Phi_{2})={\bf Q}_{W}{\bf\Sigma}_{W}(\Phi_ {1}\Phi_{2}), \tag{3.19}\] and \[{\bf\Sigma}_{W}S_{W}=0. \tag{3.20}\] The choice of such \({\bf\Sigma}_{W}\) is known as the choice of homotopy for \({\bf Q}_{W}\). **Definition**: We define a contact term fo \(\Phi_{1}\) and \(\Phi_{2}\) in LGS theory with section \(S_{W}\) \[C^{S}_{W}(\Phi_{1},\Phi_{2})={\bf G}_{-}{\bf\Sigma}_{W}(\Phi_{1}\Phi_{2}). \tag{3.21}\] In other terms the product of two functions \(\Phi_{1}\Phi_{2}\) can be decomposed into the sum of the image of \(S_{W}\) and a linear combination of \(\partial^{1}W,\partial^{2}W\), i.e. \[\Phi_{1}\Phi_{2}=S_{W}\pi_{W}(\Phi_{1}\Phi_{2})+\sigma_{k}\partial^{k}W \tag{3.22}\] The \({\bf\Sigma}_{W}(\Phi_{1}\Phi_{2})\) has the form \(\sigma_{k}(Y)\psi^{k}_{\Phi}\), so \({\bf G}_{-}\)-action on it is \[{\bf G}_{-}{\bf\Sigma}_{W}(\Phi_{1}\Phi_{2})=\frac{\partial\sigma_{k}(Y)}{ \partial Y_{k}}, \tag{3.23}\] i.e. just a divergence of the vector field \(\sigma_{k}(Y)\partial_{Y_{k}}\). Note that for a given \(S_{W}\) the decomposition in (3.22) does not uniquely fixes the \(\sigma_{k}(Y)\). The freedom of choice \(\sigma\) is fixed by the choice of homotopy \({\bf\Sigma}_{W}\). Note that the dependence of contact term \(C_{W}\) on the choice of homotopy \({\bf\Sigma}_{W}\) is \(({\bf Q}_{W}+z{\bf G}_{-})\)-exact. 
It was shown that the correlation functions are well-defined in \(H^{*}({\bf Q}_{W}+z{\bf G}_{-})\), so the choice of homotopy does not affect the recursion formula. The tropical good for Landau-Ginzburg-Saito theory is a linear space spanned by identity germ \(\Phi_{1}^{X}=1\), point germ \(\Phi_{\rho}^{X}\), germs \(\Phi_{S}^{X}\) for a basis in a space of stars. \[{\rm Im}\ S^{trop}={\mathbb{C}}\langle 1,\Phi_{\rho}^{X},\Phi_{S}^{X}\rangle. \tag{3.24}\] ### Mirror state for point observable The A-model state for the \(U(1)^{2}\)-invariant Poincare-dual of the point evaluation observable located at a point \(\rho\) is \[\Psi_{\rho}=\frac{1}{(2\pi)^{2}}\delta^{2}(\vec{r}-\vec{\rho}\;)\;\psi^{1}_{ \Phi}\psi^{1}_{R}\psi^{2}_{\Phi}\psi^{2}_{R}. \tag{3.25}\] The single dressing of the state \(\Psi_{\rho}\) by a divisor state is \[\begin{split}\Psi^{\vec{b}}_{\rho}&=2\pi\int e^{-tH }dt\;G_{+}G_{-}\mu_{2}(\Psi_{\vec{b}},\Psi_{\rho})\\ &=\frac{1}{2\pi}q_{\vec{b}}\;e^{i(\vec{b},Y)}(\vec{b}\times\vec{ \psi}_{R})(\vec{b}\times\vec{\psi}_{\Phi})\int_{0}^{\infty}dt\;\delta^{2}(\vec {r}-\vec{\rho}-\vec{b}t).\end{split} \tag{3.26}\] We used \[G_{-}\left(\psi^{2}_{\Phi}\psi^{1}_{\Phi}\;e^{i(\vec{b},Y)}\right)=(b^{2}\psi^ {1}_{\Phi}-b^{1}\psi^{2}_{\Phi})\;e^{i(\vec{b},Y)}=(\vec{\psi}_{\Phi}\times \vec{b})\;e^{i(\vec{b},Y)} \tag{3.27}\] and similar relation for \(G_{+}\). The integral of a delta function implies that the single dressed state \(\Psi^{\vec{b}}_{\rho}\) has support on the ray \(R_{b,\rho}\). Moreover, the inclusion of \(\psi\)-dependence describes the \(\Psi^{\vec{b}}_{\rho}\) as the multiple of the state for Poincare-dual (2.6) of the ray \(R_{b,\rho}\), hence we can write \[\Psi^{\vec{b}}_{\rho}=q_{\vec{b}}\;e^{i(\vec{b},Y)}\Psi_{R_{b,\rho}}\;\;. \tag{3.28}\] We can represent the dressing of the state \(\Psi_{\rho}\) by all divisors from the fan \(B_{X}\), i.e. \[\sum_{\vec{b}\in B_{X}}\Psi^{\vec{b}}_{\rho}=\sum_{\vec{b}\in B_{X}}q_{\vec{b }}\;e^{i(\vec{b},Y)}\Psi_{R_{b,\rho}} \tag{3.29}\] as the evaluation state for the quasi-star (no balancing condition) \(S_{\rho}\) with rays identical to the rays of \(B_{X}\), equipped with holomorphic functions. The ray along vector \(\vec{b}\) is equipped with the function \(q_{\vec{b}}\;e^{i(\vec{b},Y)}\). The dressing of \(\Psi_{\rho}\) by two divisor states \[\begin{split}\Psi^{\vec{b}_{1},\vec{b}_{2}}_{\rho}& =q_{\vec{b}_{1}}q_{\vec{b}_{2}}\;e^{i(\vec{b}_{1}+\vec{b}_{2},Y)}( \vec{b}_{1}\times\vec{b}_{2})^{2}\int_{0}^{\infty}\int_{0}^{\infty}dt_{1}dt_{2 }\;\delta(\vec{r}-\vec{\rho}-\vec{b}_{1}t_{1}-(\vec{b}_{1}+\vec{b}_{2})t_{2})\\ &=q_{\vec{b}_{1}}q_{\vec{b}_{2}}\;e^{i(\vec{b}_{1}+\vec{b}_{2},Y )}|\vec{b}_{1}\times\vec{b}_{2}|\chi_{\vec{b}_{1},\vec{b}_{1}+\vec{b}_{2}}( \vec{r}-\vec{\rho}\;).\end{split} \tag{3.30}\] We used an integral representation (2.12) for indicator function on a cone and \[\vec{b}_{1}\times(\vec{b}_{2}+\vec{b}_{1})=\vec{b}_{1}\times\vec{b}_{2}. \tag{3.31}\] Note that the dressing is not symmetric under exchange of \(\vec{b}_{1}\) and \(\vec{b}_{2}\), because the indicator functions have support at different regions, i.e \[\chi_{\vec{b}_{1},\vec{b}_{1}+\vec{b}_{2}}(\vec{r}\,)\neq\chi_{\vec{b}_{2},\vec {b}_{1}+\vec{b}_{2}}(\vec{r}\,). 
\tag{3.32}\] We can notice that two orders of performing the double dressing have the same holomorphic function, so we can naturally simplify the sum using an equality \[\chi_{\vec{b}_{1},\vec{b}_{1}+\vec{b}_{2}}(\vec{r}\,)+\chi_{\vec{b}_{2},\vec{b }_{1}+\vec{b}_{2}}(\vec{r}\,)=\chi_{\vec{b}_{1},\vec{b}_{2}}(\vec{r}\,)\quad. \tag{3.33}\] The graphical reprsenation of this equality is given on the picture above. The holomorphic germ for the point observable is \[\Phi^{X}_{\rho}=\Psi^{\vec{b}_{1},\vec{b}_{2}}_{\rho}\Big{|}_{r=0}=\frac{1}{2} \sum_{\vec{b},\vec{b}^{\prime}\in B_{X}}|\vec{b}\times\vec{b}^{\prime}|\;q_{ \vec{b}}q_{\vec{b}^{\prime}}\;e^{(\vec{b}+\vec{b}^{\prime},Y)}\chi_{\vec{b}, \vec{b}^{\prime}}(-\vec{\rho}\,)\;. \tag{3.34}\] Our construction for the holomorphic germ gives different holomorphic functions depending on the location of the point \(\rho\) on \(\mathbb{R}^{2}\). However, different holomorphic germs represent the same cohomology class. **Proposition (cone crossing)**: Holomorphic germs (3.34) represent the same class in \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology for all values of \(\rho\). **Proof**: The holomorphic germ (3.34) changes each time the point \(\rho\) crosses cone of the fan \(B_{X}\). Let us consider the change of the germ under crossing of a single vector \(\vec{b}_{0}\). On the picture below we colored in blue all vectors \(\vec{b}_{+}\) such that the cones \({\rm Cone}(\vec{b}_{0},\vec{b}_{+})\) give non-zero contribution to the \(\Phi^{X}_{-\rho}\). We colored in green all vectors \(\vec{b}_{-}\), such that the cones \({\rm Cone}(\vec{b}_{0},\vec{b}_{-})\) contribute to \(\Phi^{X}_{-\rho^{\prime}}\). The difference between two functions is given by \[\Phi^{X}_{-\rho}-\Phi^{X}_{-\rho^{\prime}}=\sum_{\vec{b}_{+}}|\vec{b}_{+}\times \vec{b}_{0}|\;q_{\vec{b}_{+}}q_{\vec{b}_{0}}\;e^{(\vec{b}_{0}+\vec{b}_{+},Y)}- \sum_{\vec{b}_{-}}|\vec{b}_{-}\times\vec{b}_{0}|\;q_{\vec{b}_{0}}q_{\vec{b}_{- }}\;e^{i(\vec{b}_{0}+\vec{b}_{-},Y)}. \tag{3.35}\] All vectors \(\vec{b}_{+}\) are related to \(\vec{b}_{0}\) by a counterclockwise rotation, while \(\vec{b}_{-}\) are related by clockwise rotation hence \[\vec{b}_{+}\times\vec{b}_{0}>0,\;\;\;\vec{b}_{-}\times\vec{b}_{0}<0, \tag{3.36}\] and we can rewrite \[\begin{split}\Phi^{X}_{-\rho}-\Phi^{X}_{-\rho^{\prime}}& =\sum_{\vec{b}_{+}}(\vec{b}_{+}\times\vec{b}_{0})\;q_{\vec{b}_{+}} q_{\vec{b}_{0}}\;e^{(\vec{b}_{0}+\vec{b}_{+},Y)}-\sum_{\vec{b}_{-}}-(\vec{b}_{-} \times\vec{b}_{0})\;q_{\vec{b}_{0}}q_{\vec{b}_{-}}\;e^{i(\vec{b}_{0}+\vec{b}_ {-},Y)}\\ &=\sum_{\vec{b}\in B_{X}}(\vec{b}\times\vec{b}_{0})\;q_{\vec{b}_ {0}}q_{\vec{b}_{0}}\;e^{(\vec{b}+\vec{b}_{0},Y)}.\end{split} \tag{3.37}\] We can express the sum ver \(\vec{b}\) as a derivative of superpotential (3.2), i.e. \[\begin{split}\Phi^{X}_{-\rho}-\Phi^{X}_{-\rho^{\prime}}& =\sum_{\vec{b}\in B_{X}}(\vec{b}\times\vec{b}_{0})\;q_{\vec{b}}q_{ \vec{b}_{0}}\;e^{(\vec{b}+\vec{b}_{0},Y)}=q_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y) }(-i\vec{\partial_{Y}}\vec{W}\times\vec{b}_{0})=\mathbf{Q}_{W}\chi_{\vec{b}_{ 0}},\end{split} \tag{3.38}\] for the state \[\chi_{\vec{b}_{0}}=-iq_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}(\vec{\psi}_{\Phi} \times\vec{b}_{0}). \tag{3.39}\] The state \(\chi_{\vec{b}_{0}}\) is \(\mathbf{G}_{-}\)-closed, i.e. 
\[\mathbf{G}_{-}\chi_{\vec{b}_{0}}=\frac{\partial}{\partial Y_{k}}\frac{\partial }{\partial\psi^{k}_{\Phi}}\left(q_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}(\vec{ \psi}_{\Phi}\times\vec{b}_{0})\right)=iq_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}( \vec{b}_{0}\times\vec{b}_{0})=0. \tag{3.40}\] Hence the change of the holomorphic germ under the crossing of a single ray along \(\vec{b}_{0}\) is exact, i.e \[\Phi^{X}_{-\rho}-\Phi^{X}_{-\rho^{\prime}}=(\mathbf{Q}_{W}+z\mathbf{G}_{-}) \chi_{\vec{b}_{0}}. \tag{3.41}\] For general translation from \(\rho\) to \(\rho^{\prime}\) we may need to perform several cone-crossings. **Proposition (Enumerative expression for germs)**: The holomorphic germ for the point observable can be constructed in the following way: We perform the weighted sum over the intersection points \(p\) of the fan \(B_{X}\) at origin and the fan \(-B_{X}^{-\rho}\) at point \(-\rho\). The weights are determined by the direction vectors of the intersecting rays. \[\Phi_{\rho}^{X}=\frac{1}{2}\sum_{p\;\in\;B_{X}\cap-B_{X}^{-\rho}}q_{\vec{b}_{p }}q_{\vec{b}_{p}}|\vec{b}_{p}\times\vec{b}_{p}^{\prime}|\;e^{i(\vec{b}_{p}+ \vec{b}_{p}^{\prime},Y)}. \tag{3.42}\] **Proof**: The relation (2.13) tells us that the step functions in a sum (3.34) describe the intersection points two rays: one from \(B_{X}\), while the other from \(B_{X}^{-\rho}\). Hence the sum over step functions is the same as the sum over intersection points. The multiplicative factors in front of the indicator functions give us the weight factors in (3.42). Hence the proof is complete. \(\blacksquare\) ### Mirror state for star-observable The A-model state \(\Psi_{R_{l,\rho}}\) for a ray \(R_{l,\rho}\) is constructed from the the Poincare-dual form (2.6) by replacement \(dr\to\psi_{R}\) and \(d\phi\to\psi_{\Phi}\). Namely, \[\Psi_{R_{l,\rho}}=\frac{1}{2\pi}(\vec{l}\times\vec{\psi}_{R})(\vec{l}\times \vec{\psi}_{\Phi})\int_{0}^{\infty}dt\;\delta^{2}(\vec{r}-\vec{\rho}-t\;\vec{l}). \tag{3.43}\] The single divisor dressing of the state \[\Psi_{R_{l,\rho}}^{\vec{b}}=(\vec{l}\times\vec{b})^{2}q_{\vec{b}}\;e^{i(\vec{ b},Y)}\int_{0}^{\infty}dt\int_{0}^{\infty}ds\;\;\delta^{2}(\vec{r}-\vec{\rho}-s\; \vec{l}-t\;\vec{b}). \tag{3.44}\] The integral is the step function on a cone (2.13), hence we can further simplify \[\Psi_{R_{l,\rho}}^{\vec{b}}=|\vec{l}\times\vec{b}|\;q_{\vec{b}}\;e^{i(\vec{b},Y)}\;\chi_{\vec{l},\vec{b}}(\vec{r}-\vec{\rho}\;). \tag{3.45}\] The mirror state for the ray-observable \[\Psi^{X}_{R_{l,\rho}}=\Psi_{R_{l,\rho}}+\sum_{\vec{b}\in B_{X}}\Psi^{\vec{b}}_{R_ {l,\rho}}=\Psi_{R_{l,\rho}}+\sum_{\vec{b}\in B_{X}}|\vec{l}\times\vec{b}|\;q_{ \vec{b}}\;e^{i(\vec{b},Y)}\chi_{\vec{l},\vec{b}}(\vec{r}-\vec{\rho}\;). \tag{3.46}\] The A-model state for the star observable \(S_{\rho}\) is a sum of the states for its rays, i.e. \[\Psi_{S_{\rho}}=\sum_{\vec{l}\in S}\Psi_{R_{l,\rho}}, \tag{3.47}\] while corresponding mirror state \[\Psi^{X}_{S_{\rho}}=\Psi_{S_{\rho}}+\sum_{\vec{l}\in S}\sum_{\vec{b}\in B_{X}} |\vec{l}\times\vec{b}|\;q_{\vec{b}}\;e^{i(\vec{b},Y)}\chi_{\vec{l},\vec{b}}( \vec{r}-\vec{\rho}\;). \tag{3.48}\] The holomorphic germ for the star observable \(S_{\rho}\) is \[\Phi^{X}_{S_{\rho}}=\Psi^{X}_{S_{\rho}}\Big{|}_{\psi=r=0}=\sum_{\vec{l}\in S} \sum_{\vec{b}\in B_{X}}|\vec{l}\times\vec{b}|\;q_{\vec{b}}\;e^{i(\vec{b},Y)} \chi_{\vec{l},\vec{b}}(-\vec{\rho}\;). 
\tag{3.49}\] **Proposition**: Holomorphic germs \(\Phi^{X}_{S_{\rho}}\) in (3.49) represent the same class in \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology for all values of \(\rho\). **Proof**: The holomorphic germ (3.49) changes each time the point \(\rho\) crosses either a ray of the fan \(B_{X}\) or a ray of the star \(S_{\rho}\). Let us consider the change of the germ under crossing of the vector \(\vec{l}_{0}\in S_{\rho}\). On the picture below we colored in blue all vectors \(\vec{b}_{+}\), such that the cones \({\rm Cone}(\vec{l}_{0},\vec{b}_{+})\) give non-zero contribution to \(\Phi^{X}_{S_{-\rho}}\). We colored in green all vectors \(\vec{b}_{-}\), such that the cones \({\rm Cone}(\vec{l}_{0},\vec{b}_{-})\) contribute to \(\Phi^{X}_{S_{-\rho^{\prime}}}\).

(Picture omitted: the vectors \(\vec{b}_{+}\) and \(\vec{b}_{-}\) of the fan relative to the crossed vector \(\vec{l}_{0}\).)

The difference between the two germs is given by \[\Phi^{X}_{S_{-\rho}}-\Phi^{X}_{S_{-\rho^{\prime}}}=\sum_{\vec{b}_{+}}|\vec{l}_{0}\times\vec{b}_{+}|\;q_{\vec{b}_{+}}\;e^{i(\vec{b}_{+},Y)}-\sum_{\vec{b}_{-}}|\vec{l}_{0}\times\vec{b}_{-}|\;q_{\vec{b}_{-}}\;e^{i(\vec{b}_{-},Y)}. \tag{3.50}\] All vectors \(\vec{b}_{+}\) are related to \(\vec{l}_{0}\) by a counterclockwise rotation, while \(\vec{b}_{-}\) by a clockwise one, hence \[\vec{b}_{+}\times\vec{l}_{0}>0,\ \ \vec{b}_{-}\times\vec{l}_{0}<0. \tag{3.51}\] We can simplify the absolute values in the sum \[\begin{split}\Phi^{X}_{S_{-\rho}}-\Phi^{X}_{S_{-\rho^{\prime}}}&=\sum_{\vec{b}_{+}}(\vec{b}_{+}\times\vec{l}_{0})\;q_{\vec{b}_{+}}\;e^{i(\vec{b}_{+},Y)}-\sum_{\vec{b}_{-}}-(\vec{b}_{-}\times\vec{l}_{0})\;q_{\vec{b}_{-}}\;e^{i(\vec{b}_{-},Y)}\\ &=\sum_{\vec{b}\in B_{X}}(\vec{b}\times\vec{l}_{0})\;q_{\vec{b}}\;e^{i(\vec{b},Y)}.\end{split} \tag{3.52}\] We can express the sum over \(\vec{b}\) as a derivative of the superpotential (3.2), i.e. \[\sum_{\vec{b}\in B_{X}}(\vec{b}\times\vec{l}_{0})\;q_{\vec{b}}\;e^{i(\vec{b},Y)}=-i\,\vec{\partial}_{Y}W_{X}\times\vec{l}_{0}={\bf Q}_{W}\chi_{\vec{l}_{0}}, \tag{3.53}\] for the state \[\chi_{\vec{l}_{0}}=-i(\vec{\psi}_{\Phi}\times\vec{l}_{0}). \tag{3.54}\] The state \(\chi_{\vec{l}_{0}}\) is \({\bf G}_{-}\)-closed, i.e. \[{\bf G}_{-}\chi_{\vec{l}_{0}}=\frac{\partial}{\partial Y_{k}}\frac{\partial}{\partial\psi_{\Phi}^{k}}\left(\vec{\psi}_{\Phi}\times\vec{l}_{0}\right)=0, \tag{3.55}\] hence \[\Phi^{X}_{S_{-\rho}}-\Phi^{X}_{S_{-\rho^{\prime}}}=({\bf Q}_{W}+z{\bf G}_{-})\chi_{\vec{l}_{0}}.
\tag{3.56}\] The other possibility is the crossing of some vector \(\vec{b}_{0}\in B_{X}\). The difference between two functions is given by \({\rm Cone}(\vec{l},\vec{b}_{0})\)-contributions. We can repeat the analysis for orientations to simplify the absolute values \[\begin{split}\Phi^{X}_{S_{-\rho}}-\Phi^{X}_{S_{-\rho^{\prime}}}& =\sum_{\vec{l}_{+}}|\vec{l}_{+}\times\vec{b}_{0}|\;q_{\vec{b}_{0} }\;e^{i(\vec{b}_{0},Y)}-\sum_{\vec{l}_{-}}|\vec{l}_{-}\times\vec{b}_{0}|\;q_{ \vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}\\ &=\sum_{\vec{l}_{+}}(\vec{l}_{+}\times\vec{b}_{0})\;q_{\vec{b}_{0 }}\;e^{i(\vec{b}_{0},Y)}-\sum_{\vec{l}_{-}}-(\vec{l}_{-}\times\vec{b}_{0})\;q_{ \vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}\\ &=\;q_{\vec{b}_{0}}\;e^{i(\vec{b}_{0},Y)}\sum_{\vec{l}\in S}( \vec{l}\times\vec{b}_{0})=0.\end{split} \tag{3.57}\] The last equality is due to the balancing condition (2.8) for the star-observable. The general translation of the star \(S\) from \(\rho\) to \(\rho^{\prime}\) can be split into finitely many crossings of a single vector either from \(S\) of \(B_{X}\). Since single crossing preserves the class of holomorphic germ in \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology, so it is true for the finitely-many crossings. \(\blacksquare\) ### Mirror for tropical curve observable We can use a relation (2.10) to replace indicator functions by intersection points of rays to give an enumerative formulation for the holomorphic germ (3.49) for a star observable: The sum over the intersection points of a star \(S_{\rho}\) and the reflection of the fan \(-B_{X}\), weighted cross-product of corresponding vectors and holomorphic function \(q_{\vec{b}}\,e^{i\langle\vec{b},Y\rangle}\). \[\Phi^{X}_{S_{\rho}}=\sum_{p\;\in\;S_{\rho}\cap-B_{X}}|\vec{l}_{p}\times\vec{b} _{p}|\;q_{\vec{b}}\,e^{i\langle\vec{b}_{p},Y\rangle}. \tag{3.58}\] We can generalize the formula for the holomorphic germ from the star (maximally degenerate tropical curve) to an arbitrary tropical curve (possibly of higher genus). Namely, \[\Phi^{X}_{\Gamma}=\sum_{p\;\in\;\Gamma\cap-B_{X}}|\vec{l}_{p}\times\vec{b}_{p }|\;q_{\vec{b}_{p}}\;e^{i\langle\vec{b}_{p},Y\rangle}. \tag{3.59}\] Each intersection point \(p\) is an intersection of a ray along \(\vec{b}_{p}\) from \(B_{X}\) and an edge of a graph, representing tropical curve \(\Gamma\) equipped with integer vector \(\vec{l}_{p}\). **Proposition**: The class of holomorphic germ \(\Phi^{X}_{\Gamma}\) in \(({\bf Q}_{W}+z{\bf G}_{-})\)-cohomology is independent of the moduli of tropical curve. **Proof**: There are two types of events which are can change the holomorphic germ as we change the moduli of the tropical curve: * **Ray of \(B_{X}\) crosses vertex of \(\Gamma\)**: The change in holomorphic germ is controlled by the cones \({\rm Cone}(\vec{b},\vec{l})\) for vector \(\vec{b}\) on the ray of \(B_{X}\) and vectors \(\vec{l}\) connecting to the vertex \(V\) of \(\Gamma\). The analysis of the change is identical to the analysis we did for stars in section 3.5. In particular, the change was proportional to the balancing condition for a star. In the case of tropical curve the difference will be proportional to the balancing condition for the vertex \(V\), i.e. \[\Phi^{X}_{\Gamma}-\Phi^{X}_{\Gamma^{\prime}}=q_{\vec{b}}\:e^{i(\vec{b},Y)}\; \sum_{\vec{l}\in V}\left(\vec{l}\times\vec{b}\right)=0.\] (3.60) The last equality is due to balancing condition which holds for all vertices of tropical curve \(\Gamma\). 
For more details see Mikhalkin [8, 9, 10] * **Edge of \(\Gamma\) crosses vertex of \(B_{X}\)**: The change in holomorphic germ is controlled by the cones \({\rm Cone}(\vec{b},\vec{l})\) for vector \(\vec{l}\) assigned to the edge of \(\Gamma\) and vectors \(\vec{b}\) from the fan of \(X\). The analysis of the change is identical to the analysis we did for stars in section 3.5. In particular, the change of a holomorphic germ is given by \[\Phi^{X}_{\Gamma}-\Phi^{X}_{\Gamma^{\prime}}=({\bf Q}_{W}+z{\bf G}_{-})\chi_{ \vec{l}}\,,\;\;\;\chi_{\vec{l}}\!=-i(\vec{\psi}_{\Phi}\times\vec{l}).\] (3.61) Both events change the holomorphic germ by at most \(({\bf Q}_{W}+z{\bf G}_{-})\)-exact term, hence preserve the cohomology class of a germ. Any change of moduli for a tropical curve is a chain of the finitely many crossing events, hence the proof is complete. \(\blacksquare\) **Example**: On the pictures below we present the intersection of the green tropical curve and the toric fan depicted in blue. Each consecutive picture describe a translation of the toric fan to the left. First, second and third picture describe a crossing for the (vertical) ray of the fan and vertices of the curve. The holomorphic germ does not change since the intersection points lie on the very same rays of the green curve and the cross of corresponding ray-vectors are the same. On the fourth and fifth pictures we observe the crossing of the green vertex and blue edges of the tropical curve. The intersection points on horizontal rays of the tropical fan move from one ray to another. Hence the holomorphic germ changes. ## 4 Divisor relation ### Divisor relation for Gromov-Witten invariants Let us recall the following property of the Gromov-Witten invariants. For a hypersurface \(H\) with Poincare-dual form \(\gamma_{H}\) and classes \(\gamma_{1},..,\gamma_{n}\in H^{*}(X)\) the following relation holds \[\langle\gamma_{H},\gamma_{1},..,\gamma_{n}\rangle_{0,\beta}^{X}=\left(\int_{ \Sigma_{\beta}}\gamma_{H}\right)\cdot\langle\gamma_{1},..,\gamma_{n}\rangle_{ 0,\beta}^{X}, \tag{4.1}\] where \(\beta\) is the degree of curve and \(\Sigma_{\beta}\) is a curve representing class \(\beta\) in the Kahler cone of \(H_{2}(X)\). Let us give an equivalent formulation of the divisor relation for tropical mirror of toric surfaces. The hyperplane \(H\) in 2 dimensions becomes a tropical curve. Moreover, we can turn a tropical curve into a star by shrinking the lengths of internal edges. A tropical curve and the corresponding star are in the same cohomology class on \(X\), hence without loss of generality we will assume that \(H\) is a star. Our expression (2.18) for the intersection number implies that two stars \(S,S^{\prime}\) have positive intersection number \(S\cdot S^{\prime}\). Stars form a cone under the union operation: we can take a union of two stars. Equivalently, we can add the corresponding Poincare-dual forms \[\gamma_{S\cup S^{\prime}}=\gamma_{S}+\gamma_{S^{\prime}}. \tag{4.2}\] Hence, we can use stars to define a Kahler cone for \(X\) and express the intersection number in (4.1) as an intersection number for two stars \[\int_{\Sigma_{\beta}}\gamma_{H}=S_{\beta}\cdot H. 
\tag{4.3}\] The weighted sum over the Gromov-Witten invariants can be written \[\sum_{\beta}\langle\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}\;q^{\beta}=\sum_{S_{\beta}}\langle\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}\;\prod_{\vec{l}\in S_{\beta}}q_{\vec{l}}\;\;, \tag{4.4}\] where \(q_{\vec{l}}\) are the toric moduli associated to the rays \(\vec{l}\) of a star \(S_{\beta}\).

### Tropical divisor relation from LGS

The tropical mirror [5] allows us to express the weighted sum in terms of a correlation function in B-type HTQM \[\sum_{\beta}q^{\beta}\langle\gamma_{S},\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}=\langle\Psi^{X}_{\gamma_{S}},\Psi^{X}_{\gamma_{1}},..,\Psi^{X}_{\gamma_{n}}\rangle_{Q_{W}}. \tag{4.5}\] We can replace the mirror state for a star observable \(\Psi^{X}_{\gamma_{S}}\) by its holomorphic germ \(\Phi_{S}\) and use the recursion relation in B-type HTQM to arrive at \[\langle\Psi^{X}_{\gamma_{S}},\Psi^{X}_{\gamma_{1}},..,\Psi^{X}_{\gamma_{n}}\rangle_{Q_{W}}=\langle\Phi_{S},\Psi^{X}_{\gamma_{1}},..,\Psi^{X}_{\gamma_{n}}\rangle_{Q_{W_{X}}}=\frac{d}{d\epsilon}\Big{|}_{\epsilon=0}\langle\Psi^{\epsilon}_{\gamma_{1}},..,\Psi^{\epsilon}_{\gamma_{n}}\rangle_{Q_{W^{\epsilon}_{X}}}, \tag{4.6}\] for the deformed superpotential \[W^{\epsilon}_{X}=W_{X}+\epsilon\;\Phi^{X}_{S}, \tag{4.7}\] and deformed states \[\Psi^{\epsilon}_{\gamma_{k}}=\Psi^{W^{\epsilon}_{X}}_{\gamma_{k}},\;\;\;k=1,..,n. \tag{4.8}\] The holomorphic germ of the mirror state for the star observable \[\Phi^{X}_{S_{\rho}}=\sum_{\vec{l}\in S}\sum_{\vec{b}\in B_{X}}|\vec{l}\times\vec{b}|\;q_{\vec{b}}\;e^{i\langle\vec{b},Y\rangle}\chi_{\vec{l},\vec{b}}(-\rho) \tag{4.9}\] gives us the deformed superpotential \[W^{\epsilon}_{X}(q_{\vec{b}})=\sum_{\vec{b}\in B_{X}}q_{\vec{b}}\;e^{i\langle\vec{b},Y\rangle}+\epsilon\sum_{\vec{l}\in S}\sum_{\vec{b}\in B_{X}}|\vec{l}\times\vec{b}|\;q_{\vec{b}}\;e^{i\langle\vec{b},Y\rangle}\chi_{\vec{l},\vec{b}}(-\rho)=W_{X}(q^{\epsilon}_{\vec{b}}). \tag{4.10}\] The deformed superpotential has the same \(e^{i\langle\vec{m},Y\rangle}\)-terms, but with \(\epsilon\)-dependent coefficients. Hence, it describes the same toric fan \(B_{X}\), but modified toric moduli. In particular, the moduli before and after the deformation are related by a multiplicative factor \[q^{\epsilon}_{\vec{b}}=q_{\vec{b}}\left(1+\epsilon\sum_{\vec{l}\in S}|\vec{l}\times\vec{b}|\;\chi_{\vec{l},\vec{b}}(-\rho)\right). \tag{4.11}\] The recursion relation (4.6) is an equality between polynomials in \(q\), which implies equality between the coefficients of corresponding monomials. Let us look at the coefficients of the monomial \[q^{\beta}=\prod_{\vec{b}\in S_{\beta}}q_{\vec{b}}. \tag{4.12}\] Note that the monomials \((q^{\epsilon})^{\beta}\) and \(q^{\beta}\) are related by the multiplicative factor \[(q^{\epsilon})^{\beta}=\prod_{\vec{b}\in S_{\beta}}q^{\epsilon}_{\vec{b}}=q^{\beta}\prod_{\vec{b}\in S_{\beta}}\left(1+\epsilon\sum_{\vec{l}\in S}|\vec{l}\times\vec{b}|\;\chi_{\vec{l},\vec{b}}(-\rho)\right). \tag{4.13}\] Hence the coefficients in the polynomial expansion are related by the multiplicative factor as well. The coefficient in the expansion of the first expression in (4.6) is, by construction, the tropical \((n+1)\)-point Gromov-Witten invariant, while the coefficient of the last expression in (4.6) is a multiple of the \(n\)-point Gromov-Witten invariant.
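The rescaling (4.11) is straightforward to evaluate in examples. Below is a minimal sketch (plain Python; the helper names are ours, and we borrow the \(\mathbb{P}^{2}\) fan and Fubini-Study star of section 5.1 as a test case) which computes the factor multiplying each modulus \(q_{\vec{b}}\) for a star observable placed at a point \(\rho\); up to the labeling of the cones it reproduces the deformations listed in (5.9).

```python
def cross(a, b):
    # 2d cross product a x b, cf. (2.3)
    return a[0] * b[1] - a[1] * b[0]

def in_cone(x, a, b):
    """Indicator function (2.11): is x = t1*a + t2*b with t1, t2 >= 0?"""
    d = cross(a, b)
    if d == 0:
        return False            # degenerate cone; its weight |a x b| vanishes in (4.11) anyway
    t1 = cross(x, b) / d        # Cramer's rule for x = t1*a + t2*b
    t2 = cross(a, x) / d
    return t1 >= 0 and t2 >= 0

def deformation_factors(fan, star, rho, eps):
    """Factor multiplying q_b in (4.11) for a star with ray vectors `star` placed at rho."""
    minus_rho = (-rho[0], -rho[1])
    return {b: 1 + eps * sum(abs(cross(l, b)) for l in star if in_cone(minus_rho, l, b))
            for b in fan}

# P^2 data of section 5.1: fan vectors, and the Fubini-Study star (rays along minus the fan vectors)
fan = [(1, 0), (0, 1), (-1, -1)]
fubini_study = [(-1, 0), (0, -1), (1, 1)]
print(deformation_factors(fan, fubini_study, (0.5, 0.2), 0.1))
# {(1, 0): 1.0, (0, 1): 1.0, (-1, -1): 1.1}: only q_3 is rescaled, one of the cases in (5.9)
```

Taking the \(\epsilon\)-derivative at \(\epsilon=0\) of the product of these factors over \(\vec{b}\in S_{\beta}\) gives exactly the sum appearing in the divisor relation below.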
In particular \[\langle\gamma_{S},\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}=\langle\gamma_{1},..,\gamma_{n}\rangle^{X}_{0,\beta}\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}\prod_{\vec{b}\in S_{\beta}}\left(1+\epsilon\sum_{\vec{l}\in S}|\vec{l}\times\vec{b}|\;\chi_{\vec{l},\vec{b}}(-\rho)\right). \tag{4.14}\] The derivative evaluates into the intersection number for stars \[\sum_{\vec{b}\in S_{\beta},\ \vec{l}\in S}|\vec{l}\times\vec{b}|\;\chi_{\vec{l},\vec{b}}(-\rho)=S_{\beta}\cdot S. \tag{4.15}\] Hence, we derived the divisor relation for tropical Gromov-Witten invariants on a toric surface from the recursion formula in B-type HTQM.

## 5 Mirror for selected toric surfaces

### \(\mathbb{P}^{2}\)

The compactification polytope for \(\mathbb{P}^{2}\) and the corresponding fan are presented below.

(Figure and equations (5.1)-(5.5) omitted: the polytope and fan of \(\mathbb{P}^{2}\) with fan generators \(B_{\mathbb{P}^{2}}=\{\vec{b}_{1}=(1,0),\,\vec{b}_{2}=(0,1),\,\vec{b}_{3}=(-1,-1)\}\), the mirror superpotential of the form (3.2), \(W_{\mathbb{P}^{2}}=q_{1}\,e^{iY_{1}}+q_{2}\,e^{iY_{2}}+q_{3}\,e^{-iY_{1}-iY_{2}}\), the tropical Fubini-Study star \(S^{FS}\) and its mirror state (5.4).)

There are three holomorphic germs of the mirror state (5.4), labeled by three cones \[\Phi_{S^{FS}_{-\rho}}=\left\{\begin{array}{ll}q_{3}\;e^{i\langle\vec{b}_{3},Y\rangle}=q_{3}\;e^{-iY_{1}-iY_{2}},&\vec{\rho}\in\text{Cone}(\vec{l}_{1},\vec{l}_{2});\\ q_{1}\;e^{i\langle\vec{b}_{1},Y\rangle}=q_{1}\;e^{iY_{1}},&\vec{\rho}\in\text{Cone}(\vec{l}_{2},\vec{l}_{3});\\ q_{2}\;e^{i\langle\vec{b}_{2},Y\rangle}=q_{2}\;e^{iY_{2}},&\vec{\rho}\in\text{Cone}(\vec{l}_{1},\vec{l}_{3}).\end{array}\right. \tag{5.6}\] The enumerative description of the holomorphic germ for the star observable \(S\) is constructed from the diagrams below (diagrams omitted). All three functions represent the same cohomology class, i.e. \[\Phi_{S^{FS}_{\rho}}=q_{1}\;e^{iY_{1}}=q_{2}\;e^{iY_{2}}=q_{3}\;e^{-iY_{1}-iY_{2}}\in H^{*}(\mathbf{Q}_{\mathbb{P}^{2}}+z\mathbf{G}_{-}). \tag{5.7}\] Indeed, we can perform the cone crossings to determine the exact terms \[\begin{split}&(\mathbf{Q}_{\mathbb{P}^{2}}+z\mathbf{G}_{-})(-i\psi^{1}_{\Phi})=-i\partial_{1}W=q_{1}\;e^{iY_{1}}-q_{3}\;e^{-iY_{1}-iY_{2}},\\ &(\mathbf{Q}_{\mathbb{P}^{2}}+z\mathbf{G}_{-})(-i\psi^{2}_{\Phi})=-i\partial_{2}W=q_{2}\;e^{iY_{2}}-q_{3}\;e^{-iY_{1}-iY_{2}}.\end{split} \tag{5.8}\] The three possible choices of holomorphic germs give us the following deformations of toric moduli \[\begin{split}\text{Cone}(\vec{l}_{1},\vec{l}_{2}):&\;(q_{1},q_{2},q_{3})\rightarrow(q_{1},q_{2},q_{3}(1+\epsilon)),\\ \text{Cone}(\vec{l}_{2},\vec{l}_{3}):&\;(q_{1},q_{2},q_{3})\rightarrow(q_{1}(1+\epsilon),q_{2},q_{3}),\\ \text{Cone}(\vec{l}_{1},\vec{l}_{3}):&\;(q_{1},q_{2},q_{3})\rightarrow(q_{1},q_{2}(1+\epsilon),q_{3}).\end{split} \tag{5.9}\] The three deformations above describe the same Kahler moduli deformation.
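The equivalence (5.7) of the three representatives is easy to verify symbolically. The following is a quick check (a sketch assuming sympy; the variable names are ours) that their pairwise differences are derivatives of the \(\mathbb{P}^{2}\) superpotential, i.e. \(\mathbf{Q}_{W}\)-exact as in (5.8).

```python
import sympy as sp

Y1, Y2, q1, q2, q3 = sp.symbols('Y1 Y2 q1 q2 q3')
I = sp.I

# Mirror superpotential of P^2 of the form (3.2)
W = q1*sp.exp(I*Y1) + q2*sp.exp(I*Y2) + q3*sp.exp(-I*(Y1 + Y2))

# The three holomorphic germs of (5.6), one per cone
germ_12 = q3*sp.exp(-I*(Y1 + Y2))
germ_23 = q1*sp.exp(I*Y1)
germ_13 = q2*sp.exp(I*Y2)

# (5.8): the differences of germs equal -i dW/dY_j, hence are trivial in the Jacobi ring
assert sp.simplify(germ_23 - germ_12 + I*sp.diff(W, Y1)) == 0
assert sp.simplify(germ_13 - germ_12 + I*sp.diff(W, Y2)) == 0
```

The same kind of check applies verbatim to the \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) relations (5.22) and to the blow-up relations (5.35).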
Indeed, the weight factor for the degree-d curves, written in terms of toric moduli using the star representative \[q^{\beta}=(q_{1}q_{2}q_{3})^{d}=(q_{1}q_{2}q_{3})^{d}(1+\epsilon)^{d}=(q_{1}q _{2}q_{3})^{d}(1+d\cdot\epsilon+\mathcal{O}(\epsilon^{2})). \tag{5.10}\] The \({\cal O}(\epsilon)\) terms in the last equality match with intersection of degree star \(S_{\beta}=d\;S^{FS}\) with a star \(S=\epsilon\;S^{FS}\). Indeed, we can evaluate \[S_{\beta}\cdot S=d\cdot\epsilon\;S^{FS}\cdot S^{FS}=d\cdot\epsilon. \tag{5.11}\] In the last equality we used the self-intersection number for the Fubini-Study star \[S^{FS}\cdot S^{FS}=\sum_{\vec{l},\;\vec{l}\;\in S^{FS}}|\vec{l}\times\vec{l}^{ \prime}|\;\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=1. \tag{5.12}\] There are three possible germs for the mirror state of the point observable at point \(-\vec{\rho}\), labeled by three cones \[\Phi_{-\rho}=\left\{\begin{array}{ll}|\vec{b}_{1}\times\vec{b}_{2}|\;q_{1}q _{2}\;e^{i(\vec{b}_{1}+\vec{b}_{2},Y)}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}},&\vec{ \rho}\in\mbox{Cone}(\vec{b}_{1},\vec{b}_{2});\\ |\vec{b}_{1}\times\vec{b}_{3}|\;q_{1}q_{3}\;e^{i(\vec{b}_{1}+\vec{b}_{3},Y)}= q_{1}q_{3}\;e^{-iY_{2}},&\vec{\rho}\in\mbox{Cone}(\vec{b}_{2},\vec{b}_{3});\\ |\vec{b}_{2}\times\vec{b}_{3}|\;q_{2}q_{3}\;e^{i(\vec{b}_{2}+\vec{b}_{3},Y)}= q_{2}q_{3}\;e^{-iY_{1}},&\vec{\rho}\in\mbox{Cone}(\vec{b}_{1},\vec{b}_{3}). \end{array}\right. \tag{5.13}\] We can perform the cone crossing to derive the relations \[\begin{split}&({\bf Q}_{\mathbb{P}^{2}}+z{\bf G}_{-})(-iq_{2}e^ {iY_{2}}\psi^{1}_{\Phi})=-iq_{2}e^{iY_{2}}\partial_{1}W=q_{1}q_{2}e^{iY_{1}+iY _{2}}-q_{2}q_{3}e^{-iY_{1}},\\ &({\bf Q}_{\mathbb{P}^{2}}+z{\bf G}_{-})(-iq_{1}e^{iY_{1}}\psi^{2}_{ \Phi})=-iq_{1}e^{iY_{1}}\partial_{2}W=q_{1}q_{2}e^{iY_{1}+iY_{2}}-q_{1}q_{3}e^ {-iY_{2}}.\end{split} \tag{5.14}\] which imply that all three holomorphic germs are in the same class \[q_{2}q_{3}\;e^{-iY_{1}}=q_{1}q_{3}\;e^{-iY_{2}}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}} \in H^{*}({\bf Q}_{\mathbb{P}^{2}}+z{\bf G}_{-}). \tag{5.15}\] Using holomorphic germs for trivial, point and hyperplane observables we can describe the tropical good section \[\mbox{Im}\;S^{trop}_{\mathbb{P}^{2}}=\mathbb{C}\langle 1,\Phi_{S^{FS}},\Phi_{ \rho}\rangle=\mathbb{C}\langle 1,q_{3}\;e^{-iY_{1}-iY_{2}},q_{1}q_{2}\;e^{iY_{1}+iY _{2}}\rangle. \tag{5.16}\] ### \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) The compactifying polyhedron and the fan for \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) are presented below The generators of the fan \[B_{\mathbb{P}^{1}\times\mathbb{P}^{1}}=\{\vec{b}_{1}=(1,0),\vec{b}_{2}=(0,1), \vec{b}_{3}=(-1,0),\vec{b}_{4}=(0,-1)\} \tag{5.17}\] give us the mirror superpotential (3.2) of the form \[W^{\mathbb{P}^{1}\times\mathbb{P}^{1}}=q_{1}\;e^{iY_{1}}+q_{2}\;e^{iY_{2}}+q_{ 3}\;e^{-iY_{1}}+q_{4}\;e^{-iY_{2}}. \tag{5.18}\] The \(H^{2}_{dR}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) is 2-dimensional and we can use the Fubini-Study forms on \(\mathbb{P}^{1}\)-factors as a basis. The tropical limit of the Fubini-Study forms is the pair of 2-ray stars: horizontal labeled by \(h\), depicted in blue and vertical, labeled by \(v\), depicted in green on the picture above. The corresponding A-model states \[\begin{split}\Psi_{v}&=\delta(r^{1})\;\psi^{1}_{ \Phi}\psi^{1}_{R},\\ \Psi_{h}&=\delta(r^{2})\;\psi^{2}_{\Phi}\psi^{2}_{R}. 
\end{split} \tag{5.19}\] The holomorphic germs are determined from the four intersections depicted below There is a single intersection point in all four cases, so corresponding holomorphic contain single term. The straightforward evaluation gives us \[\Phi_{v}=\left\{\begin{array}{ll}q_{1}\;e^{i\langle\vec{b}_{1},Y\rangle}=q_ {1}\;e^{iY_{1}},&\rho^{1}<0;\\ q_{3}\;e^{i\langle\vec{b}_{3},Y\rangle}=q_{3}\;e^{-iY_{1}},&\rho^{1}>0;\end{array}\right. \tag{5.20}\] and \[\Phi_{h}=\left\{\begin{array}{ll}q_{2}\;e^{i\langle\widetilde{b}_{2},Y\rangle}=q _{2}\;e^{iY_{2}},,&\rho^{2}<0;\\ q_{4}\;e^{i\langle\widetilde{b}_{4},Y\rangle}=q_{4}\;e^{-iY_{2}},&\rho^{2}>0. \end{array}\right. \tag{5.21}\] The cone crossing procedure gives us the relations between pairs of germs \[\begin{split}&(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z \mathbf{G}_{-})(-i\psi_{\Phi}^{2})=-i\partial_{Y_{2}}W^{\mathbb{P}^{1}\times \mathbb{P}^{1}}=q_{2}\;e^{iY_{2}}-q_{4}\;e^{-iY_{2}},\\ &(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-})(-i \psi_{\Phi}^{1})=-i\partial_{Y_{1}}W^{\mathbb{P}^{1}\times\mathbb{P}^{1}}=q_{ 1}\;e^{iY_{1}}-q_{3}\;e^{-iY_{1}}.\end{split} \tag{5.22}\] Indeed we can see that the holomorphic germs belong to the two classes \[\begin{split}&\Phi_{v}=q_{1}\;e^{iY_{1}}=q_{2}\;e^{-iY_{1}}\in H ^{*}(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-}),\\ &\Phi_{h}=q_{3}\;e^{iY_{2}}=q_{4}\;e^{-iY_{2}}\in H^{*}(\mathbf{ Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-}).\end{split} \tag{5.23}\] We can deform the mirror superpotential by the holomorphic herms of the vertical and horizontal stars, i.e. \[W_{\mathbb{P}^{1}\times\mathbb{P}^{1}}\to W_{\mathbb{P}^{1}\times\mathbb{P}^ {1}}+\epsilon_{v}\Phi_{v}+\epsilon_{h}\Phi_{h}+\mathcal{O}(\epsilon^{2}). \tag{5.24}\] The choice holomorphic germs gives us four different deformations of toric moduli \[\begin{split}&\rho^{1}<0,\rho^{2}<0\;:\;(q_{1},q_{2},q_{3},q_{4}) \rightarrow((1+\epsilon_{v})q_{1},(1+\epsilon_{h})q_{2},q_{3},q_{4}),\\ &\rho^{1}>0,\rho^{2}<0\;:\;(q_{1},q_{2},q_{3},q_{4})\rightarrow( q_{1},(1+\epsilon_{h})q_{2},(1+\epsilon_{v})q_{3},q_{4}),\\ &\rho^{1}<0,\rho^{2}>0\;:\;(q_{1},q_{2},q_{3},q_{4})\rightarrow( (1+\epsilon_{v})q_{1},q_{2},q_{3},(1+\epsilon_{h})q_{4}),\\ &\rho^{1}>0,\rho^{2}>0\;:\;(q_{1},q_{2},q_{3},q_{4})\rightarrow( q_{1},q_{2},(1+\epsilon_{v})q_{3},(1+\epsilon_{h})q_{4}).\end{split} \tag{5.25}\] The four deformations above describe the same Kahler moduli deformation. The degree vector \(\beta\) is two-dimensional and we will parametrize it by \(\beta=(d_{v},d_{h})\). The star basis representative for the degree, i.e. \(S_{\beta}=d_{v}H_{v}+d_{h}H_{h}\). The weight factor evaluates into \[\begin{split} q^{\beta}&=(q_{1}q_{3})^{d_{h}}(q_{ 2}q_{4})^{d_{v}}=(q_{1}q_{3}(1+\epsilon_{v}))^{d_{h}}(q_{2}q_{4}(1+\epsilon_{h }))^{d_{v}}\\ &=(q_{1}q_{3})^{d_{h}}(q_{2}q_{4})^{d_{v}}(1+d_{h}\epsilon_{v}+d_ {v}\epsilon_{h}+\mathcal{O}(\epsilon^{2})).\end{split} \tag{5.26}\] The \(\mathcal{O}(\epsilon)\) terms in the last equality match with intersection of degree star \(S_{\beta}\) with a star \(S=\epsilon_{v}S_{v}+\epsilon_{h}S_{h}\). Indeed we can evaluate \[\beta\cdot S=d_{v}\epsilon_{v}\;S_{v}\cdot S_{v}+(d_{h}\epsilon_{v}+d_{v} \epsilon_{h})\;S_{h}\cdot S_{v}+d_{h}\epsilon_{h}\;S_{h}\cdot S_{h}=d_{h} \epsilon_{v}+d_{v}\epsilon_{h}. 
\tag{5.27}\] The intersection numbers for the star observables \(S_{v}\) and \(S_{h}\) are \[\begin{split} S_{v}\cdot S_{v}&=\sum_{\vec{l},\vec{l} \in H_{v}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho) =0,\\ S_{h}\cdot S_{h}&=\sum_{\vec{l},\vec{l}^{\prime}\in H _{h}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=0, \\ S_{v}\cdot S_{h}&=\sum_{\vec{l}\in H_{v},\ \vec{l}\in H _{h}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}(\rho)=1. \end{split} \tag{5.28}\] The holomorphic germ for the point observable \[\begin{split}\Phi_{\rho}&=q_{1}q_{2}\ \chi_{\vec{b}_{1},\vec{b}_{2}}(-\vec{\rho}\ )\ e^{iY_{1}+iY_{2}}+q_{1}q_{4}\ \chi_{\vec{b}_{1},\vec{b}_{4}}(-\vec{\rho}\ )\ e^{iY_{1}-iY_{2}}\\ &\qquad+q_{2}q_{3}\ \chi_{\vec{b}_{3},\vec{b}_{2}}(-\vec{\rho}\ )\ e^{-iY_{1}+iY_{2}}+q_{3}q_{4}\ \chi_{\vec{b}_{2},\vec{b}_{4}}(-\vec{\rho}\ )\ e^{-iY_{1}-iY_{2}}.\end{split} \tag{5.29}\] Let us provide the exact term for pairs \[\begin{split}(\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z \mathbf{G}_{-})(-ie^{\pm iY_{1}}\psi_{\Phi}^{2})&=-ie^{\pm iY_{1 }}\partial_{Y_{2}}W_{\mathbb{P}^{1}\times\mathbb{P}^{1}}=q_{2}e^{\pm iY_{1}+iY _{2}}-q_{4}e^{\pm iY_{1}-iY_{2}},\\ (\mathbf{Q}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-})( -ie^{\pm iY_{2}}\psi_{\Phi}^{1})&=-ie^{\pm iY_{2}}\partial_{Y_{1 }}W_{\mathbb{P}^{1}\times\mathbb{P}^{1}}=q_{1}e^{iY_{1}\pm iY_{2}}-q_{3}e^{-iY _{1}\pm iY_{2}}.\end{split}\] Using lemma we conclude that holomorphic germ can be written in the form \[\Phi_{\rho}=q_{1}q_{2}\ e^{iY_{1}+iY_{2}}=q_{2}q_{3}\ e^{-iY_{1}+iY_{2}}=q_{1} q_{4}\ e^{iY_{1}-iY_{2}}=q_{3}q_{4}\ e^{-iY_{1}-iY_{2}}\in H^{*}(\mathbf{Q}_{ \mathbb{P}^{1}\times\mathbb{P}^{1}}+z\mathbf{G}_{-}).\] Tropical good section \[\text{Im}\ S_{\mathbb{P}^{1}\times\mathbb{P}^{1}}^{trop}=\mathbb{C}\langle 1, \Phi_{v},\Phi_{h},\Phi_{\rho}\rangle=\mathbb{C}\langle 1,q_{1}\ e^{iY_{1}},q_{2}\ e^{iY_{2}},q_{1}q_{2 }\ e^{iY_{1}+iY_{2}}\rangle. \tag{5.30}\] ### Blow up of a point on \(\mathbb{P}^{2}\) We can depict the blow-up of a point on \(\mathbb{P}^{2}\) by cutting a corner on compactifying polyhedron for \(\mathbb{P}^{2}\). Similarly the corresponding fan is a refinement of the fan for \(\mathbb{P}^{2}\). The compactifying divisors for \(\widehat{\mathbb{P}^{2}}\) are \[B_{\widehat{\mathbb{P}^{2}}}=\{\vec{b}_{1}=(1,0),\vec{b}_{2}=(0,1),\vec{b}_{3}=(- 1,-1),\vec{b}_{4}=(1,1)\}. \tag{5.31}\] The mirror superpotential (3.2) is \[W_{\widehat{\mathbb{P}^{2}}}=q_{1}\ e^{iY_{1}}+q_{2}\ e^{iY_{2}}+q_{3}\ e^{-iY_{1}-iY_{2}}+q_{4}\ e^{iY_{1}+iY_{2}}. \tag{5.32}\] The size of \(\mathbb{P}^{1}\) at blow up point is controlled by \(q_{4}\). The limit \(q_{4}\to 0\) describes a blow down of \(\widehat{\mathbb{P}^{2}}\) to \(\mathbb{P}^{2}\), while the superpotential in the limit becomes the mirror superpotential for \(\mathbb{P}^{2}\). Second Betti number \(\dim H^{2}(\widehat{\mathbb{P}^{2}})=2\), hence there are two independent hypersurface observables. We can choose a basis consisting of Fubini-Study star \(S^{FS}\) on \(\mathbb{P}^{2}\) (depicted in blue) and a two ray star \(S^{bl}\), related to the blow up, depicted in green. The holomorphic germ for \(S^{bl}\)-observable \[\Phi_{S^{bl}_{-\rho}}=\left\{\begin{array}{ll}q_{1}\ e^{i\langle\vec{b}_{1},Y\rangle}=q_{1}e^{iY_{1}},&\rho^{2}>\rho^{1}\\ q_{2}\ e^{i\langle\vec{b}_{2},Y\rangle}=q_{2}e^{iY_{2}},&\rho^{2}<\rho^{1} \end{array}\right. 
\tag{5.34}\] The cone crossing relations are \[\begin{split}(\mathbf{Q}_{\widehat{\mathbb{P}^{2}}}+z\mathbf{G}_ {-})(-i\psi^{1})&=-i\partial_{1}W_{\widehat{\mathbb{P}^{2}}}=q_{1}e^{iY_{1} }-q_{3}e^{-iY_{1}-iY_{2}}+q_{4}e^{iY_{1}+iY_{2}}\\ (\mathbf{Q}_{\widehat{\mathbb{P}^{2}}}+z\mathbf{G}_{-})(-i\psi^{2})&=-i \partial_{2}W_{\widehat{\mathbb{P}^{2}}}=q_{2}e^{iY_{2}}-q_{3}e^{-iY_{1}-iY_{ 2}}+q_{4}e^{iY_{1}+iY_{2}}\end{split} \tag{5.35}\] Hence we can express the holomorphic germs for the line observables in the form \[\begin{split}\Phi_{S^{FS}}=& q_{3}e^{-iY_{1}-iY_{2}}=q_{1}e ^{iY_{1}}+q_{4}e^{iY_{1}+iY_{2}}=q_{2}e^{iY_{2}}+q_{4}e^{iY_{1}+iY_{2}},\\ \Phi_{S^{bl}}=& q_{1}e^{iY_{1}}=q_{2}e^{iY_{2}}.\end{split} \tag{5.36}\] We can deform the mirror superpotential by the holomorphic herms of the vertical and horizontal stars, i.e. \[W_{\overline{\mathbb{P}^{2}}}\to W_{\overline{\mathbb{P}^{2}}}+\epsilon\; \Phi_{S^{FS}}+\epsilon_{bl}\;\Phi_{S^{bl}}+\mathcal{O}(\epsilon^{2}). \tag{5.37}\] Hence we have six possible deformations of toric moduli depending on the choice of holomorphic germs \[\begin{split}(q_{1},q_{2},q_{3},q_{4})&\to(q_{1}(1 +\epsilon_{bl}),q_{2},q_{3}(1+\epsilon),q_{4}),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1}(1+\epsilon_{bl} )(1+\epsilon),q_{2},q_{3},q_{4}(1+\epsilon)),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1}(1+\epsilon_{bl} ),q_{2}(1+\epsilon),q_{3},q_{4}(1+\epsilon)),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1},q_{2}(1+\epsilon _{bl}),q_{3}(1+\epsilon),q_{4}),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1},q_{2}(1+\epsilon _{bl})(1+\epsilon),q_{3},q_{4}(1+\epsilon)),\\ (q_{1},q_{2},q_{3},q_{4})&\to(q_{1}(1+\epsilon),q_{2},q_{3}(1+\epsilon_{bl}),q_{4}(1+\epsilon)).\end{split} \tag{5.38}\] The four deformations above describe the same Kahler moduli deformation. The degree vector \(\beta\) is two-dimensional and we will parametrize it by \(\beta=(d,d_{bl})\). The star basis representative for the degree, i.e. \(S_{\beta}=d\;S^{FS}+d_{bl}\;S^{bl}\). The weight factor evaluates into \[\begin{split} q^{\beta}&=(q_{1}q_{2}q_{3})^{d}(q_{ 3}q_{4})^{d_{bl}}=(q_{1}q_{2}q_{3}(1+\epsilon)(1+\epsilon_{bl}))^{d}(q_{3}q_{4 }(1+\epsilon))^{d_{bl}}\\ &=(q_{1}q_{2}q_{3})^{d}(q_{3}q_{4})^{d_{bl}}(1+d\cdot\epsilon+d_ {bl}\cdot\epsilon+d\cdot\epsilon_{bl}+\mathcal{O}(\epsilon^{2}))\end{split} \tag{5.39}\] The \(\mathcal{O}(\epsilon)\) terms in the last equality match with intersection of degree star \(S_{\beta}\) with a star observable \(S=\epsilon\;S^{FS}+\epsilon_{bl}\;S^{bl}\). Indeed we can evaluate \[\begin{split} S_{\beta}\cdot S&=d\cdot\epsilon\; S^{FS}\cdot S^{FS}+(d\cdot\epsilon_{bl}+d_{bl}\cdot\epsilon)S^{FS}\cdot S^{bl}+d_{bl} \cdot\epsilon_{bl}\;S^{bl}\cdot S^{bl}\\ &=d\cdot\epsilon+d_{bl}\cdot\epsilon+d\cdot\epsilon_{bl}.\end{split} \tag{5.40}\] The intersection numbers for the star observables \(S_{v}\) and \(S_{h}\) are \[\begin{split} S^{bl}\cdot S^{bl}&=\sum_{\vec{l},\vec{l }^{\prime}\in S^{bl}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{p}}( \rho)=0,\\ S^{FS}\cdot S^{FS}&=\sum_{\vec{l},\vec{l}^{\prime} \in S^{FS}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{l}^{\prime}}( \rho)=1,\\ S^{FS}\cdot S^{bl}&=\sum_{\vec{l}\in S^{FS},\ \vec{l}^{\prime} \in S^{bl}}|\vec{l}\times\vec{l}^{\prime}|\chi_{\vec{l},-\vec{p}}(\rho)=1. 
\end{split} \tag{5.41}\] The holomorphic germ for the point observable at point \(\vec{\rho}\), labeled by four cones \[\Phi_{\rho}=\left\{\begin{array}{ll}q_{1}q_{2}\ e^{iY_{1}+iY_{2}}+q_{2}q_{4} \ e^{iY_{1}+2iY_{2}},&\vec{\rho}\in\text{Cone}(\vec{b}_{2},\vec{b}_{4});\\ q_{1}q_{2}\ e^{iY_{1}+iY_{2}}+q_{1}q_{4}\ e^{2iY_{1}+iY_{2}},&\vec{\rho}\in \text{Cone}(\vec{b}_{1},\vec{b}_{4});\\ q_{1}q_{3}\ e^{-iY_{2}},&\vec{\rho}\in\text{Cone}(\vec{b}_{1},\vec{b}_{3}); \\ q_{2}q_{3}\ e^{-iY_{1}},&\vec{\rho}\in\text{Cone}(\vec{b}_{2},\vec{b}_{3}). \end{array}\right. \tag{5.42}\] The tropical good section \[\text{Im}\ S^{trop}_{\widehat{\mathbb{P}}^{2}}=\mathbb{C}\langle 1,\Phi_{ \rho},\Phi_{S^{FS}},\Phi_{S^{bl}}\rangle=\mathbb{C}\langle 1,q_{2}q_{3}\ e^{-iY_{1} },q_{3}\ e^{-iY_{1}-iY_{2}},q_{1}\ e^{iY_{1}}\rangle. \tag{5.43}\] ## 6 Recursion for point observables The holomorphic germs for hypersurface observables and point observables are quite similar. Both are linear combinations finitely-many factors \(e^{i\langle\vec{m},Y\rangle}\) with minor difference: In case of line observables vectors \(\vec{m}=\vec{b}\) belong to the fan \(B_{X}\) of \(X\), while in case of point observable \(\vec{m}=\vec{b}+\vec{b}^{\prime}\) is the sum of two vectors \(\vec{b},\vec{b}^{\prime}\in B_{X}\) from the fan of \(X\). The deformation of the superpotential by such holomorphic germs \[W_{X}\to W_{X}^{\epsilon}=W_{X}+\epsilon\Phi_{P}=\sum_{\vec{b}\in B_{X}}q_{ \vec{b}}\ e^{i\langle\vec{b},Y\rangle}+\sum_{\vec{b},\vec{b}^{\prime}\in B_{X} }c_{\vec{b}\vec{b}^{\prime}}\ e^{i\langle\vec{b}+\vec{b}^{\prime},Y\rangle} \tag{6.1}\] in some cases can be thought as a superpotential for different toric variety \(X_{\epsilon}\), defined by the extension of the fan \(B_{X}\) by vectors \(\vec{b}+\vec{b}^{\prime}\) for each non-zero \(c_{\vec{b}\vec{b}^{\prime}}\). An extension of the fan by sum of two vectors in some cases describe a blow up of a point in a toric variety. The simplest example of such phenomenon is the blow up of a point on \(\mathbb{P}^{2}\). Indeed the fan for \(\widehat{\mathbb{P}^{2}}\) is an extension of the fan for \(\mathbb{P}^{2}\) by adding a vector \(\vec{b}_{4}=\vec{b}_{1}+\vec{b}_{2}\) as shown on picture below In the rest of this section we provide an explicit example of the superpotential deformation by the holomorphic germ for the point observable and discuss a potential implications for the tropical Gromov-Witten invariants. ### Recursion for point observables on \(\mathbb{P}^{2}\) Let us consider a 4-point tropical Gromov-Witten invariant: the number of tropical curves (of degree-1 and genus-0) passing through 2 distinct points \(P_{1},P_{2}\) and two hypersurfaces \(H_{3},H_{4}\) in \(\mathbb{P}^{2}\). We can use the divisor relation to express the 4-point Gromov-Witten invariant via 3- and 2- point invariants \[\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d=1} ^{\mathbb{P}^{2}}=1\cdot\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}} \rangle_{d=1}^{\mathbb{P}^{2}}=1\cdot\langle\gamma_{P_{1}},\gamma_{P_{2}} \rangle_{d=1}^{\mathbb{P}^{2}}=1. \tag{6.2}\] Below we provide the enumerative proof of the relation (6.2). The tropical hypersufaces \(H_{3}\) and \(H_{4}\) are 3-valent stars depicted in green and black. From the picture we observe that both stars always intersect the tropical curve of degree-1, depicted in blue, at a single point. 
Hence the 4-point invariant is determined by the tropical curves passing through points \(P_{1},P_{2}\) and there is only one such curve. We can express the 4-point Gromov-Witten invariant via B-model correlation function \[\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{ \mathbb{P}^{2}}=q_{1}q_{2}q_{3}\cdot\langle\gamma_{P_{1}},\gamma_{P_{2}}, \gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d=1}^{\mathbb{P}^{2}}=\langle\Psi_{1}, \Psi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}} \tag{6.3}\] of four mirror states \[\Psi_{1}=\Psi_{P_{1}}^{W},\ \ \Psi_{2}=\Psi_{P_{2}}^{W},\ \ \Psi_{3}=\Psi_{H_{3}}^{W},\ \ \Psi_{4}=\Psi_{H_{4}}^{W}. \tag{6.4}\] We can use the invariance of B-model correlation functions discussed in [5] \[\langle\Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4}+Q_{W}\chi\rangle_{Q_{W}}=\langle \Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}} \tag{6.5}\] to replace the mirror state \(\Psi_{2}\) by its holomorphic germ. In particular, let us choose the holomorphic germ in the form \[\Phi_{2}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}}, \tag{6.6}\] to rewrite the B-model correlation function in the following form \[\langle\Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}}=\langle\Psi_{1}, \Phi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}}. \tag{6.7}\] We can use the recursion formula from [5] to express the 4-point function as a derivative of 3-point function \[\langle\Psi_{1},\Phi_{2},\Psi_{3},\Psi_{4}\rangle_{Q_{W}}=\frac{d}{d\epsilon} \Big{|}_{\epsilon=0}\langle\Psi_{1}^{\epsilon},\Psi_{3}^{\epsilon},\Psi_{4}^{ \epsilon}\rangle_{Q_{W^{\epsilon}}} \tag{6.8}\] in B-model with deformed superpotential \[W_{\mathbb{P}^{2}}^{\epsilon}=W_{\mathbb{P}^{2}}+\epsilon\;\Phi_{2}=q_{1}\;e^ {iY_{1}}+q_{2}\;e^{iY_{2}}+q_{3}\;e^{-iY_{1}-iY_{2}}+\epsilon q_{1}q_{2}\;e^{ iY_{1}+iY_{2}}=W_{X_{\epsilon}}. \tag{6.9}\] The deformed superpotential is the mirror superpotential for the different toric manifold \(X_{\epsilon}=\widehat{\mathbb{P}^{2}}\). The polytopes for \(\mathbb{P}^{2}\) and \(X_{\epsilon}\) are depicted below We showed in [5] that the deformed mirror states in correlation function (6.8) are mirror states in deformed theory, i.e. \[\Psi^{\epsilon}_{\alpha}=\Psi^{W}_{\alpha}+2\pi K_{W}G_{-}\mu_{2}(\Psi^{W}_{ \alpha},\epsilon\;\Phi_{2})=\Psi^{W^{\epsilon}}_{\alpha}. \tag{6.10}\] Hence we can represent the 3-point function \(\langle\Psi^{\epsilon}_{1},\Psi^{\epsilon}_{3},\Psi^{\epsilon}_{4}\rangle_{Q_{ W_{\epsilon}}}\) as the sum of A-model amplitudes for three observables \(P_{1},H_{3},H_{4}\) in the HTQM for \(X^{\epsilon}=\widehat{\mathbb{P}^{2}}\) and then convert them into tropical Gromov-Witten invariants on \(X^{\epsilon}=\widehat{\mathbb{P}^{2}}\). Namely \[\langle\Psi^{\epsilon}_{P_{1}},\Psi^{\epsilon}_{H_{3}},\Psi^{\epsilon}_{H_{4} }\rangle_{Q_{W^{\epsilon}}}=\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4 }}\rangle^{\widehat{\mathbb{P}^{2}}}. \tag{6.11}\] \(X_{\epsilon}=\widehat{\mathbb{P}^{2}}\) is a toric space, hence the Gromov-Witten invariant is a polynomial in toric moduli \(q_{1},q_{2},q_{3}\) and new module \(q_{4}(\epsilon)=\epsilon q_{1}q_{2}\). The \(\mathbb{P}^{2}\)-invariants are polynomials in Kahler module \(q=q_{1}q_{2}q_{3}\). The \(\widehat{\mathbb{P}^{2}}\)-invariants are polynomials in Kahler moduli \(q,q_{bl}\) where \(q_{bl}=q_{4}q_{3}\) in additional Kahler module on \(\widehat{\mathbb{P}^{2}}\). 
Hence we can expand \[\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P }^{2}}}=\sum_{d,d_{\epsilon}=0}^{\infty}q^{d}q_{bl}^{d_{bl}}\langle\gamma_{P_{ 1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P}^{2}}}_{d,d_{bl }} \tag{6.12}\] The product \(q_{bl}=q_{4}q_{3}\) is a Kahler module of \(\widehat{\mathbb{P}^{2}}\) associated to the size of the blow-up \(\mathbb{P}^{1}\). The derivative at \(\epsilon=0\) picks up monomials, linear in \(\epsilon\), hence linear in \(q_{4}(\epsilon)\) and Kahler module \(q_{bl}\), i.e. \[\frac{d}{d\epsilon}\bigg{|}_{\epsilon=0}\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P}^{2}}}=\frac{d}{d\epsilon}q_{bl} \cdot\sum_{d=0}^{\infty}q^{d}\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{ 4}}\rangle^{\widehat{\mathbb{P}^{2}}}_{d,d_{bl}=1}=q\langle\gamma_{P_{1}}, \gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{P}^{2}}}_{0,d_{bl}=1}, \tag{6.13}\] where we used \[\frac{d}{d\epsilon}q_{bl}=q_{3}\frac{d}{d\epsilon}q_{4}=q_{1}q_{2}q_{3}=q \tag{6.14}\] and the degree selection argument. The dimension of moduli space of tropical curves of bi-degree \((d,d_{bl})\) on \(\widehat{\mathbb{P}^{2}}\) with 3 marked points should be equal to the total degree of three observables, which implies that \[3d+2d_{bl}+2=\sum_{\alpha=1}^{3}\deg\gamma_{\alpha}=4. \tag{6.15}\] Hence the Gromov-Witten invariant \(\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{\widehat{\mathbb{ P}^{2}}}\) is non-zero only for bi-degree \(0,d_{bl}=1\). The result of this procedure relates the 4pt degree-1 Gromov-Witten invariant on \(\mathbb{P}^{2}\) to 3pt invariant on \(\widehat{\mathbb{P}^{2}}\) of bi-degree \((0,1)\), i.e \[\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d=1}^ {\mathbb{P}^{2}}=\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d =0,d_{bl}=1}^{\widehat{\mathbb{P}^{2}}}. \tag{6.16}\] ### Enumerative description of recursion We can give an enumerative interpretation of the relation (6.16) as a _cutting corners procedure_ for tropical Gromov-Witten invariants. In case point \(P_{2}\) is close to the corner, formed by hyperplanes supported on \(\vec{b}_{1}\) and \(\vec{b}_{2}\) the tropical Gromov-Witten invariant \(\langle\gamma_{P_{1}},\gamma_{P_{2}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle^{ \mathbb{P}^{2}}\) is supported by the diagram below. Let us cut the corner together with the part of tropical curve and marked point \(P_{2}\). The result of the cutting is the polyhedron for \(\widehat{\mathbb{P}^{2}}\) with a tropical curve and remaining three observables: point \(P_{1}\) and two hyperplanes \(H_{1}\) and \(H_{2}\). The remaining tropical curve is a curve of bi-degree \((d,d_{bl})=(0,1)\). The moduli space of such curve is \(\mathbb{R}^{1}\times S^{1}\): the radial part corresponds to the parallel translation of the curve as shown on a picture below The three remaining observables completely fix the moduli hence there exists a tropical Gromov-Witten invariant on \(\widehat{\mathbb{P}^{2}}\), which counts the number of degree-\((0,1)\) tropical curves \(\Gamma\) which pass through the point \(P_{1}\) and two hyperplanes \(H_{3}\) and \(H_{4}\). Both hyperplanes \(H_{3}\) and \(H_{4}\) have degree-\((1,0)\) and intersect all curves \(\Gamma\) with intersection numbers \(H_{3}\cdot\Gamma=H_{4}\cdot\Gamma=1\) hence we can reduce the original problem to counting curves \(\Gamma\) through point \(P_{1}\). 
There is a unique such tropical curve, hence \[\langle\gamma_{P_{1}},\gamma_{H_{3}},\gamma_{H_{4}}\rangle_{d=0,d_{bl}=1}^{ \widehat{\mathbb{P}^{2}}}=1. \tag{6.17}\] We can apply the cutting corner procedure to other tropical Gromov-Witten invariants. For example we can consider a degree-2 curves through 5 distinct points on \(\mathbb{P}^{2}\). Among the 4 distinct tropical curves of degree-2 let us consider the one presented below. We performed a single corner cut to reduce the number of marked points to 4 and changed the target to \(\widehat{\mathbb{P}^{2}}\). We can continue the procedure and cut one more corner to reduce the number of points to 3. There are two possible cuts (up to an isomorphism) that we can perform: * **two points are far away**: The resulting polytope describes the toric variety \(X_{\epsilon_{1}\epsilon_{2}}\) which is a blow up of \(\mathbb{P}^{2}\) at two points. In particular we have a network of blow down maps \(\pi_{1},\pi_{2}:X_{\epsilon_{1}\epsilon_{2}}\to\widehat{\mathbb{P}^{2}}\) which can be applied in any order. The cycles which are pre-images of blow up points do not intersect. * **two points are nearby**: The resulting polyhedron describes the toric variety \(X_{\epsilon_{1}\epsilon_{2}}\) which is a bi-rational transformation of \(\mathbb{P}^{2}\) obtained by two consecutive blow-ups. We have a single chain of blow down maps. ### Double deformation and contact terms Let us give a detailed description of the geometry for two cuts of \(\widehat{\mathbb{P}^{2}}\). We can use the polytopes to construct the corresponding fans However in order to describe the toric moduli \(q_{1},...,q_{5}\) in terms of deformation parameters \(\epsilon_{1},\epsilon_{2}\) and toric moduli of the base \(\mathbb{P}^{2}\) we need to construct the corresponding mirror superpotentials. Both superpotentials are \(\epsilon_{2}\)-deformation of the superpotential (6.9), i.e. \[W^{\epsilon_{1}\epsilon_{2}}_{\mathbb{P}^{2}}=W^{\epsilon_{1}}_{\mathbb{P}^{2} }+\epsilon_{2}\;\Phi_{4}^{\epsilon_{1}}. \tag{6.18}\] where \(\Phi_{4}^{\epsilon_{1}}\) is the holomorphic germ on \(\widehat{\mathbb{P}^{2}}\). We can express the holomorphic germs on \(\widehat{\mathbb{P}^{2}}\) using the holomorphic germs on \(\mathbb{P}^{2}\) then the double deformed superpotential takes the form \[W^{\epsilon_{1}\epsilon_{2}}_{\mathbb{P}^{2}}=W_{\mathbb{P}^{2}}+\epsilon_{1} \;\Phi_{5}+\epsilon_{2}\;\Phi_{4}+\epsilon_{1}\epsilon_{2}\;C^{trop}_{W}(\Phi_ {4},\Phi_{5}). \tag{6.19}\] The second equality describes a double deformation of superpotential by a pair of holomorphic functions. The \(\epsilon_{1}\epsilon_{2}\)-term is the tropical contact term defined in section 3.3 The two cutting corners cases correspond to the different choices of the holomorphic germs \(\Phi_{4},\Phi_{5}\) for point observable on \(\mathbb{P}^{2}\). We can use our analysis from section 5.3 for holomorphic germs to perform the superpotential analysis. 
* **two points are far away**: The holomorphic germ for \(P_{4}\) observable is the same for \(\mathbb{P}^{2}\) and \(\widehat{\mathbb{P}^{2}}\) \[\Phi_{3}^{\epsilon_{1}}=\Phi_{3}=q_{1}q_{3}\;e^{-iY_{2}}\] (6.20) hence the double deformation of the mirror superpotential is \[W^{\epsilon_{1}\epsilon_{2}}_{\mathbb{P}^{2}}=W^{\epsilon_{1}}_{\mathbb{P}^{2 }}+\epsilon_{2}\Phi_{3}^{\epsilon_{1}}=q_{1}\;e^{iY_{1}}+q_{2}\;e^{iY_{2}}+q_ {3}\;e^{-iY_{1}-iY_{2}}+\epsilon_{1}q_{1}q_{2}\;e^{iY_{1}+iY_{2}}+\epsilon_{2} q_{1}q_{3}\;e^{-iY_{2}}.\] (6.21) There is no quadratic terms in \(\epsilon\) in our expression hence we expect that the contact term between \(\Phi_{3}\) and \(\Phi_{5}\) vanishes. Indeed the product \(\Phi_{3}\Phi_{5}\) is in image of good section (5.16) \[\Phi_{5}\cdot\Phi_{3}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}}\cdot q_{1}q_{3}\;e^{-iY_{2 }}=q_{1}^{2}q_{2}q_{3}\;e^{iY_{1}}\in\text{Im}\;S^{trop}_{\mathbb{P}^{2}},\] (6.22) hence contact term between two deformations is trivial, i.e. \[C_{W}^{trop}(\Phi_{5},\Phi_{3})=C_{W}^{trop}(e^{iY_{1}+iY_{2}},e^{-iY_{2}})=0.\] (6.23) * **two points are nearby**: The holomorphic germs \[\Phi_{4}^{\epsilon_{1}}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}}+q_{4}(\epsilon_{1})q_{2} \;e^{iY_{1}+2iY_{2}}=q_{1}q_{2}\;e^{iY_{1}+iY_{2}}+\epsilon_{1}q_{1}q_{2}^{2} \;e^{iY_{1}+2iY_{2}}\] (6.24) \[\Phi_{4}=\Phi_{4}^{\epsilon_{1}}\Big{|}_{\epsilon_{1}=0}=q_{1}q_{2}\;e^{iY_{1} +iY_{2}}\] (6.25) gives us a mirror superpotential \[W_{\mathbb{P}^{2}}^{\epsilon_{1}\epsilon_{2}}=q_{1}\;e^{iY_{1}}+q_{2}\;e^{iY _{2}}+q_{3}\;e^{-iY_{1}-iY_{2}}+(\epsilon_{1}+\epsilon_{2})q_{1}q_{2}\;e^{iY_{ 1}+iY_{2}}+\epsilon_{1}\epsilon_{2}q_{1}q_{2}^{2}\;e^{iY_{1}+2iY_{2}}.\] (6.26) Note that the \(\epsilon_{1}\) and \(\epsilon_{2}\) enter symmetrically. The quadratic term is a contact term for two (identical) deformations \[C_{W}^{trop}(\Phi_{5},\Phi_{4}) =C_{W}^{trop}(q_{1}q_{2}\;e^{iY_{1}+iY_{2}},q_{1}q_{2}\;e^{iY_{1}+ iY_{2}})\] \[=\mathbf{G}_{-}\mathbf{\Sigma}_{W}(\Phi_{4}\Phi_{5}-S_{W}\pi_{W}( \Phi_{4}\Phi_{5}))=\mathbf{G}_{-}(q_{1}^{2}q_{2}\;e^{2iY_{1}+iY_{2}}i\psi_{ \Phi}^{2})\] (6.27) \[=e^{2iY_{1}+iY_{2}}.\] We used \[\pi_{W}(\Phi_{4}\Phi_{5})=\pi_{W}(q_{1}^{2}q_{2}^{2}\;e^{2iY_{1}+2iY_{2}})=q_{ 1}^{2}q_{2}q_{3}\;e^{iY_{1}}\] (6.28) and \[\Phi_{4}\Phi_{5}-S_{W}\pi_{W}(\Phi_{4}\Phi_{5})=q_{1}^{2}q_{2}^{2}\;e^{2iY_{1 }+2iY_{2}}-q_{1}^{2}q_{2}q_{3}\;e^{iY_{1}}=\mathbf{Q}_{W}(q_{1}^{2}q_{2}e^{2iY _{1}+iY_{2}}i\psi_{\Phi}^{2}).\] (6.29) We explicitely checked that the two ways (6.18) and (6.19) of constructing the double deformed mirror superpotential for \(\mathbb{P}^{2}\) give identical results when we use the tropical good section (5.16) for the contact terms. ### Conclusion and open questions We described the cutting corners procedure and its application to 4- and 5- point tropical Gromov-Witten invariants on \(\mathbb{P}^{2}\). It is reasonable to conjecture that the cutting corners relation (6.16) for 4-point correlation function generalizes to \(n\)-point functions \[\langle\gamma_{1},\gamma_{2},...,\gamma_{n},\gamma_{P}\rangle_{d}^{\mathbb{P}^{2} }=\langle\gamma_{1},\gamma_{2},...,\gamma_{n}\rangle_{d-1,d_{bl}=1}^{\widetilde {\mathbb{P}^{2}}}. \tag{6.30}\] In our examples we cut up to two corners, but we can conjecture that the procedure can be iterated. If so, then we can repeat the cutting corners procedure till we are down to three point correlation function, which we can evaluate using the residue formula in Landau-Ginzburg-Saito theory. 
In particular, it would be interesting to perform the five cutting corners to evaluate the first nontrivial Gromov-Witten invariant, which is 12 degree-3 genus-0 curves passing through generic 8 points on \(\mathbb{P}^{2}\). There is a famous isomorphism between the blow up of two points on \(\mathbb{P}^{2}\) and the blow up of one point on \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). Such relation implies that the iterated cutting corners procedure after first few steps will give us the same toric spaces. Hence, we can use this relation as consistency check of the tropical Gromov-Witten invariants evaluation through the cutting corners procedure. In case we can repeat the cutting corners procedure indefinitely we can use it give a non-perturbative definition of the Gromov-Witten invariants for point observables in way similar to what was done for the hyperplane observables. ### Acknowledgments We are grateful to Yasha Neiman for many discussions on the topics presented in this paper. The work A.L. is supported by Wu Wen-Tsun Key Lab of Mathematics. The work of V.L. is supported by the Quantum Gravity Unit of the Okinawa Institute of Science and Technology Graduate University (OIST).
2309.16194
Thin current sheets in the magnetotail at lunar distances: statistics of ARTEMIS observations
The magnetotail current sheet's spatial configuration and stability control the onset of magnetic reconnection - the driving process for magnetospheric substorms. The near-Earth current sheet has been thoroughly investigated by numerous missions, whereas the midtail current sheet has not been adequately explored. This is especially the case for the long-term variation of its configuration in response to the solar wind. We present a statistical analysis of 1261 magnetotail current sheet crossings by the Acceleration, Reconnection, Turbulence and Electrodynamics of Moon's Interaction with the Sun (ARTEMIS) mission orbiting the moon (X~-60 RE), collected during the entirety of Solar Cycle 24. We demonstrate that the magnetotail current sheet typically remains extremely thin, with a characteristic thickness comparable to the thermal ion gyroradius, even at such large distances from Earth's dipole. We also find that a substantial fraction (~one quarter) of the observed current sheets have a partially force-free magnetic field configuration, with a negligible contribution of the thermal pressure and a significant contribution of the magnetic field shear component to the pressure balance. Further, we quantify the impact of the changing solar wind driving conditions on the properties of the midtail around the lunar orbit. During active solar wind driving conditions, we observe an increase in the occurrence rate of thin current sheets, whereas quiet solar wind driving conditions seem to favor the formation of partially force-free current sheets.
S. R. Kamaletdinov, A. V. Artemyev, A. Runov, V. Angelopoulos
2023-09-28T06:32:35Z
http://arxiv.org/abs/2309.16194v1
# Thin current sheets in the magnetotail at lunar distances: statistics of ARTEMIS observations ###### Abstract We present a statistical analysis of magnetotail current sheets collected by the ARTEMIS mission over 11 years of observations in the \(\sim 60\) R\({}_{E}\) magnetotail. We observe a large population (\(\sim 56\%\)) of ion-kinetic-scale current sheets and a smaller population of partially force-free current sheets (\(\sim 24\%\)). We show that the occurrence rates of intense current sheets and of partially force-free current sheets correlate with the solar wind parameters.
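For a sense of scale, the thermal ion gyroradius that defines the "ion-kinetic scale" above can be estimated from the standard formula \(\rho_{i}=m_{i}v_{th}/(eB)\) with \(v_{th}=\sqrt{2k_{B}T_{i}/m_{i}}\). The sketch below is illustrative only: the plasma parameters used (\(T_{i}\sim 5\) keV, \(B\sim 10\) nT) are generic plasma-sheet values, not values reported by this study.

```python
# Order-of-magnitude estimate of the thermal ion (proton) gyroradius that sets
# the "ion-kinetic scale" referred to above. The temperature and field values
# below are generic plasma-sheet numbers chosen for illustration, NOT values
# taken from the ARTEMIS statistics.
import math

M_P = 1.672621923e-27   # proton mass [kg]
Q_E = 1.602176634e-19   # elementary charge [C]

def thermal_gyroradius(T_i_eV: float, B_nT: float) -> float:
    """Thermal proton gyroradius rho_i = m*v_th/(e*B), v_th = sqrt(2*k_B*T_i/m), in km."""
    v_th = math.sqrt(2.0 * T_i_eV * Q_E / M_P)   # thermal speed [m/s]
    rho_i = M_P * v_th / (Q_E * B_nT * 1e-9)     # gyroradius [m]
    return rho_i / 1e3                           # -> km

# Illustrative conditions: T_i ~ 5 keV, B ~ 10 nT  ->  rho_i ~ 1000 km
print(f"rho_i ~ {thermal_gyroradius(5e3, 10.0):.0f} km")
```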
2301.13816
Execution-based Code Generation using Deep Reinforcement Learning
The utilization of programming language (PL) models, pre-trained on large-scale code corpora, as a means of automating software engineering processes has demonstrated considerable potential in streamlining various code generation tasks such as code completion, code translation, and program synthesis. However, current approaches mainly rely on supervised fine-tuning objectives borrowed from text generation, neglecting unique sequence-level characteristics of code, including but not limited to compilability as well as syntactic and functional correctness. To address this limitation, we propose PPOCoder, a new framework for code generation that synergistically combines pre-trained PL models with Proximal Policy Optimization (PPO) which is a widely used deep reinforcement learning technique. By utilizing non-differentiable feedback from code execution and structure alignment, PPOCoder seamlessly integrates external code-specific knowledge into the model optimization process. It's important to note that PPOCoder is a task-agnostic and model-agnostic framework that can be used across different code generation tasks and PLs. Extensive experiments on three code generation tasks demonstrate the effectiveness of our proposed approach compared to SOTA methods, achieving significant improvements in compilation success rates and functional correctness across different PLs.
Parshin Shojaee, Aneesh Jain, Sindhu Tipirneni, Chandan K. Reddy
2023-01-31T18:02:26Z
http://arxiv.org/abs/2301.13816v4
# Execution-based Code Generation using Deep Reinforcement Learning ###### Abstract The utilization of programming language (PL) models, pretrained on large-scale code corpora, as a means of automating software engineering processes has demonstrated considerable potential in streamlining various code generation tasks such as code completion, code translation, and program synthesis. However, current approaches mainly rely on supervised fine-tuning objectives borrowed from text generation, neglecting specific sequence-level features of code, including but not limited to compilability as well as syntactic and functional correctness. To address this limitation, we propose PPOCoder, a new framework for code generation that combines pretrained PL models with Proximal Policy Optimization (PPO) deep reinforcement learning and employs execution feedback as the external source of knowledge into the model optimization. PPOCoder is transferable across different code generation tasks and PLs. Extensive experiments on three code generation tasks demonstrate the effectiveness of our proposed approach compared to SOTA methods, improving the success rate of compilation and functional correctness over different PLs. Our code can be found at [https://github.com/reddy-lab-code-research/PPOCoder](https://github.com/reddy-lab-code-research/PPOCoder). ## 1 Introduction Recent years have seen a surge of attention towards the use of deep learning and neural language models to automate code generation and other software engineering processes, as a means to enhance developer productivity. The software development process encompasses a variety of code generation tasks, including code completion (Code2Code) [19], code translation (Code2Code) [46], and program synthesis (NL2Code) [20]. Inspired by the great performance of pre-trained neural language models (LMs) in different natural language processing (NLP) tasks, these pretraining techniques have been recently employed on large-scale code corpuses to automate code generation tasks. Examples of such pretrained models include CodeBERT [11], CodeGPT [23], PLABRT [1], and CodeT5 [40]. However, the code domain faces some unique challenges. For example, given that the generated code is intended for machine execution as opposed to human comprehension, it is imperative that the generated code maintains syntactic and functional correctness, i.e., being able to pass compilation and unit tests. Despite the advancements of pretrained code models, they are heavily influenced by NLP's self-supervised masked language modeling (MLM) and often struggle to ensure the syntactic and functional correctness of the generated codes. Authors of [9] have shown that up to 70% of codes generated by these models can be non-compilable. To improve code generation towards syntactic and functional correctness, several approaches are followed: \((i)\) filtering and repairing the non-compilable synthesized programs [17], \((ii)\) using energy-based generation models with compilability constraints [16], and \((iii)\) using reinforcement learning (RL) finetuning mechanisms [38, 44, 18]. However, existing approaches are often tailored to a specific programming language (PL) or task and are not easily transferable to other different code generation tasks and PLs. 
To tackle this challenge, we propose **PPOCoder**, illustrated in Fig.1, a PPO-based RL framework for code generation that employs compiler feedback (i.e., syntactic or functional correctness) as the external source of knowledge in model optimization. PPOCoder utilizes the PPO [34] algorithm for RL optimization which is based on Figure 1: An overview of the proposed PPOCoder framework. The actor and critic networks are first initialized from the pretrained PL model for the desired task. Following the sampling of a synthetic program from the stochastic policy, the reward is determined using the execution feedback and the ground truth target code. The values are estimated by the critic network. Finally, both actor and critic networks are updated based on the obtained values and returns. the proximal actor-critic advantage policy gradient objective and a trust region mechanism, making the model optimization more stable and less sensitive to new environments (tasks or datasets). Also, PPOCoder integrates discrete compiler feedback with the syntactic and semantic matching scores between the generated codes and executable targets. This integration reduces the sparsity of the reward function, leading to a better guidance of the policy to generate code that is more closely aligned with the correct targets. To control explorations and prevent large deviations from the distributions learned by the pretrained PL model, PPOCoder incorporates the KL-divergence penalty. This penalty helps to reduce the chance of memorization, which is often caused by the cross-entropy loss in previous approaches during pretraining and finetuning, resulting in a more controlled and efficient exploration that can generalize well to different code generation tasks and PLs. To summarize, the major contributions of this paper are as follows: * We present a PPO-based RL framework for code generation, PPOCoder, that utilizes compiler feedback (i.e., syntactic or functional correctness) as the external source of knowledge in model optimization. PPOCoder provides a more stable and generalizable model optimization that is less sensitive to new environments (tasks, PLs, or datasets). * We develop a new reward function based on the discrete compiler feedback (compilation or unit test signal when available) received at the end of the generation episode as well as the syntactic and semantic matching scores between the AST sub-trees and DFG edges of the sampled generations and the correct targets. * We reduce the chance of memorization by incorporating a KL-divergence penalty into reward instead of a cross-entropy loss used in earlier works to control explorations and prevent deviations from the pretrained model. * We demonstrate the effectiveness of PPOCoder through an extensive set of experiments across diverse code generation tasks (code completion, code translation, code synthesis) and PLs (C++, Java, Python, C#, PHP, C). PPOCoder outperforms the SOTA baselines, improving the compilation rate and functional correctness over different PLs. We also investigate the benefits of PPOCoder's reward elements and PPO optimization through ablation study. The organization of the remainder of this paper is as follows: In Section 2, existing code generation methods utilizing pretrained models, structure-based approaches, and RL methods for sequence generation are summarized. Section 3 delves into the specifics of our proposed PPOCoder method, including its various components. 
The experimental evaluation of our method on three code generation tasks: code completion, code translation, and program synthesis tasks, as well as the ablation study and case study, can be found in Section 4. Finally, the paper concludes in Section 5. ## 2 Related Work ### Pretrained Models for Code Generation Recent research has focused on using pretrained neural language models (LMs) in natural language processing (NLP) to automate code generation tasks using large-scale code corpus data from open-source repositories [23, 43, 25]. Notable examples of these pretrained models include CodeBERT [11] with encoder-only, CodeGPT [23] with decoder-only as well as PLABRT [1] and CodeT5 [40] with encoder-decoder transformer architectures. However, these pretrained PL models tend to rely heavily on self-supervised MLM for text generation and still struggle to ensure the syntactic and functional correctness of the generated codes. ### Leveraging Structure in Code Generation Recently, there has been a growing interest in incorporating logical constructs such as abstract syntax trees (ASTs) [15, 29, 39], code sketches [26], and data-flow graphs (DFGs) [42, 12]. For example, GraphCodeBERT [12] uses DFGs to incorporate semantic information, but its decoder is completely unaware of the code structures. StructCoder [36] introduces a pretrained structure-aware encoder-decoder architecture. Despite these efforts, many code generation models still struggle to ensure the syntactic and functional correctness of the generated codes. ### RL for Sequence Generation RL has been used to optimize non-differentiable metrics in sequence generation tasks [31, 3], such as using the REINFORCE [41] algorithm to improve BLEU [27] and ROUGE [21] scores in translation and summarization models. Unlike text generation, code generation requires not only syntactic but also functional correctness as the generated code must pass compilation and unit tests for machine execution. Recently, execution-guided approaches [7, 10, 8] and RL-based finetuning mechanisms [38, 44, 18] are used to enhance the quality of generated codes. For example, [18] has recently studied the integration of RL with unit test signals in the finetuning of the program synthesis models. However, existing RL-based methods still encounter several limitations. They are often designed for a particular task (e.g., only program synthesis) or a particular PL (e.g., only Python), receive a sparse and discrete compiler signal only at the end of the generation episode, and are susceptible to memorization and poor performance on unseen data due to the use of cross-entropy loss with the policy gradient objective in the RL optimization. Our model, PPOCoder, makes the RL framework transferable to diverse code generation tasks and PLs by incorporating a PPO-based framework that integrates compiler feedback with the syntactic and semantic matching scores in the reward and utilizes a KL-divergence penalty to prevent large deviations, while reducing the chance of memorization. ## 3 PPOCoder PPOCoder provides a systematic mechanism for finetuning code generation models using deep reinforcement learning (RL) by effectively and efficiently incorporating compiler feedback as extra knowledge into the model optimization, thereby enhancing the quality of the generated codes in terms of code-specific sequence-level features such as syntactic and functional correctness. Fig. 
2 shows the general structure of our proposed PPOCoder model with the policy network (actor) \(\pi_{\theta}\) responsible for code generation actions and the value function (critic) \(V_{\pi}\) responsible for the return estimations. They are both learned with the proximal policy optimization (PPO) approach taking reward \(\mathcal{R}\). As shown in Fig. 2, the total reward is composed of four elements: (\(i\)) compiler feedback; (\(ii\)) syntactic match score; (\(iii\)) semantic match score; and (\(iv\)) KL-divergence penalty. We provide further details about each of these components in the subsections below. ### Problem Formulation The code generation procedure can be formulated as a sequential discrete finite-horizon Markov Decision Process (MDP) with the use of RL in which an agent interacts with the compiler over discrete horizon \(T\) which is equivalent to the maximum number of generated code tokens. The proposed PPOCoder is formulated as follows: **State \(\mathcal{S}\):** The state of environment at each time-step, denoted as \(s_{t}=(\hat{y}_{<t},x),s_{t}\in\mathcal{S}\), is determined by the source PL/NL data \(x\), as well as the set of generated tokens before \(t\), \(\hat{y}_{<t}\). **Action \(\mathcal{A}\):** The PL model chooses the action at each time-step, denoted as \(a_{t}=\hat{y}_{t},a_{t}\in\mathcal{A}\), which is equivalent to the generated token at time-step \(t\). **Policy \(\pi_{\theta}(a_{t}|s_{t})\):** The stochastic policy network parameterized by \(\theta\) is the downstream code generation model that predicts the next token conditioned on the previously generated tokens and the source data, so, \(\pi_{\theta}(\hat{y}_{t}|\hat{y}_{<t},x):\mathcal{S}\rightarrow\Delta( \mathcal{A})\) where \(\Delta(\mathcal{A})\) denotes the probability distribution over all actions (e.g., target vocabulary). The next action \(\hat{y}_{t}\) will be decided based on the _top-k_ sampling from this probability distribution. Policy is initialized with the pretrained reference PL model \(\rho\), i.e., \(\pi_{\theta}^{0}(.)=\rho\). **Reward \(\mathcal{R}\):** The reward \(\mathcal{R}(\hat{y},x,y)\) will be obtained at the end of the generation episode (i.e., after generating the \(<endoftkens>\) token) based on the generated code's syntactic and functional correctness as well as its alignment with executable codes. The reward function \(\mathcal{R}(.)\) is composed of different components which are explained in Section 3.2. **Advantage \(\hat{A}_{\pi}^{t}\):** Inspired by the Generalized Advantage Estimator (GAE) [33], the advantage at time-step \(t\) is defined as follows. \[\hat{A}_{\pi}^{t}= \delta_{t}+\gamma\delta_{t+1}+\ldots+\gamma^{T-t+1}\delta_{T-1}, \tag{1}\] \[\delta_{t}= r_{t}-V_{\pi}(\hat{y}_{<t},x)+\gamma V_{\pi}(\hat{y}_{<t+1},x),\] where \(\gamma\) is the discount rate; \(r_{t}\) is the reward at time-step \(t\); and \(V_{\pi}(s_{t})\) is the state value function at \(t\) which can be approximated by a dense token-level value head on top of the hidden states of PL model. **Objective:** The objective of PPOCoder is to find a policy that Figure 2: Overview of the PPOCoder with actor and critic models. The action is sampled from the policy based on the given source data \(x\) (NL or PL). Then, a reward is obtained for each action to guide and control policy updates. 
The reward function is composed of four elements: (\(a\)) compiler feedback; (\(b\)) syntactic matching score based on ASTs; (\(c\)) semantic matching score based on DFGs; and (\(d\)) KL-divergence penalty between active policy and the reference pretrained model. The critic model estimates value based on the obtained reward and PPOCoder will be optimized withPPO, which takes into account both value and policy optimization. maximizes the expected reward of generated codes sampled from the policy. \[\max_{\theta}\mathbb{E}_{x\sim\mathcal{X},\hat{y}\sim\pi_{\theta}(.|x)}\big{[} \mathcal{R}(\hat{y},x,y)\big{]}, \tag{2}\] where \(\mathcal{X}\) is the training set of source data; \(\pi_{\theta}(.)\) is the policy network; and \(\mathcal{R}(.)\) is the reward function. We formulate the objective function as a maximization of the advantage instead of reward, as shown in Eq. (3), in order to reduce the variability of predictions. \[\max_{\theta}\mathbb{E}_{x\sim\mathcal{X},\hat{y}\sim\pi_{\theta}(.|x)}\left[ \sum_{t=0}^{T}\hat{A}_{\pi}^{t}\big{(}(\hat{y}_{<t},x),\hat{y}_{t}\big{)}\right], \tag{3}\] We adopt the policy gradient to estimate the gradient of non-differentiable reward-based objectives in Eqs. (2) and (3). Therefore, updating policy parameters for a given source data \(x\) can be derived as: \[\max_{\theta}\mathcal{L}_{\theta}^{PG}=\max_{\theta}\mathbb{E}_{ \hat{y}\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\left(log\pi_{\theta}(\hat{y}_{t} |\hat{y}_{<t},x)\;\hat{A}_{\pi}^{t}\right)\right], \tag{4}\] \[\text{where}\;\;\nabla_{\theta}\mathcal{L}_{\theta}^{PG}=\mathbb{ E}_{\hat{y}\sim\pi_{\theta}}\left[\sum_{t=1}^{T}\left(\nabla_{\theta}log\pi_{ \theta}(\hat{y}_{t}|\hat{y}_{<t},x)\;\hat{A}_{\pi}^{t}\right)\right], \tag{5}\] where \(\nabla_{\theta}\mathcal{L}_{\theta}^{PG}\) refers to the estimated gradient of objective function based on the policy parameterized by \(\theta\). In order to further reduce the variations and avoid significantly changing the policy at each iteration, the objective function in Eq. (4) will be reformulated as shown in Eq. (6), called the conservative policy iteration. \[\mathcal{L}_{\theta}^{CPI}= \mathbb{E}_{\hat{y}\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\left( \frac{log\pi_{\theta}(\hat{y}_{t}|\hat{y}_{<t},x)}{log\pi_{\theta_{old}}(\hat{ y}_{t}|\hat{y}_{<t},x)}\;\hat{A}_{\pi}^{t}\right)\right] \tag{6}\] \[= \mathbb{E}_{\hat{y}\sim\pi_{\theta}}\left[\sum_{t=0}^{T}\left(c_{ \pi}^{t}(\theta)\;\hat{A}_{\pi}^{t}\right)\right],\] where \(\theta_{old}\) is the policy parameters before the update; and \(c_{\pi}^{t}(\theta)\) is the ratio of log-probabilities from new and old policies. ### Reward Function Figure 2 illustrates that the reward of PPOCoder is composed of four different components which are designed to guide and control actions simultaneously towards generating more executable codes. These components are designed due to (1) the sparsity of compiler feedback which is only received at the end of code generation episode; and (2) the high chance of policy divergence from the pretrained PL models. (check Section 4.4 for the reward ablation results). Eq. (7) shows the combination of these different reward terms in the final reward vector \(\mathcal{R}(\hat{y},x,y)\in\;\mathbb{R}^{T}\) with \(T\) as the generation episode length. 
\[\mathcal{R}(\hat{y},x,y) =\{r_{t}:t=1,\ldots,T\}, \tag{7}\] \[r_{t} =\mathbb{1}(cond)\Big{[}R_{cs}(\hat{y})+\;R_{ast}(\hat{y},y)+\;R_ {dfg}(\hat{y},y)\] \[-\beta R_{kl}(x,\hat{y}_{<t})\Big{]}+\mathbb{1}\left(\neg cond \right)[-\beta R_{kl}(x,\hat{y}_{<t})]),\] \[cond=(\hat{y}_{t}==\langle endofttokens\rangle)\] where \(r_{t}\) is the combined reward at time-step \(t\); \(R_{cs}(.)\), \(R_{ast}(.)\), and \(R_{dfg}(.)\) are the compiler signal, syntactic match score, and the semantic match score reward terms, respectively. Note that, these terms will be received at the end of the generation episode where \(\hat{y}_{t}==\langle endofttokens\rangle\). The \(R_{kl}(x,\hat{y}_{<t})\) is a KL-divergence penalty between the reference pretrained model and the active policy which is imposed to reward at each time-step to control actions. \(\beta\) is also the coefficient of penalty to balance the combination of different reward terms. **Compiler Signal** For each source data \(x\), we sample multiple generated codes in the target language based on the current policy network, \(\hat{y}\sim\pi_{\theta}(.|x)\). Then, we pass these sampled codes \(\hat{y}\) to a compiler and determine the reward based on the parsing signal. In case unit tests are available for the source data, the reward is determined by the functional correctness of generated codes, i.e., passing all unit tests, as shown in Eq. (8). If unit tests are not provided, compiler returns the syntactic correctness of generated codes (i.e., compilable or non-compilable) as shown in Eq. (9). This reward term is designed to guide the model to take actions which can generate higher quality codes in terms of syntactic/functional correctness. _Functional Correctness:_ \[R_{cs}(\hat{y})=\begin{cases}+1\;\;,\;\text{if}\;\hat{y}\text{ passed all unit tests}\\ -0.3,\;\text{if}\;\hat{y}\text{ failed any unit test}\\ -0.6,\;\text{if}\;\hat{y}\text{ received RunTime error}\\ -1\;\;,\;\text{if}\;\hat{y}\text{ received Compile error}\end{cases} \tag{8}\] _Syntactic Correctness:_ \[R_{cs}(\hat{y})=\begin{cases}+1,\text{if}\;\hat{y}\text{ passed compilation test}\\ -1,\text{otherwise}\end{cases} \tag{9}\] **Syntactic Matching Score** Since the compiler signal alone is too sparse, we also add additional information to better control and guide the structure of policy samples. To do so, we define a syntactic matching score \(R_{ast}(\hat{y},y)\) between the generated hypothesis \(\hat{y}\sim\pi_{\theta}(.|x)\) and the parallel executable target \(y\). The goal is to maximize this matching score for better compilability or syntactic correctness. We use the abstract syntax tree (AST) to find a tree representation of the code's abstract syntax structure. Then, we compare the sub-trees extracted from the hypothesis and reference target ASTs, respectively, and calculate the syntactic match score as a percentage of matched AST sub-trees. \[R_{ast}(\hat{y},y)=Count(AST_{\hat{y}}\cap AST_{y})/Count(AST_{y}) \tag{10}\] where \(Count(AST_{\hat{y}}\cap AST_{y})\) is the number of matched AST sub-trees between the hypothesis \(\hat{y}\) and reference \(y\); and \(Count(AST_{y})\) is the total number of reference AST sub-trees. This score can assess the syntactic quality of code since the differences between ASTs can be affected by syntactic issues such as token missing and data type errors. 
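For concreteness, the syntactic matching score of Eq. (10) can be sketched for Python targets with the standard-library `ast` module, as below. The choice of sub-tree equality (exact `ast.dump` serializations, which are sensitive to identifier names) and the restriction to Python are simplifications made for illustration; this section does not pin down a particular parser or matching rule.

```python
# A minimal sketch of the syntactic matching score in Eq. (10) for Python code,
# using the standard-library `ast` module. Treating two sub-trees as matched
# when their `ast.dump` serializations coincide is one concrete choice made
# here for illustration.
import ast
from collections import Counter

def subtrees(code: str) -> Counter:
    """Multiset of serialized AST sub-trees rooted at every node of `code`."""
    tree = ast.parse(code)
    return Counter(ast.dump(node) for node in ast.walk(tree))

def ast_match_score(hypothesis: str, reference: str) -> float:
    """R_ast = |AST(hyp) ∩ AST(ref)| / |AST(ref)| (multiset intersection)."""
    try:
        hyp, ref = subtrees(hypothesis), subtrees(reference)
    except SyntaxError:
        return 0.0                       # an unparsable hypothesis gets no credit
    matched = sum((hyp & ref).values())  # size of the multiset intersection
    return matched / max(sum(ref.values()), 1)

print(ast_match_score("def f(a, b):\n    return a + b",
                      "def f(x, y):\n    return x + y"))  # < 1: identifiers differ
```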
### Semantic Matching Score To improve the functional correctness, we need to also take into account the semantic matching between hypothesis \(\hat{y}\) and the executable target \(y\), in addition to their syntactic matching. In PLs, code semantics are closely related to the dependencies of its variables. As a result, in order to construct a semantic matching score, we make use of the data-flow graphs (DFGs), a graph representation of code in which the nodes stand in for variables and the edges for the sources of each variable's values. We denote DFG of a code \(Y\) as \(\mathcal{G}(Y)=(V;E)\) where \(V=\{v_{1},\ldots,v_{m}\}\) is the set of variables, and \(e_{i,j}=\langle v_{i},v_{j}\rangle\) is the \(i\to j\) edge showing that value of the \(j\)-th variable originates from the \(i\)-th variable. Then, we calculate the semantic match score as a percentage of matched data-flows in DFGs. \[R_{dfg}(\hat{y},y)=Count(\mathcal{G}(\hat{y})\cap\mathcal{G}(y))/Count( \mathcal{G}(y)) \tag{11}\] where \(Count(\mathcal{G}(\hat{y})\cap\mathcal{G}(y))\) represents the number of matched DFG edges between hypothesis \(\hat{y}\) and reference \(y\); and \(Count(\mathcal{G}(y))\) represents the total number of reference DFG edges. Maximizing this score can guide and control policy to generate codes which are more aligned with executable target codes in terms of variable relations, thus, enhancing the semantic quality and logical correctness of the generated codes. ### KL-Divergence Constraint We incorporate a negative KL-divergence penalty \(KL(\pi||\rho)\) into the reward to prevent the active policy \(\pi\) deviating away from the pretrained PL model \(\rho\). The KL-penalty at time \(t\) can be approximated as: \[R_{kl}\left(x,\hat{y}_{<t}\right)= KL\left(\pi||\rho\right)\approx\log\frac{\pi\left(.|x,\hat{y}_{<t} \right)}{\rho\left(.|x,\hat{y}_{<t}\right)} \tag{12}\] \[= \log\left(\pi\left(.|x,\hat{y}_{<t}\right)\right)-\log\left(\rho \left(.|x,\hat{y}_{<t}\right)\right)\] where \(\log\left(\pi\left(.|x,\hat{y}_{<t}\right)\right)\) and \(log\left(\rho\left(.|x,\hat{y}_{<t}\right)\right)\) are the log-probabilities obtained from the active policy \(\pi\) and pretrained model \(\rho\) at time \(t\) given source data \(x\) and the previously predicted tokens \(\hat{y}_{<t}\). This reward term can control actions and play the role of entropy bonus in controlling exploration and exploitation where greater \(\beta\) in Eq. (7) provides less exploration and more exploitation. ### Loss Function We employ proximal policy optimization (PPO) [34] and define the loss function of PPOCoder as follows. \[\mathcal{L}_{\theta}=-\mathcal{L}_{\theta}^{CPI}+\alpha\mathcal{L }_{\theta}^{VF} \tag{13}\] \[\mathcal{L}_{\theta}^{CPI}=\ \mathbb{E}_{y\sim\pi_{a}}\left[\sum_{t=0}^{T} \left(c_{\pi}^{t}(\theta)\hat{A}_{\pi}^{t},clip\left(c_{\pi}^{t}(\theta),1- \epsilon,1+\epsilon\right)\hat{A}_{\pi}^{t}\right)\right]\] (14) \[\mathcal{L}_{\theta}^{VF}=\ \mathbb{E}_{\hat{y}\sim\pi_{a}}\left[\sum_{t=0}^ {T}\left(V_{\pi}(\hat{y}_{<t},x)-\left(\hat{A}_{\pi}^{t}+V_{\pi_{ad}}(\hat{y}_ {<t},x)\right)\right)^{2}\right] \tag{15}\] where the loss function \(\mathcal{L}_{\theta}\) is the linear combination of surrogate policy objective function \(\mathcal{L}_{\theta}^{CPI}\) and the value function squared error term \(\mathcal{L}_{\theta}^{VF}\). Therefore, minimizing loss function leads to the maximization of the surrogate advantage policy objective (actor optimization) as well as the minimization of value error (critic optimization). 
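A minimal PyTorch sketch of the per-token KL penalty (Eq. 12) and of the clipped surrogate plus value-error loss (Eqs. 13-15) is given below. The tensor shapes, the explicit elementwise `min` in the clipped surrogate, and the use of the standard probability ratio \(\exp(\log\pi_{\theta}-\log\pi_{\theta_{old}})\) are assumptions made for illustration and are not taken verbatim from the paper or its released code.

```python
# Illustrative sketch of Eqs. (12)-(15). All tensors are of shape (batch, T).
import torch

def kl_penalty(logp_policy, logp_ref, beta=0.1):
    """Per-token KL penalty of Eq. (12); in Eq. (7) it is subtracted from the reward."""
    return beta * (logp_policy - logp_ref)

def ppo_loss(logp_new, logp_old, advantages, values_new, values_old,
             clip_eps=0.2, alpha=0.5):
    """Clipped surrogate objective plus squared value error, as in Eqs. (13)-(15)."""
    ratio = torch.exp(logp_new - logp_old)               # standard PPO probability ratio
    surrogate = torch.minimum(
        ratio * advantages,
        torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages,
    )
    returns = advantages + values_old                    # dense returns of the old policy
    value_loss = (values_new - returns) ** 2
    # Minimizing this maximizes the surrogate (actor) and minimizes the value error (critic).
    return (-surrogate + alpha * value_loss).mean()
```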
In other words, the actor is guided to maximize the advantage policy objective which is correlated with maximizing the expected reward as explained in Eqs. (4)-(6); and the critic is enforced to minimize the token-level value estimation error which is defined based on the difference between the values of new policy \(V_{\pi}(\hat{y}_{<t})\) and the estimated dense returns of the old policy \(\hat{A}_{\pi}^{t}+V_{\pi_{ad}}(\hat{y}_{<t})\). In Eqs. (13)-(15), \(\epsilon\) is the proximal policy ratio clip range, and \(\alpha\) is the linear combination weight between loss terms of actor and critic. Algorithm 1 provides the pseudocode of PPOCoder. For each source-target pair \((x,y)\), we sample multiple translated hypotheses from the policy network \(\hat{y}\sim\pi_{\theta}(.|x)\). After generating each hypothesis, we find the integrated reward based on the reward function defined in Section 3.2, estimate the advantage, calculate the corresponding PPO loss function, and update the policy and value head parameters based on the final gradients (as shown in lines 5-19). ## 4 Experiments We evaluate PPOCoder on three different code generation tasks: \((i)\)_Code Completion_ automatically completes partial Python code snippets; \((ii)\)_Code Translation_ involves translating between any language-pair among six different PLs (Python, Java, C#, C++, PHP, C); and \((iii)\)_Program Synthesis_ (NL2Code) generates a Python function given a natural language (NL) description. ### Code Completion For this downstream task, we employ the Python corpus in CodeSearchNet (CSN) 1[14]. We extract \(50\)k compilable Python methods with sufficient length (at least 64 tokens) and randomly split the data to train/val/test sets with \(40\)k\(/5\)k\(/5\)k samples. We mask the last 25 tokens of the source code and ask the model to complete it. To evaluate the quality of generated codes, three metrics are used: \((i)\)_Exact Match_ (xMatch) which checks if the prediction is the same as the ground truth, \((ii)\)_Levenshtein Edit Similarity_ (Edit Sim) [23, 35] which measures the number of single-character edits needed to match the generated code with the correct target, and \((iii)\)_Compilation Rate_ (Comp Rate) [17] that shows the success rate of compilation among completed programs. Since unit tests are not provided, we focus on the syntactic correctness of the completed codes and take the compiler signal as reward. Table 1 shows the results of PPOCoder along with the baselines on the code completion task. In this table, the BiLSTM [24] and Transformer [37] models are not pretrained. The GPT-2 [30] model was pretrained on text corpus, while CodeGPT [23] and CodeT5 [40] models are pretrained on the large-scale source code corpus. The reported results for these pretrained models are after the finetuning step on the code completion task. More details of the experimental setup are provided in Appendix A.1 It can be observed that CodeGPT and CodeT5 have a compilation rate of \(46.84\) and \(52.14\), respectively, indicating that about half of the generated codes are not compilable. By employing our proposed PPOCoder framework on the finetuned CodeT5 model (PPOCoder + CodeT5), the compilation rate improves significantly from \(52.14\) to \(97.68\), demonstrating the importance of incorporating compiler feedback into the model's optimization and the effectiveness of PPOCoder in code completion. 
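For reference, the edit-similarity metric reported in Table 1 can be computed with the textbook Levenshtein dynamic program; the sketch below uses one common normalization, \(1-\mathrm{distance}/\max(|\hat{y}|,|y|)\), which may differ in detail from the exact variant of [23, 35].

```python
# Levenshtein edit similarity between a completed program and its reference.
# The O(n*m) dynamic program and the normalization used here are one common
# choice; the exact variant in [23, 35] may differ.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def edit_similarity(hypothesis: str, reference: str) -> float:
    if not hypothesis and not reference:
        return 1.0
    return 1.0 - levenshtein(hypothesis, reference) / max(len(hypothesis), len(reference))

print(edit_similarity("return a + b", "return a - b"))   # 1 - 1/12 ~ 0.92
```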
We can also see that the PPOCoder performs similarly to other SOTA models in terms of Edit sim and xMatch scores, showing that the actor model effectively explores without deviating much from the pretrained model distributions. ### Code Translation We use the XLCoST 2[45] dataset for the code translation task which is a parallel dataset that includes solutions for problems related to data structures and algorithms in six languages: C++, Java, Python, PHP, C, and C#. In our experiments, we only use the compilable filtered parallel data in source and target language pairs. Table 6 in Appendix A.2 shows the detailed statistics of these compilable filtered samples across all six PLs. To evaluate the quality of translated codes, we use two metrics: \((i)\)_Comp Rate_ that measures compilation success rate, and \((i)\)_CodeBLEU_[32] score which combines the weighted BLEU [28] based on the code-related keywords with the the syntactic and semantic alignment measures. As unit tests are not available for parallel language pairs, we focus on syntactic correctness with the help of compiler signal. Footnote 2: [https://github.com/reddy-lab-code-research/XLCoST](https://github.com/reddy-lab-code-research/XLCoST) Table 2 presents the results of PPOCoder on code translation along with the baselines. In this table, column and row headers represent the translation source and target PLs, respectively. The Naive Copy baseline [23] simply copies the source code as the output, showing how similar two PLs are. The reported results of pretrained CodeBERT and PLBART are after finetuning on the code translation task for each language pair. The experimental setup and implementation details are provided in Appendix A.1 Table 2 demonstrates that incorporating our proposed PPOCoder +CodeT5 improves the overall compilation rate across all language pairs, in comparison to the SOTA baseline CodeT5. Specifically, we observe an absolute increase of \(9.92\%\), \(22.22\%\), \(21.62\%\), \(13.20\%\), \(7.46\%\), and \(6.11\%\) in the compilation rate for C++, Java, Python, C#, PHP, and C target PLs, respectively. PPOCoder also obtains a comparable CodeBLEU score to other baselines, meaning that it does not deviate a lot from the pretrained code fluency distribution. Among high-resource languages, results show relatively greater compilation rate improvements for Python and Java as target PL. This is likely due to their high-level constructs, such as the absence of pointers and memory management constructs, which can be a source of errors in languages like C++ and C#. Additionally, Java and Python feature a more lenient compilation process and extensive runtime error checking, resulting in many errors that would cause C++ and C# compilation to fail, being detected only at runtime. The table shows a significantly lower compilation rate for code translation with C as target PL among all baselines. This is likely due to the limited number of samples with C as a target PL in the dataset (as shown in Table 6 in Appendix A.2). 
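Both the completion and translation experiments above use the binary syntactic-correctness reward of Eq. (9). A minimal sketch for Python targets is given below; using the interpreter's built-in `compile()` as the compiler is a simplification made here, and for the compiled languages in Table 2 one would instead invoke the corresponding toolchain (e.g., in a sandboxed subprocess), which is not reproduced in this sketch.

```python
# A minimal sketch of the syntactic-correctness reward in Eq. (9) for Python
# targets: +1 if the sampled program compiles, -1 otherwise. The built-in
# compile() only parses and byte-compiles the source; it does not execute it.
def syntactic_reward(generated_code: str) -> float:
    try:
        compile(generated_code, "<generated>", "exec")
        return 1.0
    except (SyntaxError, ValueError):   # ValueError covers e.g. null bytes in the source
        return -1.0

print(syntactic_reward("def f(x):\n    return x * 2"))   # 1.0
print(syntactic_reward("def f(x) return x * 2"))         # -1.0 (missing colon)
```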
### Program Synthesis In this task, we use the APPS [13] dataset comprising \(10\)k coding problems of varying difficulty levels, split 50/50 for train/test sets. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & _xMatch_ & _Edit Sim_ & _Comp Rate_ \\ \hline BiLSTM & 20.74 & 55.32 & 36.34 \\ Transformer & 38.91 & 61.47 & 40.22 \\ GPT-2 & 40.13 & 63.02 & 43.26 \\ CodeGPT & 41.98 & 64.47 & 46.84 \\ CodeT5 & 42.61 & 68.54 & 52.14 \\ PPOCoder + CodeT5 & **42.63** & **69.22** & **97.68** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the code completion task for completing the last 25 masked tokens from CodeSearchNet. The dataset consists of Introductory, Interview, and Competition level problems with respective train/test samples of 2639/1000, 2000/3000, and 361/1000. Each problem has \(23\) Python solutions and \(21\) unit tests on average. To evaluate the generated codes, we employ the _pass@k_ metric [6], which calculates the percentage of problems for which all unit tests are passed using \(k\) synthetically generated programs per problem. Since unit tests are provided in APPS, we use them in PPOCoder's reward (as defined in Eq. 9). Table 3 demonstrates the results of program synthesis on the APPS dataset along with other baselines reported in [13], including GPT-2 [30], GPT-3 [5], GPT-Neo [4], Codex [6], AlphaCode [20] and CodeRL [18]. The reported results for various models are post-finetuning on APPS, except for GPT-3 and Codex. For the experimental setup details of all methods, please refer to Appendix A.1. The results indicate that the smaller encoder-decoder architecture of CodeT5 outperforms larger models, and PPOCoder with CodeT5 further improves performance, surpassing even larger pretrained LMs such as GPTs. As demonstrated in Table 3, PPOCoder +CodeT5 exhibits comparable or even superior _pass@k_ performance compared to CodeRL+CodeT5, another RL-based finetuning mechanism for program synthesis. To further evaluate the generalizability of these models, the zero-shot performance of the APPS-finetuned models was examined on the MBPP [2] program synthesis benchmark, which is a collection of 974 short (one sentence) problems, each including 1 correct Python solution and 3 corresponding unit tests. Table 4 shows the results of program synthesis on the MBPP benchmark. Both RL-based methods, CodeRL+CodeT5 and PPOCoder +CodeT5, finetuned on APPS, exhibit remarkable zero-shot performance on MBPP with a _pass@k_ of \(63\%\) and \(68\%\), respectively, surpassing even the largest GPT-137B's performance of \(61.4\%\). As observed in Table 4, the proposed PPOCoder +CodeT5 outperforms CodeRL+CodeT5 on MBPP by a significant margin of \(5.2\%\). This can be attributed to two factors. Firstly, CodeRL integrates the supervised cross-entropy loss into the RL policy gradient objective to maintain consistency in performance and prevent deviation from the pretrained model distribution. However, over-optimization of the supervised cross-entropy on synthetic data increases the chance of memorization of the training data and leads to inferior performance on unseen data. PPOCoder regulates deviation by employing the KL-divergence penalty for generation instead of the supervised cross-entropy loss. This can reduce the likelihood of memorization, resulting in improved generalizability on the MBPP benchmark. 
Secondly, CodeRL utilizes the actor-critic algorithm with a REINFORCE reward policy gradient objective, while PPOCoder employs the PPO algorithm with an actor-critic advantage policy gradient objective and a trust region mechanism to ensure minimal deviation from the previous policy. This leads to more stable and generalizable model optimization for new environments (tasks or datasets). Table 2: CodeBLEU and Comp Rate results of PPOCoder and the baselines (Naive Copy, CodeBERT, PLBART, CodeT5) on the code translation task across all six PLs; column and row headers represent the translation source and target PLs, respectively. ### Ablation Study To investigate the effect of different components of PPOCoder, we conduct ablation experiments with several variants of our model, including different reward terms, RL objective terms, action space sizes, and numbers of synthetic samples. We take the Java-Python translation as a case study and present the results in Fig. 3. Please check Appendix A.3 for more ablation experiments with other target PLs. **Reward Elements.** Fig. 3(a) shows the effect of including different reward terms on the performance of PPOCoder. Models tested include CodeT5 without RL training, and with RL training utilizing different combinations of reward terms: compiler feedback, _kl_ (KL-divergence penalty), _dfg_ (semantic matching score from DFGs), and _ast_ (syntactic matching score from ASTs). Results show that the discrete compiler feedback alone is insufficient; however, integrating it with the KL-divergence penalty as well as the syntactic/semantic matching scores boosts the compilation rate. The best performance is achieved by utilizing all four reward terms. **Loss Elements.** Fig. 3(b) presents the results of PPOCoder with different objective configurations. We observe that the policy gradient objective alone (_+PG_), i.e., the REINFORCE algorithm, can boost the performance of the CodeT5 model. The compilation rate further improves by introducing the value function as a critic (_+PG+VF_), i.e., the A2C algorithm. Results show that the best performance is achieved by utilizing proximal conservative policy iteration with value optimization (_+CPI+VF_), indicating that the PPO algorithm performs better than the others on code generation. **Action Space Size.** We examine the effect of the action space size on PPOCoder's performance by adjusting the \(k\) parameter in the top-\(k\) policy synthetic sampling. Fig. 3(c) shows that when \(k=1\), PPOCoder may not be able to explore enough to find better policy updates. On the other hand, when \(k\) gets too large, PPOCoder may become overwhelmed by many different possible actions and struggle to learn the optimal policy, leading to degraded performance. Therefore, the results reveal that a small value of \(k\) (\(k=1\)) may not provide sufficient exploration, while a large value (\(k=50265\), the vocabulary size) can hinder the learning of the optimal policy. In the code generation experiments, we usually use an action space size of \(5\), which provides a good balance for optimal exploration in most cases. **No. of Synthetic Samples.** The effect of the synthetic policy sample size on PPOCoder's performance is examined by modifying \(num\_samples\) in Alg. 1. Fig. 3(d) shows that an increase in \(num\_samples\) from \(1\) to \(10\) improves performance, but further increases lead to a decline in performance. This suggests that while additional synthetic samples can enhance the ability to identify underlying patterns, a large number of synthetic samples may not be representative of the general population and can negatively impact performance by causing confusion in model updates. ### Case Study Fig. 4 shows an example of Java to C++ translation for both CodeT5 and PPOCoder +CodeT5. Similar to the previous case, it can be observed that the compilation is improved by PPOCoder. For this example, CodeT5's translation has these issues: (1) CodeT5 generates a non-standard data type called subset which takes in a pair of integers. 
The use of the non-standard data-type without importing it or defining it causes a compilation error, while PPOCoder +CodeT5 generates the \begin{table} \begin{tabular}{l c c} \hline \hline Model & Size & State & _pass@80_ \\ \hline GPT & 224M & fine-tuned & 7.2 \\ GPT & 422M & fine-tuned & 12.6 \\ GPT & 1B & fine-tuned & 22.4 \\ GPT & 4B & fine-tuned & 33.0 \\ GPT & 8B & fine-tuned & 40.6 \\ GPT & 68B & fine-tuned & 53.6 \\ GPT & 137B & fine-tuned & 61.4 \\ CodeT5 & 60M & fine-tuned & 19.2 \\ CodeT5 & 220M & fine-tuned & 24.0 \\ CodeT5 & 770M & fine-tuned & 32.4 \\ \hline CodeRL+CodeT5 & 770M & zero-shot & 63.0 \\ PPOCoder +CodeT5 & 770M & zero-shot & **68.2** \\ \hline \hline \end{tabular} \end{table} Table 4: Results of the zero-shot transferability on MBPP. Both zero-shot models are finetuned on APPS and evaluated on MBPP in the zero-shot setting. Figure 3: Ablation experiment results on Java-Python translation with different configurations of (a) reward, (b) loss, (c) action space size, and (d) number of synthetic samples. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{_pass@1_} & \multicolumn{4}{c}{_pass@5_} & \multicolumn{4}{c}{_pass@1000_} \\ \cline{2-13} Model & Size & Intro & Inter & Comp & All & Intro & Inter & Comp & All & Intro & Inter & Comp & All \\ \hline Codex & 12B & 4.14 & 0.14 & 0.02 & 0.92 & 9.65 & 0.51 & 0.09 & 2.25 & 25.02 & 3.70 & 3.23 & 7.87 \\ AlphaCode & 1B & – & – & – & – & – & – & – & 17.67 & 5.24 & 7.06 & 8.09 \\ GPT-3 & 175B & 0.20 & 0.03 & 0.00 & 0.06 & – & – & – & – & – & – & – \\ GPT-2 & 0.1B & 1.00 & 0.33 & 0.00 & 0.40 & 2.70 & 0.73 & 0.00 & 1.02 & – & – & – & – \\ GPT-2 & 1.5B & 1.30 & 0.70 & 0.00 & 0.68 & 3.60 & 1.03 & 0.00 & 1.34 & 25.00 & 9.27 & 8.80 & 12.32 \\ GPT-Neo & 2.7B & 3.90 & 0.57 & 0.00 & 1.12 & 5.50 & 0.80 & 0.00 & 1.58 & 27.90 & 9.83 & 11.40 & 13.76 \\ CodeT5 & 60M & 1.40 & 0.67 & 0.00 & 0.68 & 2.60 & 0.87 & 0.10 & 1.06 & – & – & – & – \\ CodeT5 & 220M & 2.50 & 0.73 & 0.00 & 0.94 & 3.30 & 1.10 & 0.10 & 1.34 & – & – & – & – \\ CodeT5 & 770M & 3.60 & 0.90 & 0.20 & 1.30 & 4.30 & 1.37 & 0.20 & 1.72 & – & – & – & – \\ CodeRL+CodeT5 & 770M & 4.90 & **1.06** & **0.5** & 1.71 & 8.60 & **2.64** & 1.0 & 3.51 & **36.10** & 12.65 & 13.48 & 17.50 \\ PPOCoder +CodeT5 & 770M & **5.20** & 1.00 & **0.5** & **1.74** & **9.10** & 2.50 & **1.20** & **3.56** & 35.20 & **13.10** & **13.60** & **17.62** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of the program synthesis task on the APPS dataset.
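Tables 3 and 4 report _pass@k_; following the description above (a problem counts as solved if any of the \(k\) generated programs passes all of its unit tests), a literal sketch of the metric looks like the following, with `generate` and `passes_all_tests` as hypothetical placeholders for the model's sampler and the unit-test harness.

```python
from typing import Callable, List

def pass_at_k(problems: List[dict],
              generate: Callable[[dict, int], List[str]],
              passes_all_tests: Callable[[dict, str], bool],
              k: int = 5) -> float:
    """Fraction of problems solved by at least one of k sampled programs."""
    solved = 0
    for problem in problems:
        candidates = generate(problem, k)          # k synthetic programs
        if any(passes_all_tests(problem, prog) for prog in candidates):
            solved += 1
    return solved / max(len(problems), 1)
```

With large \(k\) (e.g., the _pass@1000_ columns of Table 3) the generation step would be batched, but the metric itself is unchanged.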
2309.04590
Robotic Defect Inspection with Visual and Tactile Perception for Large-scale Components
In manufacturing processes, surface inspection is a key requirement for quality assessment and damage localization. Due to this, automated surface anomaly detection has become a promising area of research in various industrial inspection systems. A particular challenge in industries with large-scale components, like aircraft and heavy machinery, is inspecting large parts with very small defect dimensions. Moreover, these parts can be of curved shapes. To address this challenge, we present a 2-stage multi-modal inspection pipeline with visual and tactile sensing. Our approach combines the best of both visual and tactile sensing by identifying and localizing defects using a global view (vision) and using the localized area for tactile scanning for identifying remaining defects. To benchmark our approach, we propose a novel real-world dataset with multiple metallic defect types per image, collected in the production environments on real aerospace manufacturing parts, as well as online robot experiments in two environments. Our approach is able to identify 85% defects using Stage I and identify 100% defects after Stage II. The dataset is publicly available at https://zenodo.org/record/8327713
Arpit Agarwal, Abhiroop Ajith, Chengtao Wen, Veniamin Stryzheus, Brian Miller, Matthew Chen, Micah K. Johnson, Jose Luis Susa Rincon, Justinian Rosca, Wenzhen Yuan
2023-09-08T20:36:56Z
http://arxiv.org/abs/2309.04590v1
# Robotic Defect Inspection with Visual and Tactile Perception for Large-scale Components ###### Abstract In manufacturing processes, surface inspection is a key requirement for quality assessment and damage localization. Due to this, automated surface anomaly detection has become a promising area of research in various industrial inspection systems. A particular challenge in industries with large-scale components, like aircraft and heavy machinery, is inspecting large parts with very small defect dimensions. Moreover, these parts can be of curved shapes. To address this challenge, we present a 2-stage multi-modal inspection pipeline with visual and tactile sensing. Our approach combines the best of both visual and tactile sensing by identifying and localizing defects using a global view (vision) and using the localized area for tactile scanning for identifying remaining defects. To benchmark our approach, we propose a novel real-world dataset with multiple metallic defect types per image, collected in the production environments on real aerospace manufacturing parts, as well as online robot experiments in two environments. Our approach is able to identify 85% defects using Stage I and identify 100% defects after Stage II. The dataset is publicly available at [https://zenodo.org/record/8327713](https://zenodo.org/record/8327713). ## I Introduction Various large-scale manufacturing machinery and industries with large metal parts like aircraft components, experience various internal and external factors such as vibration, foreign objects debris, high temperature, friction, and corrosion. This can lead to fatigue or even part failure. Hence, to ensure safe operation, each industry requires surface inspection. For example, in the aircraft industry airplanes are inspected every 100 hours[1], according to Federal Aviation Administration(FAA) rules. The periodic inspection could extend the lifetime of the parts. However, human visual and touch inspection still accounts for more than 90% of inspection checks[2]. There is a significant interest in automating the surface defect detection process, as it allows for fast, repeatable, and cost-effective detection, as compared to the human expert inspection process. Surface defect detection on industrial parts is a fast-growing market [3]. Nowadays, more and more inspection systems use vision-based techniques combined with Deep Learning for defect detection[4][5]. However, aerospace and spacecraft industries have different inspection requirements - they have large metal parts which need to be scanned and the dimensions of the defects can be as small as 0.01mm. Instead of relying on a vision-only system, we propose a visuotactile 2-stage pipeline for surface defect detection. Our method combines the advantages of both vision and tactile sensing and avoids their limitations: vision has high prediction speed and can cover large surface area, but typically attains low accuracy since the visual appearance of defects can be influenced by many sources of noise; contrarily, high-resolution tactile sensing, give high accuracy but has low speed because of the small coverage area in a single scan. The first stage of our pipeline uses an RGB camera to collect an image of a segment of the specimen and uses deep learning to identify potential defect regions. The regions with low defect confidence are passed onto the second stage of the pipeline which leverages a high-resolution vision-based tactile sensor, the GelSight Mobile, for taking a tactile scan. 
This tactile data is used to identify and classify the surface defect. This approach allows the scanning of large surfaces for small anomalies efficiently. We implemented the whole system on a robot arm, to allow for inspection in a production environment. Using our method, we are able to identify defects 100% of the time in a fraction of the time as compared to the tactile-only approach and more accurately than the vision-only approach. We make 3 specific contributions in this work * We introduce the first aerospace defect detection dataset containing metallic surfaces with multiple defects in a single image * We propose a 2-stage defect detection approach using visuotactile sensing * We integrate our detection approach into a prototype system on an industrial robot arm We introduce the dataset and dataset collection details in Section (III), the visuotactile detection approach in Section (IV), and the integrated robot system for runtime defect detection in Section (V). Using our approach, we are able to achieve perfect recall in 70x less inspection as compared to the tactile-only approach. We successfully integrate our detection system in 2 separate environments(different arms, different illumination conditions, and different panels). The proposed techniques are widely applicable to various industries with large-scale components like ship hull inspection and heavy machinery. ## II Related Work This section surveys works that present novel defect detection techniques as well as works that propose datasets with industrial defects. _Defect detection methods_: This section covers various surface inspection techniques using different sensing techniques. In [5], authors used a depth camera to create a 3D reconstruction of the part under inspection, computer vision techniques for segmenting cracks, and machine learning for classifying them into defect vs non-defect patterns. In the aerospace industry, the most common type of part is metallic and very reflective. As noted in [6], commercial depth sensors exhibit poor accuracy when sensing metallic parts. In [7], authors train a custom deep CNN for automatic crack detection on concrete surfaces. Their approach gives a 0.87 F-measure value on the CrackLS315 dataset. In [8], authors similarly used a CNN and a vision-based tactile sensor for crack profile estimation. However, it is unclear how to extend the approach to images containing multiple kinds of defects that are not scratches. [9] is the closest to our work. They propose a 2-stage visuotactile pipeline targeted only to crack detection. They used 3500 images to train an object detector and used an optical fiber-based tactile-proximity sensor for assessing cracks. However, their method is tested on a toy dataset using 3D-printed parts containing cracks in a lab setting. Their dataset contains a single large crack across the image on a non-metallic surface. We have integrated our detection pipeline in a production setting and show results on real aerospace parts. Moreover, we require an order of magnitude less data than their work. _Metal defect datasets_: In this section, we cover datasets that target defect detection in industrial parts and manufacturing processes. MVTec dataset [10] introduced a challenging dataset for unsupervised anomaly detection. The dataset contains RGB images of various small to medium manufactured parts like carpets, leather, metal nut, cable, hazelnut, etc. However, each image contains only a single type of anomaly. 
In comparison, our dataset contains multiple defects in a single image, and each defect can be very small (less than 2% of the pixels in the image). The magnetic tile dataset [11] contains 1344 grayscale images along with a segmentation mask for each image. The dataset is targeted towards industrial parts (flat metallic sheets), which are challenging to image, similar to our case. However, the parts considered in the dataset are flat and have consistent illumination across the tile plane. This illumination setting is hard to replicate for aerospace parts, which can be curved and have a significant variation in color across the metallic part. ## III Boeing-CMU multi-defect dataset We introduce a novel dataset of surface defect detection for aerospace metal parts. This dataset is used to test our defect detection algorithm in an offline setting. Our dataset contains 184 RGB images with bounding box annotations in Pascal VOC format [12] for each image. Each RGB image contains multiple defects. The defects are manually made on the parts by experts from Boeing with a process similar to the real defects in production, and they are more challenging inspection cases since the defect density is higher than in real parts in production. Each bounding box contains the location and class of the defect. This dataset contains 3 kinds of defects - _scratches_, _drill runs_, and _gouges_. Figure 2 illustrates the defects by showing their RGB images, GelSight tactile images, and depth profiles along the defects, respectively. The standard definition of the defects is given in terms of the depth and width of the surface geometry, as marked in the _Heightfield_ in Figure 2. Table I shows the breakdown of the number of defects in our dataset. The dataset was collected at the Boeing lab with an Intel RealSense D455 camera at a resolution of 1280 \(\times\) 800. The full setup is shown in Figure 3. We placed soft boxes (bulbs with a diffuser cloth in front) at an angle of 45\({}^{\circ}\) along the vertical axis on either side of the camera. This illumination setting allows us to capture images of metallic curved panels without over-saturation or under-exposure in any part of the image. For the dataset, we used 18 curved metal (approximate radius of curvature 26.5 inch) panels - 2 panels of dimension 40 inch \(\times\) 40 inch, 15 panels of dimension 56 inch \(\times\) 38 inch, and 1 panel of dimension 94 inch \(\times\) 20 inch. We collected 9 images at different locations per panel to cover the whole panel. Each panel is a piece of an aircraft with fasteners, a support structure underneath, and a green temporary protective coating. All the images were manually labeled by Boeing personnel using LabelImg, a graphical image annotation tool. Figure 1 shows some illustrative images in the dataset. One noticeable feature is the presence of significant variation in the surface color. This is due to the surface being curved and metallic in appearance. Footnote 1: [https://github.com/heartexlabs/labelImg](https://github.com/heartexlabs/labelImg) **Tactile dataset**: We collected tactile data using a GelSight Mobile 0.5X [13], a high-resolution vision-based tactile sensor with 6.9\(\mu\)m resolution in the x-y direction and 4\(\mu\)m in the z-direction. We manually pressed the sensor on the probable defect location. We collected 59 scans from 1 Boeing panel, containing 17 scratches, 14 gouges, and 18 drill runs. We also collected 10 no-defect cases. Each tactile scan is manually labeled with a class label. 
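Because the bounding boxes are stored in standard Pascal VOC XML files, they can be read with the Python standard library alone; the file name and class strings below follow the usual VOC schema and are illustrative only, not code shipped with the dataset.

```python
import xml.etree.ElementTree as ET

def load_voc_annotation(xml_path):
    """Return a list of (class_name, xmin, ymin, xmax, ymax) boxes."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.findtext("name")  # e.g. "scratch", "gouge", "drill_run" (assumed labels)
        bb = obj.find("bndbox")
        boxes.append((
            name,
            int(float(bb.findtext("xmin"))),
            int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))),
            int(float(bb.findtext("ymax"))),
        ))
    return boxes

# Example with a hypothetical annotation file from the released dataset:
# for cls, x0, y0, x1, y1 in load_voc_annotation("panel_01_view_3.xml"):
#     print(cls, (x1 - x0) * (y1 - y0), "pixels")
```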
## IV Multi-modal defect detection method and setup Figure 4 shows the proposed pipeline for surface defect detection and classification based on visual and tactile sensing. Our 2-stage pipeline uses RGB images for identifying defect regions with a confidence value. We delegate bounding boxes with low confidence scores to the second stage and use high-resolution tactile images for identifying the defect. In the following section, we provide details about each stage. ### _Stage I: Vision-based defect detection_ The first stage uses an RGB camera to scan the surface and predict defects. We used a Faster Region-based Convolutional Neural Network(Faster R-CNN)[14] with MobileNet-v3 FPN backbone [15]. The neural network architecture was chosen based on empirical observation. The model was pretrained on Common Objects in Context (COCO) dataset [16]. We fine-tune the last 3 backbone layers, regression, and classification models after feature prediction. Note, the model can be used with images of any size without resizing, at both train and test time, as it is fully convolutional. The neural network model predicts multiple bounding boxes per image. Each bounding box contains the coordinates of the rectangle region in the camera coordinate frame, defect class, and confidence score for that class. At test time, we predict bounding boxes with a confidence score higher than 0.7 as surface defects with certainty and shortlist those with scores between 0.1 and 0.7 to delegate to the next stage of the pipeline. These threshold choices provide a good trade-off between detection in stage I and proposing candidates with minimal false positives for stage II. While training, we use 3 data augmentation techniques - photometric (varying brightness, contrast, hue, and saturation), CutAndPaste [17], and translation. These augmentation techniques make our model robust to illumination changes and the presence of distracting features (like bolts and big cracks) at runtime when the inspection parts could be placed in a totally different environment and could be of different shapes. Figure 5 illustrates the augmentation techniques applied individually. At the training time, we apply all of them at the same time. The photometric data augmentation is specifically helpful to make the model robust to lighting variation which might occur in the production environment. ### _Stage II: Tactile-based defect detection_ We use GelSight Mobile [13] from GelSight Inc. for obtaining high-resolution tactile information. The tactile sensor provides a high-quality heightfield as shown in Figure 2 GelSight Image. Due to the high-quality heightfield, we can directly inspect anomalous regions and use the defect description to identify them. For figuring out the anomalous regions on heightfield, we use the canny edge detector without non-maximal suppression, followed by the Probabilistic Hough line for scratches & drill run and Hough Circle detection for gouges, respectively. We hand-tuned the parameters of canny edge detector and feature detection algorithms. This step is required to identify potential regions containing defects. After figuring out the anomalous region, we extract the depth profile by generating a line segment passing perpendicular to the scratch & drill run or passing through the center of the gouge, as shown in Figure 6C. After obtaining the depth profile, we detrend the depth by presuming the depth in the Fig. 1: **Dataset Illustration**: It contains RGB images of aircraft parts from Boeing. 
Each panel is curved with 3 sizes 40in \(\times\) 40in, 56in \(\times\) 18in, and 94in \(\times\) 20in. For each image, we have bounding box annotations made by industry inspectors. Fig. 3: **Dataset capture setup**: Left image contains - (1) Position for metal panel placement; (2) Newer 24 in \(\times\) 24 in soft boxes lights with 700 Watt, 5500K CFL Light Bulbs; (3) RealSense D435 camera. On the right, we show the real setup used to collect images for our dataset. Fig. 2: **Dataset defect description**: The top image shows an RGB image and 3 types of defect. The bottom 3 rows show(left-to-right) zoomed-in RGB image, heightfield of the anomalous region, and detrended depth profile. neighborhood of the defect is zero-level. The detrending is crucial to correctly identify the depth of the defect and use the defect definitions for identification. We use the depth and width defect descriptions, as mentioned in Table I, for identifying the defect in the extracted profile, as shown in Figure 6. For drill run detection, we require the number of minima peaks with depth \(>10\mu m\) to be greater than 3. This heuristic is motivated by the fact that the drill run forms a repeated pattern of bumps in the specimen. ## V Robot system integration We integrated our defect detection pipeline with a robot system that is very similar to a system that can be applied for online detection in factories, as shown in Figure 7. The robot system consists of a UR3 robot arm, a RealSense 435F RGBD camera mounted at the robot end-effector, a GelSight Mobile 0.5x mounted using a custom-designed mount at the robot end-effector and a Neewer \(24\)in \(\times\)\(24\)in a softbox. Note, the depth information is not used for defect detection purposes. The robot planner and defect prediction algorithms run on a computer with Intel i7-10850H CPU @ 2.7 GHz, 6 Cores with NVIDIA Quadro T200 GPU, and Windows 10 operating system. The GelSight tactile sensor mount is specifically designed in order to allow Fig. 4: **Detection Overview**: Our approach consists of 2 stages A) Vision stage uses Deep Learning based bounding box detector for identifying defects in the RGB image from the global view. B) Based on the confidence threshold we identify defects or send them to stage 2. C) Tactile stage uses the high-resolution heightfield extracted from GelSight and inspects the depth profile of anomalous regions to identify the type of defect. Fig. 5: **Data augmentation strategies**: This visual illustrates the original image and images after a single augmentation applied to the original image. We found that these augmentations make our detection robust to illumination changes, translation variations, and clutter(bolts). Fig. 6: **Tactile detection pipeline**: The outline of our tactile sensor-based detection system A) Raw data capture by GelSight Mobile B) Output of Canny edge detection on heightfield image C) Automated anomalous profile selection D) Depth profile along the anomalous profile with width and depth annotations. Fig. 7: **Runtime System**: The robot system contains (A) UR3 robot arm (B) RealSense RGBD 435F camera (C)Neewer Illumination source (D) Custom tactile sensor mount E) GelSight Mobile 0.5x (F) Specim under inspection. Our algorithm is run on a PC not shown in the figure. compliance when indenting the metal specimen. Figure 8B shows the CAD drawing of the sensor mount. The camera to robot calibration is done using MoveIt hand-eye calibration. 
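Returning to the Stage II heuristics of Section IV-B (Fig. 6), the depth-profile analysis can be sketched with standard SciPy calls; the only threshold quoted in the text is the drill-run rule (more than three minima deeper than 10 μm), so the width-based scratch/gouge split below is a placeholder for the Table I definitions, and all function names are our own.

```python
import numpy as np
from scipy.signal import detrend, find_peaks

def classify_profile(depth_um, pixel_um=6.9, min_depth_um=10.0, max_scratch_width_um=None):
    """Classify a 1-D depth profile (micrometers) extracted across an anomaly.

    depth_um             : heightfield values sampled along the anomalous segment.
    pixel_um             : lateral resolution of the GelSight heightfield.
    min_depth_um         : depth below the local zero level that counts as a defect.
    max_scratch_width_um : placeholder width limit standing in for Table I.
    """
    # Detrend so the surrounding surface sits near zero level (linear detrend
    # as an approximation of the neighborhood-zeroing step in the paper).
    profile = detrend(np.asarray(depth_um, dtype=float))
    # Minima of the profile are peaks of its negation deeper than the threshold.
    minima, _ = find_peaks(-profile, height=min_depth_um)
    if len(minima) == 0:
        return "no_defect"
    if len(minima) > 3:
        return "drill_run"  # repeated bump pattern, as described in the text
    width = len(profile[profile < -min_depth_um]) * pixel_um
    if max_scratch_width_um is not None and width > max_scratch_width_um:
        return "gouge"      # placeholder rule: wide, deep indentation
    return "scratch"        # placeholder rule: narrow indentation
```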
GelSight to end-effector transform is manually computed based on manufactured gripper mount. In the first stage, the robot arm collects RGB images, using the algorithm defined in Section V-A and feeds them to phase I of the defect detection system described in Section IV-A. Phase I outputs defect regions and uncertain regions. Then, the robot control uses an algorithm mentioned in Section V-B, to collect the tactile image of each uncertain region. This tactile image is, then passed to phase II, tactile detection described in Section IV-B, for processing. ### _RGB Data Collection with the Robot_ In this section, we will describe the robot control technique which is used for capturing RGB images for surface defect detection. In our current testing setup, the capture locations are pre-defined manually in the robot's task-space coordinates (3D Cartesian locations). We request that the robot collect RGB images at multiple locations to ensure the entire surface of the panel is covered. In our initial experiment, those locations are manually chosen based on the fixed position of the parts. The robot calculates the joint angle configuration for a task-space location using inverse kinematics [18]. The robot then generates joint angle trajectories toward the target joint locations using linear interpolation. We leverage the robot simulation to check for collisions and singularity. After which, the trajectory is forwarded to the robot's controller. ### _Tactile Data Collection with the Robot_ In this section, we will describe the robot control strategy used to obtain tactile images using the GelSight sensor. To capture a focused tactile image, the robot needs to make the GelSight Mobile indent the surface in the perpendicular direction at the defect location. Therefore, to achieve normal indentation, we estimate coarse normal direction by obtaining a coarse depth measurement from the RGBD camera and fitting a polynomial function in \((x,y,z)\) to the specimen surface. Given the fitted surface function, we obtain the coarse surface normal at the target data capture location by differentiating the polynomial function w.r.t. \(x\) and \(y\), followed by a cross-product. We, then, use inverse kinematics and interpolation, as mentioned in the previous section, to move closer to the object. After that, we use tactile servoing until we obtain a focused tactile scan. We use background subtraction thresholding to estimate if the tactile scan is in focus. ## VI Experiments To evaluate our proposed pipeline for defect detection, we perform analysis of each stage - vision only in Section VI-A and tactile only in Section VI-B. We, then, perform an analysis of our two-stage inspection system integrated with a robot in Section VI-C. For our on-site robot experiments, we record the detection runtime and the accuracy of defect detection. ### _Offline Vision-based surface defect detection_ We first evaluate the performance of our vision-based algorithm for defect detection using the offline dataset introduced in Section III. We fine-tuned the Neural Network using 150 training images of resolution 1280\(\times\)800. We investigate the effect of using data augmentation techniques for defect detection by comparing the performance of the trained model with various augmentations. Each model was trained on 150 images for 100 iterations using SGD with a learning rate of 0.005 and weight decay of 5e-4 in PyTorch. During testing, we only consider bounding boxes that have a high confidence score (\(0.5\) in our experiments). 
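The offline evaluation that follows matches predicted and ground-truth boxes with an Intersection over Union (IoU) threshold; a minimal sketch of that matching criterion for axis-aligned VOC-style boxes is given below (it ignores class labels and the maximum-detections cap, and the function names are our own).

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def recall_at_iou(gt_boxes, pred_boxes, thresh=0.4):
    """Fraction of ground-truth boxes matched by at least one prediction."""
    matched = sum(1 for g in gt_boxes if any(iou(g, p) >= thresh for p in pred_boxes))
    return matched / max(len(gt_boxes), 1)
```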
For calculating the recall, we used _maximum detections_ allowed per image to be 100. This parameter intuitively means the bounding box predictions allowed in each image. Table II shows the evaluation metrics using the trained Neural Network with and without augmentations. For all the metrics, we used Intersection over Union = 0.4 (metric for finding the overlap between bounding boxes) as the threshold for finding correspondence between the ground truth bounding box and the predicted bounding box. Figure 9 shows the test results. We found that the common misclassification cases are: (i) confusion between scratch and drill run(Figure 9 case A); (ii) regions that look like scratches but do not have depth(Figure 9 case B); (iii) very few visual features for classification(Figure 9 case C, D, E and F) These issues would be solved by our tactile stage, as it accounts for indentation depth and captures an orthographic view of the defect. In on-site robot experiments, we obtained images containing many challenging artifacts, as shown in Figure 10. Specifically, large bolt regions and bright light spots caused issues in the detection. Without augmentation, the probability of those areas being classified as a defect is high, as shown in Figure 10 left. However, with our augmentation techniques, the neural network is correctly able to identify those regions as normal regions. defect classification result. We obtain average classification accuracy of 95.75%. Note, the tactile-only approach allows to identify defects with 100% success rate if the class identification is not of concern. We notice some misclassification due to the high variability in the defects and dirt on the sensor surface in the tactile data collection. We showcase the misclassified cases in Figure 11. For the drill run cases, we found the depth profile is significantly different than the ideal profile according to the industrial partners and the misclassified cases have fewer drill features. Therefore, all the misclassifications are reasonable. ### _Online Robot system evaluation_ In this section, we run our integrated robotic detection system to inspect an aerospace part for potential defect regions. We capture multiple RGB images at different locations to cover the entire surface of the part. Then the tactile exploration procedure is performed on each RGB-image-covered area. We compare the performance of our system at runtime with vision-only and tactile-only approaches. We choose accuracy and runtime as the metric for comparison. Since tactile data capture (mean time = 22.26 seconds) takes 4x more time than visual data capture (mean time = 6.52 seconds). We use these to give an estimated time for all experiments instead of actual runtime. We use 1 panel for our robotic experiment containing 15 defects - 7 scratches, 7 gouges, and 1 Drill Run. We use 2 RGB images to cover the panel used in our experiment. Siemens engineer manually Fig. 11: **Tactile detection failures**: This visual shows the illustrative failure cases in our tactile dataset with ground truth and predicted defect labels. We found 2 _Drill Run_ cases misclassified because the number of repeated features was very few. Fig. 12: **Tactile confusion matrix**: We plotted the predicted label using our tactile detection algorithm on the x-axis and true labels on the y-axis. This visual highlights that our tactile detection algorithm can classify defects very well. Fig. 
10: **Comparison of RGB-based defect detection with/without data augmentation at robot experiment time**: In this figure, the ground truth boxes are marked with solid lines, and predicted areas are marked with dashed lines. The colors of the bounding boxes represent _drill run_, _gouge_, and _scratch_ in red, green, and blue, respectively. The left side shows the model performance without data augmentation on 2 test images. It identifies large bolt regions as scratch defects and empty bolt regions as gouges, which is incorrect. The model trained with data augmentation is able to correctly identify those regions as background, as shown on the right, and obtains a 94.58% recall rate without defect classification as compared to 63.56% without augmentations. Fig. 9: **RGB-only detection results in offline dataset**: We highlight the predictions of our algorithm on reserved images in our offline dataset. In the bottom row, we highlight the failure cases in detection. The common causes of failure are insufficient visual features (drill run looking like a scratch in (A)) and no depth information at the defect location (B is a paint bump instead of a scratch in the surface; the depth profile between the paint bump and a scratch is significantly different). labeled the test data for this experiment. Table III compares the baselines with our approach quantitatively for a new aerospace panel at Boeing's facility. Our approach achieves a perfect recall rate (@IoU=0.4 and _max detections_=100) of 1.0, which is 26.5% higher than the vision-only method, and takes 0.01x the runtime of the tactile-only approach. The defect detection system has been integrated with multiple robotic systems at 2 different locations - the Siemens research lab and the Boeing production labs. These environments had 2 different robotic systems - a UR3 in the Siemens lab and a UR10 in the Boeing labs. These environments had different illumination settings and panels with different curvatures for testing. This highlights that our detection is easy to adapt to various environments. ## VII Conclusion This work introduces a robotic aerospace defect dataset and a 2-stage pipeline for defect detection on large-scale parts. Stage I uses an RGB camera to identify defect areas with a preliminary estimation; in the following stage, the robot uses a high-resolution tactile sensor, the GelSight Mobile, for precise inspection of the potential defect area. Our approach is shown to be beneficial in terms of accuracy (perfect recall) and speed of inspection (70x faster than the tactile-only approach). We were also able to successfully integrate the detection system in 2 different environments, containing different robot arms, different illumination, and different metal panels. A comprehensive evaluation in a production environment is out of the scope of this research work. We did not have the capacity to test the robustness of the pipeline after repeated use. Touch sensor measurements become less accurate over time due to repeated interaction. Therefore, accuracy evaluations of the pipeline at repeated intervals may help the system to become robust. Transfer learning under significant changes in illumination or inspection material is another avenue of research. Using multiple viewpoints in a single detection might be an interesting research direction to improve the accuracy of the vision stage. Another interesting extension would be to incorporate human feedback for the online update of the prediction model. 
## VIII Acknowledgment The research is partially sponsored by Advanced Robotics for Manufacturing Institute by the Office of the Secretary of Defense and was accomplished under Agreement Number W911NF-17-3-0004. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Office of the Secretary of Defense or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
2309.05842
Fairness- and uncertainty-aware data generation for data-driven design
The design dataset is the backbone of data-driven design. Ideally, the dataset should be fairly distributed in both shape and property spaces to efficiently explore the underlying relationship. However, the classical experimental design focuses on shape diversity and thus yields biased exploration in the property space. Recently developed methods either conduct subset selection from a large dataset or employ assumptions with severe limitations. In this paper, fairness- and uncertainty-aware data generation (FairGen) is proposed to actively detect and generate missing properties starting from a small dataset. At each iteration, its coverage module computes the data coverage to guide the selection of the target properties. The uncertainty module ensures that the generative model can make certain and thus accurate shape predictions. Integrating the two modules, Bayesian optimization determines the target properties, which are thereafter fed into the generative model to predict the associated shapes. The new designs, whose properties are analyzed by simulation, are added to the design dataset. An S-slot design dataset case study was implemented to demonstrate the efficiency of FairGen in auxetic structural design. Compared with grid and randomized sampling, FairGen increased the coverage score at twice the speed and significantly expanded the sampled region in the property space. As a result, the generative models trained with FairGen-generated datasets showed consistent and significant reductions in mean absolute errors.
Jiarui Xie, Chonghui Zhang, Lijun Sun, Yaoyao Zhao
2023-09-11T21:54:49Z
http://arxiv.org/abs/2309.05842v1
# Fairness- and Uncertainty-Aware Data Generation for Data-Driven Design ###### Abstract _The design dataset is the backbone of data-driven design. Ideally, the dataset should be fairly distributed in both shape and property spaces to efficiently explore the underlying relationship. However, the classical experimental design focuses on shape diversity and thus yields biased exploration in the property space. Recently developed methods either conduct subset selection from a large dataset or employ assumptions with severe limitations. In this paper, fairness- and uncertainty-aware data generation (FairGen) is proposed to actively detect and generate missing properties starting from a small dataset. At each iteration, its coverage module computes the data coverage to guide the selection of the target properties. The uncertainty module ensures that the generative model can make certain and thus accurate shape predictions. Integrating the two modules, Bayesian optimization determines the target properties, which are thereafter fed into the generative model to predict the associated shapes. The new designs, whose properties are analyzed by simulation, are added to the design dataset. An S-slot design dataset case study was implemented to demonstrate the efficiency of FairGen in auxetic structural design. Compared with grid and randomized sampling, FairGen increased the coverage score at twice the speed and significantly expanded the sampled region in the property space. As a result, the generative models trained with FairGen-generated datasets showed consistent and significant reductions in mean absolute errors._ Keywords: machine learning; data-driven design; fairness and diversity; uncertainty; data generation; adaptive sampling. ## 1 Introduction Design space exploration (DSE) searches through a wide range of design parameters and configurations for optimal engineering design solutions [1, 2]. With the advent of advanced machine learning (ML) algorithms, data-driven design methods have emerged and allowed rapid, accurate and cost-efficient design generation and DSE [3]. In mechanical design, various data-driven design pipelines and databases have been constructed to aid design tasks such as metamaterial and structural design [4, 5, 6]. Conventional data-driven design (Figure 1) starts with the parameterization of target designs, followed by design of experiments (DOE) techniques that sample from the design space such as geometric space [4]. Recently, non-parametric representations such as topology optimization have been implemented in data-driven and generative design [7, 8]. With design representations and experimental plans established, designs can be generated in computer-aided design environments. Thereafter, the mechanical and physical properties of the designs can be analyzed using simulation or real-world experiments. After the data are acquired from the experiments, the relationship between the design space and the property space can be modeled using ML. There are typically two modeling tasks: design performance prediction and generative models. Performance prediction models predict the properties of a design given the design parameters. They are frequently used as surrogate models to replace computationally heavy simulations and speed up design optimization [9]. Generative models, characterizing the inverse relationship, generate designs with respect to specified properties or constraints [8]. 
It is more difficult to learn such one-to-many relationships that one input could correspond to multiple outputs [10]. Although such workflows have been commonly implemented and have been contributing to various design research discoveries, risks of representation bias stemming from data acquisition might cause fairness issues in the dataset and thus compromise the performance of ML models. Representation bias describes the phenomenon that some parts of the target population are underrepresented in the dataset [11]. In design datasets, the most salient representation bias resides in the property space, where samples are passively populated [12]. DOE conducted on the design space ensures the generation of diverse design geometries and configurations. Nonetheless, it results in skewed underlying distribution in the property space due to the nonlinear relationship between design shape and properties. Consequently, design datasets are commonly unbalanced in the property space with intensively searched regions, voids in the sampled regions, and unexplored regions [13]. Representation bias in the dataset will propagate to the ML models and eventually yields unsatisfactory designs. Unexplored regions imply missing knowledge in the dataset and contribute to inaccurate predictions of unexplored properties. Data imbalance may cause the ML model to focus on the intensively sampled property regions, while overlooking the underrepresented properties. Current methods to mitigate representation bias in design datasets mainly concentrate on data augmentation such as over-sampling and under-sampling. Over-sampling techniques increase the sample size by partially altering existing samples or generating new synthetic samples [14]. Down-sampling removes similar samples from the overrepresented groups [15]. However, the former might contribute to the overfitting of existing samples and the latter might remove samples with important information [16]. Chan et al. [13] proposed METASET to select an unbiased subset from a large metamaterial shape database. Determinantal point process (DPP) is utilized to model the diversity in both shape and property spaces, which are jointly considered to evaluate the subsets. The selected subset is highly diverse with a small sample size, offering better predictive performance and less training time. Lee et al. [12] proposed t-METASET that iteratively generates diverse unit cell shapes and acquires diverse properties from the existing samples. Its task-aware functionality guides property sampling toward the designated region. However, these methods only implement subset selection in the property space and thus cannot actively expand the sampled region. Wang et al. [6] designed a shape perturbation algorithm that gradually samples new properties toward the unexplored regions in the property space. It builds on the assumption that a small perturbation in the shape of a design will yield a small change in its properties. The rarely explored regions can be populated by slightly altering the shapes of the existing samples nearby. However, the assumption has serious limitations because small perturbations in different shapes can yield considerably different property shifts, which potentially makes the process uncontrollable and inefficient. To ensure fair and efficient property space exploration, there needs to be a more reliable method that detects the regions where the existing dataset has insufficient coverage and accurately generates designs to increase coverage. 
Accurate design generation requires this method to model the relationship between shapes and properties instead of relying on assumptions such as small perturbation. Generative models and reinforcement learning (RL) have recently been implemented to generate design geometries that can achieve desirable properties. Chen and Ahmed [3] presented performance augmented diverse generative adversarial network (GAN) that combines GAN loss with performance augmented DPP loss in the training process. Such a GAN model learns to synthesize the training design data while generating diverse shapes with desirable properties. Considering there are usually multiple target properties in design tasks, Chen and Ahmed [17] integrated performance augmented diverse GAN with multi-objective Bayesian optimization (BO). As demonstrated in the case studies, this pipeline can generate diverse shapes and facilitate the exploration of the full Pareto fronts in the property space. Nobari et al. [18] proposed performance conditional diverse GAN to enable the generation of designs with designated properties. Compared with performance augmented diverse GAN, this model is more flexible as the users can appoint desirable properties instead of maximizing or minimizing properties. Instead of GANs that directly generate designs with desirable properties, RL traverses the property space and moves toward the optimal properties iteratively. Jang et al. [4] trained an RL agent that iteratively generates diverse designs by rewarding the diversity of topology. Compared with conventional greedy search, this method can generate 5% more design shapes on average in the tire design case study. Agrawal and McComb [5] trained an RL-based design agent that explores the design space with varying model fidelity. This framework is computationally efficient because of its embedded mechanism to tradeoff between low- and high-fidelity models during DSE. The common limitation of the above generative models and RL pipelines is that a specific application must be defined before the initiation of DSE. The goals of these methods are to find the optimal or designated properties within the design space. It is straightforward to define the optimality of some properties such as tensile strength, whose optimality means its maximum. For properties such as elastic modulus (EM) and porosity, optimality is dependent on the use case. For instance, soft robotics would favor designs with relatively small EM, while the EM of human bone implants should be close to the EM of original human bones for improved bone integration. To prepare a general-purpose database for various applications, there needs to be a method that fairly explores the property space with no optimal properties specified. This method can explore and exploit the potential of a type of design to facilitate future DSE and design decision-making. Adaptive sampling is an efficient method to actively generate new data from insufficiently regions [19]. Typical adaptive sampling techniques select new samples according to the predictive performance or uncertainty of ML models. When determining the new samples using predictive performance, the feature space is usually segregated into subspaces. Based on the test set, the subspaces that exhibit the highest predictive error will be the regions of interest (ROI) for adaptive sampling. For Figure 1: Schematics of the procedures in data-driven design and the role of FairGen. instance, Zhang et al. 
[20] designed an adaptive sampling technique to iteratively improve the performance of surrogate models in design optimization. The test set is divided into subgroups using K-means clustering and KNN. The subgroup that possesses the highest total prediction error is the ROI. Thereafter, maximum curvature is used to select a set of points from the ROI to generate new samples. Adaptive sampling based on predictive performance has also been implemented for structural design [21], configuration design [22], electromagnetic design [23], and protective coating design [24]. Uncertainty metrics such as entropy of prediction probabilities are also widely deployed in adaptive sampling. Gaussian process regression models are trained as surrogate models in various design optimization works and can guide adaptive sampling because of their inherent uncertainty measurement functionality [19]. Xu et al. [25] and Liu et al. [26] implemented Gaussian process adaptive sampling for hall effect sensor design optimization and functionally graded cellular structure design optimization, respectively. Nonetheless, the existing adaptive sampling methods lack the ability to deal with inverse problems and one-to-many relationships in generative design. In this paper, the authors propose a fairness- and uncertainty-aware data generation (FairGen) pipeline that adaptively samples designs with missing properties (Figure 1). It adds an iterative process to the conventional design pipeline to fairly generate new samples. FairGen does not only exploit the voids within the sampled region, but also gradually expands the sampled region to explore the property space. The key contributions and features of this pipeline include: * Introducing a fairness metric to design data generation to quantify and visualize data coverage. * Constructing a novel pipeline and generative models to directly generate missing properties in the dataset. * Building deep ensemble to model the predictive uncertainties of the generative models and guide the generative process. * Proposing a pipeline to achieve adaptive sampling for data-driven design problems with inverse modeling and one-to-many relationships. * FairGen rapidly explores the property space to expand the sampled regions. * FairGen significantly improves the performance of inverse design models. The remainder of this paper is organized as follows. Section 2 introduces the methodology of FairGen, including the formulation of the coverage and uncertainty modules. Section 3 presents the setting and procedures of the S-slot auxetic design property space exploration case study. Section 4 discusses the results with respect to the coverage increase rate, property space sampled region expansion, and the impact on generative models. Section 5 highlights the remarks of this research. ## 2 Methodology This section illustrates the elements and procedures of FairGen. Section 2.1 demonstrates the FairGen pipeline. Section 2.2 discusses the coverage module with respect to data fairness, data coverage, and Voronoi diagram to construct the coverage map. Section 2.3 discusses the uncertainty module with respect to mixture density network (MDN) and deep ensemble method to capture the predictive uncertainty. Section 2.4 discusses BO integrating the coverage and uncertainty modules to find the target properties. ### FairGen pipeline Figure 2 visualizes the pipeline and modules of FairGen. 
This pipeline starts with an initial dataset (D\({}^{0}\)) sampled from the shape space (R\({}^{d}\)) that contains d geometric parameters. The p types of properties of the n designs from D\({}^{0}\) are analyzed using simulation, then populated in the property space, R\({}^{p}\). At each iteration, the goal is to find the empty regions in the property space and generate designs to supplement them. Thus, a data coverage module is built to indicate the uncovered regions. Due to the limitations of the existing knowledge, it is infeasible to accurately generate all missing properties at once. This becomes an optimization problem in which an optimal set of n\({}_{p}\) target property samples (D\({}^{p}\)) is searched. Thus, BO is implemented at every iteration to find a solution of D\({}^{p}\) that maximally increases the data coverage in the property space. The coverage module computes the covered area as the coverage score (S\({}_{C}\)) when D\({}^{p}\) is added to the existing dataset. After D\({}^{p}\) is solved by BO, the corresponding shape sets (D\({}^{S}\)) that can provide D\({}^{p}\) must be found. An MDN, a generative model, is trained using the existing dataset and predicts the shapes given D\({}^{p}\). However, a BO purely maximizing the coverage score will yield target properties that maximize the covered area and are thus far away from the existing samples. An MDN trained on the existing samples will generate inaccurate shape predictions that do not correspond to such target properties. This raises a conflict between the expansion of coverage and the predictive performance of the generative model. Therefore, an uncertainty module consisting of multiple MDNs is established to compute the predictive uncertainties regarding D\({}^{p}\). An uncertainty score S\({}_{U}\) characterizing the predictive uncertainties is added to the objective function as a trade-off with the coverage score. This way, BO is encouraged to find a D\({}^{p}\) that both efficiently increases the data coverage and ensures accurate shape prediction. The shapes predicted by the MDNs from the uncertainty module are analyzed in simulation to find the actual properties. The new shape-property set is added to the existing dataset, which forms a new dataset D\({}^{i}\), where i is the number of iterations. This pipeline can be executed iteratively until the desired S\({}_{C}\) is reached or the designated computational resource is exhausted. ### Coverage module The first task of measuring representation bias is to establish a metric. There are two common perspectives for modeling representation bias: fairness and diversity. Fairness describes the lack of bias and diversity describes the richness of variety [27]. Distance-based diversity metrics have been commonly implemented in the research domain of data-driven design [12, 13, 17, 18]. For example, Chan et al. [13] implemented DPP where Euclidean distance and Hausdorff distance were used to construct similarity kernels for 2-dimensional and 3-dimensional shapes, respectively. The authors argued that diversity metrics such as DPP are more flexible and easier to incorporate into ML pipelines. However, it is hard to use diversity metrics to quantify and visualize data coverage. The quantification and visualization of data coverage at different sample sizes and different D\({}^{p}\)'s help evaluate and guide the data generation process; thus, a data coverage module must be constructed with a suitable fairness metric. Asudeh et al.
[28] defined the notion of coverage of a point in a continuous-valued feature space. Given a dataset D, a query point q, a distance function \(\Delta\), a vicinity value \(\rho\), and a threshold value k, the coverage of q by D is defined: \[Cov_{\rho,k}(q,D)=\begin{cases}true&\text{if }|\{t\in D\mid\Delta(t,q)\leq\rho\}|\geq k\\ false&\text{otherwise}\end{cases} \tag{1}\] This definition essentially checks if the query point is in the vicinity, defined by \(\rho\) and \(\Delta\), of at least k data points from the dataset D. With user-defined \(\rho\) and k, the region covered by the dataset can be computed by: \[S_{C}(D)=\{q\mid Cov_{\rho,k}(q,D)=true\} \tag{2}\] In FairGen, the coverage of the property space is to be improved. The covered area is the coverage score of the coverage module to quantify coverage and evaluate the selection of target properties. BO will utilize the coverage score to find a set of target properties that optimally increases the data coverage. The definition of data coverage is clear and straightforward to understand and implement. The covered region can also be plotted for users to monitor coverage progress and data generation efficiency. However, the computational complexity increases rapidly with the magnitude of k, and the size and dimension of the dataset. A naive algorithm that enumerates through all \(n!/[k!(n-k)!]\) data point combinations and finds all mutually covered regions is computationally inefficient. The overlap among the covered regions requires additional and complex computation. Asudeh et al. [28] proposed using Voronoi diagrams to reduce the computational complexity when calculating data coverage [29, 30]. Given two samples, \(t_{i}\) and \(t_{j}\), from dataset D, any point on the line \(h(i,j)=\{q\mid\Delta(q,t_{i})=\Delta(q,t_{j})\}\) is equidistant to the two points. The half-space \(h^{+}(i,j)=\{q\mid\Delta(q,t_{i})\leq\Delta(q,t_{j})\}\) includes \(t_{i}\), and any point in this half-space is closer to \(t_{i}\). The polygon \(V(i)=\bigcap_{j\neq i}h^{+}(i,j)\) is the Voronoi cell of sample i, in which any point is closer to \(t_{i}\) than to any other sample in D. In this way, the aggregation of all Voronoi cells is the Voronoi diagram of the first order. Similarly, for a \(k^{th}\) order Voronoi diagram, the k-nearest neighbors of any point in a Voronoi cell V(S) belong to S, where \(S\subseteq D\) and \(|S|=k\). For an arbitrary value of k used in data coverage, a \(k^{th}\) order Voronoi diagram can be constructed. To find the covered area, an algorithm only needs to enumerate through the Voronoi cells, and only computes the concurrently covered area by the associated k samples in S. This method does not suffer from overlap as the feature space has been segregated into Voronoi cells. Figure 3 demonstrates the use of a Voronoi diagram to find the area covered by 1000 samples in the property space. Using the method proposed by Boots et al. [30], a \(k^{th}\) order Voronoi diagram can be constructed in a time complexity of \(O(k^{2}n\log n)\) in a 2-dimensional space. For each Voronoi cell, the region covered by the associated data point is solved. The aggregation of all the regions is equivalent to the region covered by the dataset. Therefore, the area of the covered region is computed as the \(S_{C}\) that reflects how well the property space is covered.
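The coverage test of Eq. (1) and the covered area behind the coverage score can be sketched as follows. This is a minimal illustration rather than the Voronoi-based computation used by FairGen: it assumes a 2-dimensional standardized property space and approximates the covered area on a regular grid of query points, using the case-study values \(\rho=0.08\) and k=1 as defaults.

```python
import numpy as np
from scipy.spatial import cKDTree

def is_covered(query, tree, rho=0.08, k=1):
    """Cov_{rho,k}(q, D) of Eq. (1): True if at least k samples of the
    dataset lie within distance rho of the query point q."""
    dists = np.atleast_1d(tree.query(query, k=k)[0])
    return bool(dists[k - 1] <= rho)

def coverage_score(data, bounds=(-1.0, 3.0), rho=0.08, k=1, grid=200):
    """Grid approximation of the covered area S_C (Eq. (2)) in a
    standardized 2-D property space (an alternative to the exact
    k-th order Voronoi computation described in the text)."""
    tree = cKDTree(data)
    axis = np.linspace(bounds[0], bounds[1], grid)
    qx, qy = np.meshgrid(axis, axis)
    queries = np.column_stack([qx.ravel(), qy.ravel()])
    dists = tree.query(queries, k=k)[0]
    nearest_k = dists if k == 1 else dists[:, k - 1]
    cell_area = ((bounds[1] - bounds[0]) / (grid - 1)) ** 2
    return float((nearest_k <= rho).sum() * cell_area)

# Toy usage: coverage of 1000 random points in the standardized space.
rng = np.random.default_rng(0)
properties = rng.uniform(-1.0, 3.0, size=(1000, 2))
print(coverage_score(properties))
```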
\(S_{C}-S_{C}^{\prime}\) can be the metric to evaluate the selection of D\({}^{p}\), where \(S_{C}\) and \(S_{C}^{\prime}\) are the coverage score after and before D\({}^{p}\) is added to the property space, respectively. Figure 3: Data coverage using first order Voronoi diagram in a standardized 2-dimensional property space with k=1 and \(\rho\)=0.08. Figure 2: FairGen pipeline to iteratively generate missing properties in the property space. Through FairGen iterations and BO, the coverage module may consume considerable computational resources as Voronoi diagrams will be constructed repeatedly to compute new \(S_{C}\)'s. The advantage of Voronoi diagrams is that a new diagram can be generated based on the preceding one to speed up the computation. From an optimization perspective, the coverage improvement metric \(S_{C}-S_{C}^{\prime}\) can be simplified to \(S_{C}\) during optimization since \(S_{C}^{\prime}\) is a constant. Moreover, \(S_{C}\) as the objective function of BO encourages the selection of target properties that are far away from the existing properties. Taking those properties as the input, the MDN model will generate shapes that do not correspond to them. Thus, an uncertainty module is constructed to resolve this issue. ### Uncertainty module The uncertainty module calculates the predictive uncertainties of the MDN models for a given \(\mathrm{D^{p}}\). The predictive uncertainties form an uncertainty score (\(S_{U}\)) that penalizes the objective function to prevent selecting a \(\mathrm{D^{p}}\) that yields uncertain and thus potentially inaccurate shape predictions. There are two types of uncertainties: aleatoric and epistemic uncertainties [31]. Aleatoric uncertainty describes the inherent randomness such as sensor noise and measurement errors; epistemic uncertainty characterizes missing knowledge such as missing data or variables [32]. In such a context, the predictive uncertainty is to be modeled and utilized to guide BO. Deep ensemble is a scalable and robust method to model predictive uncertainty [33]. To estimate the predictive uncertainty, multiple probabilistic neural network (NN) models are trained with different weight initializations and training data shuffling. The models are treated as a uniformly weighted mixture model where the predictions are combined as: \[p(y|x)=M^{-1}\sum_{m=1}^{M}p_{\theta_{m}}(y|x,\theta_{m}) \tag{3}\] where \(x\) is the input, \(y\) is the prediction, \(M\) is the number of models, and \(\theta\) are the parameters. For regression problems, the prediction is a Gaussian mixture: \[M^{-1}\sum_{m=1}^{M}N\left(\mu_{\theta_{m}}(x),\sigma_{\theta_{m}}^{2}(x)\right) \tag{4}\] where \(\mu\) and \(\sigma^{2}\) are the mean and variance, respectively. This mixture can be approximated as one Gaussian distribution whose mean and variance are: \[\mu_{*}(x)=M^{-1}\sum_{m=1}^{M}\mu_{\theta_{m}}(x) \tag{5}\] \[\sigma_{*}^{2}(x)=M^{-1}\sum_{m=1}^{M}\left(\sigma_{\theta_{m}}^{2}(x)+\mu_{\theta_{m}}^{2}(x)\right)-\mu_{*}^{2}(x) \tag{6}\] Suppose the true relationship in Figure 4 is to be modeled with some training data collected. Given the same input, each model will provide a prediction, \(y_{m}\), as a Gaussian distribution. The five predictions are approximated using one Gaussian distribution. If the input is within the region where data is available, the variance of the prediction is small, indicating small predictive uncertainty.
If the input has no training data nearby, the variance of the predictions is large, characterizing a large predictive uncertainty. The deep ensemble method essentially investigates the difference among the M distributions learned by the M models. The same rationale is utilized to build the uncertainty module and obtain a \(\mathrm{D^{p}}\) with low predictive uncertainty through BO. In generative design, generative models such as MDNs are trained to predict the shapes that possess the input properties. MDNs, proposed by Bishop [34], use the outputs of NNs to parameterize a mixed Gaussian distribution and then train the NNs to achieve consistency between the training dataset and the mixed distribution. Figure 5 depicts the structure of an MDN, comprised of a deep NN and a mixed Gaussian. The input layer receives the target properties. The output of the deep NN is reparametrized to construct a batch of Gaussian distributions, which are combined to form a Gaussian mixture. Design shapes are then sampled from the mixed Gaussian distribution. MDN is chosen to build the uncertainty module because it has embedded uncertainty measurement functionality. Thus, the deep ensemble method to characterize predictive uncertainty can be extended to MDNs. Figure 4: Modeling predictive uncertainty using ensemble method. The previous deep ensemble scenario in Figure 4 describes a mixture of several single univariate Gaussian distributions. Modeling the predictive uncertainty of MDNs requires a mixture of several batches of multivariate Gaussian distributions (Figure 6 (a)). Each batch of Gaussian distributions is from one MDN model and each Gaussian distribution has d dimensions. Each model learns G distributions instead of one distribution as in the previous example. Therefore, the deep ensemble method must be modified to investigate the difference among the M batches of G distributions learned by the M models. The assumption is that the M models are trained to learn the same G distributions, which characterize the true marginal distributions of the output variables. The first step is to find the correspondence of the G distributions from different MDNs using the training data. Although the MDNs learn the same ground truth distributions, the orders can be different. The approximation method using equations (5) and (6) must be conducted among the corresponding distributions indicated by the arrows in Figure 6 (a). When the training input X of size \(\mathbf{n\times p}\) is fed into an MDN, three matrices describing the proportions (\(\mathbf{n\times G}\)), means (\(\mathbf{n\times G\times d}\)), and variances (\(\mathbf{n\times G\times d}\)) of the G distributions will be the output. The corresponding distributions should have mean matrices close to each other. With this trait, the correspondence of distributions from multiple MDNs can be discovered by calculating the differences among the mean matrices. After the correspondence is established, the corresponding distributions are approximated using one Gaussian distribution: \[\mu_{*,g}(x)=M^{-1}\sum_{m=1}^{M}\mu_{\theta_{m,g}}(x) \tag{7}\] \[\sigma_{*,g}^{2}(x)=M^{-1}\sum_{m=1}^{M}\left(\sigma_{\theta_{m,g}}^{2}(x)+\mu_{\theta_{m,g}}^{2}(x)\right)-\mu_{*,g}^{2}(x) \tag{8}\] for all \(g=1,2,\ldots,G\), where each \(\sigma_{*,g}^{2}(x)\) has a size of \(1\times d\).
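The moment matching of Eqs. (7)-(8) reduces to a few array operations once the components have been aligned. The numpy sketch below assumes the G mixture components of the M MDNs have already been matched by comparing their mean matrices, as described above; the array shapes and toy numbers are purely illustrative.

```python
import numpy as np

def aggregate_components(mu, var):
    """Moment-match corresponding Gaussian components across M MDNs.

    mu, var: arrays of shape (M, G, d) holding the component means and
    variances predicted by each of the M models for one property input x,
    with components already aligned across models.
    Returns mu_star, var_star of shape (G, d), following Eqs. (7)-(8).
    """
    mu_star = mu.mean(axis=0)                                # Eq. (7)
    var_star = (var + mu ** 2).mean(axis=0) - mu_star ** 2   # Eq. (8)
    return mu_star, var_star

# Toy example with M=5 MDNs, G=10 components, d=4 shape parameters.
rng = np.random.default_rng(1)
mu = rng.normal(size=(5, 10, 4))
var = rng.uniform(0.01, 0.1, size=(5, 10, 4))
mu_star, var_star = aggregate_components(mu, var)
# Summing var_star over components and dimensions gives the scalar
# uncertainty score used to penalize the BO objective.
print(float(var_star.sum()))
```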
To obtain an uncertainty score that characterizes the predictive uncertainty of the MDN models regarding a property input x, the variances are summed across the G aggregated distributions and d dimensions: \[S_{U}(x)=\sum_{g=1}^{G}\sigma_{*,g}^{2}(x)\times J_{d} \tag{9}\] where \(J_{d}\) is a \(d\times 1\) vector of ones. Using the uncertainty module, the predictive uncertainty can be obtained at an arbitrary x in the property space. An example of a predictive uncertainty heatmap is plotted in Figure 6 (b). This heatmap indicates that the predictive uncertainty is low in regions where data are abundant and thus convey sufficient knowledge to the model. As x travels toward regions where data are sparse or absent, the predictive uncertainty increases, signaling a high potential for inaccurate predictions. Flexibility is the reason why the uncertainty score is used as a penalty instead of a constraint. By tuning the penalty factor (\(\psi\)), FairGen can switch between exploration and exploitation modes to actively search outside or within the sampled regions. Moreover, there will be fluctuations of the overall uncertainty level, which can be compensated by the penalty factor. This module might impose a high computational cost on the pipeline as multiple MDNs must be trained at every FairGen iteration. To speed up the uncertainty module, parallel training of the multiple models can be implemented as they are independent of each other. Transfer learning can also help reduce training time. Instead of training from a randomly initialized model at every FairGen iteration, the models trained during the last iteration can be re-trained with the new dataset. Figure 5: Structure of an MDN. Figure 6: Uncertainty module using deep ensemble method to model predictive uncertainty: a) Gaussian mixtures; and b) predictive uncertainty heatmap. ### Bayesian optimization The optimization function in FairGen finds the optimal \(D^{p}=\left\{x_{1},x_{2},...,x_{n_{p}}\right\}\) as the input to the generative models for design generation. At the i\({}^{\text{th}}\) iteration, the coverage and uncertainty modules calculate the coverage and uncertainty scores, accounting for the entire D\({}^{p}\): \[S_{C}(D)=S_{C}(D^{i}\cup D^{p}) \tag{10}\] \[S_{U}(D^{p})=\sum_{l=1}^{n_{p}}\sum_{g=1}^{G}\sigma_{*,g}^{2}(x_{l})\times J_{d} \tag{11}\] The objective function of the BO can be formulated as: \[\underset{D^{p}}{\text{max}}\qquad f(D^{p})=S_{C}(D)-\psi S_{U}(D^{p}) \tag{12}\] After D\({}^{p}\) is determined by BO, the MDNs trained during the construction of the uncertainty module are utilized to generate design shapes. As D\({}^{p}\) is found with a penalty on its predictive uncertainties, some accurate estimations of the design shapes are likely to be obtained from the MDNs. Thereafter, the designs generated are analyzed using simulation to acquire the real properties. Finally, the shapes and properties generated during this iteration are added to the dataset. The next iteration can be executed with the updated dataset to further explore the property space. ## 3 Results This section presents the S-slot design case study with respect to the design problem, the FairGen setting, and the results. ### S-slot design space exploration In this paper, a case study of S-shaped perforated auxetic metamaterial design is conducted. S-slot designs have been proven to have an enhanced fatigue life due to their lower von Mises stress compared to the traditional circular design [35].
A dataset will be generated using FairGen and compared with conventional methods. The design spaces in this case study, including the shape and property spaces, are defined in this subsection. As shown in Figure 7, the S-slot is defined by four parameters: slot tail height (h), slot cap length (a), slot cap height (b), and cap rotation (α). The slot thickness, vertical spacing (VS), and horizontal spacing (HS) are fixed in this case study. Maximum von Mises stress (MS) and EM are investigated in this DSE problem. As stress concentrations are the main reason for crack initiation, the MS of S-slot designs needs to be considered during the design process. Ideally, the MS in the design should be as small as possible. EM is also a mechanical property frequently discussed in research articles related to auxetic metamaterials [36]. The definition of optimal EM is determined based on the application, as mentioned in the introduction. The goal of this case study is to generate a design dataset to build a generative model that predicts the design shapes given the required MS and EM. This dataset should efficiently explore the property space to possess abundant generative design knowledge. Although the design should have a small MS, the data generation process is not driven toward small-MS regions, in order to demonstrate a general case. We adopted the same numerical simulation as in the previous research [37], using static linear analysis with 3-dimensional triangular shell elements (S3R) on a unit cell with periodic boundary conditions in Abaqus, to generate our simulation dataset. Although the elastic-plastic behavior is not considered, this simulation has a relatively low computational cost and still provides stress distribution information related to crack initiation. ### FairGen setting and iterations The initial dataset consists of the shapes and properties of 1000 designs sampled using grid search from the shape space. The properties are standardized to the range of around [-1, 3] to facilitate the subsequent ML and FairGen operations. For the coverage module, \(\rho\) is 0.08 because a 2% error of the property is acceptable in property prediction tasks. k is 1 because the initial dataset has only partially explored the property space. The uncertainty module includes 5 MDN models, which have six hidden layers, 10 Gaussian distributions, and 3000 training epochs. The uncertainty penalty is 0.1. BO finds the 3 optimal target properties in 50 iterations and 10 extra random walks. In this setting, it was found that selecting more than 3 target properties is likely to yield some unreasonable property selections. Experiments were run on a computer with a 12\({}^{\text{th}}\) Gen Intel i7 processor and 16 gigabytes of available RAM on Windows 11. The models were trained on the central processing unit. Figure 7: S-slot design. a) Geometric parameters defining the S-slot; and b) Slot layout. At the beginning of every FairGen iteration, the existing dataset was used to initialize the Voronoi diagram in the coverage module and train the 5 MDNs in the uncertainty module. The two modules output the coverage and uncertainty scores for the D\({}^{\text{p}}\) selected at every BO iteration. The scores were combined to compute the objective function, which guides the Bayesian optimizer to select the D\({}^{\text{p}}\) for the next BO iteration. The final D\({}^{\text{p}}\) selected by BO both optimally increased the data coverage and yielded reasonable shape predictions.
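The per-iteration loop just described can be summarized in a short, schematic sketch. It is runnable but heavily simplified: random search stands in for the Bayesian optimizer, a grid approximation stands in for the Voronoi-based coverage score, and a distance-to-data proxy stands in for the MDN-ensemble uncertainty score. Only the structure of the objective in Eq. (12) and the case-study constants (\(\rho\)=0.08, k=1, \(\psi\)=0.1, n\({}_{p}\)=3, properties standardized to roughly [-1, 3]) are taken from the text.

```python
import numpy as np
from scipy.spatial import cKDTree

RHO, K, PSI, N_P = 0.08, 1, 0.1, 3                 # case-study constants
AXIS = np.linspace(-1.0, 3.0, 120)                 # standardized property range
GRID = np.column_stack([g.ravel() for g in np.meshgrid(AXIS, AXIS)])

def coverage_score(props):
    """Grid approximation of the covered area S_C (stand-in for Eq. (10))."""
    dists = cKDTree(props).query(GRID, k=K)[0]
    return float((dists <= RHO).mean() * 16.0)     # 16 = area of the [-1,3]^2 box

def uncertainty_score(targets, props):
    """Stand-in for the deep-ensemble S_U (Eq. (11)): distance of each target
    to the existing data, which, like the MDN variance, grows in sparse regions."""
    return float(cKDTree(props).query(targets, k=1)[0].sum())

def objective(targets, props):
    """Eq. (12): trade coverage gain off against predictive uncertainty."""
    gain = coverage_score(np.vstack([props, targets]))
    return gain - PSI * uncertainty_score(targets, props)

# One FairGen iteration, with random search standing in for BO
# (the paper uses 50 BO iterations plus 10 random walks).
rng = np.random.default_rng(2)
props = rng.uniform(-1.0, 3.0, size=(1000, 2))     # existing property samples
best_targets, best_val = None, -np.inf
for _ in range(60):
    cand = rng.uniform(-1.0, 3.0, size=(N_P, 2))
    val = objective(cand, props)
    if val > best_val:
        best_targets, best_val = cand, val
# best_targets would be fed to the trained MDNs to generate shapes, which are
# then simulated and appended to the dataset for the next iteration.
print(best_targets, round(best_val, 3))
```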
New designs were generated using the 5 MDNs trained in the uncertainty module with D\({}^{\text{p}}\) as the input. For each property in D\({}^{\text{p}}\), 3 designs were generated from each MDN, resulting in 45 new designs per FairGen iteration. Thereafter, the new designs were subject to a manufacturability check to filter out infeasible designs such as S-slot intercepts and thin walls. The properties of the feasible designs were obtained from simulation, and then added to the existing dataset. Some designs with properties that extended the coverage to the lower-right part of the property space were regarded as outliers because they exhibit high maximum stress. Such properties are undesirable and might carry some errors from the simulation. Figure 8 showcases the properties of the generated designs at some FairGen iterations. At the 5\({}^{\text{th}}\) iteration, S\({}_{\text{C}}\) was increased from 3.5 at the beginning to 4.2 (Figure 8 (a)). One target property aimed to exploit a void within the sampled region. The generated properties successfully filled the void. Two target properties tried to explore the uncovered region. Many new designs were generated that considerably expanded the sampled region. At the 10\({}^{\text{th}}\) iteration, one target property exploited a void and densified the surrounding region (Figure 8 (b)). The other two target properties led to the finding of two designs that expanded the sampled region. At the 15\({}^{\text{th}}\) iteration, two target properties exploited the sampled region and one target property explored the rightmost unexplored region (Figure 8 (c)). At the 20\({}^{\text{th}}\) iteration, one target property searched a void region, and two properties explored the rightmost region (Figure 8 (d)). Figure 8: Iterative results of FairGen in the case study at a) 5\({}^{\text{th}}\) iteration with 1178 samples; b) 10\({}^{\text{th}}\) iteration with 1389 samples; c) 15\({}^{\text{th}}\) iteration with 1579 samples; and d) 20\({}^{\text{th}}\) iteration with 1769 samples. ## 4 Discussion After 20 FairGen iterations, 799 new designs had been generated based on the 1000 initial designs. To form a comparison and investigate the effectiveness of FairGen, 3000 designs each were generated using grid sampling and randomized sampling from the shape space. The former is a conventional DOE method with a strong bias toward the designated geometrical parameters [38]. The latter utilizes Latin Hypercube sampling (LHS), which encourages shape diversity [39]. The comparison among the three sampling methods will be analyzed with respect to data coverage, property space exploration, generative modeling, and computational cost. ### Data coverage and property space exploration Figure 9 (a) reveals the increase in data coverage as the number of samples increased using the three sampling techniques. Grid sampling started from a lower coverage score than randomized sampling because of its strong bias. FairGen started from the same coverage score as grid sampling because it was initialized with a dataset based on grid sampling. Although randomized sampling offered a higher initial coverage score, its data coverage increased at the same speed as grid sampling's. Both curves also show a trend toward convergence. In contrast, the FairGen coverage score curve has not shown a trend toward convergence. Using FairGen, the data coverage was rapidly improved and quickly surpassed randomized sampling at the second iteration.
Eventually, FairGen sampling reached a coverage score of 5.8 while the other two methods were below 4.8. Figure 9 (b) visualizes the datasets generated by the three methods. Grid sampling provided the worst property space exploration effect as most of its samples are covered by the other two methods. Grid sampling intensively sampled the low-MS and low-EM region, while the rest of the property space is either sparsely populated or unexplored. Samples were likely to stick together and repetitively cover a region. Randomized sampling also intensively searched the low-MS and low-EM region, which was less severe than grid sampling. Samples were likely to evenly disperse instead of forming blocks, but also created some greater voids. FairGen significantly avoided the intensive search effect and generated more evenly distributed properties. It almost established the full contour of the sampled area with only a small portion established by others. In reality, the best design at a certain level of elastic modulus should possess the smallest maximum von Mises stress. Figure 9 (b) indicates that FairGen offered the smallest maximum von Mises stress at almost all elastic modulus levels with fewer samples, especially at high elastic moduli. In conclusion, FairGen has a better capability to explore and exploit the potential of the design in DSE. ### Generative modeling The purpose of increasing data coverage is to improve the performance of the ML models. This subsection investigates the effect of FairGen on generative models. MDN models were trained using the dataset acquired from the three sampling techniques. 50 test properties were randomly sampled within the sampled region of the property space. For each test property, each MDN predicted 10 shapes. In total, each MDN predicted 500 shapes, whose properties were analyzed using simulation. The real properties were compared with the target properties to find the predictive errors. To avoid being misled by randomness, tests were conducted at different data sizes: 1200, 1400, 1600, and 1800 designs in the training set (Table 1). This way, both the predictive error and the trend can be the evidence for comparison. The mean absolute error (MAE) of generative model predictions can sometimes be misleading for generative models as some outliers might be generated. Thus, both the MAEs (Table 1) and the absolute prediction error scatter plots (Figure 10) are provided. The horizonal and vertical axes in Figure 10 represent the absolute prediction errors of MS and EM, respectively. Table 1 indicates that all three sampling techniques helped reduce the MAE as the number of samples increased. The MAEs of FairGen were always 1/3 smaller than grid sampling and on average 1/8 smaller than randomized sampling. This could be verified by the scatter plots. When trained with 1200 designs (Figure 10 (a)), large prediction errors were obtained from all models. The performances of FairGen and randomized sampling are close to each other and are significantly better than grid sampling. As more designs being generated, the prediction errors of the three methods became smaller and smaller. Meanwhile, the models trained using FairGen generated datasets performed better than the models trained using randomly sampled datasets (Figure 10 (b)-(d)). The generative modeling test results revealed that data generated using FairGen efficiently explored the property space to embed more knowledge regarding generative design. 
\begin{table} \begin{tabular}{c c c c} \hline \hline n & FairGen & Grid sampling & Randomized sampling \\ \hline 1200 & 0.2067 & 0.2989 & 0.1835 \\ 1400 & 0.1499 & 0.2095 & 0.1610 \\ 1600 & 0.1408 & 0.2081 & 0.1671 \\ 1800 & 0.1286 & 0.1903 & 0.1574 \\ \hline \hline \end{tabular} \end{table} Table 1: MAEs of the MDNs trained with different numbers of training examples generated from FairGen, grid sampling, and randomized sampling. Figure 9: Comparison among FairGen, grid sampling, and randomized sampling with respect to a) data coverage (k=1 and p=0.08); and b) property space exploration. ### Computational cost The goal of FairGen is to reduce the time and resources required to build an unbiased dataset. It has been shown that FairGen provides higher data coverage and better generative modeling capabilities. Nonetheless, it also adds the coverage module, uncertainty module, and BO to the data generation pipeline. The upper bound of the time complexity of Voronoi diagram construction is \(O((d+1)n^{d/2}\,k^{d/2+1})\)[40]. For a 2-dimensional Voronoi diagram where \(k=1\), the cost can be reduced to \(0(n\log n)\)[22]. The number of Voronoi cells is bounded by \(O(n^{d/2}\,k^{d/2})\), which yields an upperbound of n cells in this case study [28]. For each cell, the complexity of identifying the covered region is \(O(k(d+1))\). Thus, the complexity to compute the entire covered area is bounded by \(O((d+1)n^{d/2}\,k^{d/2+1})\), which is \(O(3n)\) in this case study. The computational cost of building the uncertainty module is equivalent to training 5 MDNs. At every FairGen iteration, the two modules must be constructed again. At every BO iteration, the Voronoi diagram covered area, and the output of 5 MDNs are computed. This case study has relatively small n, k, and d such that the baseline pipeline is not computationally heavy. For large values of n, k and d, methods such as transfer learning [41] and data coverage approximation [28] can be utilized to significantly reduce the computational cost. For the computation unit in this case study, it took around 37 seconds to complete the simulation of one design. From 1000 to 1799 samples, the time to initialize the coverage module ranged from 2 to 3 seconds. The time to build the uncertainty module increased from 50 to 78 seconds. The entire BO consumed around 120 to 240 seconds. The computational time spent on 20 FairGen iterations was around 4960 seconds. The time to generate 799 designs using FairGen is equivalent to 933 designs generated by geometric sampling techniques. With reasonable extra computational time, FairGen achieved exceptional property exploration and generative modeling results. ## 5 Conclusions This paper proposed and demonstrated the FairGen pipeline that efficiently explores the property space in DSE problems. The existing methods cannot directly generate missing properties in a design dataset to explore the potential of the design. This leads to missing knowledge and unsatisfactory ML model performance. FairGen finds the missing properties and actively generates designs that provide those properties to complement the dataset. Its coverage module detects unexplored regions in the property space using a fairness metric. The uncertainty module evaluates the predictive uncertainty in the property space to avoid sampling from the regions about which the generative models are uncertain. 
BO integrates the coverage and uncertainty modules to solve for the target properties that both maximally increase the data coverage and yield reasonable shape predictions. Thereafter, the target properties are input into the generative models to generate the associated shapes, whose properties are analyzed using simulation. The new designs are added to the dataset, and the above steps can be implemented iteratively to improve data coverage. Figure 10: Scatter plots of the absolute prediction errors yielded by different sampling techniques at a) n=1200; b) n=1400; c) n=1600; and d) n=1800. In the S-slot case study, FairGen was implemented to investigate its efficiency, starting with a dataset of 1000 designs sampled using grid geometric sampling. After 20 iterations, 799 new designs were generated. The coverage score was increased from 3.5 to 5.8, whereas grid sampling and randomized sampling could only increase the coverage score to 4.8 at 3000 samples. FairGen also expanded the sampled region in the property space significantly more than the other sampling techniques. The expanded area means that designs with better properties can be obtained from the dataset generated using FairGen. The generative modeling test revealed that the models trained on the FairGen-generated dataset reduced the MAE by 1/3 and 1/8 on average compared with the datasets generated using grid sampling and randomized sampling, respectively. Computationally, the time spent on generating 799 designs using baseline FairGen is equivalent to generating 933 designs using the other sampling methods in the current setting of the case study and computational resources. The limitation of FairGen is the lack of a shape diversity mechanism. Future work will focus on the simultaneous improvement of shape and property fairness. Moreover, FairGen can be modified to actively drive data generation toward desirable property regions. ## Acknowledgements This work is funded by the McGill University Graduate Excellence Fellowship Award [grant number 00157]; the Mitacs Accelerate program [grant number IT13369]; and the McGill Engineering Doctoral Award (MEDA). ## Declaration of Competing Interest The authors declare that they have no known competing interests.
2309.16180
A More General Theory of Diagnosis from First Principles
Model-based diagnosis has been an active research topic in different communities including artificial intelligence, formal methods, and control. This has led to a set of disparate approaches addressing different classes of systems and seeking different forms of diagnoses. In this paper, we resolve such disparities by generalising Reiter's theory to be agnostic to the types of systems and diagnoses considered. This more general theory of diagnosis from first principles defines the minimal diagnosis as the set of preferred diagnosis candidates in a search space of hypotheses. Computing the minimal diagnosis is achieved by exploring the space of diagnosis hypotheses, testing sets of hypotheses for consistency with the system's model and the observation, and generating conflicts that rule out successors and other portions of the search space. Under relatively mild assumptions, our algorithms correctly compute the set of preferred diagnosis candidates. The main difficulty here is that the search space is no longer a powerset as in Reiter's theory, and that, as consequence, many of the implicit properties (such as finiteness of the search space) no longer hold. The notion of conflict also needs to be generalised and we present such a more general notion. We present two implementations of these algorithms, using test solvers based on satisfiability and heuristic search, respectively, which we evaluate on instances from two real world discrete event problems. Despite the greater generality of our theory, these implementations surpass the special purpose algorithms designed for discrete event systems, and enable solving instances that were out of reach of existing diagnosis approaches.
Alban Grastien, Patrik Haslum, Sylvie Thiébaux
2023-09-28T05:47:52Z
http://arxiv.org/abs/2309.16180v1
# A More General Theory of Diagnosis from First Principles ###### Abstract Model-based diagnosis has been an active research topic in different communities including artificial intelligence, formal methods, and control. This has led to a set of disparate approaches addressing different classes of systems and seeking different forms of diagnoses. For instance Reiter's "Theory of Diagnosis from First Principles" primarily targets static systems, considers that diagnoses are minimal sets of faults consistent with the system's model and the observation, and efficiently explores the powerset of faults by means of simple consistency tests. In contrast, diagnosis approaches to discrete event dynamic systems, pioneered by Sampath, Zanella, and others, traditionally reconstruct all system traces consistent with the observation, either explicitly or through a precompiled structure. In this paper, we resolve such disparities by generalising Reiter's theory to be agnostic to the types of systems and diagnoses considered. This more general theory of diagnosis from first principles defines the minimal diagnosis as the set of preferred diagnosis candidates in a search space of hypotheses. Computing the minimal diagnosis is achieved by exploring the space of diagnosis hypotheses, testing sets of hypotheses for consistency with the system's model and the observation, and generating conflicts that rule out successors and other portions of the search space. Under relatively mild assumptions, our algorithms correctly compute the set of preferred diagnosis candidates. The main difficulty here is that the search space is no longer a powerset as in Reiter's theory, and that, as consequence, many of the implicit properties (such as finiteness of the search space) no longer hold. The notion of conflict also needs to be generalised and we present such a more general notion. We present two implementations of these algorithms, using test solvers based on satisfiability and heuristic search, respectively, which we evaluate on instances from two real world discrete event problems. Despite the greater generality of our theory, these implementations surpass the special purpose algorithms designed for discrete event systems, and enable solving instances that were out of reach of existing diagnosis approaches. ## 1 Introduction Discrete event systems (Cassandras & Lafortune, 1999) (DESs) are models of dynamic systems that represent states and events in a discrete manner. DESs are a natural model of many kinds of event-based systems, such as, for example, protocols (Holzmann, 1991) or business processes (van der Aalst, 2013), and also often form a natural abstraction of hybrid discrete-continuous dynamical systems. The diagnosis problem, in the context of dynamical systems, is to infer from a system model and partial observation of events emitted by the system some diagnostically relevant properties of its current state or behaviour - for example, whether any abnormal events have occurred, and if so, which ones, how many times and in what order? Since the seminal work of Sampath et al. (Sampath, Sengupta, Lafortune, Sinnamohideen, & Teneketzis, 1995), DESs diagnosis methods have examined all sequences of events that represent possible system behaviours under the system model and the observation, and have extracted the diagnostic information from those sequences. 
This contrasts with the approach developed by the Artificial Intelligence community for static systems: known as "diagnosis from first principles" (i.e., model-based diagnosis, as opposed to expert-based diagnosis) the approach pioneered by de Kleer, Reiter and Williams (Reiter, 1987; de Kleer & Williams, 1987) uses a theorem prover to test the consistency of diagnostic hypotheses with the model and the observation. By working directly at the level of hypotheses relevant to the diagnosis, this approach avoids enumerating all explanations of the observation (which are, in general, exponentially many). When trying to understand why such a "test-based" diagnosis approach for DESs did not eventuate, two main reasons come to mind. The first is the absence of an efficient "theorem prover" for checking the consistency of a set of hypotheses and an observed DES, which is a problem akin to planning or model checking. However, there has been considerable work in these areas in the last decades so that available tools can now be used for diagnosis (cf., (Grastien, Anbulagan, Rintanen, & Kelareva, 2007; Sohrabi, Baier, & McIlraith, 2010; Haslum & Grastien, 2011)). The second reason is that the diagnose algorithm proposed by Reiter (Reiter, 1987) was designed to diagnose circuits, and therefore returns only a set of faults. DESs, in contrast, can experience multiple occurrences of the same fault event, and the diagnoser may be required to determine the number of repetitions of faults, or order in which they took place. Reiter's algorithm cannot be applied in this setting and extending it in this direction raises major issues. Our main contribution in this paper is to resolve these issues and generalise the test-based diagnosis framework to a larger class of diagnostic hypothesis spaces, appropriate to DESs and other models of dynamical systems. We present a general definition of model-based diagnosis, independent of the form of the system model and the form of diagnosis required. This definition encompasses the existing theory of diagnosis of circuits as a special case, but also applies to dynamic system models, such as DESs, and beyond. As a result, DES diagnosis problems can be solved using the same techniques as for circuit diagnosis. More precisely, we formulate the diagnosis problem as follows: given a set of _hypotheses_ (abstractions of the system behaviour that discriminate only according to aspects that are relevant to the diagnosis) and a preference relation over the hypotheses, the diagnosis is defined as the set of minimal (most-preferred) _diagnosis candidates_, where a candidate is a hypothesis that is consistent with the model and the observation. _Diagnosis_ is therefore the problem of exploring the _hypothesis space_ to identify these minimal diagnosis candidates. We present different _exploration strategies_ that require only an oracle capable of testing whether a given set of hypotheses intersects the diagnosis. This test solver plays a role similar to the theorem prover in Reiter's algorithm. Importantly, we show that the test solver does not have to be given an explicit, enumerated set of hypotheses. Instead, the set of hypotheses to test is implicitly represented as those that satisfy a set of _diagnostic properties_; the test solver's task is then to find a candidate that satisfies these properties. The implicit representation of hypothesis sets allows the diagnosis algorithm to test infinite sets of hypotheses that can be represented by a finite set of properties. 
The exploration strategies we propose fall into two classes: The "preferred-first" strategies start by testing the most preferred hypotheses, until candidates are found; these candidates are then minimal. The "preferred-last" strategies generate and refine candidates until their minimality is proven. For each exploration strategy, we determine the conditions on the hypothesis space that are necessary to ensure termination of the diagnosis algorithm. Reiter's diagnose algorithm follows a preferred-first strategy, but additionally uses _conflicts_ to improve its efficiency. Conflicts enable the test solver to provide more information when the outcome of a test is negative. We generalise this idea and incorporate it into our preferred-first strategy. In our framework, a conflict is a chunk of the hypothesis space, which may be larger than the set of hypotheses tested, that is proven to contain no candidate. We show that they can be represented as sets of diagnostic properties that are inconsistent with the observed system. Because at least one of these properties must be negated, conflicts focus the exploration of the hypothesis space and thus accelerate the search for a diagnosis. This work was motivated by our experience with real-world DES diagnosis problems occurring in a number of application domains, including in particular power systems alarm processing and business process conformance checking, which we describe below. Existing model-based diagnosis approaches were unable to cope with the complexity of these problems. We use these problems to benchmark various instances of our approach, differing in the hypotheses space, the strategy for exploring it, and the test solver implementation chosen, against other DES diagnosis methods. We show that our approach, using a test solver based on SAT, is able to solve most of these problems, significantly outperforming earlier state-of-the-art algorithms. We also obtain good performance with a test solver based on heuristic search. The present article builds on our earlier conference publications (Grastien, Haslum, & Thiebaux, 2011; Grastien, Haslum, & Thiebaux, 2012; Grastien, 2014). The first article formulates diagnosis as a search problem on the hypothesis space and introduces the idea of a search strategy; the second one explains how conflicts can be exploited for the specific case of diagnosis of discrete event systems; and the last one shows how the theory can be applied to hybrid systems. Compared to these original works, we now present a unified theory motivated by a number of real world examples. This theory is more thoroughly developed, complete with proofs, and comprehensively evaluated wrt other algorithms. This paper is organised as follows: In the next section, we provide some motivating examples for the present work. Section 3 gives a definition of the diagnosis problem that is independent from the modeling framework and the hypothesis space. Section 4 introduces the key concept of representation of sets of hypotheses by sets of properties and explains how questions relevant to diagnosis are formulated as diagnosis tests. Section 5 demonstrates how these definitions are instantiated for two different modeling frameworks: diagnosis of circuits and diagnosis of discrete event systems. Section 6 presents different strategies for exploring the hypothesis space. In Section 7, we discuss the relation to previous work, in particular that which our theory generalises. 
Section 8 describes two implementations of test solvers for discrete event systems diagnosis, and Section 9 the results of our experiments with these implementations. Section 10 concludes. ## 2 Motivating Examples In this section, we briefly present examples of diagnosis problems for discrete event and hybrid dynamical systems. Each one of these problems requires a more expressive concept of diagnosis than the classical "set of faults" definition, and thus serves to motivate our general framing of the problem and our generalisation of test-based diagnosis algorithms. ### Conformance Checking and Data Cleaning Deciding if a record of events matches or does not match a specified process, or obeys or does not obey a set of rules, is a problem that arises in several contexts. It is known as _conformance_ or _compliance checking_ in the Business Process Modelling (BPM) literature (van der Aalst, 2013; Hashmi, Governatori, Lam, & Wynn, 2018). Although there are many BPM formalisms, most of them model discrete event systems. Conformance checking may be just deciding whether the recorded event trace matches the process specification (in diagnosis terms, whether the system's execution is normal or abnormal), but often one seeks to find a best _trace alignment_(De Giacomo, Maggi, Marella, & Sardina, 2016): a set of insertions (events missing from the trace), deletions (spurious events in the trace) and substitutions (erroneous events in the trace) that together are sufficient to make the event trace match the process. In diagnosis terms, these adjustments to the trace are fault events, and a best trace alignment corresponds to a minimal diagnosis candidate. Note that in such a candidate, the same fault event may occur multiple times, for example if the trace has multiple spurious events of the same type. Thus, the space of diagnosis hypotheses can not be modelled simply as sets of fault events. The problem of process model adaptation, which examines event traces corresponding to multiple executions of the process and seeks a minimal modification of the process specification that suffices to make all traces match, can likewise be viewed as an instance of DES diagnosis. Another example of diagnosis of recorded event traces occurs in longitudinal, or temporal, databases, where each record denotes a change in the status of some entity occurring at some time. The ordered set of records relating to one entity forms a timeline, or event trace, of that entity. In the case study described by Boselli et al. (Boselli, Cesarini, Mercorio, & Mezzanzanica, 2014), each entity is a person, and each record pertains to a change in their employment status: starting work for a new employer, ceasing work, extending a fixed-term position or converting a current job between part-time and full-time, or between fixed-term and continuing. Entity timelines are typically subject to integrity constraints, rules that prescribe events that cannot or must happen. For example, a person must have started work with an employer before that job can cease, or be converted or extended; a person can only hold one full-time job at a time, and thus cannot start a part-time job if already on a full-time position, or start a full-time job if already working at all, but a person can start a new part-time job if they are already working another part time. However, errors and omissions in data entry mean that entity timelines often do not satisfy the database rules. 
Rather than rejecting such records, the problem of _data cleaning_ is to find a minimal set of corrections that will restore consistency to the timeline (Dallachiesa, Ebaid, Eldawy, Elmagarmid, Ilyas, Ouzzani, & Tang, 2013; Geerts, Mecca, Papotti, & Santorino, 2013; Boselli et al., 2014). For example, consider the following timeline, from Boselli et al.'s data set: \begin{tabular}{c c c c c c} Date & Worker & Event type & Full/Part & Term/Cont. & Employer \\ \hline \(d_{1}\) & 1370 & start & full-time & fixed-term & 8274 \\ \(d_{2}\) & 1370 & cease & full-time & fixed-term & 8274 \\ \(d_{3}\) & _1370_ & _convert_ & _full-time_ & _fixed-term_ & _8274_ \\ \(d_{4}\) & _1370_ & _cease_ & _full-time_ & _fixed-term_ & _8274_ \\ \(d_{5}\) & 1370 & start & full-time & fixed-term & 36638 \\ \end{tabular} The records on dates \(d_{3}\) and \(d_{4}\) violate the integrity constraints, because they record a conversion event for a position that has already ceased, and a double cessation record for the same position. Like trace alignment, these corrections may be insertion of missing records, deletion of spurious records, or changes to individual fields of a record, including changes to the timing of records, and thus the order of event in the timeline, and like in that case each correction can occur multiple times in a timeline. Thus, viewed as a DES diagnosis problem, a minimal diagnosis candidate is a multiset of faults events. In the example above, the minimal diagnosis candidates include replacing the conversion on \(d_{3}\) with a "start" event (i.e., the person starting work again for the same employer), or deleting the cessation event on \(d_{2}\) and changing either full-time or fixed-term status in the records on \(d_{3}\) and \(d_{4}\). Because there is no way to know with certainty which diagnosis candidate corresponds to the true sequence of events, a data cleaning diagnoser needs to return the complete set of minimal fault event multisets, for a human to decide which corrections to apply or whether to investigate further. Note that when the diagnostic hypotheses are multisets (or sequences) of faults rather than simple sets, the hypothesis space is infinite, and even the set of candidates or the diagnosis may be infinite. Close attention must therefore be given to avoiding non-termination of the diagnosis algorithm. In this paper, we present a number of algorithms that are able to compute the complete diagnosis, also in infinite hypothesis spaces, along with sufficient assumptions to guarantee their termination (Section 6). ### Alarm Processing In large complex systems, such as power grids or telecommunication networks, faults can produce non-trivial effects. Alarms are time-stamped system-generated messages intended to aid operators in diagnosing fault conditions and take timely corrective actions. However, system complexity and the local nature of alarm conditions mean that when a fault occurs, its secondary effects often result in "alarm cascades" which obscure rather than inform about the root cause. This problem has been recognised for some time (Prince, Wollenberg, & Bertagnolli, 1989), and there have been several attempts to use AI techniques to ease the interpretation of alarms through filtering, prioritising and explaining them (Cordier, Krivine, Laborie, & Thiebaux, 1998; Cordier & Dousson, 2000; Taisne, 2006; Larsson, 2009; Bauer, Botea, Grastien, Haslum, & Rintanen, 2011). 
Framing the problem as dynamical system diagnosis treating unexplained alarms as fault events means that a diagnoser can identify secondary alarms, and thus focus attention on root causes (Bauer et al., 2011; Haslum & Grastien, 2011). Alarm logs have an important temporal dimension. For example, in a power network, the event of a circuit breaker opening can explain a following voltage drop alarm on the power line protected by the breaker, if the breaker opening isolates the line. This implies that the _sequence_ of fault (unexplained) events in the diagnosis also matters: An unexplained circuit breaker opening followed by an unexplained voltage drop does not carry the same meaning as the same two unexplained alarms in the opposite order (the former implies that it could not be inferred from the model and observation that the breaker opening was sufficient to isolate the line). Thus, the diagnostic hypotheses in this setting are sequences of fault events, rather than sets. Sequences of faults pose particular problems for classical diagnosis algorithms. Decomposition, for instance, is no longer as easy: In the simple case when diagnostic hypotheses are sets of faults, inferring independently that faults \(f_{1}\) and \(f_{2}\) are present implies that any candidate fault set must contain \(\{f_{1},f_{2}\}\) as a subset. However, when diagnostic hypotheses are fault sequences, inferring the presence of fault events \(f_{1}\) and \(f_{2}\) does not distinguish between sequences in which \(f_{1}\) occurs before \(f_{2}\) and those with the two events in opposite order. Existing conflict-directed algorithms for diagnosis over fault-set hypotheses are based on such a decomposition. We show in this paper how the notion of conflict can be generalised to any type of diagnostic hypothesis space. This is done by making the concept of _properties_ of a hypothesis explicit, and defining a set of properties that is sufficient to represent every relevant hypothesis set, for any hypothesis space (Section 4.2). ### Diagnosis of Hybrid Systems Hybrid systems are a class of models of dynamic systems that exhibit both discrete mode changes and continuous evolution. Hybrid systems naturally model physical processes under discrete control, such as electrical systems (Kurtoglu, Narasimhan, Poll, Garcia, Kuhn, de Kleer, van Gemund, & Feldman, 2009; Fox, Long, & Magazzeni, 2012) and heating, ventilation, and air conditioning (HVAC) systems (Behrens & Provan, 2010; Ono, Graybill, & Williams, 2012; Lim, van den Briel, Thiebaux, Backhaus, & Bent, 2015). Diagnosis of hybrid systems can exhibit all the complexities of discrete event systems, and more. Consider, for example, the possible fault modes of sensors in the Adapt benchmark system (Kurtoglu et al., 2009): When operating normally, a sensor's output is the real-valued sensed value plus a bounded random noise. However, the sensor can fail by becoming stuck at a fixed reading, by returning a value at a fixed offset from the true reading, or by becoming subject to drift, which is an offset value that increases over time. At a discrete abstraction level, this is simply four possible fault modes, but a fully precise diagnosis should also identify the offset constant or drift rate for those fault modes. 
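The sensor behaviour just described can be made concrete with a small, self-contained sketch. The function name, parameters, and the uniform noise model are illustrative assumptions, not the Adapt benchmark's actual interface; the point is that the "offset" and "drift" modes carry a continuous fault parameter, so a fully precise diagnosis ranges over a dense hypothesis space.

```python
import numpy as np

def sensor_reading(true_value, t, mode, rng, noise=0.05,
                   offset=0.0, drift_rate=0.0, stuck_value=0.0):
    """Illustrative sensor fault modes: nominal output is the sensed value
    plus bounded noise; 'stuck' returns a fixed reading; 'offset' adds a
    constant bias; 'drift' adds a bias that grows with time t."""
    eps = rng.uniform(-noise, noise)
    if mode == "nominal":
        return true_value + eps
    if mode == "stuck":
        return stuck_value
    if mode == "offset":
        return true_value + offset + eps
    if mode == "drift":
        return true_value + drift_rate * t + eps
    raise ValueError(f"unknown mode: {mode}")

# A drifting sensor observed over five time steps: identifying the mode is a
# discrete choice, but estimating drift_rate is a continuous-valued problem.
rng = np.random.default_rng(0)
print([round(sensor_reading(1.0, t, "drift", rng, drift_rate=0.02), 3)
       for t in range(5)])
```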
The consistency-based approach can be applied to diagnosis of hybrid systems (Grastien, 2014), and has some advantages over approaches that require a predictive model to simulate the system, which are unable to handle unspecified or unpredictable behaviour modes (Hofbaur & Williams, 2004). However, as we will show in this paper, there are limitations to what can be guaranteed. If the diagnosis is required to estimate real-valued fault parameters, such as the offset or drift rate of a faulty sensor, the hypothesis space is _dense_, in which case a finite minimal diagnosis may not exist. ## 3 The Diagnosis Problem In this section, we first present a generic definition of the diagnosis problem, based on the notion of hypothesis space. The hypothesis space is motivated by the fact that different diagnostic environments (static systems and dynamic systems, in particular) require different types of diagnoses. We then illustrate the generic definition with different types of hypothesis spaces and discuss their relative expressiveness. Finally, we discuss a number of properties of these spaces that will influence the type of strategy that may be used to explore the space. ### Diagnosis Definition We consider a system with a model _Mod_, i.e., a description of all behaviours the system can exhibit. We assume this model is "complete", by which we mean that if a behaviour \(\sigma\) is possible in the system, then this behaviour is allowed by the model; we then write \(\sigma\in\textit{Mod}\). A (partial) observation \(o\) of the system is a predicate on behaviours: \(o(\sigma)\) is true if behaviour \(\sigma\) is consistent with what has been observed. We make no assumptions about how _Mod_ and \(o\) are represented other than that they are of a form that the test solver (that is, the theorem prover, model checker, etc, that will be used to reason about the system) can work with. Typically, they will be given in some compact form (such as a set of logical constraints, a factored representation of a discrete event system, or similar). Given the model \(\mathit{Mod}\) and an observation \(\mathit{o}\), the purpose of diagnosis is not to retrieve the exact behaviour (or set of possible behaviours), but to infer the diagnostic information associated with it. For instance, we may want to identify which faults have occurred in the system and in which order. The diagnostic abstraction of a behaviour is called a "hypothesis" and we write \(\mathit{hypo}(\sigma)\) for the (single) hypothesis associated with behaviour \(\sigma\). We write \(\mathbb{H}\) for the hypothesis space and we assume that hypotheses are mutually exclusive (i.e., \(\mathit{hypo}:\mathit{Mod}\rightarrow\mathbb{H}\) is a function). Because the system is only partially observable and may not be diagnosable1, it is generally not possible to precisely retrieve the hypothesis \(\mathit{hypo}(\sigma)\). Instead, the _diagnosis_ is the collection of hypotheses that are consistent (compatible) with both the model and the observation; such hypotheses are called "diagnosis candidates". From now on, we will use \(\delta\) to represent a candidate, whilst \(h\) will refer to a hypothesis that may not be a candidate. Footnote 1: Diagnosability is the property that a fault will always be precisely identified; there is generally a correlation between diagnosability and the uniqueness of the diagnosis candidate (Grastici & Torta, 2011). 
**Definition 1** (Diagnosis): _Given a model \(\mathit{Mod}\), an observation \(\mathit{o}\), and a hypothesis space \(\mathbb{H}\), the diagnosis is the subset \(\Delta(\mathit{Mod},\mathit{o},\mathbb{H})\) of hypotheses supported by at least one behaviour consistent with the observation:_ \[\Delta(\mathit{Mod},\mathit{o},\mathbb{H})=\{\delta\in\mathbb{H}\mid\exists\sigma\in\mathit{Mod}:\mathit{o}(\sigma)\wedge\mathit{hypo}(\sigma)=\delta\}. \tag{1}\]

Because it asks only for consistency between the candidate and the observation, this definition of diagnosis is weaker than that of an abductive diagnosis (Brusoni, Console, Terenziani, & Theseider Dupre, 1998), which requires each candidate to logically imply (part of) the observation.

To make the diagnosis more precise, it is common to impose a minimality condition. The hypothesis space is equipped with a partial order relation \(\preceq\) such that if \(\delta\preceq\delta^{\prime}\), then \(\delta\) is preferred to \(\delta^{\prime}\), meaning \(\delta^{\prime}\) may be removed from the diagnosis. Recall that a partial order relation is antisymmetric, i.e., \((h\preceq h^{\prime})\wedge(h^{\prime}\preceq h)\Rightarrow(h=h^{\prime})\). In the rest of the paper, we assume without loss of generality the existence of a unique most preferred hypothesis \(h_{0}\) of \(\mathbb{H}\). This will normally correspond to the nominal system behaviour, but if necessary (e.g., if there were multiple such behaviours or even an infinite number of them), one can always take \(h_{0}\) to be a dummy hypothesis inconsistent with the system model. We want to ignore the candidates that are not minimal with respect to \(\preceq\), where the subset of minimal elements of a set \(H\subseteq\mathbb{H}\) is defined as \(\min_{\preceq}H=\{h\in H\mid\nexists h^{\prime}\in H.\ h^{\prime}\prec h\}\). We also want every ignored candidate to be covered by at least one minimal candidate. We then say that the minimal diagnosis _covers_ the diagnosis. Formally, given two subsets \(H\) and \(H^{\prime}\) of \(\mathbb{H}\), \(H\) covers \(H^{\prime}\) if \[\forall h^{\prime}\in H^{\prime}.\ \exists h\in H.\ h\preceq h^{\prime}.\] The definition of the minimal diagnosis is as follows:

**Definition 2** (Minimal Diagnosis): _Given a model \(\mathit{Mod}\), an observation \(\mathit{o}\), and a hypothesis space \(\mathbb{H}\), the subset \(\Delta_{\preceq}(\mathit{Mod},\mathit{o},\mathbb{H})\) of candidates in \(\Delta(\mathit{Mod},\mathit{o},\mathbb{H})\) that are minimal with respect to \(\preceq\) is the minimal diagnosis if it covers the diagnosis._

In most diagnosis environments, it will be the case that a minimal diagnosis always exists. However, in Subsection 3.3 we show an example where it does not. To simplify notation, we will in the rest of the paper omit the parameters from the diagnosis and the minimal diagnosis, i.e., we will simply write \(\Delta\) and \(\Delta_{\preceq}\).

### Examples of Hypothesis Spaces

The simplest hypothesis space is the Binary Hypothesis Space (BHS), where each behaviour is classified only as either nominal or faulty. This leads to a fault detection problem rather than a diagnosis one. The preferred hypothesis is generally the nominal hypothesis.

The most commonly used hypothesis space is the Set Hypothesis Space (SHS). Given a set \(F\) of faults, a hypothesis is the subset \(h\subseteq F\) of faults that appear in the behaviour.
Preference is given to hypotheses that contain a subset of faults: \(h\preceq h^{\prime}\Leftrightarrow h\subseteq h^{\prime}\).

Another popular hypothesis space is the Minimal Cardinality Set Hypothesis Space (MC-SHS). Hypotheses are defined similarly to SHS, as the set of faults that affect the system. The preference relation, however, is defined through the number of faults, with \(h\) preferred over \(h^{\prime}\) if it has the smaller cardinality (number of faults in the hypothesis): \[h\preceq h^{\prime}\Leftrightarrow\bigg{(}h=h^{\prime}\ \vee\ |h|<|h^{\prime}|\bigg{)}.\]

For the case where the probability of faults varies, each fault \(f\) is associated with an a-priori probability \(Pr(f)\in(0,0.5)\), and the a-priori probability of hypothesis \(h\) is then \(Pr(h)=\Pi_{f\in h}\ Pr(f)\times\Pi_{f\in F\setminus h}\ (1-Pr(f))\). The preference relation of the A-priori Probability Set Hypothesis Space (AP-SHS) then maximises the a-priori probability: \[h\preceq h^{\prime}\Leftrightarrow\bigg{(}h=h^{\prime}\ \vee\ Pr(h)>Pr(h^{\prime})\bigg{)}.\] Bylander et al. proposed more elaborate definitions based on qualitative plausibilities of the hypotheses (Bylander, Allemang, Tanner, & Josephson, 1991). Our theory does not handle diagnosis problems in which probability is maximised a-posteriori, i.e., after the likelihood of the hypothesis given the observations has been factored in (Lucas, 2001).

In dynamic systems, faults may occur several times. The Multiset Hypothesis Space (MHS) associates each fault with the number of occurrences of this fault: \(h:F\rightarrow\mathbf{N}\). A hypothesis is preferred to another if it has no more occurrences of any fault: \(h\preceq h^{\prime}\Leftrightarrow(\forall f\in F,\ h(f)\leq h^{\prime}(f))\).

If we wish to also distinguish the order of occurrences of faults, a hypothesis in the Sequence Hypothesis Space (SqHS) is a (possibly empty) sequence of faults: \(h\in F^{\star}\). A hypothesis is preferred to another if the former is a subsequence of the latter. Formally, if \(h=[f_{1},\ldots,f_{k}]\) and \(h^{\prime}=[f_{1}^{\prime},\ldots,f_{n}^{\prime}]\), then \(h\preceq h^{\prime}\Leftrightarrow\exists g:\{1,\ldots,k\}\rightarrow\{1,\ldots,n\}:\ (\forall i\in\{1,\ldots,k-1\},\ g(i)<g(i+1))\ \wedge\ (\forall i\in\{1,\ldots,k\},\ f_{i}=f_{g(i)}^{\prime})\). For instance, hypothesis \([a,b]\) is preferable to hypothesis \([c,a,d,b]\).

We can also strengthen the preference order to treat faults differently, for instance, to reflect their relative likelihood. As an example, we consider the Ordered Multiset Hypothesis Space (OMHS). The hypotheses in this space are the same as in MHS, i.e., mappings from each fault to the number of times it occurred, but we also have an ordering of the faults, and any number of occurrences of a fault \(f^{\prime}\) is preferred to a single occurrence of a fault \(f\prec f^{\prime}\). Formally, \(h\preceq h^{\prime}\Leftrightarrow\forall f^{\prime}\in F,\ h(f^{\prime})>h^{\prime}(f^{\prime})\Rightarrow\exists f\in F:(f\prec f^{\prime})\wedge(h(f)<h^{\prime}(f))\). This corresponds to fault \(f^{\prime}\) being infinitely more likely than fault \(f\).

Finally, we consider faults that are represented by a continuous value. This can be used to model, for example, the situation where the fault is a drift in a model parameter. We assume a single continuous-valued fault. This is a very simple case, but it will be sufficient for illustrative purposes.
In the Continuous Hypothesis Space (CHS), a hypothesis is a positive real value: \(h\in\mathbf{R}^{+}\). Preference is given to smaller values: \(h\preceq h^{\prime}\Leftrightarrow h\leq h^{\prime}\).

### Properties of Hypothesis Spaces

In this section, we define the terminology related to hypothesis spaces which will be used to define our framework and to formulate termination conditions for different exploration strategies.

**Relations Between Hypotheses.** If \(h\preceq h^{\prime}\), we say that \(h\) is an _ancestor_ of \(h^{\prime}\) and that \(h^{\prime}\) is a _descendant_ of \(h\) (note that, since \(\preceq\) is non-strict, \(h\) is an ancestor and a descendant of itself). If \(h\prec h^{\prime}\) and there is no \(h^{\prime\prime}\) such that \(h\prec h^{\prime\prime}\prec h^{\prime}\), then we say that \(h^{\prime}\) is a _child_ of \(h\) and \(h\) is a _parent_ of \(h^{\prime}\).

**Finiteness.** The first condition we consider is whether the hypothesis space is finite. Infinite hypothesis spaces must be dealt with more cautiously, as they may prevent the diagnosis algorithm from terminating. In a finite space, any systematic exploration strategy (i.e., one that does not revisit a previously rejected hypothesis) will terminate. BHS, SHS, MC-SHS, and AP-SHS are finite.

**Well Partial Orderness.** A binary relation on a set \(\mathbb{S}\) is a _well partial order_ iff it is a (non-strict) partial order and every non-empty subset of \(\mathbb{S}\) has a finite and non-empty set of minimal elements according to the order (e.g., (Kruskal, 1972)). That is, \[\forall S\subseteq\mathbb{S}.\quad S\neq\emptyset\ \Rightarrow\ 0<|\min_{\preceq}(S)|<\infty.\] If the preference order \(\preceq\) is a well partial order on \(\mathbb{H}\), we say that \(\mathbb{H}\) is _well partially ordered_ (by \(\preceq\)). A well partial order is always well-founded, meaning it has no infinite descending chains.

The continuous hypothesis space given in the previous section (CHS) is not well partially ordered. To see this, consider the set of hypotheses that correspond to a strictly positive value, i.e., \(S=\{h\in\mathbb{H}_{\mathrm{CHS}}\mid h>0\}\). This set has no minimal value, which means that \(\min_{\preceq}(S)\) is empty. All the other hypothesis spaces discussed in the previous section are well partially ordered. For the non-trivial cases of MHS and SqHS, this follows from the work of Nash-Williams (Nash-Williams, 1963) on well-quasi-ordered finite trees.

Well partially ordered hypothesis spaces have several useful properties: First, the minimal diagnosis always exists and is finite (this is shown in Theorem 1 below). Second, the set of parents and the set of children of any given hypothesis are both finite. This follows from the fact that all parents of a hypothesis are themselves unordered; thus, they are all minimal in the set of the hypothesis' parents and, therefore, there cannot be infinitely many of them. The same is true of its children. Third, any strict descendant of a hypothesis is also a (possibly non-strict) descendant of some child of that hypothesis.

**Theorem 1**: _If the hypothesis space is well partially ordered, then the minimal diagnosis exists and is defined by:_ \[\Delta_{\preceq}=\min_{\preceq}(\Delta)=\{h\in\Delta\mid\forall h^{\prime}\in\Delta,\ h^{\prime}\preceq h\Rightarrow h=h^{\prime}\}.
\tag{2}\] _Furthermore, \(\Delta_{\preceq}\) is finite._

**Proof:** We must show that \(\min_{\preceq}(\Delta)\) satisfies the condition of Definition 2, which states that \(\min_{\preceq}(\Delta)\) must cover the diagnosis. Assume that the diagnosis is not covered by \(\min_{\preceq}(\Delta)\). Let \(\delta_{1}\) be a diagnosis candidate that is not covered: \(\nexists\delta^{\prime}\in\min_{\preceq}(\Delta)\) such that \(\delta^{\prime}\preceq\delta_{1}\). Then because \(\delta_{1}\not\in\min_{\preceq}(\Delta)\) (otherwise \(\delta_{1}\) would cover itself), there exists another preferable candidate \(\delta_{2}\prec\delta_{1}\) that is not covered. Applying the same reasoning, we end up with an infinite sequence of hypotheses \(\delta_{1}\succ\delta_{2}\succ\ldots\) This sequence contradicts the assumption that \(\preceq\) is a well partial order. \(\Box\)

If the space is not well partially ordered, there is no such guarantee. For instance, in the CHS, if \(\Delta=\{h\in\mathbb{H}_{\text{CHS}}\mid h>0\}\), as in the example above, then \(\min_{\preceq}(\Delta)=\emptyset\), which does not satisfy the covering requirement of Definition 2. Thus, in this situation there exists no minimal diagnosis.

**Path, Depth and Distance.** Finally, we define concepts that relate to a hypothesis' "position" in the hypothesis space, which we will use in Section 6 when proving termination of our diagnosis algorithms. A _path_ from hypothesis \(h\) to \(h^{\prime}\) is a sequence of hypotheses \(h_{1}\prec\ldots\prec h_{k}\) such that \(h_{1}=h\) and \(h^{\prime}=h_{k}\). An _atomic path_ is a path \(h_{1}\prec\ldots\prec h_{k}\) such that each \(h_{i}\) is a parent of \(h_{i+1}\). The _distance_ of hypothesis \(h\) (implicitly from \(h_{0}\)) is the minimal length of an atomic path from hypothesis \(h_{0}\) to hypothesis \(h\); if no such atomic path exists, hypothesis \(h\) is said to have an infinite distance. A hypothesis is said to be _finitely reachable_ if it has a finite distance.

The ordered multiset hypothesis space (OMHS) illustrates a situation with non-finitely reachable hypotheses. Assume two fault events, \(f_{1}\) and \(f_{2}\), where \(f_{1}\prec f_{2}\) (any number of occurrences of \(f_{2}\) is preferred to one occurrence of \(f_{1}\)), and consider hypothesis \(h=\{f_{1}\to 1,f_{2}\to 0\}\). Then \(h\) has no parent: indeed, all strict ancestors of \(h\) are hypotheses \(h_{i}\) with no occurrence of \(f_{1}\) and \(i\) occurrences of \(f_{2}\): \(h_{i}=\{f_{1}\to 0,f_{2}\to i\}\). Then for all \(i\) the property \(h_{i}\prec h_{i+1}\prec h\) holds, and \(h_{i}\) is not a parent of \(h\). Since \(h\) has no parent, no atomic path leads to \(h\).

The _depth_ of a hypothesis \(h\) is the maximal length of a path from \(h_{0}\) to \(h\). If there is no maximal length, the depth is said to be infinite. The depth of a hypothesis is, by definition, larger than or equal to its distance, hence a hypothesis that is not finitely reachable has an infinite depth. The converse may not hold, however: there can be a finite atomic path \(h_{0}\prec h_{1}\prec h\) and, at the same time, an infinite number of paths \(h_{0}\prec h_{1}^{\prime}\prec\ldots\prec h_{k}^{\prime}\prec h\) for any \(k\). To find an example we have to look at some even more fine-grained preference order. For example, with reference to Figure 1, consider a system consisting of a component monitored by a sensor. The component can exhibit any number of temporary failures (represented by a natural number), while the sensor has two modes: nominal (\(N\)) and faulty (\(F\)).
It is assumed that the component and the sensor both experiencing faults is infinitely more unlikely than any number of faults on the component. Consequently, \(h_{0}=\langle 0,N\rangle\) is the unique preferred hypothesis; \(h_{1}=\langle 0,F\rangle\) is a child of \(h_{0}\) (any \(h_{i}^{\prime}=\langle i,N\rangle\), \(i\geq 1\), is incomparable to \(h_{1}\)); \(h=\langle 1,F\rangle\) is a child of \(h_{1}\) (there is no hypothesis \(h^{\prime}\) such that \(h_{1}\prec h^{\prime}\prec h\)) hence \(h\)'s distance is \(2\) and \(h\) is finitely-reachable. On the other hand, we have \(h_{0}\prec h_{1}^{\prime}\prec h_{2}^{\prime}\prec\ldots\prec h\), i.e., \(h\) is infinitely deep. ### Abstraction of Hypothesis Spaces In the previous section, we hinted that the diagnosis in some hypothesis spaces is more informative than in others. We now formalise this notion. A hypothesis space \(\mathbb{H}\) (together with its preference relation \(\preceq\) and its function \(\mathit{hypo}:\mathit{Mod}\rightarrow\mathbb{H}\)) is a _refinement_ of hypothesis space \(\mathbb{H}^{\prime}\) (together with \(\preceq^{\prime}\) and \(\mathit{hypo}^{\prime}\)), and conversely \(\mathbb{H}^{\prime}\) is an _abstraction_ of \(\mathbb{H}\), if each hypothesis of \(\mathbb{H}^{\prime}\) corresponds exactly to a subset of hypotheses in \(\mathbb{H}\). Formally, there exists a function \(\alpha:\mathbb{H}\rightarrow\mathbb{H}^{\prime}\) that projects each hypothesis of \(\mathbb{H}\) on \(\mathbb{H}^{\prime}\) such that * \(\forall\sigma\in\mbox{\it Mod.\ hypo}^{\prime}(\sigma)=\alpha(\mbox{\it hypo}( \sigma))\), i.e., \(\mathbb{H}^{\prime}\) is an abstraction of \(\mathbb{H}\), and * \(\forall\{h_{1},h_{2}\}\subseteq\mathbb{H}.\ h_{1}\preceq h_{2}\Rightarrow\alpha (h_{1})\preceq^{\prime}\alpha(h_{2})\), i.e., the preference relation is maintained by the abstraction. The projection is extended naturally to a set of hypotheses, i.e., \(\alpha(H)=\{h^{\prime}\in\mathbb{H}^{\prime}\ |\ \exists h\in H.\ h^{\prime}= \alpha(h)\}\). For instance, the set hypothesis space is an abstraction of the multiset hypothesis space (over the same set of faults). Given a multiset hypothesis, i.e., a mapping \(F\rightarrow\mathbf{N}\), the abstraction function \(\alpha\) returns the subset of faults that are associated with a strictly positive number: \(\alpha(h)=\{f\in F\ |\ h(f)>0\}\). Furthermore, the preference relation is maintained: if \(h_{1}\preceq_{\rm MHS}h_{2}\), then \(h_{1}(f)\leq h_{2}(f)\) for all \(f\); consequently, \(\alpha(h_{1})\subseteq\alpha(h_{2})\) and \(\alpha(h_{1})\preceq_{\rm SHS}\alpha(h_{2})\). An abstraction/refinement relationship between two hypothesis spaces implies that the diagnoses (and minimal diagnoses) in those two spaces are also related. This is shown by the following two lemmas. Theorem 2 below states all abstraction relations (summarised in Figure 2) between the hypothesis spaces for discrete event systems (BHS, SHS, MC-SHS, AP-SHS, OMHS, MHS, and SqHS) described in the previous subsection. **Lemma 1**: _If \(\mathbb{H}^{\prime}\) is an abstraction of \(\mathbb{H}\), the projection on \(\mathbb{H}^{\prime}\) of the diagnosis in \(\mathbb{H}\) is the diagnosis in \(\mathbb{H}^{\prime}\): \(\alpha(\Delta)=\Delta^{\prime}\)._ **Proof:** We prove that \(\Delta^{\prime}\) is exactly the set of hypotheses \(\delta^{\prime}=\alpha(\delta)\) for some candidate \(\delta\in\Delta\). 
\[\begin{array}{rcl}\delta\in\Delta&\Rightarrow&\exists\sigma\in\mbox{\it Mod.\ }o(\sigma)\wedge\mbox{\it hypo}(\sigma)=\delta\\ &\Rightarrow&\exists\sigma\in\mbox{\it Mod.\ }o(\sigma)\wedge\mbox{\it hypo}^{ \prime}(\sigma)=\alpha(\delta)\\ &\Rightarrow&\alpha(\delta)\in\Delta^{\prime}\end{array}\] Figure 1: Hypothesis space illustrating that the depth can be infinite while the distance is finite. An unbroken line indicates a parent/child relationship; a dashed one, an ancestor one. The distance between \(\langle 0,N\rangle\) and \(\langle 1,F\rangle\) is two; the depth is infinite. Conversely, \[\begin{array}{rcl}\delta^{\prime}\in\Delta^{\prime}&\Rightarrow&\exists\sigma\in \mbox{\it Mod. }o(\sigma)\wedge\mbox{\it hypo}^{\prime}(\sigma)=\delta^{\prime}\\ &\Rightarrow&\exists\sigma\in\mbox{\it Mod. }\mbox{\it hypo}(\sigma)\in\Delta\wedge\mbox{\it hypo }^{\prime}(\sigma)=\delta^{\prime}\\ &\Rightarrow&\exists\sigma\in\mbox{\it Mod. }\mbox{\it hypo}(\sigma)\in\Delta\wedge\alpha(\mbox{\it hypo }(\sigma))=\delta^{\prime}\\ &\Rightarrow&\exists\delta\in\Delta.\ \alpha(\delta)=\delta^{\prime}\end{array}\] \(\Box\) **Lemma 2**: _If \(\mathbb{H}^{\prime}\) is an abstraction of \(\mathbb{H}\), the projection on \(\mathbb{H}^{\prime}\) of the minimal diagnosis in \(\mathbb{H}\) is contained in the diagnosis in \(\mathbb{H}^{\prime}\) and contains the minimal diagnosis in \(\mathbb{H}^{\prime}\): \(\Delta^{\prime}_{\preceq^{\prime}}\subseteq\alpha(\Delta_{\preceq})\subseteq \Delta^{\prime}\)._ **Proof:** Since \(\Delta_{\preceq}\subseteq\Delta\) then clearly \(\alpha(\Delta_{\preceq})\subseteq\alpha(\Delta)=\Delta^{\prime}\). Assume now that there exists a minimal candidate \(\delta^{\prime}_{1}\) in \(\mathbb{H}^{\prime}\) such that \(\delta^{\prime}_{1}\in\Delta^{\prime}_{\preceq^{\prime}}\setminus\alpha( \Delta_{\preceq})\). Then, by Lemma 1, there exists a candidate \(\delta_{1}\in\Delta\) such that \(\alpha(\delta_{1})=\delta^{\prime}_{1}\). Furthermore, since \(\delta^{\prime}_{1}\not\in\alpha(\Delta_{\preceq})\), \(\delta_{1}\not\in\Delta_{\preceq}\). Therefore, there must exist another candidate \(\delta_{2}\in\Delta_{\preceq}\) such that i) \(\delta_{2}\preceq\delta_{1}\) (which is why \(\delta_{1}\not\in\Delta_{\preceq}\)) and ii) \(\alpha(\delta_{2})=\delta^{\prime}_{2}\neq\delta^{\prime}_{1}\) (since \(\delta^{\prime}_{2}\in\alpha(\Delta_{\preceq})\) but \(\delta^{\prime}_{1}\not\in\alpha(\Delta_{\preceq})\)). However, by Lemma 1, \(\delta^{\prime}_{2}\) is a candidate, and by the second condition on \(\alpha\), \(\delta^{\prime}_{2}\preceq\delta^{\prime}_{1}\). Hence, \(\delta^{\prime}_{1}\) is not a minimal candidate, which contradicts its existence. \(\Box\) In other words, the projection of the minimal diagnosis \(\Delta_{\preceq}\) in \(\mathbb{H}\) is a subset of (possibly equal to) the diagnosis in the more abstract space \(\mathbb{H}^{\prime}\), whose minimisation is the minimal diagnosis in \(\mathbb{H}^{\prime}\). Returning to the example of the set and multiset hypothesis spaces, given a minimal diagnosis \(\Delta^{\mbox{\scriptsize{MHS}}}_{\preceq}=\{\{a\to 2,b\to 0\},\{a\to 1,b\to 1\},\{a\to 0,b\to 2\}\}\) in the multiset hypothesis space, its projection on the set hypothesis space is \(\alpha(\Delta^{\mbox{\scriptsize{MHS}}}_{\preceq})=\{\{a\},\{a,b\},\{b\}\}\). The minimal diagnosis in the set hypothesis space is \(\Delta^{\mbox{\scriptsize{SHS}}}_{\preceq}=\{\{a\},\{b\}\}\), which is the set of minimal elements of \(\alpha(\Delta^{\mbox{\scriptsize{MHS}}}_{\preceq})\). 
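To illustrate, here is a small Python sketch of this projection (the dictionary and frozenset encodings of multiset and set hypotheses are choices made for this illustration only): the abstraction function \(\alpha\) maps a fault-count mapping to the set of faults with a positive count, and minimising the projected set under subset inclusion recovers the minimal SHS diagnosis of the example.

```
# Sketch: projecting a multiset (MHS) diagnosis onto the set hypothesis space (SHS).

def alpha(h_mhs):
    """Abstraction MHS -> SHS: keep the faults that occur at least once."""
    return frozenset(f for f, n in h_mhs.items() if n > 0)

def minimise(hyps, preceq):
    """Minimal elements of a finite, explicitly enumerated hypothesis set."""
    return {h for h in hyps
            if not any(preceq(g, h) and g != h for g in hyps)}

# Minimal MHS diagnosis from the example above.
Delta_min_MHS = [{"a": 2, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 2}]

projected = {alpha(h) for h in Delta_min_MHS}
print(sorted(tuple(sorted(h)) for h in projected))
# -> [('a',), ('a', 'b'), ('b',)]   (contained in the SHS diagnosis, cf. Lemma 2)

Delta_min_SHS = minimise(projected, lambda g, h: g <= h)   # SHS preference: subset
print(sorted(tuple(sorted(h)) for h in Delta_min_SHS))
# -> [('a',), ('b',)]
```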
This relation between the (minimal) diagnosis in a hypothesis space \(\mathbb{H}\) and an abstraction \(\mathbb{H}^{\prime}\) of \(\mathbb{H}\) has implications for the complexity of computing it: Since the (minimal) diagnosis in \(\mathbb{H}^{\prime}\) can be computed from the (minimal) diagnosis in \(\mathbb{H}\), in time polynomial in the size of the diagnosis, we can say that diagnosing in a more refined hypothesis space is at least as hard as diagnosing in the more abstract space.

**Theorem 2**: _The set of abstraction relations between hypothesis spaces shown in Figure 2 is correct and complete._

Figure 2: Abstraction relations between the hypothesis spaces of DES presented in Subsection 3.2; \(\mathbb{H}^{\prime}\) is an abstraction of \(\mathbb{H}\) iff there is a directed path from \(\mathbb{H}\) to \(\mathbb{H}^{\prime}\).

**Proof:** (Sketch) The abstraction function from *SHS to BHS is \(\alpha(h)=\mbox{nominal iff }h=\emptyset\). The abstraction function from SHS to MC-SHS is the identity function, and the preference relation of SHS is indeed maintained in MC-SHS: \(h\subseteq h^{\prime}\Rightarrow\left(h=h^{\prime}\ \vee\ |h|<|h^{\prime}|\right)\). Similarly, the preference between two SHS hypotheses is maintained when these hypotheses are interpreted as AP-SHS thanks to the fact that each fault has an a-priori probability below \(0.5\), which implies that removing a fault from a hypothesis increases its a-priori probability. The abstraction function from MHS to SHS has already been described. The abstraction function from SqHS to MHS counts the number of occurrences of each faulty event in the sequence. The abstraction function from OMHS to BHS is \(\alpha(h)=\text{nominal iff }h(f)=0\) for all faulty events \(f\). The abstraction function from MHS to OMHS is the identity function; OMHS is an abstraction of MHS because its associated precedence relation is more restrictive than that of MHS. Finally, the abstraction function from CHS to BHS is \(\alpha(h)=\text{nominal iff }h=0\). There is no relation between SHS and OMHS since SHS does not mention the number of occurrences as OMHS does, while the mapping from OMHS to SHS does not maintain the preference relation: for instance, if \(a\prec b\), then \(\{a\to 0,b\to 1\}\prec_{\text{OMHS}}\{a\to 1,b\to 0\}\), while \(\{b\}\not\prec_{\text{SHS}}\{a\}\). \(\Box\)

## 4 Representing and Testing Sets of Hypotheses

The diagnosis approach developed in this paper is based on an operation called the _diagnosis test_. A test, defined in Subsection 4.1, decides whether a given set of hypotheses has a non-empty intersection with the diagnosis, that is, whether any hypothesis in the set is a candidate. The set of hypotheses to be tested is not enumerated but represented symbolically. To this end, we define in Subsection 4.2 _hypothesis properties_, which are atomic statements used to describe hypotheses. We show how to construct for any hypothesis space a matching property space that is "sufficient", in the sense that any set of hypotheses that we need to test has a representation using properties in this space. In Subsection 4.3 we discuss three specific types of tests, which we term "diagnosis questions", that together are sufficient to implement the exploration strategies we propose. The strategies themselves are described in Section 6. Here, and in the remainder of the paper, we consider only well partially ordered hypothesis spaces.
As shown earlier, this ensures that the minimal diagnosis exists and is finite, so that the diagnosis algorithm can output it in finite time. ### The Diagnosis Test Our diagnosis algorithms are based on an operation called the _diagnosis test_. We assume the existence of an "oracle", called the _test solver_, that is able to perform such tests. We will describe several concrete implementations of test solvers, for DES and different hypothesis spaces, in Section 8. A diagnosis test is the problem of deciding whether a given set \(H\subseteq\mathbb{H}\) contains a diagnosis candidate. **Definition 3**: _A diagnosis test is a tuple \(\langle\text{Mod},o,H\rangle\) where Mod is a system model, \(o\) is an observation, and \(H\subseteq\mathbb{H}\) is a set of hypotheses._ _The result of a test is either a hypothesis \(\delta\in H\) such that \(\delta\in\Delta(\text{Mod},o,\mathbb{H})\) if any such \(\delta\) exists, and \(\bot\) otherwise (where \(\bot\notin\mathbb{H}\) is a distinct symbol)._ Later, in Section 6.3, we will amend this definition to allow the test solver to return a conflict instead of \(\bot\), but for now we limit ourselves to the simple version. Given a diagnosis problem \(\langle\text{Mod},o,\mathbb{H}\rangle\), a test is defined solely by the hypothesis set \(H\). If the test returns a candidate, we say it is successful; otherwise, we say it failed. ### Hypothesis Properties Some of the sets of hypotheses we will need to test to compute the diagnosis can be very large, and some of them will even be infinite. Therefore, we represent such sets symbolically, by a finite set of _hypothesis properties_. These properties are atomic statements about hypotheses. A set of properties represents those hypotheses that satisfy all properties in the set. Not all sets of hypotheses will be represented in this way. The minimal diagnosis returned by our algorithms is an explicitly enumerated set of candidates, as are some other sets manipulated by the algorithms during computation of the minimal diagnosis. However, all hypothesis sets given to the test solver to test are represented symbolically; that is, the test solver's input will be a set of properties, rather than a set of hypotheses. To distinguish the two types of sets, we will use \(H\) for sets of hypotheses represented symbolically and \(S\) for explicitly enumerated hypothesis sets. **Definition 4**: _A hypothesis property (or simply, property) is an object \(p\) that implicitly represents a (possibly infinite) set of hypotheses \(\mbox{hypos}(p)\subseteq\mathbb{H}\). If hypothesis \(h\) belongs to \(\mbox{hypos}(p)\), we say that \(h\) exhibits property \(p\), or that \(p\) is a property of \(h\). For any property \(p\), we also use \(\neg p\) as a property, with the meaning \(\mbox{hypos}(\neg p)=\mathbb{H}\setminus\mbox{hypos}(p)\)._ _Given a hypothesis property space \(\mathbb{P}\), we write \(\mbox{props}(h)\subseteq\mathbb{P}\) for the set of properties of \(h\). A set \(P\subseteq\mathbb{P}\) of properties implicitly represents the set \(\mbox{hypos}(P)\) of hypotheses that exhibit all properties in \(P\): \(\mbox{hypos}(P)=\{h\in\mathbb{H}\mid P\subseteq\mbox{props}(h)\}=\bigcap_{p \in P}\mbox{hypos}(p)\)._ Simple examples of properties are that a given fault occurred; or did not; that it occurred at most once; or more than once; that one type of fault occurred before another; and so on. We give more examples of properties later in this subsection. A priori, we can define properties to represent any set of hypotheses. 
Given a set \(H\) of hypotheses, we could define a property \(p_{H}\) such that \(\mbox{hypos}(p_{H})=H\). However, implementing support for such ad hoc properties in the test solver is not practical, and is also not very useful, since it does not help in the formation of informative conflicts. Useful properties are ones that allow the test solver to automatically infer information that can be generalised. For instance, the property that states that a specific fault did not occur is of this kind. Next, we define the property space \(\mathbb{P}\) that we will use in the rest of this paper. \(\mathbb{P}\) is derived from the hypothesis space \(\mathbb{H}\) considered and its preference relation, and is therefore defined for any hypothesis space. For each hypothesis \(h\in\mathbb{H}\), \(\mathbb{P}\) contains the following two properties and their negations: * \(p_{\mbox{desc}}(h)\) is the property of being a descendant of hypothesis \(h\), i.e., \(\mbox{hypos}(p_{\mbox{desc}}(h))=\{h^{\prime}\in\mathbb{H}\mid h\preceq h^{ \prime}\}\) and * \(p_{\mbox{anc}}(h)\) is the property of being an ancestor of hypothesis \(h\), i.e., \(\mbox{hypos}(p_{\mbox{anc}}(h))=\{h^{\prime}\in\mathbb{H}\mid h^{\prime} \preceq h\}\). These properties may appear somewhat abstract; their concrete meaning depends on the hypothesis space and preference order that underlies them. To give a more concrete example, let us look at the set hypothesis space (SHS): Let \(h=\{f_{1},f_{2}\}\subseteq F=\{f_{1},\ldots,f_{4}\}\) be the hypothesis that faults \(f_{1}\) and \(f_{2}\) took place, while the other two faults (\(f_{3}\) and \(f_{4}\)) did not. Then * \(p_{\mbox{desc}}(h)\) is the property that \(f_{1}\) and \(f_{2}\) took place (not ruling out that other faults may also have happened); * \(\neg p_{\rm desc}(h)\) is the property that not both \(f_{1}\) and \(f_{2}\) occurred; * \(p_{\rm anc}(h)\) is the property that no fault other than \(f_{1}\) or \(f_{2}\) took place, i.e., neither \(f_{3}\) nor \(f_{4}\); and * \(\neg p_{\rm anc}(h)\) is the property that some fault other than \(f_{1}\) and \(f_{2}\) took place, i.e., either \(f_{3}\) or \(f_{4}\) happened. These properties are sufficient to represent all of the sets of hypotheses that we will need to test in any of our strategies for exploring the hypothesis space. In fact, we can give a more precise characterisation of the hypothesis sets that can be represented with conjunctions of properties in \(\mathbb{P}\). To do this, we first need to recall some standard terminology: Let \(\preceq\) be a partial order on some set \(\mathbb{S}\); a subset \(S\) of \(\mathbb{S}\) is _convex_ iff for any two distinct elements \(a,b\in S\), every element \(c\) such that \(a\preceq c\preceq b\) is also in \(S\). **Theorem 3**: _Hypothesis set \(H\subseteq\mathbb{H}\) can be represented by a finite conjunction of properties over \(\mathbb{P}\) if and only if \(H\) is convex._ **Proof:** First, let \(H\) be a convex hypothesis set. If \(H=\emptyset\), the claim holds trivially, since the empty set can be represented by any contradictory set of properties, e.g., \(\{p_{\rm desc}(h),\neg p_{\rm desc}(h)\}\). Therefore, suppose \(H\) is non-empty. 
Let \(H^{\prec}=\{h^{\prime}\not\in H\mid\exists h\in H:h^{\prime}\preceq h\}\), \(H^{\succ}=\{h^{\prime}\not\in H\mid\exists h\in H:h\preceq h^{\prime}\}\), and \(H^{\rm U}=\{h^{\prime}\not\in H\mid\forall h\in H:h^{\prime}\not\preceq h\ \mbox{and}\ h\not\preceq h^{\prime}\}\), that is, \(H^{\prec}\) is the set of ancestors of hypotheses in \(H\) that are not themselves in \(H\), \(H^{\succ}\) is the set of descendants of hypotheses in \(H\) that are not themselves in \(H\), and \(H^{\rm U}\) is the set of hypotheses that are not ordered with respect to any element in \(H\). Because \(H\) is convex, every hypothesis \(h^{\prime}\in\mathbb{H}\setminus H\) must belong to one of these three sets: if \(h^{\prime}\) is not unrelated to every hypothesis in \(H\), it must be either preferred to some \(h\in H\), or some \(h\in H\) preferred to it; thus it belongs to either \(H^{\prec}\) or \(H^{\succ}\). Furthermore, it cannot belong to both: if it did, there would be some hypothesis \(h\in H\) such that \(h\preceq h^{\prime}\) and some hypothesis \(h^{\prime\prime}\in H\) such that \(h^{\prime}\preceq h^{\prime\prime}\); this contradicts the convexity of \(H\).

Construct the property set \(P=\{\neg p_{\rm anc}(h^{\prime})\mid h^{\prime}\in\max_{\prec}(H^{\prec})\}\cup\{\neg p_{\rm desc}(h^{\prime})\mid h^{\prime}\in\min_{\preceq}(H^{\succ})\}\cup\{\neg p_{\rm desc}(h^{\prime})\mid h^{\prime}\in\min_{\preceq}(H^{\rm U})\}\). We claim that \(P\) is finite and that \(\mathit{hypos}(P)=H\).

That \(\min_{\preceq}(H^{\succ})\) and \(\min_{\preceq}(H^{\rm U})\) are finite follows directly from the fact that \(\mathbb{H}\) is well partially ordered. For every hypothesis \(h^{\prime}\in H^{\prec}\) there is an \(h\in H\) such that \(h^{\prime}\preceq h\) (by construction) and such that \(h\) is minimal in \(H\). Hence, the maximal elements in \(H^{\prec}\) are exactly the minimal elements in the set of parents of the hypotheses in \(H\), and thus this set is also finite by the well partial orderedness of \(\mathbb{H}\). Since all three sets are finite, so is \(P\).

If \(h\) exhibits \(p_{\rm anc}(h^{\prime})\) for some \(h^{\prime}\in H^{\prec}\), then \(h\preceq h^{\prime}\prec h^{\prime\prime}\) for some \(h^{\prime\prime}\in H\). Since \(h^{\prime}\not\in H\), by convexity, \(h\) cannot be in \(H\) either. Thus, all \(h\in H\) exhibit \(\neg p_{\rm anc}(h^{\prime})\) for all \(h^{\prime}\in H^{\prec}\). If \(h\) exhibits \(p_{\rm desc}(h^{\prime})\) for some \(h^{\prime}\in H^{\succ}\), then \(h^{\prime\prime}\prec h^{\prime}\preceq h\) for some \(h^{\prime\prime}\in H\). Analogously to the previous case, because \(h^{\prime}\not\in H\) and \(H\) is convex, \(h\) cannot be in \(H\). Thus, all \(h\in H\) exhibit \(\neg p_{\rm desc}(h^{\prime})\) for all \(h^{\prime}\in H^{\succ}\). Finally, if \(h\) exhibits \(p_{\rm desc}(h^{\prime})\) for some \(h^{\prime}\in H^{\rm U}\), then \(h^{\prime}\preceq h\). \(h\) cannot belong to \(H\) because if it did, \(h^{\prime}\) would be related to some element in \(H\), contradicting the construction of \(H^{\rm U}\). Thus, all \(h\in H\) exhibit \(\neg p_{\rm desc}(h^{\prime})\) for all \(h^{\prime}\in H^{\rm U}\).

In summary, each hypothesis \(h\in H\) exhibits all properties in \(P\). Thus, \(H\subseteq\mathit{hypos}(P)\). Now, let \(h^{\prime}\) be a hypothesis not in \(H\). We know that \(h^{\prime}\) belongs to at least one of \(H^{\prec}\), \(H^{\succ}\), or \(H^{\rm U}\).
If \(h^{\prime}\in H^{\prec}\) then it is either maximal in \(H^{\prec}\) or the ancestor of a hypothesis that is maximal in \(H^{\prec}\); in either case, it exhibits \(p_{\rm anc}(h^{\prime\prime})\) for some \(h^{\prime\prime}\in H^{\prec}\). Likewise, if \(h^{\prime}\in H^{\succ}\) then it is either minimal in \(H^{\succ}\) or the descendant of a hypothesis that is minimal in \(H^{\succ}\), so it exhibits \(p_{\rm desc}(h^{\prime\prime})\) for some \(h^{\prime\prime}\in H^{\succ}\). Finally, \(h^{\prime}\in H^{\tt U}\) then it is either minimal in \(H^{\tt U}\) or the descendant of a hypothesis that is minimal in \(H^{\tt U}\), so it exhibits \(p_{\rm desc}(h^{\prime\prime})\) for some \(h^{\prime\prime}\in H^{\tt U}\). In all three cases, \(h^{\prime}\) exhibits a property whose negation is in \(P\), and therefore \(h^{\prime}\not\in\mathit{hypos}(P)\). Hence \(\mathit{hypos}(P)\subseteq H\). So far, we have shown that if \(H\) is convex, then it can be represented by a finite conjunction of properties in \(\mathbb{P}\). To show the converse (only if), let \(H\) be a non-convex set. This means there are three hypotheses, \(h_{a}\), \(h_{b}\) and \(h_{c}\), such that \(h_{a}\preceq h_{c}\preceq h_{b}\), \(h_{a},h_{b}\in H\) and \(h_{c}\not\in H\). (Since the three hypotheses are necessarily distinct, we have in fact \(h_{a}\prec h_{c}\prec h_{b}\).) Suppose there is a property set \(P\) such that \(\mathit{hypos}(P)=H\): \(P\) must exclude \(h_{c}\), that is, there must be at least one property \(p\in P\) that \(h_{c}\) does not exhibit. There are only four ways to construct such a property: (1) \(p=p_{\rm anc}(h)\) for some strict ancestor \(h\prec h_{c}\). But this property also excludes \(h_{b}\) from \(\mathit{hypos}(P)\), since \(h_{c}\preceq h_{b}\). (2) \(p=p_{\rm desc}(h)\) for some strict descendant \(h_{c}\prec h\). This property excludes \(h_{a}\), since \(h_{a}\preceq h_{c}\). (3) \(p=\neg p_{\rm anc}(h)\) for some descendant \(h_{c}\preceq h\). (Note that here, \(h\) may be equal to \(h_{c}\).) Again, this property excludes \(h_{a}\), since \(h_{a}\preceq h_{c}\). (4) \(p=\neg p_{\rm desc}(h)\) for some ancestor \(h\preceq h_{c}\) (which may also equal \(h_{c}\)). This property excludes \(h_{b}\), since \(h_{c}\preceq h_{b}\). Thus, it is not possible to exclude \(h_{c}\) from \(\mathit{hypos}(P)\) without also excluding either \(h_{a}\) or \(h_{b}\). Therefore, since \(H\) includes both \(h_{a}\) and \(h_{b}\) but not \(h_{c}\), \(\mathit{hypos}(P)\) cannot equal \(H\). \(\Box\) ### Diagnostic Questions and Their Representations Next, we describe three different "diagnostic questions". Each question is a specific test that provides a piece of information about the diagnosis problem at hand. The strategies we present in Section 6 to explore the hypothesis space in search of the minimal diagnosis use these questions as their main primitives for interacting with the problem. We show how each of the questions is formulated as sets of hypotheses to test, and how those hypothesis sets can be represented by (conjunctive) sets of properties. In most cases, the mapping from a question to a test and from a test to its representation is straightforward, but for some, there are alternative representations. Which is the best representation depends in part on the strategy for exploring the hypothesis space: For conflict-directed strategies (introduced in Subsection 6.3), the representation should produce conflicts that are as general as possible. 
In addition, for the preferred-first strategy (Subsection 6.2), those conflicts should generate as few successors as possible. Finally, the property set should facilitate the task of the test solver. **Question 1**.: Is a given hypothesis \(h\) a diagnosis candidate? (candidate\((h)\)) * **Test hypothesis set**: \(H=\{h\}\). * **Representation by properties**: \(\{p_{\rm desc}(h)\}\cup\{\neg p_{\rm desc}(h^{\prime})\mid h^{\prime}\in{\rm children }(h)\}\). * **Test result**: yes or no. The test solver returns \(h\) if successful, and \(\bot\) otherwise. Note that this question could also be represented by the property set \(\{p_{\mathrm{desc}}(h),p_{\mathrm{anc}}(h)\}\) (since \(h\) is the only hypothesis that is both an ancestor and a descendant of \(h\)). However, the representation given above is the better one for the conflict-directed preferred-first strategy, and the basis of the one that we use. For particular hypothesis spaces, there can also be other, simpler but equivalent ways of representing them by properties. We discuss some alternatives in conjunction with the SAT-based implementation of a test solver for discrete event system diagnosis in Subsection 8.2. **Question 2.** Is a given candidate \(\delta\) minimal? (minimal(\(\delta\))) * **Test hypothesis set**: \(H=\{h\in\mathbb{H}\mid h\prec\delta\}\); * **Representation by properties**: \(\{p_{\mathrm{anc}}(\delta),\neg p_{\mathrm{desc}}(\delta)\}\). * **Test result**: Testing \(H\) above amounts to asking, "is there a candidate preferred to \(\delta\)?". Thus, the answer to the original question (\(\delta\) is minimal) is yes if the outcome of the test is \(\bot\). If \(\delta\) is not minimal, the test solver returns a strictly preferred candidate. **Question 3.** Given a finite and explicitly enumerated set of hypotheses \(S\), does \(S\) cover the diagnosis? (covers(\(S\))) * **Test hypothesis set**: \(H=\{h\in\mathbb{H}\mid\forall h^{\prime}\in S:h^{\prime}\not\preceq h\}\); * **Representation by properties**: \(\{\neg p_{\mathrm{desc}}(h^{\prime})\in\mathbb{P}\mid h^{\prime}\in S\}\). * **Test result**: As in Question 2, testing \(H\) asks the reverse of the question; thus, the answer is yes (\(S\) does cover the diagnosis) if the test solver returns \(\bot\), and if \(S\) does not cover the diagnosis, it returns a counter-example, in the form of a candidate not covered by \(S\). It is possible to characterise the minimal diagnosis in terms of diagnosis questions. **Theorem 4**: _A subset of hypothesis \(S\) is the minimal diagnosis if and only if it satisfies the following three conditions:_ * \(\forall h\in S.\;\mathrm{candidate}(h)\)_;_ * \(\forall h\in S.\;\mathrm{minimal}(h)\)_;_ * \(\mathrm{covers}(S)\)_._ **Proof:** That the minimal diagnosis satisfies these three conditions is a direct consequence of its definition (Definition 2). Assume now that \(S\) satisfies the conditions of the theorem. We show that \(S=\min_{\preceq}(\Delta)\) which, by Theorem 1, concludes the proof. Assume that \(S\neq\min_{\preceq}(\Delta)\); this means that either \(S\setminus\min_{\preceq}(\Delta)\neq\emptyset\) or \(\min_{\preceq}(\Delta)\setminus S\neq\emptyset\). i) Let \(h\) be a hypothesis of \(S\setminus\min_{\preceq}(\Delta)\); \(h\) is a candidate (by definition of \(S\)) but a non-minimal one. Consequently, there exists a minimal candidate \(\delta\in\min_{\preceq}(\Delta)\) such that \(\delta\prec h\). This contradicts the condition \(\mathrm{minimal}(h)\). 
ii) Let \(\delta\) be a minimal candidate of \(\min_{\preceq}(\Delta)\setminus S\). Since \(S\) covers the diagnosis, it must contain a hypothesis \(h\preceq\delta\); furthermore, since \(\delta\not\in S\), \(h\prec\delta\). Because \(\delta\) is a minimal candidate, \(h\) is not a candidate. This contradicts the first condition that all hypotheses in \(S\) should be candidates. \(\Box\) Some of our diagnosis procedures will not rely on the diagnosis question \(\mathrm{minimal}(h)\). For these procedures, we will rely on the following theorem instead. **Theorem 5**: _A subset of hypotheses \(S\) is the minimal diagnosis if and only if it satisfies the following three conditions:_ * \(\forall h\in S,\ \mbox{\rm candidate}(h)\)_;_ * \(\forall h,h^{\prime}\in S,\ h^{\prime}\not\prec h\)_;_ * \(\mbox{\rm covers}(S)\)_._ **Proof:** The proof is essentially the same as that of Theorem 4. The difference lies in the part i). We reuse the same notation, i.e., \(h\in S\setminus\min_{\preceq}(\Delta)\) and \(\delta\in\min_{\preceq}(\Delta)\) is such that \(\delta\prec h\). From the third condition, we know that there is \(h^{\prime}\in S\) such that \(h^{\prime}\preceq\delta\) (actually, the stronger relation \(h^{\prime}\prec\delta\) holds since \(\delta\) is not an element of \(S\)). Therefore the two elements \(h\) and \(h^{\prime}\) from \(S\) satisfy \(h^{\prime}\prec h\), which contradicts the second condition of Theorem 5. \(\Box\) ## 5 Diagnostic Properties in Different Settings In this section, we illustrate how the abstract definitions in the previous section are instantiated in two different modeling frameworks: static and dynamic (discrete event) systems. For the latter, we also show the instantiation of different hypothesis spaces. ### Static Systems Static systems are systems whose state does not normally change over time (except for becoming faulty). A typical example of a static system is a Boolean circuit. Static systems diagnosis consists in identifying the set of faults the system exhibits at a given point in time; there is no notion of multiple occurrences of the same fault, nor of temporal order between fault occurrences. Hence, the diagnosis is normally defined over the set hypothesis space (power set of the set \(F\) of faults), with the preference order defined as the subset relation. Static systems are typically modeled by a finite set of variables, each with their own domain of values. The set of possible system behaviours, which is a subset of all assignments of values to the variables, is defined by a set _Mod_ of constraints over the variables. (These can be expressed in propositional or first-order logic, or some other constraint formalism.) The observation is also defined by a set \(o\) of constraints on the values of certain variables of the model (for instance \(\mbox{\tt voltage}=\mbox{low}\)). Each possible fault \(f\in F\) is modeled by a Boolean variable \(v_{f}\in V_{F}\): this variable takes the value _true_ iff the fault is present. The hypothesis associated with a behaviour is then the subset of faults \(f\) whose corresponding variable \(v_{f}\) is _true_: \(\mbox{\it hypo}(\sigma)=\{f\in F\mid\sigma\to v_{f}\}\). A hypothesis \(h\) can be represented by a propositional formula \(\Phi_{h}=\bigwedge_{f\in h}v_{f}\wedge\bigwedge_{f\in F\setminus h}\neg v_{f}\). 
A hypothesis \(h\subseteq F\) is a candidate if it is logically consistent with the model and observation, i.e., if \[\mathit{Mod},o,\Phi_{h}\not\models\bot.\] Performing a test is therefore equivalent to solving a constraint satisfaction problem. (In case the model is represented by a propositional logic formula, this means a propositional satisfiability problem.)

The property \(p_{H}\) corresponding to a hypothesis set \(H\subseteq\mathbb{H}\), i.e., such that \(\mathit{hypos}(p_{H})=H\), is the logical disjunction of the formulas of the hypotheses in \(H\): \(\Phi_{p_{H}}=\bigvee_{h\in H}\Phi_{h}\). Of course, \(\Phi_{p_{H}}\) can also be represented by any formula that is equivalent to this. It is easy to show that:

* \(\Phi_{p_{\rm desc}(h)}\equiv\bigwedge_{f\in h}v_{f}\). That is, the descendants of hypothesis \(h\) (which are those hypotheses that \(h\) is preferred or equal to) are exactly those that include all faults that are present in \(h\), and possibly other faults as well.
* \(\Phi_{p_{\rm anc}(h)}\equiv\bigwedge_{f\in F\setminus h}\neg v_{f}\). That is, the ancestors of hypothesis \(h\) (which are those hypotheses that are preferred or equal to \(h\)) are exactly those that do not include any fault not present in \(h\), and possibly exclude some of the faults that \(h\) has.

### Discrete Event Systems

Event-driven dynamic systems are characterised by transitions (be they discrete, timed or continuous) taking place over time. To simplify the discussion of this example, we will consider discrete untimed transitions, i.e., the classical discrete event system (DES) framework (Cassandras & Lafortune, 1999). However, the formulation below, and the diagnosis algorithms we present in the next section, generalise to other types of dynamic systems.

Let \(\Sigma\) be the set of events that can take place in the system. A behaviour \(\sigma\in\Sigma^{\star}\) of the system is a (finite) sequence of events. Thus, the system model is a language \(\mathit{Mod}\subseteq\Sigma^{\star}\). It is common to distinguish in \(\Sigma\) a subset of observable events (\(\Sigma_{o}\)), and to define the observable consequence of a behaviour as the projection \(\Pi_{\Sigma_{o}}(\sigma)\) of the event sequence \(\sigma\) on \(\Sigma_{o}\) (Sampath et al., 1995). Then, an observation, expressed as a predicate on behaviours, has the form \(\mathit{o}(\sigma)\equiv(\Pi_{\Sigma_{o}}(\sigma)=w)\), for some fixed \(w\in\Sigma_{o}^{\star}\). More general forms of observation, such as partially ordered or ambiguous occurrences of observable events, can be specified similarly. Whichever form it takes, we can say that the observation is another language \(\mathcal{L}_{\mathit{O}}\subseteq\Sigma^{\star}\) such that a behaviour \(\sigma\) is consistent with it iff \(\sigma\in\mathcal{L}_{\mathit{O}}\).

The faults are modeled by a subset \(F\subseteq\Sigma\) of (unobservable) events. The set of behaviours that correspond to a hypothesis \(h\) is also a language: \(\mathcal{L}_{h}=\{\sigma\in\Sigma^{\star}\mid\mathit{hypo}(\sigma)=h\}\). The precise definition of \(\mathcal{L}_{h}\) depends on the type of hypothesis space. In most cases, these languages are all regular, and hence representable by finite state machines.
However, such a representation is normally too large to be feasible for computational purposes, so in practice an exponentially compact factored representation, such as a network of partially synchronised automata (Pencole & Cordier, 2005), Petri nets (Benveniste, Fabre, Haar, & Jard, 2003), or description in a modelling formalism like PDDL (Haslum & Grastien, 2011), is used instead. As we describe the hypotheses and properties for different spaces in the following, we will simply give them as (regular) languages. For the set hypothesis space (SHS), a hypothesis \(h\subseteq F\) corresponds to the language \(\mathcal{L}_{h}=\bigcap_{f\in h}(\Sigma^{\star}\{f\}\Sigma^{\star})\ \cap\ \bigcap_{f\in F \setminus h}(\Sigma\setminus\{f\})^{\star}\). For the multiset hypothesis space (MHS), the language \(\mathcal{L}_{h}\) of hypothesis \(h\) is the intersection \(\bigcap_{f\in F}\mathcal{L}_{f}^{=h(f)}\), where for each \(f\), \(\mathcal{L}_{f}^{=h(f)}\) contains all event sequences that have exactly \(h(f)\) occurrences of \(f\). For instance \(\mathcal{L}_{f}^{=2}=(\Sigma\setminus\{f\})^{\star}\left\{f\right\}(\Sigma \setminus\{f\})^{\star}\left\{f\right\}(\Sigma\setminus\{f\})^{\star}\). For the sequence hypothesis space (SqHS), \(\mathcal{L}_{h}\) is the language of words whose projection over \(F\) is \(h\): if \(h=[f_{1},\ldots,f_{k}]\), then \(\mathcal{L}_{h}=(\Sigma\setminus F)^{\star}\left\{f_{1}\right\}(\Sigma \setminus F)^{\star}\ldots(\Sigma\setminus F)^{\star}\left\{f_{k}\right\}( \Sigma\setminus F)^{\star}\). A hypothesis \(h\) is a candidate if the intersection \(\mathit{Mod}\cap\mathcal{L}_{\mathit{O}}\cap\mathcal{L}_{h}\) is non-empty. Essentially, any \(\sigma\) that belongs to this intersection is a possible behaviour of the system. Thus, a test can be seen as a discrete-state reachability problem. Given compact representations of the languages involved, tests can be carried out using, for example, model checking (Clarke, Grumberg, & Peled, 2000) or AI planning (Ghallab, Nau, & Traverso, 2004) tools. The property \(p_{H}\) is also a language, specifically \(\mathcal{L}_{p_{H}}=\bigcup_{h\in H}\mathcal{L}_{h}\). Immediate from the definition, the language of a set of properties \(P\) is the intersection of the properties' languages: \(\mathcal{L}_{P}=\bigcap_{p\in P}\mathcal{L}_{p}\). Likewise, the language of the negation of a property is the complement of its language, i.e., \(\mathcal{L}_{\neg p}=\Sigma^{\star}\setminus\mathcal{L}_{p}\). Using these, the languages of properties \(p_{\text{desc}}(h)\) and \(p_{\text{anc}}(h)\) can be built up according to their definitions. However, just as in the case of static systems, we can also find simpler, and more intuitive, equivalent expressions for \(\mathcal{L}_{p_{\text{desc}}(h)}\) and \(\mathcal{L}_{p_{\text{anc}}(h)}\). For the set hypothesis space, these are: * \(\mathcal{L}_{p_{\text{desc}}(h)}=\bigcap_{f\in h}(\Sigma^{\star}\{f\}\Sigma^{ \star})\). In other words, descendants of \(h\) are those event sequences that contain at least one occurrence of each fault \(f\in h\). * \(\mathcal{L}_{p_{\text{anc}}(h)}=\bigcap_{f\in F\setminus h}(\Sigma\setminus\{ f\})^{\star}\). The ancestors of \(h\) are those event sequences that do not contain any occurrence of any fault event not in \(h\). 
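Before turning to the multiset and sequence cases, a small Python sketch (illustrative only; event sequences are plain lists rather than automata or factored representations) of these two SHS properties, written as predicates on behaviours:

```
# Sketch: SHS descendant/ancestor properties checked directly on event sequences.
# F is the set of fault events; h is an SHS hypothesis, i.e., a subset of F.

def p_desc(h):
    """L_{p_desc(h)}: sequences with at least one occurrence of every f in h."""
    return lambda sigma: all(f in sigma for f in h)

def p_anc(h, F):
    """L_{p_anc(h)}: sequences with no occurrence of any fault outside h."""
    return lambda sigma: not any(e in (F - h) for e in sigma)

F = {"f1", "f2", "f3"}
h = {"f1", "f2"}

print(p_desc(h)(["f1", "x", "f2", "y"]))    # True: both f1 and f2 occur
print(p_anc(h, F)(["f1", "x", "f2", "y"]))  # True: no fault outside h occurs
print(p_anc(h, F)(["f3", "x"]))             # False: f3 occurs although f3 is not in h
```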
For the multiset hypothesis space, the languages of these properties can be written as follows: * \(\mathcal{L}_{p_{\text{desc}}(h)}=\bigcap_{f\in F}\mathcal{L}_{f}^{\geq h(f)}\), * \(\mathcal{L}_{p_{\text{anc}}(h)}=\bigcap_{f\in F}\mathcal{L}_{f}^{\leq h(f)}\), where \(\mathcal{L}_{e}^{\geq x}\) is the language of event sequences in which \(e\) occurs at least \(x\) times and \(\mathcal{L}_{e}^{\leq x}\) the language of sequences where \(e\) occurs at most \(x\) times. The former can be written as \(\mathcal{L}_{e}^{\geq x}=\Sigma^{\star}\mathcal{L}_{e}^{=x}\Sigma^{\star}\), and the latter as \(\bigcup_{i=0,\dots,x}\mathcal{L}_{e}^{=i}\). For the sequence hypothesis space, the properties can be written as follows. Let \(h=[f_{1},\dots,f_{k}]\): * \(\mathcal{L}_{p_{\text{desc}}(h)}=\Sigma^{\star}\{f_{1}\}\Sigma^{\star}\dots \Sigma^{\star}\{f_{k}\}\Sigma^{\star}\). In other words, the descendants of \(h\) are all event sequences in which the sequence \(h=[f_{1},\dots,f_{k}]\) is "embedded". * \(\mathcal{L}_{p_{\text{anc}}(h)}=(\Sigma\setminus F)^{\star}\left\{f_{1}\right\} ^{0/1}(\Sigma\setminus F)^{\star}\dots(\Sigma\setminus F)^{\star}\left\{f_{k} \right\}^{0/1}(\Sigma\setminus F)^{\star}\). That is, the ancestors of \(h\) are all event sequences that contain some substring of \(h\) as an embedded subsequence, and that do not contain any fault event interspersed between the occurrences of these fault events. ## 6 Diagnosis Strategies We have cast the diagnosis problem as a search for the minimal candidates in the space of hypotheses, and we have shown how this search can query the problem using symbolic tests. To instantiate the framework into a concrete diagnosis algorithm, we must also specify a strategy for the exploration of the hypothesis space, and an implementation of the test solver that is appropriate for the class of system models and the hypothesis space. We describe implementations of test solvers in Section 8. In this section, we outline two broad types of exploration strategies: The first, which we call "preferred-last", maintains a set of candidates, which is iteratively extended until it covers the diagnosis. The second, which we call "preferred-first", searches in a top-down fashion, testing at each step the most preferred hypothesis that has not yet been rejected. In each case, we first present the basic strategy, followed by refined versions. In particular, we show how the preferred-first strategy can be enhanced through the use of _conflicts_, in a manner analogous to their use in diagnose(Reiter, 1987). ### The Preferred-Last Strategy The preferred-last strategy (PLS) begins with an empty set \(S\) of candidates, and repeatedly tests whether this set covers the diagnosis. This test is an instance of Question 3, described in Subsection 4.3. If the answer to the question is negative, it leads to the discovery of a new candidate which is added to \(S\). When \(S\) covers the diagnosis we know that it is a superset of the minimal diagnosis, because it contains only candidates. The minimal diagnosis is then extracted from \(S\) by removing non-minimal elements, as required by Theorem 5. The strategy is summarised in Algorithm 1. ``` 1:Input: Model Mod, observation \(o\), hypothesis space \(\mathbb{H}\) 2:\(S:=\emptyset\) 3:while\(\neg\)covers\((S)\)do 4: Let \(\delta\) be the candidate found by the coverage test. 
5:\(S:=S\cup\{\delta\}\) 6:endwhile 7:return\(\min_{\preceq}(S)\) ``` **Algorithm 1** The preferred-last strategy (PLS) **Theorem 6**: _PLS returns the minimal diagnosis. Furthermore, if the hypothesis space is well partially ordered, then PLS terminates._ **Proof:** Assume PLS terminates. We first show that the three conditions of Theorem 5 are satisfied by the returned set \(R=\min_{\preceq}(S)\). Observe that both \(S\) and \(R\subseteq S\) are finite since \(S\) is enumerated. 1) All hypotheses in \(R\) are candidates. 2) Since \(R\) is minimised, it contains no pair of hypotheses that are comparable. 3) Let \(\delta\in\Delta\) be a candidate. Since \(S\) covers \(\Delta\), there exists \(h_{1}\in S\) such that \(h_{1}\preceq\delta\). If \(h_{1}\in R\), then \(\delta\) is covered, but we need to consider the general case where \(h_{1}\not\in R\). Because \(h_{1}\) is in the set of non-minimal elements of \(S\) and \(S\) is finite, there is another hypothesis \(h_{2}\in S\) such that \(h_{2}\preceq h_{1}\) holds. This hypothesis \(h_{2}\) could also not belong to \(R\), in which case this hypothesis is also covered by another hypothesis \(h_{3}\). This gives us a sequence of hypotheses \(h_{1}\succ h_{2}\succ\ldots\) that all belong to \(S\). Since \(S\) is finite, there is a minimal hypothesis \(h_{k}\) for this sequence, and this hypothesis belong to \(\min_{\preceq}\ S\). Thus \(R\) covers the diagnosis. Now, suppose that PLS does not terminate: This means PLS generates an infinite sequence of candidates, \(\delta_{1},\delta_{2},\ldots\) Because \(\delta_{j}\) is generated from a test of coverage of \(\{\delta_{1},\ldots,\delta_{j-1}\}\), we know that \(\delta_{i}\not\preceq\delta_{j}\) for all \(i<j\). Furthermore, since the preference order is well-founded, we know that any strictly descending subchain of this sequence is finite. Therefore, for any index \(i\), there exists at least one index \(k\geq i\) such that \(\delta_{k}\preceq\delta_{i}\) and \(\delta_{k}\) is minimal in the sequence. We write \(m(i)\) the smallest such index \(k\). We note that for any index \(j>m(i)\), \(\delta_{m(i)}\) and \(\delta_{j}\) are incomparable (as \(\delta_{m(i)}\) is minimal in \(S\) and \(\delta_{j}\) is after \(\delta_{min(i)}\) in the sequence). We also note \(m(i+1)>i\) for any index \(i\). Therefore, the set \[S^{\prime}=\{\delta_{m(i)},\delta_{m(m(i)+1)},\delta_{m(m(m(i)+1)+1)},\ldots\}\] contains infinitely many mutually-incomparable candidates (hence, all minimal in \(S^{\prime}\)), which contradicts the well partial orderness of \(\preceq\). \(\Box\) Although the PLS algorithm is guaranteed to eventually terminate, for infinite hypothesis spaces there is no worst-case bound on the number of iterations required before a covering set has been found (for finite hypothesis spaces it is of course bounded by the size of the space). Consider, for instance, the Sequence Hypothesis Space with only one fault \(f\) and write \(h_{i}=f^{i}\) (i.e., \(h_{i}\) indicates that \(f\) occurred precisely \(i\) times); assume that the diagnosis is \(\Delta=\{h_{0},h_{1},h_{2},\ldots\}=\mathbb{H}\) (any number of occurrences of \(f\) might have happened); then for any \(i\), PLS could generate this sequence of candidates: \(h_{i},h_{i-1},h_{i-2},\ldots,h_{0}\). All sequences will eventually end with \(h_{0}\), but there is no a-priori bound on their size until (in this instance) the first candidate is found. PLS computes some candidates and then tries to improve them. 
Sometimes, however, instead of improving known candidates, it will go sideways and compute other irrelevant candidates. The following example illustrates this problem of slow convergence. **Example 1**: _Consider a set hypothesis space over a large set of faults \(F\), and a diagnosis problem in which \(\Delta=\mathbb{H}\), i.e., all hypotheses are candidates (this would be the situation for example in a weak-fault model with nominal observations). The minimal diagnosis is then the singleton \(\Delta_{\preceq}=\{h_{0}\}\)._ _All candidates that involve \(\lfloor\frac{|F|}{2}\rfloor\) faults are mutually incomparable, which means the coverage test can iteratively generate all of them, leading to an exponential-time computation._ In order to speed up convergence of PLS, we add an extra step which "refines" each new candidate found into a minimal one. The intuition is that if minimal candidates are generated early, we can avoid exploring "redundant" options. For instance, in Example 1 above, the number of iterations will be at most \(|F|+1\). The refinement of a candidate \(\delta\) is performed by testing whether \(\delta\) is minimal, i.e., asking Question 2. If \(\delta\) is not minimal, the test returns a preferred candidate; this is repeated until the current candidate is minimal. The revised algorithm, called PLS+r, is shown in Algorithm 2. Note that, in this algorithm, all elements inserted in \(S\) are guaranteed to be minimal. Thus, there is no need to remove non-minimal elements at the end. ``` 1:Input: Model Mod, observation \(o\), hypothesis space \(\mathbb{H}\) 2:\(S:=\emptyset\) 3:while\(\neg\)covers(\(S\))do 4: Let \(\delta\) be the candidate found by the coverage test. 5:while\(\neg\)minimal(\(\delta\))do 6: Replace \(\delta\) with the candidate found by the minimality test. 7:endwhile 8:\(S:=S\cup\{\delta\}\) 9:endwhile 10:return\(S\) ``` **Algorithm 2** The preferred-last strategy with refinement (PLS+r) **Theorem 7**: _PLS+r returns the minimal diagnosis. Furthermore, if the hypothesis space is well partially ordered, then PLS+r terminates._ **Proof:** Any candidate added to \(S\) by PLS that is not also added by PLS+r is non-minimal, and therefore removed from the final set by PLS. Thus, PLS+r returns the same diagnosis. The refinement step effectively changes only the order in which candidates are generated. Since PLS terminates regardless of the order in which the candidates are generated in, PLS+r also terminates under the same condition. ### The Preferred-First Strategy The preferred-first strategy is based on the following intuition: Because faults are rare events, it can be expected that minimal candidates have small depth. Therefore, a sensible approach to the hypothesis space exploration is to start by testing the most preferred hypotheses; if those hypotheses are proven to be candidates, then their descendants do not need to be explored, since we are only interested in the minimal diagnosis. The basic version of the preferred-first strategy (PFS) is presented in Algorithm 3. 
``` 1:Input: Model _Mod_, observation \(o\), hypothesis space \(\mathbb{H}\) 2:\(S_{\mathrm{R}}:=\emptyset\)// Will store the result 3:\(S_{\mathrm{O}}:=\min_{\preceq}(\mathbb{H})\)// i.e., \(\{h_{0}\}\) 4:while\(S_{\mathrm{O}}\neq\emptyset\)do 5:\(h:=\mathrm{pop}(S_{\mathrm{O}})\) 6:if\((\exists h^{\prime}\in S_{\mathrm{O}}\cup S_{\mathrm{R}}:h^{\prime}\preceq h)\)then 7:continue 8:endif 9:if\(\mathrm{candidate}(h)\)then 10:\(S_{\mathrm{R}}:=S_{\mathrm{R}}\cup\{h\}\) 11:else 12:\(S_{\mathrm{O}}:=S_{\mathrm{O}}\cup\mathrm{children}(h)\) 13:endif 14:endwhile 15:return\(S_{\mathrm{R}}\) ``` **Algorithm 3** The preferred-first strategy (PFS). Both \(S_{\mathrm{O}}\) and \(S_{\mathrm{R}}\) are enumerated sets of hypotheses and, because any hypothesis has only a finite set of children, both sets are guaranteed to be finite. The set \(S_{\mathrm{O}}\) contains all hypotheses that are "promising", in the sense that their parents have been ruled out as candidates but the hypotheses themselves have not yet been tested. Starting with the unique most preferred hypothesis \(h_{0}\), the algorithm selects a current hypothesis \(h\) to test, which it removes from \(S_{\mathrm{O}}\) and stores it in \(S_{\mathrm{R}}\) if it is a candidate; otherwise, it adds the children of \(h\) to \(S_{\mathrm{O}}\). PFS returns the correct diagnosis, but termination is only ensured if the hypothesis space is finite. To demonstrate these results, we first prove the following lemma: **Lemma 3**: _Whenever the condition of the **while** loop in PFS is tested, the diagnosis is covered by \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\), i.e., \(\forall\delta\in\Delta,\ \exists h\in S_{\mathrm{O}}\cup S_{\mathrm{R}}:\ h\preceq\delta\)._ **Proof:** We prove the lemma by induction. Initially, \(S_{\mathrm{O}}=\{h_{0}\}\) so the coverage property holds. Assume that the coverage property is true for some \(S_{\mathrm{O}}\neq\emptyset\) and some \(S_{\mathrm{R}}\). We prove that the property still holds after a single execution of the loop body. Let \(h\in S_{\mathrm{O}}\) be the hypothesis chosen at Line 5. Consider a candidate \(\delta\): by induction, we know that there exists \(h^{\prime}\in S_{\mathrm{O}}\cup S_{\mathrm{R}}\) such that \(h^{\prime}\preceq\delta\). If \(h^{\prime}\neq h\), then the condition still holds in the next iteration, since \(h^{\prime}\) remains in \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\). On the other hand, if \(h^{\prime}=h\), then there are three cases: i) If the condition on Line 6 is true, then there exists \(h^{\prime\prime}\in(S_{\mathrm{O}}\setminus\{h\})\cup S_{\mathrm{R}}\) such that \(h^{\prime\prime}\preceq h\preceq\delta\). Since \(h^{\prime\prime}\) remains in \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) at the start of the next iteration, candidate \(\delta\) is covered. ii) If the condition on Line 9 is true, \(h\) is simply moved from \(S_{\rm O}\) to \(S_{\rm R}\), so \(S_{\rm O}\cup S_{\rm R}\) remains unchanged and the coverage property holds by induction. iii) If neither of these two conditions is satisfied, \(h\) will be removed from \(S_{\rm O}\) and its children added instead. In this case, since \(h\preceq\delta\) but \(h\) cannot be equal to \(\delta\), we have \(h\prec\delta\). Hence, there exists at least one hypothesis \(h^{\prime\prime}\) such that \(h\prec h^{\prime\prime}\preceq\delta\), and any minimal such hypothesis is a child of \(h\). Hence candidate \(\delta\) is covered at the next iteration by at least one child \(h^{\prime\prime}\) of \(h\) that has been added to \(S_{\rm O}\). 
\(\Box\)

**Theorem 8**: _PFS returns the minimal diagnosis. Furthermore, if the hypothesis space is finite, then PFS terminates._

**Proof:** Let \(S_{\rm R}\) be the result of the algorithm (assuming it terminates). We prove that \(S_{\rm R}\subseteq\Delta_{\preceq}\), and then that \(\Delta_{\preceq}\subseteq S_{\rm R}\). \(S_{\rm R}\) is initially empty and elements are added (Line 10) only when they are proved to be candidates: hence \(S_{\rm R}\subseteq\Delta\). Furthermore, we know from Lemma 3 that \(S_{\rm R}\cup S_{\rm O}\) covers the diagnosis at all times. Assume that a non-minimal candidate \(\delta\) is added to \(S_{\rm R}\) in some iteration. This means that \(\delta\) is the hypothesis popped from \(S_{\rm O}\) in this iteration. Since \(\delta\) is non-minimal, there exists a preferred candidate \(\delta^{\prime}\preceq\delta\) and this candidate is covered: \(\exists h^{\prime}\in S_{\rm R}\cup S_{\rm O}.\ h^{\prime}\preceq\delta^{\prime}\). This, however, means that \(h^{\prime}\preceq\delta\), so \(\delta\) could not have passed the test at Line 6. Hence, \(S_{\rm R}\) contains only minimal candidates.

At the end of the algorithm, \(S_{\rm O}\) is empty, so \(S_{\rm R}\) alone covers the diagnosis. Hence, for any minimal candidate \(\delta\), there exists a hypothesis \(h^{\prime}\preceq\delta\) that appears in \(S_{\rm R}\). But \(S_{\rm R}\) contains only minimal candidates, and the only candidate \(\delta^{\prime}\) that satisfies \(\delta^{\prime}\preceq\delta\) is \(\delta\) itself. Therefore, all minimal candidates appear in \(S_{\rm R}\).

To show termination, we prove that \(S_{\rm O}\) eventually becomes empty. At each iteration, one hypothesis \(h\) is removed from \(S_{\rm O}\); under certain conditions, the children of \(h\) are also added to \(S_{\rm O}\). We show that when this happens, the hypothesis \(h\) that was just removed can never re-enter \(S_{\rm O}\) in any future iteration. A hypothesis \(h\) can be added to \(S_{\rm O}\) only in an iteration in which one of its parents is removed from \(S_{\rm O}\). Thus, if no ancestor of \(h\) is currently in \(S_{\rm O}\) then \(h\) cannot be added to \(S_{\rm O}\) in any future iteration. Consider a hypothesis \(h\), removed from \(S_{\rm O}\) in the current iteration, and suppose that the algorithm reaches Line 12, so that children of \(h\) are added to \(S_{\rm O}\). This means the condition on Line 6 does not hold, which means there is no ancestor of \(h\) in \(S_{\rm O}\) (or in \(S_{\rm R}\)). Hence, \(h\) can never re-enter \(S_{\rm O}\). \(\Box\)

In general, there is no guarantee that PFS will terminate when the hypothesis space is infinite. This is illustrated by the two examples below. In the first, the lack of termination comes from _useless_ hypotheses, which have no candidates among their descendants. As the second example shows, even pruning those useless hypotheses is not sufficient to ensure termination.

**Example 2**: _Consider a SqHS with two faults \(f_{1}\) and \(f_{2}\), and suppose that the diagnosis is \(\Delta=\{[f_{1}]\}\). Then, PFS will never end. Table 1 shows a possible evolution of PFS.
PFS is unaware of the fact that no descendant of \([f_{2},f_{2},\ldots,f_{2}]\) is a candidate, and will therefore explore this branch for ever._ **Example 3**: _Consider again a SqHS with two faults \(f_{1}\) and \(f_{2}\), and consider that the diagnosis is \(\Delta=\{[f_{1}],[f_{1},f_{2}],[f_{1},f_{2},f_{2}],\ldots\}\), i.e., any hypothesis that starts with \(f_{1}\), followed by any number of \(f_{2}\). Then, all hypotheses of the form \([f_{2},\ldots,f_{2}]\) have a child that is a candidate (the hypothesis with \(f_{1}\) added to the beginning of the sequence), and hence none of them are useless. This makes it possible for PFS to explore an infinite path in the hypothesis space without encountering any candidate, thus never terminating._ However termination can also be guaranteed by pruning a different type of hypotheses. We call _undesirable_ those hypotheses that are not ancestors of any minimal candidates (formally, descendants\((h)\cap\Delta_{\preceq}=\emptyset\)). Again, assuming that all hypotheses have finite depth then pruning undesirable hypotheses guarantees termination. In fact, we use an even stronger pruning condition, which discards all undesirable hypotheses as well as some hypotheses that do not satisfy the undesirability condition but are redundant because the candidates they can lead to are covered by some other hypothesis. We call these hypotheses _non-essential_. Pruning non-essential hypotheses works better than pruning only the undesirable hypotheses for two reasons: First, because the undesirability condition cannot be directly tested during search, since the minimal diagnosis \(\Delta_{\preceq}\) is not known; the essentiality property, on the other hand, is straightforward to test. Second, pruning more hypotheses, as long as it does not compromise completeness of the returned diagnosis, is of course preferable since it leads to less search. Note that the part of the proof of Theorem 9 that establishes termination does not actually depend on pruning non-essential hypotheses; it continues to hold also if only undesirable hypotheses are pruned. A hypothesis \(h\) is said to be non-essential, with respect to \(S_{\mathrm{O}}\) and \(S_{\mathrm{R}}\), if all candidates \(\delta\) that are descendants of \(h\) are also descendants of some other hypothesis in \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\). The proof of Theorem 8 relies on the coverage property which states that for every candidate \(\delta\) some \(h\preceq\delta\) (either \(\delta\) or one of its ancestors) appears in \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) at the start of every iteration. Therefore, if \((S_{\mathrm{O}}\setminus\{h\})\cup S_{\mathrm{R}}\) covers the diagnosis, then \(h\) can be safely discarded from \(S_{\mathrm{O}}\) without losing the coverage property. Because \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) always covers the diagnosis (by Lemma 3), \(h\) is non-essential exactly when \((S_{\mathrm{O}}\setminus\{h\})\cup S_{\mathrm{R}}\) also covers the diagnosis. Note that an undesirable hypothesis \(h\) is always non-essential w.r.t. \(S_{\mathrm{O}}\) and \(S_{\mathrm{R}}\) if \(S_{\mathrm{O}}\cup S_{\mathrm{R}}\) covers the minimal diagnosis. Therefore, any undesirable hypothesis will be pruned by skipping non-essential hypotheses in PFS. The non-essential test is shown in Algorithm 4. It is added to PFS between Lines 8 and 9. We call the resulting algorithm PFS+e. **Theorem 9**: _PFS+e returns the minimal diagnosis. 
Furthermore, if all hypotheses of the hypothesis space have finite depth, then PFS+e terminates._

\begin{table}
\begin{tabular}{|c|c|c|}
\hline
\(S_{\mathrm{O}}\) & \(S_{\mathrm{R}}\) & next element popped \\
\hline
\(\{[]\}\) & \(\{\}\) & \([]\) \\
\(\{[f_{1}],[f_{2}]\}\) & \(\{\}\) & \([f_{1}]\) \\
\(\{[f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{2}]\) \\
\(\{[f_{1},f_{2}],[f_{2},f_{1}],[f_{2},f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{1},f_{2}]\) \\
\(\{[f_{2},f_{1}],[f_{2},f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{2},f_{1}]\) \\
\(\{[f_{2},f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{2},f_{2}]\) \\
\(\{[f_{1},f_{2},f_{2}],[f_{2},f_{1},f_{2}],[f_{2},f_{2},f_{1}],[f_{2},f_{2},f_{2}]\}\) & \(\{[f_{1}]\}\) & \([f_{1},f_{2},f_{2}]\) \\
\(\ldots\) & \(\ldots\) & \(\ldots\) \\
\hline
\end{tabular}
\end{table}
Table 1: Possible evolution of PFS

**Proof:** That PFS+e returns the minimal diagnosis can be shown simply by proving that the coverage property (Lemma 3) still holds. We now have a fourth case in the induction step of the proof: If \(h\) fails the essentiality test (i.e., it is non-essential), then it is discarded without its children being added to \(S_{\mathrm{O}}\). However, this test checks precisely the property that we want to maintain: when it triggers, \((S_{\mathrm{O}}\setminus\{h\})\cup S_{\mathrm{R}}\) covers the diagnosis, so the coverage property still holds at the start of the next iteration of the **while** loop.

Next, we consider termination. Let \(S_{\mathrm{O}}@0,\ldots,S_{\mathrm{O}}@i,\ldots\) represent the content of the set \(S_{\mathrm{O}}\) at the start of each iteration of the **while** loop, when the loop condition is evaluated. We need to show that \(S_{\mathrm{O}}@i\) will eventually be empty. To do this, we make use of the following three facts which we then prove: i) Let \(A=\{h\in\mathbb{H}\mid\exists\delta\in\Delta_{\preceq}.\ h\preceq\delta\}\): \(A\) is finite. ii) \(S_{\mathrm{O}}@i\cap A=S_{\mathrm{O}}@k\cap A\Rightarrow\forall j\in\{i,\ldots,k\}.\ S_{\mathrm{O}}@j\cap A=S_{\mathrm{O}}@i\cap A\). iii) \(S_{\mathrm{O}}@i\cap A=S_{\mathrm{O}}@(i+1)\cap A\Rightarrow S_{\mathrm{O}}@(i+1)\subset S_{\mathrm{O}}@i\).

Assume the sequence \(S_{\mathrm{O}}@i\) goes on forever. By (iii) and because \(S_{\mathrm{O}}\) is always finite, the intersection of \(S_{\mathrm{O}}\) and \(A\) changes infinitely often. Furthermore, by (i) there is only a finite number of possible intersections of \(S_{\mathrm{O}}\) and \(A\), which means that the same intersection must eventually reappear. This contradicts (ii). It remains to prove claims (i)-(iii).

i) First, note that \(A=\bigcup_{\delta\in\Delta_{\preceq}}\mathrm{ancestors}(\delta)\). Because \(\Delta_{\preceq}\) is finite, \(A\) is finite iff the set of ancestors of every minimal candidate is finite. Consider a minimal candidate \(\delta\). Since \(\delta\) has some finite depth \(d\) (assumption of the theorem), its ancestors all have depth \(d\) or less. We prove, by induction, that the set of hypotheses of depth \(d\) or less is finite, for any \(d\). This is true for \(d=1\), since only \(h_{0}\) has depth 1. Assume it is true for \(d-1\). By definition of depth, every hypothesis \(h\) of depth \(d\) is a child of some hypothesis \(h^{\prime}\) of depth \(d-1\). Since there is a finite number of hypotheses \(h^{\prime}\) at depth \(d-1\) (by the inductive assumption), and each of them has a finite number of children (because the hypothesis space is well partially ordered), there can only be a finite number of hypotheses with depth \(d\). Thus, the number of hypotheses of depth \(d\) or less is also finite.
ii) Assume \(i<j<k\) are such that \(S_{\mathrm{O}}@i\cap A=S_{\mathrm{O}}@k\cap A\) and \(S_{\mathrm{O}}@i\cap A\neq S_{\mathrm{O}}@j\cap A\). Let \(A^{\prime}\subseteq A\) be the set of hypotheses \(h\) that are added at some point between iteration \(i\) and iteration \(k\), that is, \(A^{\prime}=\{h\in A\mid\exists\ell\in\{i,\ldots,k-1\}.\ h\not\in S_{\mathrm{O}}@\ell\wedge h\in S_{\mathrm{O}}@(\ell+1)\}\). Clearly \(A^{\prime}\) is not empty: Since \(S_{\mathrm{O}}@i\cap A\neq S_{\mathrm{O}}@j\cap A\), some hypothesis has either been added between \(i\) and \(j\), or some hypothesis has been removed between \(i\) and \(j\), in which case it must be added again before iteration \(k\). Let \(h\) be a hypothesis that is minimal in the set \(A^{\prime}\). Since \(h\) is added to \(S_{\mathrm{O}}\) at some point between iteration \(i\) and iteration \(k\), a parent \(h^{\prime}\) of \(h\) must be removed at the same iteration (the only way to add an element to \(S_{\mathrm{O}}\) is through Line 12). However, if \(h^{\prime}\) is removed from \(S_{\mathrm{O}}\), it must be added again to \(S_{\mathrm{O}}\) at some later point, as otherwise \(S_{\mathrm{O}}@k\cap A\) could not equal \(S_{\mathrm{O}}@i\cap A\). This means \(h^{\prime}\) also belongs to \(A^{\prime}\), and since it is a parent of \(h\), this contradicts the choice of \(h\) as a minimal hypothesis in \(A^{\prime}\).

iii) Consider an iteration \(i\) such that \(S_{\mathrm{O}}@i\cap A=S_{\mathrm{O}}@(i+1)\cap A\). Because \(S_{\mathrm{O}}\cap A\) is unchanged, the hypothesis \(h\) chosen at iteration \(i\) does not belong to \(A\). Any hypothesis not in \(A\) is, by definition, undesirable, since \(A\) contains all ancestors of all minimal candidates. Thus, since \(S_{\mathrm{O}}@i\cup S_{\mathrm{R}}@i\) covers the minimal diagnosis (by Lemma 3), so does \((S_{\mathrm{O}}@i\cap A)\cup S_{\mathrm{R}}@i\), and consequently so does \((S_{\mathrm{O}}@i\setminus\{h\})\cup S_{\mathrm{R}}@i\). Thus, \(h\) fails the essentiality test in PFS+e, so no children of \(h\) are added to \(S_{\mathrm{O}}\) and we have \(S_{\mathrm{O}}@(i+1)=S_{\mathrm{O}}@i\setminus\{h\}\). \(\Box\)

### Conflict-Based Strategy

The conflict-based strategy is an improvement of PFS. The idea is to extract the core reason why a hypothesis \(h\) is not a candidate, in order to reduce the number of successors of \(h\) that need to be inserted in the open list. We define a conflict as an implicit representation of a set of hypotheses that are not candidates.

**Definition 5**: _A conflict \(C\) is an object that represents a set \(\mathit{hypos}(C)\) of hypotheses that does not intersect the diagnosis:_ \[\mathit{hypos}(C)\cap\Delta=\emptyset.\]

We now assume that the test solver is not only able to decide whether the diagnosis intersects the specified set of hypotheses, but also to return a conflict in case the test fails. The following definition of a test result extends Definition 3.

**Definition 6**: _The result of a test \(\langle\mathit{Mod},o,H\rangle\) is either a hypothesis \(h\in\Delta(\mathit{Mod},o,\mathbb{H})\cap H\) or a conflict \(C\) such that \(H\subseteq\mathit{hypos}(C)\)._

In the worst case, i.e., if the test solver is not able to provide useful information, the conflict can be defined such that \(\mathit{hypos}(C)=H\). Two problems need to be solved at this stage: i) how do we compute conflicts, and ii) how do we use conflicts for diagnosis. We first concentrate on the second issue.
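Before doing so, here is a minimal sketch of how the extended test result of Definition 6 might be represented in code; the class and field names are illustrative assumptions, not part of the formal framework.

```python
# Illustrative sketch of the test result of Definition 6: either a candidate
# hypothesis, or a conflict stored as a set of hypothesis properties
# (predicates over hypotheses). Names are assumptions for this example.

from dataclasses import dataclass, field
from typing import Callable, Optional, Set

Hypothesis = object                       # stands for an element of the hypothesis space
Property = Callable[[Hypothesis], bool]   # a hypothesis property

@dataclass
class Conflict:
    properties: Set[Property] = field(default_factory=set)

    def rejects(self, h: Hypothesis) -> bool:
        # h belongs to hypos(C) -- and is therefore not a candidate --
        # iff it satisfies every property of the conflict
        return all(p(h) for p in self.properties)

@dataclass
class TestResult:
    candidate: Optional[Hypothesis] = None   # a hypothesis in Delta ∩ H, if the test succeeds
    conflict: Optional[Conflict] = None      # otherwise, a conflict C with H ⊆ hypos(C)
```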
#### 6.3.1 Using Conflicts for Diagnosis

A conflict can be useful in two different ways. First, a conflict can be used to avoid certain tests. For instance, let \(h\) be a hypothesis, the candidacy of which we want to test, and let \(C\) be a conflict that was previously computed. If \(h\in\mathit{hypos}(C)\), then \(h\not\in\Delta\) (by definition of a conflict). Therefore, inclusion in a conflict can serve as an early detection that a hypothesis is not a candidate.

The second use of a conflict is to reduce the number of successors that need to be generated after a candidacy test failed. Again, let \(h\) be a hypothesis and let \(C\) be a conflict such that \(h\in\mathit{hypos}(C)\). Remember that the correctness of PFS relies on the fact that all diagnosis candidates are covered by a hypothesis from the open list or by a hypothesis from the already discovered minimal candidates. When \(h\) is proved to be a non-candidate, we no longer need to get \(h\) covered, but we need to cover the set \(S\) of all strict descendants of \(h\), which is the reason why Algorithm 3 includes all the minimal elements of \(S\) (the children of \(h\)) in the open list. Now however, not only do we know that \(h\) is not a candidate, but the same also applies to all the hypotheses of \(\mathit{hypos}(C)\). Therefore, we may include in the open list the minimal elements of \(S\setminus\mathit{hypos}(C)\). This is illustrated in Algorithm 5, where the conflict is used to compute the set of successors. We call PFS+ec (resp. PFS+c) the variant of PFS+e (resp. PFS) that uses conflicts.

```
if\(\text{candidate}(h)\)then
\(S_{\text{R}}:=S_{\text{R}}\cup\{h\}\)
else
Let \(C\) be the conflict generated by the test.
\(S_{\text{O}}:=S_{\text{O}}\cup\min_{\preceq}(\text{descendants}(h)\setminus\text{hypos}(C))\)
endif
```
**Algorithm 5** Replacement of the If statement Lines 9-13 of Algorithm 3.

**Theorem 10**: _PFS+ec returns the minimal diagnosis. Furthermore, if all hypotheses of the hypothesis space have finite depth, then PFS+ec terminates._

**Proof:** The correctness of the algorithm is again proved by updating the coverage property (Lemma 3). Item iii) of the proof needs to be updated as follows. Candidate \(\delta\) is covered by the hypothesis \(h\) that has been disproved (\(\delta\in\text{descendants}(h)\)). Because \(\delta\) is a candidate and \(C\) is a conflict, \(\delta\not\in\text{hypos}(C)\). Hence \(\delta\in\text{descendants}(h)\setminus\text{hypos}(C)\). Since the hypothesis space is well partially ordered, \(\min_{\preceq}(\text{descendants}(h)\setminus\text{hypos}(C))\) is not empty and therefore, when the hypotheses in this set are added to \(S_{\text{O}}\) (line 5), at least one of them will cover \(\delta\) at the next iteration of the algorithm. The proof of termination for PFS+e also applies to PFS+ec. \(\Box\)

We now illustrate how PFS+ec can accelerate the diagnosis. First, it may remove a number of successors.

**Example 4**: _Consider a SqHS with three fault events \(f_{1}\), \(f_{2}\), and \(f_{3}\). PFS+ec first tests the empty sequence \(h_{0}=[]\). Assuming \(h_{0}\) is not a candidate, PFS would generate three successors, \([f_{1}]\), \([f_{2}]\), and \([f_{3}]\). Assume now that the test solver finds a conflict \(C\) that specifies that either fault \(f_{1}\) or fault \(f_{2}\) occurred. This conflict rejects all hypotheses that contain only \(f_{3}\) faults.
It is not difficult to show that the minimal elements of \(\text{descendants}(h_{0})\setminus\text{hypos}(C)\) are \([f_{1}]\) and \([f_{2}]\). In other words, the conflict allowed the hypothesis \([f_{3}]\) to be discarded._

But conflicts can also allow us to consider hypotheses that are "deeper" than the natural successors, thus skipping intermediate steps.

**Example 5**: _Consider the same example as before, but this time with the conflict \(C\) that excludes \(h_{0}\) and all hypotheses with a single fault. Then the successors of \(h_{0}\) become: \([f_{1},f_{1}]\), \([f_{1},f_{2}]\), \([f_{1},f_{3}]\), \([f_{2},f_{1}]\), \([f_{2},f_{2}]\), \([f_{2},f_{3}]\), \([f_{3},f_{1}]\), \([f_{3},f_{2}]\), and \([f_{3},f_{3}]\). PFS+ec does not need to test any of \(h_{0}\)'s three children._

#### 6.3.2 Computing Conflicts and Successors

So far, we have merely characterised the set of successors of a hypothesis rejected by a conflict, but have not explained how to compute conflicts and successors in practice. A key issue addressed by our approach below is that the set \((\text{descendants}(h)\setminus\text{hypos}(C))\) is infinite in general.

We first discuss the _computation of conflicts_. Whilst our approach restricts the type of conflicts computed, it makes it easy to test for inclusion in a conflict and to compute successors. Conflicts are represented symbolically, similarly to the tested hypotheses. A conflict is a set of hypothesis properties which, as explained in Definition 4, is an implicit representation of a set of hypotheses: \[C\subseteq\mathbb{P}.\] To see how conflicts are computed, remember that the test solver is given a set \(P\) of properties that represents exactly the set \(H\) of hypotheses to be tested (\(H=\mathit{hypos}(P)\)). The task of the test solver is essentially to find an "explanation" of the observation that satisfies all these properties. If no such explanation exists (and, consequently, the test fails), then the solver may be able to track all the properties \(P^{\prime}\subseteq P\) that it used to decide the failure. Clearly:

* no hypothesis that satisfies \(P^{\prime}\) is a candidate; hence \(P^{\prime}\) is a conflict;
* the set of hypotheses represented by \(P^{\prime}\) is a superset of the set of hypotheses represented by \(P\): \(P^{\prime}\subseteq P\Rightarrow\mathit{hypos}(P^{\prime})\supseteq\mathit{hypos}(P)=H\).

Therefore, \(P^{\prime}\) is returned as the result of the diagnosis test.

Given this definition of conflict, we now discuss the efficient _computation of the successors_ of a hypothesis rejected by a conflict. First, observe that PFS searches using Question 1, which, as stated in Subsection 4.3, can be formulated via two properties of the form \(p_{\mathrm{desc}}(\cdot)\) and \(p_{\mathrm{anc}}(\cdot)\), or alternatively via a \(p_{\mathrm{desc}}(\cdot)\) property in conjunction with a set of \(\neg p_{\mathrm{desc}}(\cdot)\) properties. We choose the latter representation, as using more properties will enable the generation of a more general conflict and increase efficiency. Second, the property of the form \(p_{\mathrm{desc}}(\cdot)\) can be ignored for the purpose of computing successors.
This is because the successors of \(h\) (as defined in Algorithm 5) should contradict at least one property of the conflict but cannot contradict a \(p=p_{\mathrm{desc}}(h^{\prime})\) property: clearly if \(p\) is a property of \(h\) then \(h^{\prime}\preceq h\) and all descendants \(h^{\prime\prime}\) of \(h\) satisfy \(h^{\prime}\preceq h\preceq h^{\prime\prime}\), which means that \(p\) is also a property of \(h^{\prime\prime}\). Therefore, no successor of \(h\) will contradict \(p\) and, as a consequence, properties of the form \(p_{\mathrm{desc}}(h^{\prime})\) can be ignored to determine the successors. Formally, \(\mathrm{descendants}(h)\setminus\mathit{hypos}(C)=\mathrm{descendants}(h)\setminus\mathit{hypos}(C^{\prime})\) where \(C^{\prime}\) is the subset of properties of \(C\) that are of type \(\neg p_{\mathrm{desc}}(h^{\prime})\); notice that this does not imply that \(C^{\prime}\) is a conflict.

Now, let \(h\) and \(h^{\prime}\) be two hypotheses. We write \(h\otimes h^{\prime}\) for the set of least common descendants of \(h\) and \(h^{\prime}\), i.e., \(h\otimes h^{\prime}=\min_{\preceq}(\mathrm{descendants}(h)\cap\mathrm{descendants}(h^{\prime}))\). The following result holds:

**Lemma 4**: _Let \(S\) be a set of hypotheses and let \(C_{S}=\{\neg p_{\mathit{desc}}(h^{\prime})\in\mathbb{P}\mid h^{\prime}\in S\}\) be a set of properties. Let \(h\) be a hypothesis. Then,_ \[\min_{\preceq}(\mathrm{descendants}(h)\setminus\mathit{hypos}(C_{S}))=\min_{\preceq}(\bigcup_{h^{\prime}\in S}h\otimes h^{\prime}).\]

**Proof:** This proof is in two parts: first, we prove that if \(S_{1}\) covers \(S_{2}\) (i.e., for all hypotheses of \(S_{2}\), there exists a preferred hypothesis in \(S_{1}\)) and conversely, then their sets of minimal elements are equal; second, we prove that the two-way coverage holds for \(S_{1}=\mathrm{descendants}(h)\setminus\mathit{hypos}(C_{S})\) and for \(S_{2}=\bigcup_{h^{\prime}\in S}h\otimes h^{\prime}\).

Let \(S_{1}\) and \(S_{2}\) be two sets of hypotheses such that \(\forall\{i,j\}=\{1,2\}\ \forall h_{i}\in S_{i}\ \exists h_{j}\in S_{j}\ h_{j}\preceq h_{i}\). Consider an element \(h_{i}\in\min_{\preceq}(S_{i})\); since \(h_{i}\in S_{i}\), there exists \(h_{j}\in S_{j}\) such that \(h_{j}\preceq h_{i}\). Furthermore since \(h_{j}\in S_{j}\), there exists \(h^{\prime}_{i}\in S_{i}\) such that \(h^{\prime}_{i}\preceq h_{j}\). Hence \(h^{\prime}_{i}\preceq h_{i}\) and therefore \(h^{\prime}_{i}=h_{i}\) (if \(h^{\prime}_{i}\prec h_{i}\), then \(h_{i}\) would not be minimal). Consequently \(h^{\prime}_{i}\preceq h_{j}\preceq h_{i}\) and \(h^{\prime}_{i}=h_{i}\), which implies that \(h_{i}=h_{j}\). Thus \(\min_{\preceq}S_{1}=\min_{\preceq}S_{2}\).

Assume now \(S_{1}=\text{descendants}(h)\setminus\text{\it hypos}(C_{S})\) and \(S_{2}=\bigcup_{h^{\prime}\in S}h\otimes h^{\prime}\). We prove that \(S_{1}\) covers \(S_{2}\), and in the next paragraph we prove that the converse holds as well. Let \(h_{2}\in S_{2}\) and let \(h^{\prime}\in S\) be a hypothesis such that \(h_{2}\in h\otimes h^{\prime}\), then \(h\preceq h_{2}\) and \(h^{\prime}\preceq h_{2}\); hence \(h_{2}\in\text{descendants}(h)\) and \(h_{2}\not\in\text{\it hypos}(C_{S})\) (since \(C_{S}\) includes the property \(\neg p_{\text{desc}}(h^{\prime})\), which \(h_{2}\) violates).

Let \(h_{1}\in S_{1}\) be a hypothesis. By definition \(h_{1}\) is a descendant of \(h\) and does not belong to \(\mathit{hypos}(C_{S})\); hence there exists \(h^{\prime}\in S\) such that \(h^{\prime}\preceq h_{1}\).
By definition of \(\otimes\), the set \(h\otimes h^{\prime}\subseteq S_{2}\) contains a hypothesis \(h_{2}\) such that \(h_{2}\preceq h_{1}\). \(\Box\)

Lemma 4 gives us a way to compute the set of successors. Indeed, it should be clear that \(h\otimes h^{\prime}\) is finite for any \(h\) and any \(h^{\prime}\) since the hypothesis space is well partially ordered. Therefore, the union in Lemma 4 can be enumerated and the minimal elements found by pairwise hypothesis comparisons. The implementation of operator \(\otimes\) is often simple. We now give concrete realisations for some of the hypothesis spaces we introduced. In SHS, a hypothesis is a set of faults. The single hypothesis \(h^{\prime\prime}\) such that \(\{h^{\prime\prime}\}=h\otimes h^{\prime}\) is then \(h^{\prime\prime}=h\cup h^{\prime}\). In MHS, a hypothesis associates each fault with a number of occurrences. Again, \(h\otimes h^{\prime}\) produces a single hypothesis \(h^{\prime\prime}\), which is defined by \(h^{\prime\prime}(f)=\max\{h(f),h^{\prime}(f)\}\) for every fault \(f\). In SqHS, multiple hypotheses can be minimal common descendants of \(h\) and \(h^{\prime}\). Such hypotheses \(h^{\prime\prime}\) contain all the faults of \(h\) and of \(h^{\prime}\), in the same order (i.e., both \(h\) and \(h^{\prime}\) are embedded in \(h^{\prime\prime}\)). The set of hypotheses can be computed by progressing in \(h\), \(h^{\prime}\), or in both at the same time (if the current fault is the same), until the end of both sequences is reached. Certain non-minimal hypotheses may still slip in, and must be removed. For instance, if \(h=[a,b]\) and \(h^{\prime}=[b,c]\), the procedure described above would produce: \(\{[a,b,b,c],[a,b,c,b],[a,b,c],[b,a,b,c],[b,a,c,b],[b,c,a,b]\}\) but the result is actually \(h\otimes h^{\prime}=\{[a,b,c],[b,a,c,b],[b,c,a,b]\}\).

## 7 Related Work

The AI and control communities have developed a wide spectrum of diagnosis approaches targeting static or dynamic, discrete event, continuous, or hybrid systems. Obviously, we cannot discuss all of these. For instance, we do not cover approaches in state estimation or probabilistic diagnosis whose goal is to compute a probability distribution on candidates (Thorsley & Teneketzis, 2005; Stern, Kalech, Rogov, & Feldman, 2015). Instead, we focus our discussion on the frameworks which ours generalises. This includes in particular the founding works of Reiter (Reiter, 1987), de Kleer and Williams (de Kleer & Williams, 1987), and approaches that employ related algorithmic frameworks (Feldman, Provan, & van Gemund, 2010).

### Connection with Reiter's Theory

Reiter's work (Reiter, 1987) is a key inspiration for the present theory. Similarly to Reiter's, our objective is a general theory of diagnosis from first principles, which determines the preferred diagnosis hypotheses solely from the available description of the system and of its observed behaviour, and which is independent of the way systems, hypotheses, and observations are represented. Our work generalises Reiter's in two significant ways. First, Reiter only considers the set hypothesis space (SHS). This space has many convenient properties (Staroswiecki, Commault, & Dion, 2012), which allowed Reiter to propose a more specific implementation of PFS+c (diagnose). SHS is finite, which means that termination is not an issue (by no means does this imply that Reiter and other researchers did not try to accelerate termination).
It is also a lattice, i.e., any pair \(\{h,h^{\prime}\}\) of hypotheses has a unique least upper bound and a unique greatest lower bound; practically, this means that \(h\otimes h^{\prime}\) is always a singleton, which simplifies successor computation. Finally, and most importantly, each hypothesis can be defined as the intersection of the set of descendants or non-descendants of singleton hypotheses. For instance, if \(F=\{f_{1},f_{2},f_{3}\}\), then \(\{f_{1},f_{2}\}\) is the unique element of descendants\((\{f_{1}\})\cap\mbox{descendants}(\{f_{2}\})\cap(\mathbb{H}\setminus\mbox{descendants}(\{f_{3}\}))\). Similarly, the set of descendants of any hypothesis is the intersection of descendants of singleton hypotheses: descendants\((\{f_{1},f_{2}\})=\mbox{descendants}(\{f_{1}\})\cap\mbox{descendants}(\{f_{2}\})\). Practically, this means that there exists a specialised property space that can be used to uniformly represent all hypotheses and that leads to conflicts that generalise well across the hypothesis space. For all these reasons, Reiter did not have to introduce the more complex algorithmic machinery we use in this paper. However, our theory enables much richer hypothesis spaces to be considered. This leads us to the second main difference with Reiter's work: whilst system-independence was one of Reiter's original aims, his theory was mainly applied to circuits and other static systems (Dague, 1994). Dynamic systems and in particular DESs were investigated using totally different approaches. In part, this can be explained by the immaturity of available consistency-checking tools for DESs (model checkers and AI planners) at the time. However, dynamic systems also naturally lend themselves to diagnostic abstractions richer than the set hypothesis space, such as considering sequences of fault events (Cordier & Thiebaux, 1994).

### Connection with de Kleer's Theory

Reiter's theory applies to weak-fault models, which model only the correct behaviour of components. De Kleer and Williams (de Kleer & Williams, 1987) extended Reiter's work to strong-fault models, which incorporate information about faulty behaviour. They also used a different computational strategy, exploiting an assumption-based truth maintenance system (ATMS) (de Kleer, 1986). Their approach, however, still assumes the set hypothesis space. Strong-fault models bring additional challenges to the development of a general theory of diagnosis. Weak-fault models have a certain monotonicity property: if \(\delta\preceq h\) and \(\delta\) is a candidate, then \(h\) is also a candidate. This is one justification for returning the minimal diagnosis: it implicitly represents all diagnosis candidates. Such a representation, however, is no longer possible with strong-fault models; instead, a new notion of "kernel diagnosis" was introduced (de Kleer, Mackworth, & Reiter, 1990). A kernel diagnosis is an intersection of sets of descendants and non-descendants of specified hypotheses, e.g., descendants\((\{f_{1}\})\cap\mbox{descendants}(\{f_{2}\})\cap(\mathbb{H}\setminus\mbox{descendants}(\{f_{3}\}))\cap(\mathbb{H}\setminus\mbox{descendants}(\{f_{4}\}))\), and the diagnosis can be represented by a (finite) set of maximal kernel diagnoses. Note that although all minimal candidates belong to some kernel diagnosis, i) this kernel diagnosis is not solely defined by the minimal candidate and ii) not all kernel diagnoses contain a minimal candidate. The generalisation of a kernel diagnosis to a richer hypothesis space than SHS is not trivial.
For strong-fault models, the main benefits of representing the diagnosis as a set of kernel diagnoses rather than by the minimal diagnosis are that: i) the candidates can be easily enumerated; and ii) verifying that a hypothesis is a candidate is easy. A kernel diagnosis represented by a set of properties as defined in the present article satisfies these two criteria. However, the set of kernel diagnoses may become infinite. To see this, consider the following example over a multiset hypothesis space (MHS) with two fault events \(f_{1}\) and \(f_{2}\); for simplicity, a hypothesis will be written \(h_{i,j}\), which means that fault \(f_{1}\) occurred \(i\) times and fault \(f_{2}\) occurred \(j\) times. We assume that \(\Delta=\{h_{0,j}\mid j\mbox{ mod }2=1\}\), i.e., \(f_{1}\) did not occur and \(f_{2}\) occurred an odd number of times. The kernel diagnoses are the following: \[\text{descendants}(h_{0,1+2i})\setminus\text{descendants}(h_{1,1+2i})\setminus\text{descendants}(h_{0,2+2i}),\quad i\in\mathbf{N}.\] Such a representation of the diagnosis is infinite, which is why we advocate the computation of the minimal diagnosis.

The second characteristic of the theory developed by de Kleer and Williams is the use of an ATMS to generate all the maximal conflicts before computing the diagnosis. ATMSs compute these conflicts by propagating the consequences of assumptions on the hypothesis properties. However, assuming, as is the case in this article, that the conflicts are convex sets of non-candidate hypotheses, the set of maximal conflicts may be infinite. Consider again the above example, and let \(h_{0,i}\neq h_{0,j}\) be two non-candidate hypotheses. Clearly, these two hypotheses cannot be in the same convex conflict (at least one hypothesis between them is a candidate). Thus, using an ATMS to pre-generate maximal convex conflicts is not feasible in the context of more general hypothesis spaces. Furthermore, even when the conflict set is finite, it can be prohibitively large and include many conflicts that are not needed to solve the problem. For instance, many conflicts will discard hypotheses that would not be minimal, even if they were candidates. In the example above, an instance is a conflict \(C\) where \(\mathit{hypos}(C)=\{h_{0,2}\}\). Such conflicts are not necessary to compute the minimal diagnosis. In the PFS algorithm, as well as in other algorithms for computing hitting sets, the incremental generation of "useful" conflicts is preferable. To avoid computing a potentially exponentially long list of minimal candidates, Williams and Ragno (Williams & Ragno, 2007) proposed to compute a subset of candidates that optimise some utility function (for instance, maximise the probability given a priori probabilities on faults).

### PLS-like Systems

Bylander et al. proposed an approach that bears some similarities with PLS+r, in that it finds any diagnosis candidate and then searches for a candidate within its parent set (Bylander et al., 1991). It assumes the set hypothesis space, and that the problem has the monotonicity property (\(\delta\preceq h\ \wedge\ \delta\in\Delta\Rightarrow h\in\Delta\)), like weak-fault models do. This algorithm does not return all minimal candidates. SAFARI (Feldman et al., 2010) is a variant of this approach. It too assumes the SHS and a weak-fault model. The goal of this algorithm is to avoid the memory requirements associated with computing all the conflicts, as done with the ATMS approach, or maintaining an open list of hypotheses, as in the diagnose algorithm.
SAFARI first computes a diagnostic candidate. It then checks whether some parent of the current candidate is itself a candidate; if so, it moves to that parent and repeats the check. Because the model is a weak-fault model, this approach is guaranteed to return a minimal candidate. When a minimal candidate is found, a new search is started. This approach does not guarantee that all minimal candidates will be found. Furthermore, to speed up the implementation, not all parents are checked: the refinement is stopped as soon as two parent checks fail.

### Explanatory Diagnosis of Discrete-Event Systems

Recently, Bertoglio et al. (Bertoglio, Lamperti, Zanella, & Zhao, 2020, 2020b; Lamperti, Trerotola, Zanella, & Zhao, 2023) proposed the _explanatory diagnosis_ of discrete event systems. They compute all possible sequences of faults that are consistent with the observations. The number of such sequences can be infinite in general, but they use regular expressions to represent them compactly. This diagnosis is more informative than the diagnosis traditionally computed for DESs (a set of faults). There are several important differences between their work and ours. First, they compute the complete diagnosis while we focus on computing the _minimal_ diagnosis; restricting ourselves to the minimal diagnosis allows us to use more efficient algorithms, while Bertoglio et al. must explore all behaviours exhaustively. Second, they define diagnosis candidates as _sequences_ of faults while we allow for other definitions as well. This is not restrictive per se, as the sequences of faults form the most abstract space, but this, again, implies that we can use algorithmic improvements specific to our hypothesis space. Third, we use an approach based on consistency tests while Bertoglio et al. compute all behaviours consistent with the observations. Finally, our approach is not limited to discrete event systems.

### Navigating the Space of Plans

The problem of navigating through the space of possible plans in AI planning is very similar to the problem of diagnosis of discrete event systems. In classical planning, the optimal plan is generally the least expensive one. However, the preference relation is sometimes more complex. One example is oversubscription planning, which requires finding a plan that satisfies all the hard goals and a maximal subset of soft goals. Because the planner does not know which combination of soft goals the user would rather see achieved, it should return all (cost optimal) plans that are non-dominated, i.e., such that no other plan achieves a superset of the soft goals. Such problems can be formulated in our framework. The observations are the language of all plans that reach the hard goals. A "hypothesis" associated with a plan represents the subset of soft goals that this plan achieves. A hypothesis is preferable to another one if it is a superset of the latter. We can then use our search strategies to efficiently search for solutions. Eifler et al. (Eifler, Cashmore, Hoffmann, Magazzeni, & Steinmetz, 2020) propose techniques that are similar to the search over the hypothesis space performed in model-based diagnosis. However, our approach allows for more sophisticated definitions of classes of plans: rather than two plans belonging to the same class whenever they achieve the same soft goals, the user could also be interested in the order in which these goals are achieved (for instance, the order in which certain people are visited).
This can be modelled in our framework as a variant of the Sequence Hypothesis Space in which an element appears only once.

### Generation of Conflicts

The theory of diagnosis from first principles relies heavily on the notion of conflicts to explore the hypothesis space efficiently. Junker presented an algorithm dubbed QuickXplain for computing minimal conflicts from a consistency checker that is ignorant of the underlying problem (Junker, 2004). QuickXplain isolates the subset of properties responsible for inconsistency by iteratively splitting the set of properties and testing them separately. Shchekotykhin et al. improved this work to produce several conflicts in a single pass (Shchekotykhin, Jannach, & Schmidt, 2015). The applications mentioned in the papers cited above considered the Set Hypothesis Space, but these algorithms are applicable to any hypothesis space and can be used in our framework to generate conflicts.

In the context of heuristic search planning, Steinmetz and Hoffmann (Steinmetz & Hoffmann, 2017) presented a technique to find conflicts (which are not guaranteed to be minimal). A conflict is a conjunction of facts such that any state that satisfies it is a dead-end from which the problem goal cannot be reached. Their algorithm uses the critical path heuristic \(h^{C}\) (Haslum, 2012), which lower-bounds the cost of reaching the goal, as a dead-end detector, i.e., when \(h^{C}(s)=\infty\), the state \(s\) is a dead-end. The algorithm incrementally learns the value of the parameter \(C\), a set of conjunctions of facts, adding new conjunctions when a dead-end unrecognised by \(h^{C}\) is found by the search. In our implementation of a test solver based on heuristic search below, we build on a different planning heuristic, namely LM-cut.

### Other Diagnosis Approaches

Besides test-based approaches to diagnosis, two different classes of approaches have been developed (Grastien, 2013). The first, which bears some similarities with the test-based approach, consists in determining, off-line, a mapping between assumptions on the diagnosis and patterns satisfied by the observation. Implementations include indicators and ARR (Staroswiecki & Comtet-Varga, 2001), possible conflicts (Pulido & Alonso Gonzalez, 2004), chronicles (Cordier & Dousson, 2000), and, in an extreme interpretation of this class, the Sampath diagnoser (Sampath et al., 1995). The problem with approaches of this kind is the potentially large (exponential, or worse) number of observation patterns that need to be built off-line. The second approach consists in computing the set of behaviours that are consistent with the model and the observation, and extracting the diagnosis information from these behaviours. The main issue here is finding a representation of the set of behaviours that is compact enough and allows fast extraction of the diagnostic information. In circuit diagnosis, this approach was pioneered by Darwiche and co-authors, and led to a thorough study of model compilation (Darwiche & Marquis, 2002; Darwiche, 2011). For DES diagnosis, this approach has dominated the research landscape (Pencole & Cordier, 2005; Su & Wonham, 2005; Schumann, Pencole, & Thiebaux, 2007; Kan John & Grastien, 2008; Zanella & Lamperti, 2003). The present paper significantly departs from existing work on DES diagnosis by offering a generalised test-based theory that encompasses DES and other types of dynamic systems.
## 8 Implementations

The framework presented in this paper was initially developed for the diagnosis of discrete event systems. In the DES case, the task of the test solver is to decide if there exists a sequence \(\sigma\) of events that is allowed by the system model (\(\sigma\in\mathit{Mod}\)), consistent with the observation (\(o(\sigma)\) holds) and matching a hypothesis in the test set \(H\) (\(\mathit{hypo}(\sigma)=h\) for some \(h\in H\)). Realistically, we must assume that the model is given in some compact, factored representation, such as a network of partially synchronised automata (Pencole & Cordier, 2005), a Petri net (Benveniste et al., 2003) or a modelling formalism using state variables and actions (Haslum, Lipovetzky, Magazzeni, & Muise, 2019). Even if these representations are in theory equivalent to a single finite automaton, the exponential size of that automaton means it can never be fully constructed in practice. Thus, the test solver must work directly on the factored representation. This is the same problem that is faced in model checking (Clarke et al., 2000) and AI planning (Ghallab et al., 2004), and techniques from those areas can be adapted to solve it. In this section, we present two examples of how test solvers for DES diagnosis, over different hypothesis spaces, can be implemented. One implementation uses a reduction to propositional satisfiability (SAT), while the other uses heuristic state space search. To ground the discussion, we first introduce a simple, concrete factored DES representation.

### Representation of Large DES

The representation that we will use to describe the test solver implementations below is a network of partially synchronised automata. This is a commonly used representation for DES diagnosis (Zanella & Lamperti, 2003; Pencole & Cordier, 2005; Su & Wonham, 2005). The DES is defined by a set of _components_, \(\mathcal{C}\), and a global alphabet of _events_, \(\mathcal{E}\). Each component \(c\) is a finite state machine: it has a set of local states \(S_{c}\) and a local transition relation \(T_{c}\subseteq S_{c}\times\mathcal{E}_{c}\times S_{c}\), where \(\mathcal{E}_{c}\) is the set of events that component \(c\) participates in. As usual, \((s,e,s^{\prime})\in T_{c}\) means the component can change from state \(s\) to \(s^{\prime}\) on event \(e\). The global state of the system is the tuple of component states, and a _global transition_ is a set of simultaneous component transitions. Synchronisation is partial: if \(e\not\in\mathcal{E}_{c}\) then \(c\) does not perform a transition when \(e\) occurs. More formally, given a global state \((s_{1},\ldots,s_{n})\), event \(e\) induces a global transition to a new state \((s^{\prime}_{1},\ldots,s^{\prime}_{n})\) iff for each component \(c_{i}\), either (i) \((s_{i},e,s^{\prime}_{i})\in T_{c_{i}}\), or (ii) \(e\not\in\mathcal{E}_{c_{i}}\) and \(s^{\prime}_{i}=s_{i}\). The events in a subset \(\mathcal{E}_{O}\subseteq\mathcal{E}\) are _observable_. In the diagnosis problems we consider, the observation is a sequence of observable events: \(o=e^{1}_{o},\ldots,e^{k}_{o}\). The events in another subset \(\mathcal{F}\subseteq\mathcal{E}\) are designated as fault events.

### Implementation of PFS+ec using SAT

Propositional satisfiability (SAT) is the problem of finding a satisfying assignment to a propositional logic formula in conjunctive normal form (CNF), or proving that the formula is inconsistent.
SAT has many appealing characteristics as a basis for implementing a test solver: modern SAT solvers based on clause learning are very efficient, both when the answer to the question is positive and when it is negative, and they can easily be modified to return a conflict. Reductions to SAT have previously been used to solve discrete event reachability problems for diagnosis (Grastien et al., 2007; Grastien & Anbulagan, 2013), AI planning (Kautz & Selman, 1996) and model checking (Biere, Cimatti, Clarke, Strichman, & Zhu, 2003). The main disadvantage of reducing the reachability problem to SAT is that it requires a bound on the "parallel length" \(n\) of the sequence \(\sigma\) that is sought, and the size of the SAT encoding grows proportionally to this parameter.2 For the benchmark problem that we consider in our experiments (described in Section 9.3.1) this is not problematic: the structure of this benchmark allows us to prove that the maximum number of local transitions that can take place in any component between two observable events is at most 7, and therefore the parallel length of the sequence is bounded by \(7\times|o|\), where \(|o|\) is the number of observed events. For diagnosis of DESs where such a bound cannot be proven, however, this can be an issue.

Footnote 2: The SAT encoding allows parallel execution of non-synchronised local transitions in separate components. The semantics of such parallel execution is simple: a parallel set of global transitions is permitted iff every linearisation of it would be. The purpose of allowing this form of parallelism is only to reduce the size of the encoding.

In order to represent a path of parallel length \(n\) (where we take \(n=7\times|o|\)), we define SAT variables that model the state of every component between every pair of consecutive transitions, as well as variables that model which event occurred on each transition. For each state \(s\in S_{c}\) of each component \(c\) and each "timestep" \(t\in\{0,\ldots,n\}\), the propositional variable \(s@t\) will evaluate to _true_ iff the state of component \(c\) is \(s\) after the \(t\)-th transition. Similarly, for every event \(e\in\mathcal{E}\) and every timestep \(t\in\{1,\ldots,n\}\), the propositional variable \(e@t\) will evaluate to _true_ iff event \(e\) occurred in the \(t\)-th transition. For simplicity, we also define the propositional variable \(tr@t\), which represents whether the (component) transition \(tr\) was triggered at timestep \(t\).

The SAT clauses are defined to ensure that any solution to the SAT problem represents a path that satisfies the following three constraints (Grastien et al., 2007): (i) it should be allowed by the model; (ii) it should be consistent with the observations; (iii) its corresponding hypothesis should belong to the specified set \(H\). The translation of the first constraint into SAT is summarised in Table 2. The first two lines ensure that the origin and target states of each transition are satisfied. The third line encodes the frame axiom, which specifies that a component state changes only as an effect of a transition. The fourth line is a cardinality constraint (Marques Silva & Lynce, 2007) which indicates that a component is in exactly one state at a time. The fifth and sixth lines ensure that the transitions and events match, and the seventh line is a cardinality constraint whereby at most one event can take place at a time for each component. The last line defines the initial state (component \(c\) starts in state \(s_{c0}\)).

\begin{table}
\begin{tabular}{|l l l|}
\hline
\(\forall c\in\mathcal{C}\). \(\forall tr=(s,e,s^{\prime})\in T_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \(tr@t\to s^{\prime}@t\) \\
\(\forall c\in\mathcal{C}\). \(\forall tr=(s,e,s^{\prime})\in T_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \(tr@t\to s@(t-1)\) \\
\(\forall c\in\mathcal{C}\). \(\forall s\in S_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \((\overline{s@t}\wedge s@(t-1))\rightarrow\bigvee_{tr\in T_{c}}tr@t\) \\
\(\forall c\in\mathcal{C}\). \(\forall t\in\{0,\ldots,n\}\) & \(=_{1}\{s@t\mid s\in S_{c}\}\) \\
\(\forall c\in\mathcal{C}\). \(\forall tr=(s,e,s^{\prime})\in T_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \(tr@t\to e@t\) \\
\(\forall c\in\mathcal{C}\). \(\forall e\in\mathcal{E}_{c}\). \(\forall t\in\{1,\ldots,n\}\) & \(e@t\rightarrow\bigvee_{tr\in T_{c}}tr@t\) \\
\(\forall c\in\mathcal{C}\). \(\forall t\in\{1,\ldots,n\}\) & \(\leq_{1}\{e@t\mid e\in\mathcal{E}_{c}\}\) \\
\(\forall c\in\mathcal{C}\) & & \(s_{c0}@0\) \\
\hline
\end{tabular}
\end{table}
Table 2: Ensuring that the SAT solutions represent paths accepted by the model
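As a concrete illustration of the first constraint, the sketch below generates a few of the clause families of Table 2 for a single component, as DIMACS-style integer clauses; the variable-numbering scheme and data structures are assumptions made for this example, not taken from the paper.

```python
# Illustrative sketch: generating some clause families of Table 2 for one
# component (transition implies its target state, its origin state, and its
# event). Variable numbering is an assumption made for this sketch.

from itertools import count

_ids = count(1)
_var = {}

def var(name):
    """Map a symbolic variable such as ('state', s, t) to a positive DIMACS id."""
    if name not in _var:
        _var[name] = next(_ids)
    return _var[name]

def transition_clauses(transitions, n):
    """transitions: iterable of (s, e, s2) local transitions; n: horizon."""
    clauses = []
    for (s, e, s2) in transitions:
        for t in range(1, n + 1):
            tr = var(('tr', s, e, s2, t))
            clauses.append([-tr, var(('state', s2, t))])      # tr@t -> s'@t
            clauses.append([-tr, var(('state', s, t - 1))])   # tr@t -> s@(t-1)
            clauses.append([-tr, var(('event', e, t))])       # tr@t -> e@t
    return clauses

# Example: a two-state component that moves from 'ok' to 'faulty' on event 'f'
print(len(transition_clauses([('ok', 'f', 'faulty')], n=3)))  # 9 clauses
```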
The second constraint (i.e., that the path matches the observation) is very easy to encode. Given that the \(i\)-th observed event took place at timestep \(7\times i\), we know which observable events occurred at which timestep. This information is simply recorded as unit clauses, i.e., if observable event \(e\) occurred at timestep \(t\), the clause \(e@t\) is created, otherwise the clause \(\overline{e@t}\) is created. More complex observations can be encoded in SAT, for instance if the order between the observed events is only partially known (Haslum & Grastien, 2011).

Finally, the last constraint is that the hypothesis associated with the path should belong to the specified set. Remember that the set is implicitly represented by a collection of hypothesis properties. We have shown in Section 5.2 how hypothesis properties can be seen as regular languages or intersections of such languages; these languages can be represented as finite state machines, which in turn can be translated to SAT in the same way as the model. However, for a given hypothesis space, it is usually possible to find a simpler, more compact, yet logically equivalent encoding of hypothesis properties. Let us first consider the set hypothesis space: The property of being a descendant of \(h\subseteq F\) can be represented by the clauses \[f@1\vee\ldots\vee f@n,\quad\forall f\in h,\] which state that the faults \(f\in h\) must occur in \(\sigma\). On the other hand, the property of being an ancestor of \(h\) can be represented by the unit clauses \[\overline{f@t},\quad\forall f\in F\setminus h,\ t\in\{1,\ldots,n\},\] which state that the faults \(f\not\in h\) should not occur in \(\sigma\). For the multiset hypothesis space, these properties can be represented in a similar way using cardinality constraints: \(\sigma\) corresponds to a descendant (resp. ancestor) of \(h\) iff for every fault \(f\), \(\sigma\) exhibits at least (resp. at most) \(h(f)\) occurrences of \(f\).

The encoding for the sequence hypothesis space is more complex. Let \(h=[f_{1},\ldots,f_{k}]\) be a hypothesis for which the property \(p_{\mathrm{desc}}(h)\) must be encoded. We write \(\{h_{0},\ldots,h_{k}\}\) for the set of prefixes of \(h\) such that \(h_{k}=h\). Consider another hypothesis \(h^{\prime}\succeq h_{i}\) for some \(i\in\{0,\ldots,k-1\}\), and assume \(f\in F\) is appended to \(h^{\prime}\); then \(h^{\prime}f\succeq h_{i}\).
Furthermore if \(f=f_{i+1}\) then \(h^{\prime}f\succeq h_{i+1}\). To model this, we introduce fresh SAT variables \(dh_{i}@t\) that evaluate to _true_ iff the trajectory \(\sigma\) until timestep \(t\) corresponds to a hypothesis that is a descendant of \(h_{i}\). Clearly, \(dh_{0}@t\) is _true_ for all \(t\); furthermore \(dh_{i}@0\) is _false_ for all \(i>0\). The value of \(dh_{i}@t\) (\(i>0\)) can be enforced by the following constraints: \[dh_{i}@t\ \longleftrightarrow\ dh_{i}@(t-1)\vee(dh_{i-1}@(t-1)\wedge f_{i}@t)\,.\] Encoding the ancestor property is more difficult. Consider a hypothesis \(h^{\prime}\preceq h_{j}\) for some \(j\in\{0,\ldots,k\}\), and assume \(f\in F\) is appended to \(h^{\prime}\); then \(h^{\prime}f\preceq h_{i}\) for any \(i\) such that \(f\) appears in \(\{f_{j+1},\ldots,f_{i}\}\). The negation of this expression is modelled as follows: \(h^{\prime}f\) is not an ancestor of \(h_{i}\) if \(h^{\prime}\) is not an ancestor of \(h_{i}\) or there exists a \(0\leq j<i\) such that \(f\notin\{f_{j+1},\ldots,f_{i}\}\) and \(h^{\prime}\) is not an ancestor of \(h_{j}\). As was the case for descendant properties, we create SAT variables \(ah_{i}@t\). For all \(i\), \(ah_{i}@0\) is _true_. The value of \(ah_{i}@t\) is then ensured by \[\overline{ah_{i}@t}\ \longleftrightarrow\ \overline{ah_{i}@(t-1)}\vee\bigvee_{j<i}\left(\bigvee_{f\in F\setminus\{f_{j+1},\ldots,f_{i}\}}\left(\overline{ah_{j}@(t-1)}\wedge f@t\right)\right).\]

### Implementation using Heuristic State Space Search

State space exploration algorithms are widely used in model checking and AI planning. They construct part of the explicit representation on-the-fly, while searching for a state satisfying a given goal condition. The use of heuristic guidance enables these algorithms to focus the search towards the goal and explore only a very small fraction of the state space before the goal is found. Problem-independent heuristics are derived automatically from the factored problem representation (Bonet & Geffner, 2001). To take advantage of the very effective heuristics and search algorithms that exist, we need to express the hypothesis test as a state reachability problem, i.e., as a condition on the goal state to be found. This is straightforward to do with the help of some auxiliary components. The main goal is to find a sequence of events that generates the observation: Suppose, for simplicity, that the observation is a sequence of events, \(e^{1}_{o},\ldots,e^{n}_{o}\). We add a new component \(c_{o}\) with states \(0,\ldots,n\), which tracks how far along the sequence we are. Its local transitions are \((i-1,e^{i}_{o},i)\); thus, any transition that emits an observable event will synchronise with \(c_{o}\), ensuring that these events match the observation. The goal condition is then to reach \(c_{o}=n\). Transitions that emit an observable event not in the sequence will never be applicable, and can simply be removed.
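As a concrete illustration of the auxiliary observation component \(c_{o}\) just described, the following sketch builds its states and local transitions from a sequence of observed events. It is an illustration only: the function name and the plain-tuple representation are assumptions of this sketch, not the planner's actual input format. The same state-counting pattern is reused below for tracking fault occurrences.

```python
def make_observation_component(observed_events):
    """observed_events: the sequence e_o^1, ..., e_o^n of observed events."""
    n = len(observed_events)
    states = list(range(n + 1))                                   # 0 .. n
    transitions = [(i - 1, e, i) for i, e in enumerate(observed_events, start=1)]
    goal = n                                                      # reach c_o = n
    return states, transitions, goal

states, transitions, goal = make_observation_component(["alarm_A", "alarm_B", "alarm_A"])
print(transitions)   # [(0, 'alarm_A', 1), (1, 'alarm_B', 2), (2, 'alarm_A', 3)]
print("goal:", goal)
```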
The formulation of the hypothesis, or set of hypotheses, to be tested is more complex. Unlike in the SAT encoding, we cannot just provide an encoding of each diagnosis property in isolation and specify the test by their conjunction. Instead, we provide encodings of two of the diagnostic questions described in Section 4.3 that are required by the pls and pfs algorithms. We will use the multiset hypothesis space as the illustrative example. Encodings of the set hypothesis space are also easy to define (they work exactly the same but consider only the presence/absence of each fault, rather than the number of times it occurred). Encoding tests in the sequence hypothesis space is much more complicated.

_Question 1:_ candidate(\(h\)). Recall that a multiset hypothesis is a mapping \(h:\mathcal{F}\rightarrow\mathbb{N}\). \(h\) is a candidate if there is an event sequence \(\sigma\) that includes each fault \(f\in\mathcal{F}\) exactly \(h(f)\) times. We can track the occurrences of fault events in the same way as the observation: For each fault \(f\), introduce a component \(c_{f}\) with states \(0,\ldots,h(f)\), and local transitions \((i-1,f,i)\). This construction ensures that the sequence contains no more than \(h(f)\) occurrences of each fault \(f\). Adding \(c_{f}=h(f)\), for each \(f\), to the goal condition also ensures that the sequence exhibits exactly the specified fault counts.

_Generating conflicts._ A complete search on the formulation above will find an event sequence witnessing that \(h\) is a candidate if such a sequence exists. If it does not, however, the search will only return the answer "no", after exhausting the reachable fraction of the state space. To generate a conflict, we need a small modification to both the encoding and the search algorithm. We extend each fault-counting component \(c_{f}\) with an extra state \(h(f)+1\), and the local transitions \((h(f),f,h(f)+1)\) and \((h(f)+1,f,h(f)+1)\). This allows event sequences that contain more occurrences of faults than \(h\) specifies. (We also remove \(c_{f}=h(f)\) from the goal.) But we also assign a cost to each transition: the cost is one for those transitions that correspond to additional faults, and zero for all other transitions. This means that instead of every sequence that reaches the goal being a witness for \(h\), every sequence with a total cost of zero is such a witness. We then run an optimal A\({}^{\star}\) search (Hart, Nilsson, & Raphael, 1968) using the admissible LM-Cut heuristic (Helmert & Domshlak, 2009), but interrupt the search as soon as the optimal solution cost is proven to be greater than zero. At this point, every state on the search frontier (open list) is either reached by a non-zero cost transition (corresponding to an additional fault not accounted for by \(h\)), or has a heuristic estimate greater than zero, indicating that some additional fault transition must take place between the state and the goal. Here, the specific heuristic that we use becomes important: The LM-Cut heuristic solves a relaxed version of the problem and finds a collection of sets of transitions (with non-zero cost) such that at least one transition from every set in the collection must occur between the current state and the goal. Each such set is what is known as a _disjunctive action landmark_ in the planning literature. Thus, this heuristic tells us not only that some additional fault transition must take place, but gives us a (typically small) set of possible additional faults.
Taking the union of these sets (or the singleton set of the fault transition already taken) over all states on the search frontier gives us a set \(F^{\prime}\) of faults such that any candidate descendant of \(h\) must include at least one fault in \(F^{\prime}\) in addition to those accounted for by \(h\), and that is our conflict.

_Question 3:_ covers(\(S\)). This question asks whether there is any candidate \(h^{\prime}\) such that \(h\not\preceq h^{\prime}\) for every \(h\in S\). For the multiset hypothesis space, this means finding an event sequence \(\sigma\) such that for each \(h\in S\) there is some fault \(f\) that occurs in \(\sigma\) strictly fewer times than \(h(f)\). As above, we introduce components \(c_{f}\) to count the number of fault occurrences in the sequence. We set the maximum count to \(n_{f}=\max_{h\in S}h(f)\), but add the local transition \((n_{f},f,n_{f})\), so that state \(n_{f}\) means "\(n_{f}\) or more occurrences of \(f\)". That \(h\not\preceq hypo(\sigma)\) can then be expressed by the disjunction \(\bigvee_{f\in\mathcal{F}}c_{f}<h(f)\). The goal condition, that \(h\not\preceq hypo(\sigma)\) for all \(h\in S\), is simply the conjunction of these conditions for all \(h\in S\).

## 9 Experiments

In this section, we apply implementations of different diagnosis algorithms derived from our theoretical framework to two realistic diagnosis problems, and benchmark them against other algorithms from the literature.

### Competing Algorithms

We compare the SAT-based and planning-based implementations of the algorithms presented in this paper with existing algorithms from the literature.

#### 9.1.1 Diagnoser

The seminal work on diagnosis of discrete event systems introduced the diagnoser (Sampath et al., 1995). The diagnoser is a deterministic finite automaton (DFA) whose transitions are labeled with observations and whose states are labeled with the diagnosis. Given a sequence of observations one simply needs to follow the single path labeled by this sequence and the diagnosis is the label of the state reached in this way. There are several issues with the diagnoser that prevented the use of this approach. First, its size (the number of states of the DFA) is exponential in the number of states of the model and double exponential in the number of faults (for the set hypothesis space) (Rintanen, 2007). For the power network that we use as a benchmark in Section 9.3, the average number of possible fault events per component is 9, and the average number of states per component is well over 100; the number of components is over 10,000. A Sampath diagnoser for this system will have over \(100^{10,000}\times 2^{(9\times 10,000)}\simeq 10^{50,000}\) states. This method is therefore inapplicable except for small systems or systems that can be strongly abstracted. Second, the diagnoser is originally designed for totally ordered observations. In our application many observations have the same time stamp, meaning that the order in which they were emitted is unknown. The diagnoser, as presented by Sampath et al., can certainly be adapted to account for bounded partially ordered observations, but this would increase the size of the diagnoser by additional orders of magnitude. Third, the approach is not applicable to infinite hypothesis spaces since the DFA would be infinitely large.

#### 9.1.2 Automata

Automata-based approaches consist in computing an implicit representation of the set of all sequences (traces) of events consistent with both the model and the observations.
This representation is useful if it allows one to quickly infer some information about the actual system behaviour. For instance, representing this set as a single finite state machine whose language is exactly this set makes it easy (in linear time) to decide whether a specific fault could have, or definitely has, occurred. A significant part of the work in discrete event systems aims at finding such representations that are compact, that can be computed quickly, and that allow for fast inferences (Su & Wonham, 2005; Pencole & Cordier, 2005; Cordier & Grastien, 2007). We chose an approach based on junction trees (Kan John & Grastien, 2008), the state-of-the-art in automata-based diagnosis of discrete event systems. This approach is based on the property that local consistency in tree structures is equivalent to global consistency, a result we explain now.

Consider a finite set \(S\) of automata that implicitly represents the automaton \(A\) obtained by standard synchronisation of the automata in \(S\). Each automaton \(A_{i}\in S\) of this set is characterised by a set of events \(E_{i}\). A property of this setting is that every trace obtained by projecting a trace of \(A\) on \(E_{i}\) is a trace of \(A_{i}\); intuitively this means that a sequence of events allowed by \(A\) is (by definition of the synchronisation) allowed by every \(A_{i}\). The converse, the property of global consistency, is generally not true: \(A_{i}\) could contain traces that are the projection of no trace from \(A\). Global consistency is a very powerful property, because it allows us to answer many questions regarding \(A\) by only using \(S\) (typically, questions such as whether a given fault certainly/possibly occurred). In general global consistency can only be obtained by computing \(A\) and then projecting \(A\) on every set of events \(E_{i}\) (in case this type of operation is repeated several times, a minimisation operation is necessary to reduce space explosion issues); this is computationally infeasible outside trivial problems. Local consistency is the property that every pair of automata in \(S\) is consistent, i.e., the property of consistency holds for the set \(\{A_{i},A_{j}\}\) and the synchronisation of \(A_{i}\) and \(A_{j}\). Local consistency does not imply global consistency. It is now possible to view the set \(S\) as a graph, where each node maps to an automaton and where any two automata that share events (\(E_{i}\cap E_{j}=E_{ij}\neq\emptyset\)) are connected by a path such that all automata on this path also share these events \(E_{ij}\). If this graph is a tree, then local consistency of \(S\) implies global consistency. In other words global consistency of \(S\) can be achieved without computing the automaton \(A\). There remains the issue of making \(S\) represent a tree. A technique used to transform an arbitrary graph into a tree is to construct a hyper-graph where the hyper-nodes are subsets of nodes of the original graph: a junction tree (Jensen & Jensen, 1994), also known as a decomposition tree. Accordingly, a new set \(S^{\prime}\) of automata is defined whose automata \(A^{\prime}_{i}\) are defined as the synchronisation of subsets of \(S\). In order to reduce the cost of these synchronisations the junction tree should have hyper-nodes of minimal cardinality. The decision problem associated with finding the optimal junction tree is NP-hard but there exist polynomial algorithms that provide good trees (Kjaerulff, 1990).
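To make the synchronisation operation referred to above concrete, here is a minimal sketch of the synchronised product of two automata over explicit state sets. The `(states, events, transitions, initial)` tuple representation is an assumption of this illustration, not the representation used by the junction-tree implementation: shared events move both automata jointly, while private events interleave.

```python
def synchronise(A, B):
    """Synchronised product of two automata given as (states, events, transitions, init),
    with transitions as (state, event, state) triples."""
    (Sa, Ea, Ta, ia), (Sb, Eb, Tb, ib) = A, B
    shared = Ea & Eb
    init = (ia, ib)
    states, trans, frontier = {init}, [], [init]
    while frontier:
        qa, qb = q = frontier.pop()
        moves = []
        for (s, e, s2) in Ta:
            if s != qa:
                continue
            if e in shared:                    # shared events synchronise both automata
                moves += [(e, (s2, t2)) for (t, f, t2) in Tb if t == qb and f == e]
            else:                              # private events of A interleave
                moves.append((e, (s2, qb)))
        for (t, f, t2) in Tb:                  # private events of B interleave
            if t == qb and f not in shared:
                moves.append((f, (qa, t2)))
        for e, q2 in moves:
            trans.append((q, e, q2))
            if q2 not in states:
                states.add(q2)
                frontier.append(q2)
    return states, Ea | Eb, trans, init
```

Local consistency of a pair \(\{A_{i},A_{j}\}\) is a statement relating the two automata to this product (via projection); the junction-tree construction controls which of these potentially expensive products need to be formed.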
We start off with \(S\) defined as the set of "local diagnoses" where each local diagnosis is the synchronisation of each component's model with its local observations. The local observations are not totally independent but are defined as batches of independent events. Therefore each batch is separated by a synchronisation tick that ensures that two ordered observations are indeed ordered. From a tree-shaped locally consistent representation \(S^{\prime}\), one needs to extract the minimal diagnosis. Assuming that the hypothesis space is defined over a subset \(F\) of fault events (as is the case with SHS, MHS, and SqHS), one option would be to compute the language \(\mathcal{L}_{F}\), defined as the projection of the language of \(S^{\prime}\) onto \(F\), and then extract its minimal words, a problem similar to the _enumeration problem_ (Ackerman & Shallit, 2009). How to perform it efficiently given our definition of minimality, and how to perform it without explicitly computing \(\mathcal{L}_{F}\) is an open question. For this reason, we only provide the runtime for computing \(S^{\prime}\) as it gives us a good estimate of the overall performance of this approach.

#### 9.1.3 BDD

A different approach to diagnosis of discrete event systems consists in i) embedding in the system state the diagnostic hypothesis associated with the paths that lead to this state and ii) computing the set of states ("belief state") that the system may be in after generating the observations. The diagnosis is the set of hypotheses that label some state of the final belief state. The first point is rather easy to solve for some hypothesis spaces. For the set hypothesis space simply add a state variable \(v_{f}\) for each fault \(f\) that records the past occurrence of \(f\): \(v_{f}\) is false in the initial state and it switches to true whenever a transition labeled by \(f\) is encountered on the path. Other hypothesis spaces could be defined as easily, but the problem is that this requires an infinite number of state variables in general. It seems that there is no practical upper bound on this number except for trivial problems. The second point can be described easily. The model can be rewritten as a function that associates every observable event \(o\) with a set \(T_{o}\) of pairs of states (the event \(o\) is generated only when the state changes from \(q\) to \(q^{\prime}\), where \(\langle q,q^{\prime}\rangle\in T_{o}\)) as well as a set \(T_{\epsilon}\) of pairs for unobservable transitions. Starting from a given set of states \(\mathcal{B}\), the set of states reached by any number of unobservable events, written \(\mathit{silent}(\mathcal{B})\), is the minimal set of states that satisfies \(\mathcal{B}\subseteq\mathit{silent}(\mathcal{B})\) and \(q\in\mathit{silent}(\mathcal{B})\ \land\ \langle q,q^{\prime}\rangle\in T_{\epsilon}\Rightarrow q^{\prime}\in\mathit{silent}(\mathcal{B})\); this set can be easily obtained by adding to \(\mathcal{B}\) states \(q^{\prime}\) as defined above until the set remains stable. Starting from a set of states \(\mathcal{B}\), the set of states reached by a single observable event \(o\), written \(\mathit{next}_{o}(\mathcal{B})\), is the set of states defined by the relation \(T_{o}\): \(\{q^{\prime}\mid\exists q\in\mathcal{B}.\ \langle q,q^{\prime}\rangle\in T_{o}\}\). We first assume that the observations are just a sequence \(o_{1},\ldots,o_{k}\) of observed events.
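As a concrete illustration of the two operations just defined, here is a minimal sketch over explicit Python sets. This representation is an assumption made purely for readability; as explained below, the actual approach represents these sets and relations symbolically with BDDs.

```python
def silent(B, T_eps):
    """Least set containing B that is closed under the unobservable transitions T_eps,
    computed by adding successor states until the set remains stable."""
    closure = set(B)
    changed = True
    while changed:
        changed = False
        for (q, q2) in T_eps:
            if q in closure and q2 not in closure:
                closure.add(q2)
                changed = True
    return closure

def next_obs(B, T_o):
    """States reached from B by one occurrence of the observable event o."""
    return {q2 for (q, q2) in T_o if q in B}
```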
The belief state at the end of the sequence of observations can be computed incrementally by alternating the two functions presented before: \(\mathcal{B}=\mathit{silent}(\mathit{next}_{o_{k}}(\cdots\mathit{silent}(\mathit{next}_{o_{1}}(\mathit{silent}(\mathcal{B}_{0})))\cdots))\) where \(\mathcal{B}_{0}\) is the initial belief state. Our observations are not a single sequence of observed events: the order between some observation fragments is unknown. One way to solve this issue is by computing all possible sequences of observations and computing the union of the belief states obtained for each sequence. We use a more sophisticated approach. Because the observations are batches of unordered events, we compute, for each batch \(b_{j}\), all possible sequences; we then compute the belief state from \(\mathcal{B}_{j-1}\) for each sequence, and we obtain the belief state at the end of batch \(b_{j}\) as the union of the belief states obtained for the individual sequences.

So far we have not described how the belief states are represented. Because a state is an assignment of state variables to Boolean values, a state can be seen as a formula in propositional logic. A set of states is also a formula and the union (resp. the intersection) of two sets is implemented as the logical disjunction (resp. the conjunction). Sets of pairs of states such as \(T\) can also be represented as a formula, but this requires a copy \(v^{\prime}\) for each state variable \(v\). The set of states \(q^{\prime}\) associated with at least one state of a specified set \(Q\), formally \(\{q^{\prime}\mid\exists q\in Q.\ \langle q,q^{\prime}\rangle\in T\}\), can be represented by the propositional formula \(\exists V.\ (\Phi_{T}\land\Phi_{Q})[V^{\prime}/V]\) where \(V^{\prime}\) is the list of copied variables, \(\Phi_{T}\) is the formula representing \(T\), \(\Phi_{Q}\) is the formula representing \(Q\), and \([V^{\prime}/V]\) is the operation that consists in renaming in a formula all variables \(v^{\prime}\) with \(v\). Practically, for applications such as model checking (Burch, Clarke, Long, McMillan, & Dill, 1994), classical planning (Kissman & Edelkamp, 2011), and diagnosis (Schumann et al., 2007), these formulas are represented using BDDs (Bryant, 1986). Finally, one important issue when using BDDs is that of variable order. We make sure that every state variable \(v\) is followed by its copy \(v^{\prime}\). Furthermore we define all variables of each component in a single sequence.

### Setup

We benchmark six different implementations of algorithms derived from our framework. These are: pfs+ec using the SAT-based test solver, applied to the set, multiset and sequence hypothesis spaces; pfs+c using the test solver based on heuristic search, applied to the set hypothesis space; and pls using the heuristic search-based test solver, applied to the set and multiset hypothesis spaces. Recall that the basic version of pfs, without the essentiality test, is only guaranteed to terminate when used with the finite set hypothesis space. Code can be downloaded here: github.com/alban-grastien/diagfwork. In addition, we compare the performance of these algorithms with two diagnosis methods presented in the previous subsection: the junction tree (JT) approach and the BDD-based (bdd) approach. JT, bdd, and the pfs variants using the SAT-based test solver are implemented in Java. The SAT-based test solver itself is a version of minisat 2.0 (Een & Sorensson, 2003), modified to return conflicts, and is implemented in C.
The pfs and pls variants using the heuristic search-based solver are implemented in Lisp; the test solver is based on the HSP* AI planner (Haslum, 2008), which is implemented in C++. A new test solver instance is invoked for each test, without reuse of information from previous tests. For the SAT-based solver, it is likely that using incremental SAT (Hooker, 1993) could improve the aggregate performance over multiple tests. Remember, as we discussed in Subsection 3.4, that computing the diagnosis in the set, multiset and sequence hypothesis spaces is increasingly hard.

### First Benchmark: Diagnosis of a Power Transmission Network

#### 9.3.1 The Diagnosis Problem

The problem we consider is that of intelligent alarm processing for a power transmission network, as introduced by Bauer et al. (2011). The observations are alarms, generated by equipment in the network such as protection devices, switchgear, voltage and current monitors, etc. The objective of intelligent alarm processing is to reduce the volume of alarms, which can get very high, particularly in severe fault situations, by determining which alarms are "secondary", meaning they can be explained as follow-on effects of others. This is not simply a function of the alarm itself, but depends on the context. As a simple example, if we can deduce that a power line has become isolated, then an alarm indicating low or zero voltage on that line is secondary (implied by the fact that the line is isolated); but in other circumstances, a low voltage alarm can be the primary indicator of a fault. The power network is modelled, abstractly, as a discrete event system. The number of states in each component ranges between \(8\) and \(1,024\), with most components having well over a hundred states. The entire network has over \(10,000\) components, but for each problem instance (partially ordered set of alarms), only a subset of components are relevant to reasoning about that set of alarms; the number varies between \(2\) and \(104\) components in the benchmark problem set. The initial state is only partially known, and certain components have up to \(128\) initial states. There are \(129\) instances in the benchmark set, and the number of observations (alarms) in each ranges from \(2\) to \(146\).

#### 9.3.2 Results

A summary of results, in the form of runtime distributions, is shown in Figure 3. The complexity of the benchmark instances varies significantly. Many problems are quite simple, but the complexity rises sharply in the larger instances. Thus, solving even a handful more problems is a substantial result. JT solves only \(23\) out of the \(129\) instances. As soon as the problem includes a transmission line, the problem becomes too hard: the transmission line component has \(1,024\) states, and \(64\) possible initial states, which makes the automata determinisation required by JT too expensive. Comparing all the diagnosers operating on the set hypothesis space (SHS), pfs, with both test solvers, solves more problems than bdd (\(4\) more with the heuristic search-based solver, \(12\) more with the SAT-based solver), which in turn solves \(9\) more problems than pls. However, it is worth noting that pfs and pls can return some diagnosis candidates even when they fail to complete within the given time limit. All candidates found by pfs are minimal, and so form a subset of the minimal diagnosis.
The instances not solved by pfs+ec/SAT (SHS) are also not solved by any other diagnoser, so we cannot determine how much of the minimal diagnosis has been found. Concerning pls/H.S., in \(17\%\) of the instances that it does not solve but for which the minimal diagnosis is known (because they are solved by some other diagnoser), the candidate set found by pls/H.S. is in fact the minimal diagnosis; it is only the last test, proving that there is no uncovered candidate, that fails to finish. This can be attributed to the asymmetric performance of the heuristic search-based test solver: heuristically guided state space search can be quite effective at finding a solution when one exists, but is generally no more efficient than blind search at proving that no solution exists. It is also interesting to note that the performance of pfs+ec in the three different hypothesis spaces (SHS, MHS and SqHS) follows the expected hierarchy of problem hardness: fewer instances are solved in the sequence hypothesis space, which is a harder diagnosis problem, than in the easier multiset hypothesis space, and still more instances are solved in the easiest, the set hypothesis space, though the difference between MHS and SHS is only two problems. It turns out that most problem instances have the same number of minimal candidates for these hypothesis spaces. Only two instances solved by both diagnosers show different numbers: problem chunk-105, for example, has two minimal MHS candidates, \(\{\texttt{Line\_X9\_X10.fault}\to 1,\texttt{Breaker\_X1\_X2.fault}\to 1\}\) and \(\{\texttt{Breaker\_X1\_X2.fault}\to 2\}\), which lead to a single minimal SHS candidate, \(\{\texttt{Breaker\_X1\_X2.fault}\}\). Because the sizes of the minimal diagnoses are similar, the number of tests is also very similar and incurs only a small penalty for pfs (MHS). On the contrary, because MHS tests are more precise (specifying the exact number of faults), and because pfs does not use incremental solving, each individual MHS test may be easier to solve.

Figure 3: Runtime distribution (number of problems solved vs. time limit) for all diagnosis algorithms compared in the experiment.
The systems are fairly small: in the worst case, a worker was in contact with five different employers, which translates into six automata with no more than six states each, and up to 46 events per component.

#### 9.4.2 Results

A summary of results, in the form of runtime distributions, is shown in Figure 4. The maximum number of minimal candidates in any of the solved instances is 450, and the maximum number of faults in such a candidate is 12. These numbers are very high, and suggest that the problem definition could be refined. For instance, in the Set Hypothesis Space, the preference relation could be enriched by saying that a hypothesis is preferred over another hypothesis that contains two more faults. This type of constraint can be easily handled by our framework.

Figure 4: Runtime distribution (number of problems solved vs. time limit) for all diagnosis algorithms compared in the experiment.

The profile of the algorithms' performance is very different from the first experiments. We believe that this is due to the features of the problems, which differ significantly from the power network domain. The Junction Tree algorithm is able to solve a large majority of the instances. This is due to the fairly small number of states in the diagnosed system. As a consequence, the necessary operations, such as the automata determinisations, are relatively quick. On the other side of the spectrum, the approach based on BDDs is able to solve only a small number of instances; this is due to the large number of events and transitions, as well as the number of fault events, which makes each iteration very expensive. For most instances in which we let the BDD-based diagnoser run longer than 900s, the computer ran out of memory, which suggests that this approach will not be able to catch up with the other approaches beyond the time limit. Comparing the different algorithms presented in this paper, we see that pfs is still better, in particular when combined with SAT. The performance of pfs and pls is however similar when the oracle uses heuristic search planning.

## 10 Conclusion and Future Work

Prior to our work, diagnosis of discrete event systems has followed its own path, distinct from that initiated by de Kleer, Reiter, and Williams for diagnosis of static systems (Reiter, 1987; de Kleer & Williams, 1987). In this article, we extended the consistency-based theory of model based diagnosis to handle diagnosis of systems beyond these static ones. We showed how to apply the consistency-based approach to all types of systems, notably discrete event systems and hybrid dynamic ones. We showed that, for such systems, diagnosis can be computed via a series of consistency tests that each decide whether the model allows for a behaviour that i) satisfies certain specified assumptions and ii) agrees with the observations. We also showed how to perform these tests in practice, e.g., by using propositional SAT or classical planning. Extending the consistency-based diagnosis approach to a larger class of systems, and, in particular, to dynamic systems prompted us to consider more elaborate definitions of diagnosis and minimal diagnosis. Some applications, for instance, require us to determine the number or order of fault occurrences. In other applications, certain faults are orders of magnitude less likely than others and should therefore be ignored whenever more likely behaviours exist, which leads us to unconventional definitions of preferred hypotheses.
These diagnosis problems are not trivial to solve a priori as they now feature an infinite search space, but we showed that our theory can easily handle them as it only requires us to specify the assumptions appropriately in the consistency tests. Specifically, as we proved, each assumption should indicate that the diagnosis hypothesis of the behaviour that the test is looking for should be better, not better, worse, or not worse than a specified diagnosis hypothesis. We then just need the test solver to be able to express these assumptions. We proposed several strategies to generate the diagnosis tests and showed properties and termination conditions for these strategies. We also extended the definition of conflict, a central concept in model based diagnosis, to align with our general theory. Since the beginning of this work, we have applied this theory to a range of applications. We used this theory in combination with SAT modulo theory (SMT) to diagnose hybrid systems (Grastien, 2014); and with model checking to diagnose timed automata (Feng & Grastien, 2020). The consistency-based approach also allowed us to reason about the observations themselves and compute a subset of observations that are sufficient to derive the diagnosis (Christopher, Cordier, & Grastien, 2014). There has recently been an increased interest in planning for richer problems involving constraints similar to the ones used in diagnosis tests. This is motivated by a range of applications:

**Top-\(k\) Planning**: computes several plans that need to be significantly different (Nguyen, Do, Gerevini, Serina, Srivastava, & Kambhampati, 2012; Katz & Sohrabi, 2020).

**Legibility**: asks for a plan whose purpose is clear for the observer (Chakraborti, Kulkarni, Sreedharan, Smith, & Kambhampati, 2019).

**Normative Constraint Signalling**: requires the plan to communicate to a (partial) observer that it follows some normative constraints (Grastien, Benn, & Thiebaux, 2021).

**Model Reconciliation**: assumes two agents with different models of the world, and searches for a minimal change to one model so that the agents agree on the optimal plan (Chakraborti, Sreedharan, Zhang, & Kambhampati, 2017).

**Goal Recognition**: is the problem of determining what an agent is trying to achieve (Pereira, Vered, Meneguzzi, & Ramirez, 2019).

**Plan Explanation**: provides an explanation alongside the plan that justifies why there is no better plan than the proposed one (Eifler et al., 2020).

All these problems require reasoning about plans with similar or different properties. The search strategies developed in this paper can be used to help solve some of these problems.
2301.13802
Armouring of a frictional interface by mechanical noise
A dry frictional interface loaded in shear often displays stick-slip. The amplitude of this cycle depends on the probability that a slip event nucleates into a rupture, and on the rate at which slip events are triggered. This rate is determined by the distribution $P(x)$ of soft spots which yields if the shear stress is increased by some amount $x$. In minimal models of a frictional interface that include disorder, inertia and long-range elasticity, we discovered an 'armouring' mechanism, by which the interface is greatly stabilised after a large slip event: $P(x)$ then vanishes at small arguments, as $P(x)\sim x^\theta$ [1]. The exponent $\theta>0$, which exists only in the presence of inertia (otherwise $\theta=0$), was found to depend on the statistics of the disorder in the model, a phenomenon that was not explained. Here, we show that a single-particle toy model with inertia and disorder captures the existence of a non-trivial exponent $\theta>0$, which we can analytically relate to the statistics of the disorder.
Elisa El Sergany, Matthieu Wyart, Tom W. J. de Geus
2023-01-31T17:42:54Z
http://arxiv.org/abs/2301.13802v1
# Armouring of a frictional interface by mechanical noise

###### Abstract

A dry frictional interface loaded in shear often displays stick-slip. The amplitude of this cycle depends on the probability that a slip event nucleates into a rupture, and on the rate at which slip events are triggered. This rate is determined by the distribution \(P(x)\) of soft spots which yields if the shear stress is increased by some amount \(x\). In minimal models of a frictional interface that include disorder, inertia and long-range elasticity, we discovered an 'armouring' mechanism, by which the interface is greatly stabilised after a large slip event: \(P(x)\) then vanishes at small arguments, as \(P(x)\sim x^{\theta}\)[1]. The exponent \(\theta>0\), which exists only in the presence of inertia (otherwise \(\theta=0\)), was found to depend on the statistics of the disorder in the model, a phenomenon that was not explained. Here, we show that a single-particle toy model with inertia and disorder captures the existence of a non-trivial exponent \(\theta>0\), which we can analytically relate to the statistics of the disorder.

## 1 Introduction

We study systems in which disorder and elasticity compete, leading to intermittent, avalanche-type response under loading. Examples include an elastic line being pulled over a disordered pinning potential, or frictional interfaces [2, 3, 4]. When subject to an external load \(f\), such systems are pinned by disorder when the load is below a critical value \(f_{c}\). At \(f>f_{c}\), the system moves forward at a finite rate. At \(f=f_{c}\) the system displays a crackling-type response described by avalanches whose sizes and durations are distributed according to power laws. A key aspect of such systems is the distribution of soft spots [5]. If we define \(x\) as the force increase needed to trigger an instability locally, then increasing the remotely applied force by \(\Delta f\) will trigger \(n_{a}\propto\int_{0}^{\Delta f}P(x)dx\) avalanches, with \(P(x)\) the probability density of \(x\). The relevant behaviour of \(P(x)\) therefore is that at small \(x\). Let us assume that \(P(x)\sim x^{\theta}\) at small \(x\), such that \(n_{a}\propto(\Delta f)^{\theta+1}\). Classical models used to study the depinning transition consider an over-damped dynamics [2]. In that case, it can be shown that \(\theta=0\)[2]. This result is not true for certain phenomena, including the plasticity of amorphous solids or mean-field spin glasses. In these cases, due to the fact that elastic interactions are long-range and can vary in sign (which is not the case for the depinning transition, where a region that is plastically rearranged can only destabilise other regions), one can prove that \(\theta>0\), as reviewed in [5, 6]. Recently, we studied simple models of dry frictional interfaces [1, 7]. We considered disorder and long-range elastic interactions along the interface. These interactions are strictly positive as in the usual class of the depinning transition. However, we studied the role of inertia, which turns out to have dramatic effects. Inertia causes transient overshoots and undershoots of the stress resulting from a local plastic event. It thus generates a mechanical noise that lasts until damping ultimately takes place. Remarkably, we found that right after system-spanning slip events, \(\theta>0\)[1] in the presence of inertia.
Intuitively, such an 'armouring' mechanism results from the mechanical noise stemming from inertial effects, which destabilises spots close to an instability (i.e. small \(x\)), thus depleting \(P(x)\) at small argument. This property is consequential: the number of avalanches of plastic events triggered after a system-spanning rupture is very small. As a consequence, the interface can increase its load when driven quasistatically in a finite system, without much danger of triggering large slip events. The interface therefore presents larger stick-slip cycles due to this effect, as sketched in Fig. 1. Thus, one of the central quantities governing the stick-slip amplitude is \(\theta\)[1]. Our previous model [1] divided the interface into blocks whose mechanical response was given by a potential energy landscape that, as a function of slip, comprised a sequence of parabolic wells with equal curvature. We drew the widths \(w\) of each well randomly from a Weibull distribution, such that its distribution \(P_{w}(w)\sim w^{k}\) at small \(w\). We empirically found \(\theta\simeq 2.5\) for \(k=1\) and \(\theta\simeq 1.4\) for \(k=0.2\).

Here we present a toy model for a region of space that stops moving at the end of a large slip event. In the most idealised view, we describe this region as a single particle that moves over a disordered potential energy landscape, and that slows down due to dissipation. We model this potential energy landscape by a sequence of parabolic potentials that have equal curvature \(\kappa\) but different widths taken from \(P_{w}(w)\), with \(w\) the width of a parabola. In this model, \(x=\kappa w/2\) and is thus proportional to the width of the well in which the particle stops. Below we prove that for such a model, \(P(x)\sim x^{k+2}\) if \(P_{w}(w)\sim w^{k}\). This result explains both why \(\theta>0\) and why this exponent is non-universal, as it depends on \(k\), which characterises the disorder. Although this prediction does not match our previous observations quantitatively, the agreement is already noticeable for such a simple model. We support our argument with analytical proofs, and verify our conclusion numerically. The generality of our argument suggests that the presence of a non-trivial exponent \(\theta\) may hold in other depinning systems, as long as inertia is present.

## 2 Model

During a big slip event, all regions in space are moving but eventually slow down and stop. We model this by considering a single region in space in which a particle of finite mass is thrown into the potential energy landscape at a finite velocity. In the simplest case, this particle is "free", such that it experiences no external driving and stops due to dissipation, see Fig. 2. This corresponds to the Prandtl-Tomlinson [8, 9, 10] model that describes the dynamics of one (driven) particle in a potential energy landscape. The equation of motion of the "free" particle reads \[m\ddot{r}=f_{e}(r)-\eta\dot{r}, \tag{1}\] with \(r\) the particle's position, \(m\) its mass, and \(\eta\) a damping coefficient. \(f_{e}(r)\) is the restoring force due to the potential energy landscape.
We consider a potential energy landscape that consists of a sequence of finite-sized, symmetric, quadratic wells, such that the potential energy inside a well \(i\) is given by \(U(r)=(\kappa/2)(r-r_{\rm min}^{i})^{2}+U_{0}^{i}\) for \(r_{y}^{i}<r\leq r_{y}^{i+1}\), with \(w_{i}\equiv r_{y}^{i+1}-r_{y}^{i}\) the width of the well, \(\kappa\) the elastic constant, \(r_{\rm min}^{i}\equiv(r_{y}^{i}+r_{y}^{i+1})/2\) the position of the center of the well, and \(U_{0}^{i}=\kappa(w^{i})^{2}/8\) an unimportant offset. The elastic force deriving from this potential energy is \(f_{e}(r)\equiv-\partial_{r}U(r)=\kappa(r_{\rm min}^{i}-r)\). With \(\kappa\) constant, the landscape is parameterised by the distance between two subsequent cusps \(w_{i}\), which we assume independent and identically distributed (iid) according to a distribution \(P_{w}(w)\). We consider underdamped dynamics corresponding to \(\eta^{2}<4m\kappa\). Within a well, the dynamics is simply that of an underdamped oscillator, as recalled in Appendix A.

Figure 1: (a) Sketch of stick-slip response: “slip” events punctuate periods in which the interface is macroscopically stuck, but microscopic events (“avalanches”) do occur. The number of avalanches \(n_{a}\propto(\Delta f)^{\theta+1}\), which can be linked to (b) the distribution of soft spots. \(x\) is thereby the amount of force needed to trigger an instability locally. Right after a large slip event, its distribution empirically scales like \(P(x)\sim x^{\theta}\) at small \(x\) as indicated (log-scale implied).

Figure 2: Evolution of the kinetic energy \(E\) as a function of position \(r\) (in red) of the “free” particle ‘thrown’ into a potential energy landscape (shown in the inset). Every entry into a new well is indicated using a marker. A thin green line shows the evolution of the total energy (with the definition of the inset, it has the local minimum of the last well as arbitrary offset).

## 3 Stopping well

Distribution. We are interested in the width of the well in which the particle eventually stops. Suppose that a particle enters a well of width \(w\) with a kinetic energy \(\mathcal{E}\). The particle stops in that well if \(\mathcal{E}<E_{c}(w)\), with \(E_{c}\) the minimum kinetic energy with which the particle needs to enter a well of width \(w\) to be able to exit. The distribution of wells in which particles stop in that case is \[P_{s}(w)\sim P_{w}(w)P(\mathcal{E}<E_{c}(w)), \tag{2}\] with \(P_{w}(w)\) the probability density of well widths, and \(P_{s}(w)\) the probability of well widths in which the particle stops. Within one well, the particle is simply a damped harmonic oscillator as has been studied abundantly. In the limit of a weakly damped system, the amount of kinetic energy lost during one cycle is \(\Delta E=\kappa w^{2}(1-\exp(-2\pi/Q))/8\) with the quality factor \(Q=\sqrt{4m\kappa/\eta^{2}-1}\). The minimal kinetic energy with which the particle needs to enter the well in order to be able to exit is thus \(E_{c}=\Delta E\propto w^{2}\) (see Appendix B for the exact calculation of \(E_{c}\)). Furthermore, if \(P(\mathcal{E})\) is a constant at small argument (as we will argue below), then \[P(\mathcal{E}<E_{c}(w))=\int_{0}^{E_{c}}P(\mathcal{E})\mathrm{d}\mathcal{E}\sim E_{c}(w). \tag{3}\] Therefore, the particle stops in a well whose width is distributed as \[P_{s}(w)\sim w^{2}P_{w}(w). \tag{4}\]
Central result. Once stopped, the force \(x\) by which we need to tilt the well in which the particle stopped, in order for it to exit again, is \(x=\kappa w/2\),1 such that our central result is that \[P(x)\sim x^{2}P_{w}(x). \tag{5}\] Footnote 1: Without external forces, the particle ends in the local minimum – the center of the well. For example, if \(P_{w}(w)\sim w^{k}\) at small \(w\), we predict that \[P(x)\sim x^{2+k}. \tag{6}\]

Energy at entry. We will now argue that the density of kinetic energy with which the particle enters the final well, \(P(\mathcal{E})\), is finite at small \(\mathcal{E}\). For one realisation, \(\mathcal{E}\) results from passing many wells with random widths. If its kinetic energy is much larger than the potential energy of the typical wells, it will not stop. We thus consider that the particle energy has decreased to some typical kinetic energy \(E_{0}\) of the order of the typical potential energy \(\kappa\langle w^{2}\rangle/8\). If the particle exits the next well, at exit it will have a kinetic energy \(\mathcal{K}=E_{0}-\Delta E(E_{0},w)\). For a given \(E_{0}\) and distributed \(w\), we have: \[P(\mathcal{E})=\int dw\,P_{w}(w)\,\delta(\mathcal{K}(E_{0},w)-\mathcal{E}). \tag{7}\] It thus implies that: \[P(\mathcal{E}=0)=P_{w}(w^{*})/\left|\partial_{w}\mathcal{K}\right|_{w=w^{*}} \tag{8}\] where \(w^{*}\) is the well width for which the particle reaches the end of the well with zero velocity, i.e. \(E_{0}=E_{c}(w^{*})\). By assumption, \(P_{w}(w^{*})>0\). Furthermore we prove in Appendix C that \(\partial_{w}\mathcal{K}|_{w=w^{*}}=\kappa w^{*}/2>0\). Overall, it implies that \(P(\mathcal{E}=0)>0\), i.e. \(P(\mathcal{E})\) does not vanish as \(\mathcal{E}\to 0\), from which our conclusions follow. Here we give a simple argument for \(\partial_{w}\mathcal{K}|_{w=w^{*}}=\kappa w^{*}/2>0\). Given \(E_{0}\), but an infinitesimally smaller well of width \(w^{*}-\delta w\), the particle will enter the next well. Because the velocity is negligible in the vicinity of \(w^{*}\), the damping is negligible. Therefore, \(\delta\mathcal{K}\) is of the order of the difference in potential energy on a scale \(\delta w\), \(\delta U=U(w^{*})-U(w^{*}-\delta w)\approx\kappa w^{*}\delta w/2\), as we illustrate in Fig. 3. We thus find that \(\partial_{w}\mathcal{K}|_{w=w^{*}}=\lim_{\delta w\to 0}\delta\mathcal{K}/\delta w=\kappa w^{*}/2\).

Figure 3: Evolution of the kinetic energy \(E\) (red), potential energy \(U\) (black), and total energy \(E+U\) (green) for a particle that has entered a well of width \(w^{*}\) with a kinetic energy \(E_{0}=E_{c}(w^{*})\) such that it only just stops. Consequently, \(\partial_{r}(E+U)|_{w^{*}/2}=0\), which can be decomposed into \(\partial_{r}U|_{w^{*}/2}=\kappa w^{*}/2\) such that \(\partial_{r}E|_{w^{*}/2}=-\kappa w^{*}/2\), as indicated using thin lines.

## 4 Numerical support

Objective. We now numerically verify our prediction that \(P(x)\sim x^{k+2}\) (Eq. (6)). We simulate a large number of realisations of a potential energy landscape constructed from randomly drawn widths (considering different distributions \(P_{w}(w)\)) and constant curvature. We study the distribution of stopping wells if a "free" particle is 'thrown' into the landscape at a high initial velocity (much larger than \(v_{c}(\langle w\rangle)\), such that the particle traverses many wells before stopping).

Map. We find an analytical solution for Eq. (1) in the form of a map.
In particular, we derive the evolution of the position in a well based on an initial position \(-w/2\) and velocity in Appendix A. This maps the velocity with which the particle enters a well at position \(-w/2\) to an exit velocity at \(w/2\), which corresponds to the entry velocity of the next well, etc.

Stopping well. We record the width of the stopping well, \(x\), and the velocity \(\mathcal{V}\) with which the particle enters the final well. We find clear evidence for the scaling \(P(x)\sim x^{k+2}\) in Fig. 4. Perturbing the evolution with random force kicks2 does not change our observations, as included in Fig. 4 (see caption). We furthermore show that the probability density of the kinetic energy with which the particle enters the final well, \(P(\mathcal{E})\), is constant at small argument in Fig. 5. Footnote 2: Such that each well is tilted with a random force that we take independent and identically distributed (iid) according to a normal distribution with zero mean.

## 5 Concluding remarks

Our central result is that \(P(x)\sim x^{2}P_{w}(x)\) in our toy model. For a disorder \(P_{w}(w)\sim w^{k}\) we thus find \(P(x)\sim x^{k+2}\). We expect this result to qualitatively apply to generic depinning systems in the presence of inertia. In particular they are qualitatively (but not quantitatively) consistent with our previous empirical observations \(\theta\simeq 2.5\) for \(k=1\)[1] and \(\theta\simeq 1.4\) for \(k=0.2\). A plausible limitation of our approach is underlined by the following additional observation: in Ref. [1], it was found that for \(x\) to be small, the stopping well was typically small (by definition), but also that the next well had to be small. Such correlations can exist only if the degree of freedom considered had visited the next well, before coming back and stopping. This scenario cannot occur in our simple description where the particle only moves forward, except when it oscillates in its final well.

Figure 4: Width of the stopping well, \(x\), for different \(P_{w}(w)\): a uniform, Weibull, and powerlaw distribution, that scale as \(P_{w}(w)\sim w^{k}\) at small \(w\), as indicated in the legend (the bottom row for each distribution corresponds to perturbing the dynamics with random force kicks, tilting individual wells by a force \(F=\mathcal{N}(0,0.1)\), with \(\mathcal{N}\) the normal distribution; the top row corresponds to \(F=0\)). To emphasise the scaling, the distributions have been rescaled by a fit of the prefactors: \(P(x)=c_{x}x^{k+2}\). Furthermore, we use \(m=\kappa=1\), \(\eta=0.1\), \(v_{0}=\mathcal{N}(100,10)\), and \(\langle w\rangle\approx 1\).

Figure 5: The kinetic energy with which the particle enters the well in which it stops for different realisations, \(P(\mathcal{E})\), normalised by its prefactor \(c_{e}\) (that is here simply the density of the first bin). See Fig. 4 for legend.
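The map itself is derived in the paper's Appendix A. As an illustration of the numerical procedure described above, the following is a minimal, naive time-stepping sketch of the same experiment: it integrates Eq. (1) directly instead of using the analytical map, uses a fixed initial velocity rather than \(v_{0}=\mathcal{N}(100,10)\), and the function name, time step and the stopping criterion via the barrier height \(\kappa w^{2}/8\) are choices made here for illustration only.

```python
import numpy as np

def stopping_well_width(k=1.0, m=1.0, kappa=1.0, eta=0.1, v0=100.0,
                        dt=1e-3, rng=None):
    """Throw one 'free' particle into a random landscape of parabolic wells and
    return the width of the well it can no longer leave (Eq. (1), naive integration)."""
    rng = rng or np.random.default_rng(0)
    cusps = [0.0, rng.weibull(k + 1)]       # Weibull shape k+1 gives P_w(w) ~ w^k at small w
    i, r, v = 0, 0.0, v0                    # current well index, position, velocity
    while True:
        while r > cusps[i + 1]:             # crossed the right cusp: enter the next well
            i += 1
            if i + 1 == len(cusps):
                cusps.append(cusps[-1] + rng.weibull(k + 1))
        while r < cusps[i] and i > 0:       # crossed the left cusp: back into the previous well
            i -= 1
        w = cusps[i + 1] - cusps[i]
        r_min = 0.5 * (cusps[i] + cusps[i + 1])
        # energy relative to the bottom of the current well; once it drops below the
        # barrier height kappa*w^2/8 the particle can no longer reach either cusp
        energy = 0.5 * m * v**2 + 0.5 * kappa * (r - r_min) ** 2
        if energy < kappa * w**2 / 8:
            return w
        a = (kappa * (r_min - r) - eta * v) / m   # Eq. (1)
        v += a * dt                               # semi-implicit Euler step
        r += v * dt

widths = [stopping_well_width(rng=np.random.default_rng(s)) for s in range(200)]
# Histogramming x = kappa * w / 2 over many more realisations reproduces the
# P(x) ~ x^(k+2) scaling of Fig. 4 (here for the Weibull case with k = 1).
```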
2309.07933
A Lean-Congruence Format for EP-Bisimilarity
Enabling preserving bisimilarity is a refinement of strong bisimilarity that preserves safety as well as liveness properties. To define it properly, labelled transition systems needed to be upgraded with a successor relation, capturing concurrency between transitions enabled in the same state. We enrich the well-known De Simone format to handle inductive definitions of this successor relation. We then establish that ep-bisimilarity is a congruence for the operators, as well as lean congruence for recursion, for all (enriched) De Simone languages.
Rob van Glabbeek, Peter Höfner, Weiyou Wang
2023-09-13T20:51:32Z
http://arxiv.org/abs/2309.07933v1
# A Lean-Congruence Format for EP-Bisimilarity

###### Abstract

Enabling preserving bisimilarity is a refinement of strong bisimilarity that preserves safety as well as liveness properties. To define it properly, labelled transition systems needed to be upgraded with a successor relation, capturing concurrency between transitions enabled in the same state. We enrich the well-known De Simone format to handle inductive definitions of this successor relation. We then establish that ep-bisimilarity is a congruence for the operators, as well as lean congruence for recursion, for all (enriched) De Simone languages.

## 1 Introduction

Recently, we introduced a finer alternative to strong bisimilarity, called enabling preserving bisimilarity. The motivation behind this concept was to preserve liveness properties, which are _not_ always preserved by classical semantic equivalences, including strong bisimilarity.

**Example 1.1** ([14]): Consider the following two programs, and assume that all variables are initialised to 0.

Left-hand program:

    while(true) do
        choose
            if true then y := y+1;
            if x = 0 then x := 1;
        end
    end

Right-hand program:

    while(true) do
        y := y+1;
    end
    ∥
    x := 1;
To distinguish such systems, ep-bisimilarity maintains for each pair of related states \(p\) and \(q\) a relation \(R\) between the transitions enabled in \(p\) and \(q\), and this relation should be preserved when matching related transitions in the bisimulation game. When formalising this, we need transition systems upgraded with a _successor relation_ that matches each transition \(t\) enabled in a state \(p\) to a transition \(t^{\prime}\) enabled in \(p^{\prime}\), when performing a transition from \(p\) to \(p^{\prime}\) that does not affect \(t\). Intuitively, \(t^{\prime}\) describes the same system behaviour as \(t\), but the two transitions could be formally different as they may have different sources. It is this successor relation that distinguishes the transition systems in the example above.

In [14], we showed that ep-bisimilarity is a congruence for all operators of Milner's Calculus of Communicating Systems (CCS), enriched with a successor relation. We extended this result to the Algebra of Broadcast Communication with discards and Emissions (ABCdE), an extension of CCS with broadcast communication, discard actions and signal emission. ABCdE subsumes many standard process algebras found in the literature. In this paper, we introduce a new congruence format for structural operational semantics, which is based on the well-known De Simone format and respects the successor relation.
This format allows us to generalise the results of [14] in two ways: first, we prove that ep-bisimilarity is a congruence for all operators of _any_ process algebra that can be formalised in the De Simone format with successors. Applicable languages include CCS and ABCdE. Second, we show that ep-bisimilarity is a lean congruence for recursion [10]. Here, a lean congruence preserves equivalence when replacing closed subexpressions of a process by equivalent alternatives.

## 2 Enabling Preserving Bisimilarity

To build our abstract theory of De Simone languages and De Simone formats, we briefly recapitulate the definitions of labelled transition systems with successors, and ep-bisimulation. A detailed description can be found in [14].

A _labelled transition system (LTS)_ is a tuple \((S,\mathit{Tr},\mathit{source},\mathit{target},\ell)\) with \(S\) and \(\mathit{Tr}\) sets of _states_ and _transitions_, \(\mathit{source},\mathit{target}:\mathit{Tr}\to S\) and \(\ell:\mathit{Tr}\to\mathcal{L}\), for some set \(\mathcal{L}\) of transition labels. A transition \(t\in\mathit{Tr}\) of an LTS is _enabled_ in a state \(p\in S\) if \(\mathit{source}(t)=p\). The set of transitions enabled in \(p\) is \(\mathit{en}(p)\).

**Definition 2.1** (LTSS [14]): A _labelled transition system with successors (LTSS)_ is a tuple \((S,\mathit{Tr},\mathit{source},\mathit{target},\ell,\leadsto)\) with \((S,\mathit{Tr},\mathit{source},\mathit{target},\ell)\) an LTS and \(\leadsto\subseteq\mathit{Tr}\times\mathit{Tr}\times\mathit{Tr}\) the _successor relation_, such that if \((t,u,v)\in\leadsto\) (also denoted by \(t\leadsto_{u}v\)) then \(\mathit{source}(t)=\mathit{source}(u)\) and \(\mathit{source}(v)=\mathit{target}(u)\).

**Example 2.2**: Remember that the 'classical' LTSs of Example 1.1 are identical. Let \(t_{1}\) and \(t_{2}\) be the two transitions corresponding to y:=y+1 in the first and second state, respectively, and let \(u\) be the transition for the assignment x:=1. The assignments of x and y in the right-hand program are independent, hence \(t_{1}\leadsto_{u}t_{2}\) and \(u\leadsto_{t_{1}}u\). For the other program, the situation is different: as the instructions correspond to a single component (program), all transitions affect each other, i.e. \(\leadsto=\emptyset\).

**Definition 2.3** (Ep-bisimilarity [14]): Let \((S,\mathit{Tr},\mathit{source},\mathit{target},\ell,\leadsto)\) be an LTSS. An _enabling preserving bisimulation (ep-bisimulation)_ is a relation \(\mathcal{R}\subseteq S\times S\times\mathcal{P}(\mathit{Tr}\times\mathit{Tr})\) satisfying

1. if \((p,q,R)\in\mathcal{R}\) then \(R\subseteq\mathit{en}(p)\times\mathit{en}(q)\) such that
   a. \(\forall t\in\mathit{en}(p)\). \(\exists\,u\in\mathit{en}(q)\). \(t\ R\ u\),
   b. \(\forall u\in\mathit{en}(q)\). \(\exists\,t\in\mathit{en}(p)\). \(t\ R\ u\), and
   c. if \(t\ R\ u\) then \(\ell(t)=\ell(u)\); and
2. if \((p,q,R)\in\mathcal{R}\) and \(v\ R\ w\), then \((\mathit{target}(v),\mathit{target}(w),R^{\prime})\in\mathcal{R}\) for some \(R^{\prime}\) such that
   a. if \(t\ R\ u\) and \(t\leadsto_{v}t^{\prime}\) then \(\exists\,u^{\prime}\). \(u\leadsto_{w}u^{\prime}\wedge t^{\prime}\ R^{\prime}\ u^{\prime}\), and
   b. if \(t\ R\ u\) and \(u\leadsto_{w}u^{\prime}\) then \(\exists\,t^{\prime}\). \(t\leadsto_{v}t^{\prime}\wedge t^{\prime}\ R^{\prime}\ u^{\prime}\).
Two states \(p\) and \(q\) in an LTSS are _enabling preserving bisimilar (ep-bisimilar)_, denoted \(p\leftrightarroweq_{ep}q\), if there is an enabling preserving bisimulation \(\mathcal{R}\) such that \((p,q,R)\in\mathcal{R}\) for some \(R\).

Without Items 2.a and 2.b, the above is nothing else than a reformulation of the classical definition of strong bisimilarity. An ep-bisimulation additionally maintains for each pair of related states \(p\) and \(q\) a relation \(R\) between the transitions enabled in \(p\) and \(q\). Items 2.a and 2.b strengthen the condition on related target states by requiring that the successors of related transitions are again related relative to these target states. It is this requirement which distinguishes the transition systems of Example 1.1 [14].

**Lemma 2.4** (Proposition 10 of [14]): \(\leftrightarroweq_{ep}\) is an equivalence relation.

## 3 An Introductory Example: CCS with Successors

Before starting to introduce the concepts formally, we want to present some motivation in the form of the well-known Calculus of Communicating Systems (CCS) [18]. In this paper we use a proper recursion construct instead of agent identifiers with defining equations. As in [4], we write \(\langle X|S\rangle\) for the \(X\)-component of a solution of the set of recursive equations \(S\).

CCS is parametrised with a set \(\mathcal{C}\) of _handshake communication names_. \(\bar{\mathcal{C}}\coloneqq\{\bar{c}\mid c\in\mathcal{C}\}\) is the set of _handshake communication co-names_. \(Act_{CCS}\coloneqq\mathcal{C}\cup\bar{\mathcal{C}}\cup\{\tau\}\) is the set of _actions_, where \(\tau\) is a special _internal action_. Complementation extends to \(\mathcal{C}\cup\bar{\mathcal{C}}\) by \(\bar{\bar{c}}\coloneqq c\). Below, \(c\) ranges over \(\mathcal{C}\cup\bar{\mathcal{C}}\) and \(\alpha\), \(\ell\), \(\eta\) over \(Act_{CCS}\). A _relabelling_ is a function \(f:\mathcal{C}\rightarrow\mathcal{C}\); it extends to \(Act_{CCS}\) by \(f(\bar{c})=\overline{f(c)}\) and \(f(\tau)\coloneqq\tau\).

The process signature \(\Sigma\) of CCS features binary infix-written operators \(+\) and \(|\), denoting _choice_ and _parallel composition_, a constant \(\mathbf{0}\) denoting _inaction_, a unary _action prefixing_ operator \(\alpha.\_\) for each action \(\alpha\in Act_{CCS}\), a unary _restriction_ operator \(\_\backslash L\) for each set \(L\subseteq\mathcal{C}\), and a unary _relabelling_ operator \(\_[f]\) for each relabelling \(f:\mathcal{C}\rightarrow\mathcal{C}\).

The semantics of CCS is given by the set \(\mathcal{R}\) of _transition rules_, shown in Table 1. Here \(\overline{L}\coloneqq\{\bar{c}\mid c\in L\}\). Each rule has a unique name, displayed in blue.2 The rules are displayed as templates, following the standard convention of labelling transitions with _label variables_ \(c\), \(\alpha\), \(\ell\), etc., and may be accompanied by side conditions in green, so that each of those templates corresponds to a set of (concrete) transition rules where label variables are "instantiated" to labels in certain ranges and all side conditions are met. The rule names are also schematic and may contain variables. For example, all instances of the transition rule template \(+_{\mathrm{L}}\) are named \(+_{\mathrm{L}}\), whereas there is one rule name \(\xrightarrow{\alpha}\) for each action \(\alpha\in Act_{CCS}\).

Footnote 2: Our colourings are for readability only.
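Purely as an illustration (not part of the paper), the signature just described can be written down as an algebraic data type. The following Haskell sketch uses invented constructor names; it merely fixes intuition for the syntax of process expressions (cf. Definition 4.1 below) and encodes the two processes that reappear in Example 3.1.

```haskell
-- Illustrative Haskell rendering of the CCS signature described above
-- (invented names; a sketch, not the paper's formal definition).
import qualified Data.Map as Map

type Name = String                      -- handshake communication names, C

data Act = In Name | Out Name | Tau     -- c, its co-name, and the internal action
  deriving (Eq, Ord, Show)

type Var  = String                      -- process variables X, Y, ...
type Spec = Map.Map Var Proc            -- a recursive specification S

data Proc
  = PVar Var                            -- a process variable
  | Nil                                 -- 0, inaction
  | Pref Act Proc                       -- alpha.P, action prefixing
  | Choice Proc Proc                    -- P + Q
  | Par Proc Proc                       -- P | Q
  | Restrict [Name] Proc                -- P \ L
  | Relabel (Name -> Name) Proc         -- P[f]
  | Rec Var Spec                        -- <X|S>, a recursive call

-- The processes of Example 3.1: P = <X|S> with S = {X = a.X + b.Y, Y = a.Y},
-- and Q = <Z|{Z = a.Z}> | b.0.
pExample, qExample :: Proc
pExample = Rec "X" (Map.fromList
  [ ("X", Choice (Pref (In "a") (PVar "X")) (Pref (In "b") (PVar "Y")))
  , ("Y", Pref (In "a") (PVar "Y")) ])
qExample = Par (Rec "Z" (Map.fromList [("Z", Pref (In "a") (PVar "Z"))]))
               (Pref (In "b") Nil)
```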
\begin{table}
\[
\frac{}{\alpha.x\xrightarrow{\alpha}x}\;\xrightarrow{\alpha}
\qquad\quad
\frac{x\xrightarrow{\alpha}x^{\prime}}{x+y\xrightarrow{\alpha}x^{\prime}}\;+_{\mathrm{L}}
\qquad\quad
\frac{y\xrightarrow{\alpha}y^{\prime}}{x+y\xrightarrow{\alpha}y^{\prime}}\;+_{\mathrm{R}}
\]
\[
\frac{x\xrightarrow{\eta}x^{\prime}}{x|y\xrightarrow{\eta}x^{\prime}|y}\;|_{\mathrm{L}}
\qquad\quad
\frac{x\xrightarrow{c}x^{\prime},\;y\xrightarrow{\bar{c}}y^{\prime}}{x|y\xrightarrow{\tau}x^{\prime}|y^{\prime}}\;|_{\mathrm{C}}
\qquad\quad
\frac{y\xrightarrow{\eta}y^{\prime}}{x|y\xrightarrow{\eta}x|y^{\prime}}\;|_{\mathrm{R}}
\]
\[
\frac{x\xrightarrow{\ell}x^{\prime}}{x\backslash L\xrightarrow{\ell}x^{\prime}\backslash L}\;\backslash L\;\;(\ell\notin L\cup\overline{L})
\qquad\quad
\frac{x\xrightarrow{\ell}x^{\prime}}{x[f]\xrightarrow{f(\ell)}x^{\prime}[f]}\;[f]
\qquad\quad
\frac{\langle S_{X}|S\rangle\xrightarrow{\alpha}y}{\langle X|S\rangle\xrightarrow{\alpha}y}\;rec_{Act}
\]

Table 1: Structural operational semantics of CCS
\end{table}

The transition system specification \((\Sigma,\mathcal{R})\) is in De Simone format [23], a special rule format that guarantees properties of the process algebra (for free), such as strong bisimulation being a congruence for all operators. Following [14], we leave out the infinite sum \(\sum_{i\in I}x_{i}\) of CCS [18], as it is strictly speaking not in De Simone format. In this paper, we will extend the De Simone format to also guarantee properties for ep-bisimulation.

As seen, ep-bisimulation requires that the structural operational semantics is equipped with a successor relation \(\leadsto\). The meaning of \(\chi\leadsto_{\zeta}\chi^{\prime}\) is that transition \(\chi\) is unaffected by \(\zeta\) - denoted \(\chi\leadsto_{\zeta}\) - and that when doing \(\zeta\) instead of \(\chi\), afterwards a variant \(\chi^{\prime}\) of \(\chi\) is still enabled. Table 2 shows the _successor rules_ for CCS, which allow the relation \(\leadsto\) to be derived inductively. It uses the following syntax for transitions \(\chi\), which will be formally introduced in Section 6. The expression \(t+_{\mathrm{L}}Q\) refers to the transition that is derived by rule \(+_{\mathrm{L}}\) of Table 1, with \(t\) referring to the transition used in the unique premise of this rule, and \(Q\) referring to the process in the inactive argument of the \(+\)-operator. The syntax for the other transitions is analogous. A small deviation of this scheme occurs for recursion: \(rec_{Act}(X,S,t)\) refers to the transition derived by rule \(rec_{Act}\) out of the premise \(t\), when deriving a transition of a recursive call \(\langle X|S\rangle\). In Table 2 each rule is named, in orange, after the number of the clause of Definition 20 in [14], where it was introduced.

The primary source of concurrency between transitions \(\chi\) and \(\zeta\) is when they stem from opposite sides of a parallel composition. This is expressed by Rules 7a and 7b. We require all obtained successor statements \(\chi\leadsto_{\zeta}\chi^{\prime}\) to satisfy the conditions of Definition 2.1 - this yields \(Q^{\prime}=target(w)\) and \(P^{\prime}=target(v)\); in [14] \(Q^{\prime}\) and \(P^{\prime}\) were written this way. In all other cases, successors of \(\chi\) are inherited from successors of their building blocks.
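As a concrete illustration (not contained in the original text, and relying on the reading of Rules 7a and 7b given above), consider the CCS process \(a.\mathbf{0}\,|\,b.\mathbf{0}\) with \(a,b\in\mathcal{C}\), and write \(t:=\xrightarrow{a}\mathbf{0}\) for the transition of \(a.\mathbf{0}\) and \(w:=\xrightarrow{b}\mathbf{0}\) for the transition of \(b.\mathbf{0}\). Rule 7a then yields

\[
t\,|_{\mathrm{L}}\,b.\mathbf{0}\;\leadsto_{\,a.\mathbf{0}\,|_{\mathrm{R}}\,w\,}\;t\,|_{\mathrm{L}}\,\mathbf{0},
\]

i.e. the \(a\)-transition \(a.\mathbf{0}|b.\mathbf{0}\xrightarrow{a}\mathbf{0}|b.\mathbf{0}\) is unaffected by the concurrent \(b\)-step and survives it as \(a.\mathbf{0}|\mathbf{0}\xrightarrow{a}\mathbf{0}|\mathbf{0}\), in line with \(Q^{\prime}=target(w)=\mathbf{0}\).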
When \(\zeta\) stems from the left side of a \(+\) via rule \(+\!_{\!\!\!\perp}\) of Table 1, then any transition \(\chi\) stemming from the right is discarded by \(\zeta\), so \(\chi\not\to\zeta\). Thus, if \(\chi\leadsto\zeta\) then these transitions have the form \(\chi=t\!+\!_{\!\!\!\perp}Q\) and \(\zeta=v\!+\!_{\!\!\!\perp}Q\), and we must have \(t\leadsto v\). So \(t\leadsto v\,t^{\prime}\) for some transition \(t^{\prime}\). As the execution of \(\zeta\) discards the summand \(Q\), we also obtain \(\chi\leadsto\zeta\,t^{\prime}\). This motivates Rule 3a. Rule 4a follows by symmetry. In a similar way, Rule 8a covers the case that \(\chi\) and \(\zeta\) both stem from the left component of a parallel composition. It can also happen that \(\chi\) stems form the left component, whereas \(\zeta\) is a synchronisation, involving both components. Thus \(\chi=t|_{\!\!\!\perp}Q\) and \(\zeta=v|_{\!\!\!\perp}w\). For \(\chi\leadsto\zeta\) to hold, it must be that \(t\leadsto v\), whereas the \(w\)-part of \(\zeta\) cannot interfere with \(t\). This yields the Rule 8b. Rule 8c is explained in a similar train from the possibility that \(\zeta\) stems from the left while \(\chi\) is a synchronisation of both components. Rule 9 follows by symmetry. In case both \(\chi\) and \(\zeta\) are synchronisations involving both components, i.e., \(\chi=t|_{\!\!\!\perp}u\) and \(\zeta=v|_{\!\!\!\perp}w\), it must be that \(t\leadsto v\) and \(u\leadsto w\). Now the resulting variant \(\chi^{\prime}\) of \(\chi\) after \(\zeta\) is simply \(t^{\prime}|u^{\prime}\), where \(t\leadsto v\,t^{\prime}\) and \(u\leadsto v\,u^{\prime}\). This underpins Rule 10. If the common source \(O\) of \(\chi\) and \(\zeta\) has the form \(P[f]\), \(\chi\) and \(\zeta\) must have the form \(t[f]\) and \(v[f]\). Whether \(t\) and \(v\) are concurrent is not influenced by the renaming. So \(t\leadsto v\). The variant of \(t\) that remains after doing \(v\) is also not affected by the renaming, so if \(t\leadsto v\,t^{\prime}\) then \(\chi\leadsto\zeta\,t^{\prime}[f]\). The case that \(O=P\backslash L\) is equally trivial. This yields Rules 11a and 11b. In case \(O=\langle X|S\rangle\), \(\chi\) must have the form \(rec_{Act}(X,S,t)\), and \(\zeta\) has the form \(rec_{Act}(X,S,v)\), where \(t\) and \(v\) are enabled in \(\langle S_{X}|S\rangle\). Now \(\chi\leadsto\zeta\) only if \(t\leadsto v\), so \(t\leadsto v\,t^{\prime}\) for some transition \(t^{\prime}\). The recursive call disappears upon executing \(\zeta\), and we obtain \(\chi\leadsto\zeta\,t^{\prime}\). This yields Rule 11c. **Example 3.1**: The programs from Example 1.1 could be represented in CCS as \(P\!:=\langle X|S\rangle\) where \(S=\left\{\begin{array}{l}X=a.X+b.Y\\ Y=a.Y\end{array}\right\}\) and \(Q:=\langle Z|\{Z=a.Z\}\rangle|b.\mathbf{0}\). Here \(a,b\in Act_{CCS}\) are the atomic actions incrementing \(y\) and \(x\). The relation matching \(P\) with \(Q\) and \(\langle Y,S\rangle\) with \(\langle Z|\{Z=a.Z\}\rangle|\mathbf{0}\) is a strong bisimulation. Yet, \(P\) and \(Q\) are not ep-bisimilar, as the rules of Table 2 derive \(u\leadsto_{t_{1}}u\) (cf. Example 2.2) where \(u=\langle Z|\{Z=a.Z\}\rangle|_{\mbox{\tiny{R}}}\overset{b}{\to}\mathbf{0}\) and \(t_{1}=rec_{Act}(Z,\{Z{=a.Z\},\overset{a}{\to}Q\})|_{\mbox{\tiny{L}}}b.\mathbf{0}\). This cannot be matched by \(P\), thus violating condition 2.b. of Definition 2.3. In this paper we will introduce a new De Simone format for transition systems with successors (TSSS). 
We will show that \(\trianglelefteq_{ep}\) is a congruence for all operators (as well as a lean congruence for recursion) in any language that fits this format. Since the rules of Table 2 fit this new De Simone format, it follows that \(\trianglelefteq_{ep}\) is a congruence for the operators of CCS. Informally, the conclusion of a successor rule in this extension of the De Simone format must have the form \(\zeta\leadsto_{\xi}\zeta^{\prime}\) where \(\zeta\), \(\xi\) and \(\zeta^{\prime}\) are _open transitions_, denoted by _transition expressions_ with variables, formally introduced in Section 6. Both \(\zeta\) and \(\xi\) must have a leading operator R and S of the same type, and the same number of arguments. These leading operators must be rule names of the same type. Their arguments are either process variables \(P,Q,...\) or transition variables \(t,u,...\), as determined by the trigger sets \(I_{\mbox{\tiny{R}}}\) and \(I_{\mbox{\tiny{S}}}\) of R and S. These are the sets of indices listing the arguments for which rules R and S have a premise. If the \(i^{\mbox{\tiny{th}}}\) arguments of R and S are both process variables, they must be the same, but for the rest all these variables are different. For a subset \(I\) of \(I_{\mbox{\tiny{R}}}\cap I_{\mbox{\tiny{S}}}\), the rule has premises \(t_{i}\leadsto_{u_{i}}t_{i}^{\prime}\) for \(i\in I\), where \(t_{i}\) and \(u_{i}\) are the \(i^{\mbox{\tiny{th}}}\) arguments of R and S, and \(t_{i}^{\prime}\) is a fresh variable. Finally, the right-hand side of the conclusion may be an arbitrary univariate transition expression, containing no other variables than: * the \(t_{i}^{\prime}\) for \(i\in I\), * a \(t_{i}\) occurring in \(\zeta\), with \(i\notin I_{\mbox{\tiny{S}}}\), * a fresh process variable \(P_{i}^{\prime}\) that must match the target of the transition \(u_{i}\) for \(i\in I_{\mbox{\tiny{S}}}\setminus I\), * _or_ a fresh transition variable whose source matches the target of \(u_{i}\) for \(i\in I_{\mbox{\tiny{S}}}\setminus I\), and * any \(P\) occurring in both \(\zeta\) and \(\xi\), _or_ any fresh transition variable whose source must be \(P\). The rules of Table 2 only feature the first three possibilities; the others occur in the successor relation of ABCdE - see Section 8. ## 4 Structural Operational Semantics Both the De Simone format and our forthcoming extension are based on the syntactic form of the operational rules. In this section, we recapitulate foundational definitions needed later on. Let \(\mathcal{V}_{\mathcal{P}}\) be an infinite set of _process variables_, ranged over by \(X,Y,x,y,x_{i}\), etc. 
\begin{table}
\[
\frac{t\leadsto_{v}t^{\prime}}{t+_{\mathrm{L}}Q\;\leadsto_{v+_{\mathrm{L}}Q}\;t^{\prime}}\;3a
\qquad
\frac{u\leadsto_{w}u^{\prime}}{P+_{\mathrm{R}}u\;\leadsto_{P+_{\mathrm{R}}w}\;u^{\prime}}\;4a
\qquad
\frac{}{t|_{\mathrm{L}}Q\;\leadsto_{P|_{\mathrm{R}}w}\;t|_{\mathrm{L}}Q^{\prime}}\;7a
\qquad
\frac{}{P|_{\mathrm{R}}u\;\leadsto_{v|_{\mathrm{L}}Q}\;P^{\prime}|_{\mathrm{R}}u}\;7b
\]
\[
\frac{t\leadsto_{v}t^{\prime}}{t|_{\mathrm{L}}Q\;\leadsto_{v|_{\mathrm{L}}Q}\;t^{\prime}|_{\mathrm{L}}Q}\;8a
\qquad
\frac{t\leadsto_{v}t^{\prime}}{t|_{\mathrm{L}}Q\;\leadsto_{v|_{\mathrm{C}}w}\;t^{\prime}|_{\mathrm{L}}Q^{\prime}}\;8b
\qquad
\frac{t\leadsto_{v}t^{\prime}}{t|_{\mathrm{C}}u\;\leadsto_{v|_{\mathrm{L}}Q}\;t^{\prime}|_{\mathrm{C}}u}\;8c
\]
\[
\frac{u\leadsto_{w}u^{\prime}}{P|_{\mathrm{R}}u\;\leadsto_{P|_{\mathrm{R}}w}\;P|_{\mathrm{R}}u^{\prime}}\;9a
\qquad
\frac{u\leadsto_{w}u^{\prime}}{P|_{\mathrm{R}}u\;\leadsto_{v|_{\mathrm{C}}w}\;P^{\prime}|_{\mathrm{R}}u^{\prime}}\;9b
\qquad
\frac{u\leadsto_{w}u^{\prime}}{t|_{\mathrm{C}}u\;\leadsto_{P|_{\mathrm{R}}w}\;t|_{\mathrm{C}}u^{\prime}}\;9c
\qquad
\frac{t\leadsto_{v}t^{\prime}\quad u\leadsto_{w}u^{\prime}}{t|_{\mathrm{C}}u\;\leadsto_{v|_{\mathrm{C}}w}\;t^{\prime}|_{\mathrm{C}}u^{\prime}}\;10
\]
\[
\frac{t\leadsto_{v}t^{\prime}}{t\backslash L\;\leadsto_{v\backslash L}\;t^{\prime}\backslash L}\;11a
\qquad
\frac{t\leadsto_{v}t^{\prime}}{t[f]\;\leadsto_{v[f]}\;t^{\prime}[f]}\;11b
\qquad
\frac{t\leadsto_{v}t^{\prime}}{rec_{Act}(X,S,t)\;\leadsto_{rec_{Act}(X,S,v)}\;t^{\prime}}\;11c
\]

Table 2: Successor rules for CCS
\end{table}

**Definition 4.1** (Process Expressions [9]): An _operator declaration_ is a pair \((Op,n)\) of an _operator symbol_ \(Op\notin\mathcal{V}_{\mathcal{P}}\) and an _arity_ \(n\in\mathbb{N}\). An operator declaration \((c,0)\) is also called a _constant declaration_. A _process signature_ is a set of operator declarations. The set \(\mathbb{P}^{\,r}(\Sigma)\) of _process expressions_ over a process signature \(\Sigma\) is defined inductively by:

* \(\mathcal{V}_{\mathcal{P}}\subseteq\mathbb{P}^{\,r}(\Sigma)\),
* if \((Op,n)\in\Sigma\) and \(p_{1},\ldots,p_{n}\in\mathbb{P}^{\,r}(\Sigma)\) then \(Op(p_{1},\ldots,p_{n})\in\mathbb{P}^{\,r}(\Sigma)\), and
* if \(V_{S}\subseteq\mathcal{V}_{\mathcal{P}}\), \(S:V_{S}\rightarrow\mathbb{P}^{\,r}(\Sigma)\) and \(X\in V_{S}\), then \(\langle X|S\rangle\in\mathbb{P}^{\,r}(\Sigma)\).

A process expression \(c()\) is abbreviated as \(c\) and is also called a _constant_. An expression \(\langle X|S\rangle\) as appears in the last clause is called a _recursive call_, and the function \(S\) therein is called a _recursive specification_. It is often displayed as \(\{X=S_{X}\mid X\in V_{S}\}\). Therefore, for a recursive specification \(S\), \(V_{S}\) denotes the domain of \(S\) and \(S_{X}\) represents \(S(X)\) when \(X\in V_{S}\). Each expression \(S_{Y}\) for \(Y\in V_{S}\) counts as a subexpression of \(\langle X|S\rangle\). An occurrence of a process variable \(y\) in an expression \(p\) is _free_ if it does not occur in a subexpression of the form \(\langle X|S\rangle\) with \(y\in V_{S}\). For an expression \(p\), \(\mathit{var}(p)\) denotes the set of process variables having at least one free occurrence in \(p\). An expression is _closed_ if it contains no free occurrences of variables. Let \(\mathbb{P}^{\,r}(\Sigma)\) be the set of closed process expressions over \(\Sigma\).

**Definition 4.2** (Substitution): A _\(\Sigma\)-substitution_ \(\sigma\) is a partial function from \(\mathcal{V}_{\mathcal{P}}\) to \(\mathbb{P}^{\,r}(\Sigma)\).
It is _closed_ if it is a total function from \(\mathcal{V}_{\mathcal{P}}\) to \(\mathbb{P}^{\,r}(\Sigma)\). If \(p\in\mathbb{P}^{\,r}(\Sigma)\) and \(\sigma\) a \(\Sigma\)-substitution, then \(p[\sigma]\) denotes the expression obtained from \(p\) by replacing, for \(x\) in the domain of \(\sigma\), every free occurrence of \(x\) in \(p\) by \(\sigma(x)\), while renaming bound process variables if necessary to prevent name-clashes. In that case \(p[\sigma]\) is called a _substitution instance_ of \(p\). A substitution instance \(p[\sigma]\) where \(\sigma\) is given by \(\sigma(x_{i})=q_{i}\) for \(i\in I\) is denoted as \(p[q_{i}/x_{i}]_{i\in I}\), and for \(S\) a recursive specification \(\langle p|S\rangle\) abbreviates \(p[\langle Y|S\rangle/Y]_{Y\in V_{S}}\). These notions, including "free" and "closed", extend to syntactic objects containing expressions, with the understanding that such an object is a substitution instance of another one if the same substitution has been applied to each of its constituent expressions. We assume fixed but arbitrary sets \(\mathcal{L}\) and \(\mathcal{N}\) of _transition labels_ and _rule names_. **Definition 4.3** (Transition System Specification [17]): Let \(\Sigma\) be a process signature. A _\(\Sigma\)-(transition) literal_ is an expression \(p\stackrel{{ a}}{{\longrightarrow}}q\) with \(p,q\in\mathbb{P}^{\,r}(\Sigma)\) and \(a\!\in\!\mathcal{L}\). A _transition rule_ over \(\Sigma\) is an expression of the form \(\frac{H}{\lambda}\) with \(H\) a finite list of \(\Sigma\)-literals (the _premises_ of the transition rule) and \(\lambda\) a \(\Sigma\)-literal (the _conclusion_). A _transition system specification (TSS)_ is a tuple \((\Sigma,\mathcal{R},\mathbb{N})\) with \(\mathcal{R}\) a set of transition rules over \(\Sigma\), and \(\mathbb{N}:\mathcal{R}\rightarrow\mathcal{N}\) a (not necessarily injective) _rule-naming function_, that provides each rule \(r\in\mathcal{R}\) with a name \(\mathbb{N}(r)\). **Definition 4.4** (Proof): Assume literals, rules, substitution instances and rule-naming. A _proof_ of a literal \(\lambda\) from a set \(\mathcal{R}\) of rules is a well-founded, upwardly branching, ordered tree where nodes are labelled by pairs \((\mu,\mathbb{r})\) of a literal \(\mu\) and a rule name \(\mathbb{R}\), such that * the root is labelled by a pair \((\lambda,\mathbb{s})\), and * if \((\mu,\mathbb{r})\) is the label of a node and \((\mu_{1},\mathbb{r}_{1}),\ldots,(\mu_{n},\mathbb{r}_{n})\) is the list of labels of this node's children then \(\frac{\mu_{1},\ldots,\mu_{n}}{\mu}\) is a substitution instance of a rule in \(\mathcal{R}\) with name \(\mathbb{R}\). **Definition 4.5** (Associated LTS [13]): The _associated LTS_ of a TSS \((\Sigma,\mathcal{R},\mathbb{N})\) is the LTS \((S,\mathit{Tr},\mathit{source},\)\(\mathit{target},\ell)\) with \(S\coloneqq\mathbb{P}^{\,r}(\Sigma)\) and \(\mathit{Tr}\) the collection of proofs \(\pi\) of closed \(\Sigma\)-literals \(p\stackrel{{ a}}{{\longrightarrow}}q\) from \(\mathcal{R}\), where \(\mathit{source}(\pi)=p\), \(\ell(\pi)=a\) and \(\mathit{target}(\pi)=q\). Above we deviate from the standard treatment of structural operational semantics [17, 9] on four counts. Here we employ CCS to motivate those design decisions. In Definition 4.5, the transitions \(\mathit{Tr}\) are taken to be proofs of closed literals \(p\stackrel{{ a}}{{\longrightarrow}}q\) rather than such literals themselves. 
This is because there can be multiple \(a\)-transitions from \(p\) to \(q\) that need to be distinguished when taking the concurrency relation between transitions into account. For example, if \(p:=\langle X|\{X=a.X+c.X\}\rangle\) and \(q:=\langle Y|\{Y=a.Y\}\rangle\) then \(p|q\) has three outgoing transitions:

\[
\frac{\;\dfrac{\dfrac{\dfrac{}{a.p\xrightarrow{a}p}}{a.p+c.p\xrightarrow{a}p}}{p\xrightarrow{a}p}\;}{p|q\xrightarrow{a}p|q}
\qquad\quad
\frac{\;\dfrac{\dfrac{\dfrac{}{c.p\xrightarrow{c}p}}{a.p+c.p\xrightarrow{c}p}}{p\xrightarrow{c}p}\;}{p|q\xrightarrow{c}p|q}
\qquad\quad
\frac{\;\dfrac{\dfrac{}{a.q\xrightarrow{a}q}}{q\xrightarrow{a}q}\;}{p|q\xrightarrow{a}p|q}
\]

The first and the third of these proofs derive the same closed literal \(p|q\xrightarrow{a}p|q\), yet they need to be kept apart: the \(a\)-transition stemming from \(q\) is concurrent with the \(c\)-transition of \(p|q\), whereas the \(a\)-transition stemming from \(p\) is not.
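To make the role of proofs as transitions concrete, here is a minimal Haskell sketch (not part of the paper; all names are invented for illustration) of literals and proof trees in the sense of Definitions 4.4 and 4.5, together with the two distinct proofs of the literal \(p|q\xrightarrow{a}p|q\) discussed above.

```haskell
-- Minimal sketch of proofs-as-transitions (illustrative names only).
-- A literal is a triple  source --label--> target;  a proof is a tree whose
-- nodes carry a literal and the name of the rule instance deriving it from
-- the literals of its children (Definition 4.4).  The transition denoted by
-- a proof is read off its root (Definition 4.5).

type RuleName = String

data Literal = Literal { src :: String, lbl :: Char, tgt :: String }
  deriving (Eq, Show)

data Proof = Node Literal RuleName [Proof]
  deriving (Eq, Show)

source, target :: Proof -> String
source (Node l _ _) = src l
target (Node l _ _) = tgt l

-- Two different proofs of the same closed literal  p|q --a--> p|q :
-- one derived from the a-step of the left component p, one from the right
-- component q.  As literals they coincide; as proofs they differ.
leftA, rightA :: Proof
leftA  = Node (Literal "p|q" 'a' "p|q") "|L"
           [ Node (Literal "p" 'a' "p") "rec"
               [ Node (Literal "a.p+c.p" 'a' "p") "+L"
                   [ Node (Literal "a.p" 'a' "p") "a" [] ] ] ]
rightA = Node (Literal "p|q" 'a' "p|q") "|R"
           [ Node (Literal "q" 'a' "q") "rec"
               [ Node (Literal "a.q" 'a' "q") "a" [] ] ]

sameLiteral :: Bool
sameLiteral = let root (Node l _ _) = l in root leftA == root rightA  -- True
```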
c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p+c.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p+c.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p+c.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p+c.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p+c.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p+c.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a.p\stackrel{{ a}}{{\longrightarrow}}p \stackrel{{ c}}{{\longrightarrow}}}{{\longrightarrow}}\end{array}}{ \begin{array}{cc}\infer{a. **Definition 5.1** (De Simone Format): A TSS \((\Sigma,\mathcal{R},\textsc{n})\) is in _De Simone format_ if for every recursive call \(\langle X|S\rangle\) and every \(\alpha\in Act\) and \(\ell\in\mathcal{L}\backslash Act\), it has transition rules \[\frac{\langle S_{X}|S\rangle\stackrel{{\alpha}}{{ \longrightarrow}}y}{\langle X|S\rangle\stackrel{{\alpha}}{{ \longrightarrow}}y}\ rec_{Act}\qquad\text{and}\qquad\frac{\langle S_{X}|S \rangle\stackrel{{\ell}}{{\longrightarrow}}y}{\langle X|S \rangle\stackrel{{\ell}}{{\longrightarrow}}\langle X|S\rangle}\ rec_{In}\quad \text{for some}\quad y\notin var(\langle S_{X}|S\rangle),\] and each of its other transition rules (_De Simone rules_) has the form \[\frac{\{x_{i}\stackrel{{ a_{i}}}{{\longrightarrow}}y_{i}\mid i \in I\}}{Op(x_{1},\ldots,x_{n})\stackrel{{ a}}{{\longrightarrow}}q}\] where \((Op,n)\in\Sigma\), \(I\subseteq\{1,\ldots,n\}\), \(a,a_{i}\in\mathcal{L}\), \(x_{i}\) (for \(1\leq i\leq n\)) and \(y_{i}\) (for \(i\in I\)) are pairwise distinct process variables, and \(q\) is a univariate process expression containing no other free process variables than \(x_{i}\) (\(1\leq i\leq n\wedge i\notin I\)) and \(y_{i}\) (\(i\in I\)), having the properties that * each subexpression of the form \(\langle X|S\rangle\) is closed, and * if \(a\in\mathcal{L}\backslash Act\) then \(a_{i}\in\mathcal{L}\backslash Act\) (\(i\in I\)) and \(q=Op(z_{1},\ldots,z_{n})\), where \(z_{i}:=\begin{cases}y_{i}&\text{if }i\in I\\ x_{i}&\text{otherwise}.\end{cases}\) Here _univariate_ means that each variable has at most one free occurrence in it. The last clause above guarantees that for any indicator transition \(t\), one with \(\ell(t)\in\mathcal{L}\backslash Act\), we have \(target(t)=source(t)\). For a De Simone rule of the above form, \(n\) is the _arity_, \((Op,n)\) is the _type_, \(a\) is the _label_, \(q\) is the _target_, \(I\) is the _trigger set_ and the tuple \((\ell_{i},\ldots,\ell_{n})\) with \(\ell_{i}=a_{i}\) if \(i\in I\) and \(\ell_{i}=*\) otherwise, is the _trigger_. Transition rules in the first two clauses are called _recursion rules_. 
We also require that if \(\textsc{n}(r)=\textsc{n}(r^{\prime})\) for two different De Simone rules \(r,r^{\prime}\in\mathcal{R}\), then \(r,r^{\prime}\) have the same type, target and trigger set, but different triggers. The names of the recursion rules are as indicated in blue above, and differ from the names of any De Simone rules. Many process description languages encountered in the literature, including CCS [18] as presented in Section 3, SCCS [19], ACP [4] and Meije [3], are De Simone languages. ## 6 Transition System Specifications with Successors In Section 4, a _process_ is denoted by a closed process expression; an open process expression may contain variables, which stand for as-of-yet unspecified subprocesses. Here we will do the same for transition expressions with variables. However, in this paper a transition is defined as a proof of a literal \(p\stackrel{{ a}}{{\longrightarrow}}q\) from the operational rules of a language. Elsewhere, a transition is often defined as a provable literal \(p\stackrel{{ a}}{{\longrightarrow}}q\), but here we need to distinguish transitions based on these proofs, as this influences whether two transitions are concurrent. It turns out to be convenient to introduce an _open proof_ of a literal as the semantic interpretation of an open transition expression. It is simply a proof in which certain subproofs are replaced by proof variables. **Definition 6.1** (Open Proof): Given definitions of literals, rules and substitution instances, and a rule-naming function n, an _open proof_ of a literal \(\lambda\) from a set \(\mathcal{R}\) of rules using a set \(\mathcal{V}\) of _(proof) variables_ is a well-founded, upwardly branching, ordered tree of which the nodes are labelled either by pairs \((\mu,\textsc{r})\) of a literal \(\mu\) and a rule name r, or by pairs \((\mu,px)\) of a literal \(\mu\) and a variable \(px\in\mathcal{V}\) such that * the root is labelled by a pair \((\lambda,\chi)\), * if \((\mu,px)\) is the label of a node then this node has no children, * if two nodes are labelled by \((\mu,px)\) and \((\mu^{\prime},px)\) separately then \(\mu=\mu^{\prime}\), and * if \((\mu,\textsc{R})\) is the label of a node and \((\mu_{1},\chi_{1}),\ldots,(\mu_{n},\chi_{n})\) is the list of labels of this node's children then \(\frac{\mu_{1},\ldots,\mu_{n}}{\mu}\) is a substitution instance of a rule named R. Let \(\mathcal{V}_{\mathcal{T}}\) be an infinite set of _transition variables_, disjoint from \(\mathcal{V}_{\mathcal{P}}\). We will use \(tx,ux,vx,ty,tx_{i}\), etc. to range over \(\mathcal{V}_{\mathcal{T}}\). **Definition 6.2** (Open Transition): Fix a TSS \((\Sigma,\mathcal{R},\textsc{N})\). An _open transition_ is an open proof of a \(\Sigma\)-literal from \(\mathcal{R}\) using \(\mathcal{V}_{\mathcal{T}}\). For an open transition \(\hat{t}\), \(var_{\mathcal{T}}(\hat{t})\) denotes the set of transition variables occurring in \(\hat{t}\); if its root is labelled by \((p\xrightarrow{a}q,\chi)\) then \(src_{\circ}(\hat{t})=p\), \(\ell_{\circ}(\hat{t})=a\) and \(tar_{\circ}(\hat{t})=q\). The _binding function_\(\beta_{\hat{t}}\) of \(\hat{t}\) from \(var_{\mathcal{T}}(\hat{t})\) to \(\Sigma\)-literals is defined by \(\beta_{\hat{t}}(tx)=\mu\) if \(tx\in var_{\mathcal{T}}(\hat{t})\) and \((\mu,tx)\) is the label of a node in \(\hat{t}\). Given an open transition, we refer to the subproofs obtained by deleting the root node as its _direct subtransitions_. All occurrences of transition variables are considered _free_. 
Let \(\mathbb{T}^{r}(\Sigma,\mathcal{R},\textsc{N})\) be the set of open transitions in the TSS \((\Sigma,\mathcal{R},\textsc{N})\) and \(\mathbb{T}^{r}(\Sigma,\mathcal{R},\textsc{N})\) the set of closed open transitions. We have \(\mathbb{T}^{r}(\Sigma,\mathcal{R},\textsc{N})=\textit{Tr}\). Let \(en_{\circ}(p)\) denote \(\{\hat{t}\mid src_{\circ}(\hat{t})=p\}\). **Definition 6.3** (Transition Expression): A _transition declaration_ is a tuple \((\textsc{R},n,I)\) of a _transition constructor_ R, an arity \(n\in\textsc{N}\) and a trigger set \(I\subseteq\{1,\ldots,n\}\). A _transition signature_ is a set of transition declarations. The set \(\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P}},\Sigma_{\mathcal{T}})\) of _transition expressions_ over a process signature \(\Sigma_{\mathcal{P}}\) and a transition signature \(\Sigma_{\mathcal{T}}\) is defined inductively as follows. * if \(tx\in\mathcal{V}_{\mathcal{T}}\) and \(\mu\) is a \(\Sigma\)-literal then \((tx::\mu)\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P}},\Sigma_{\mathcal{T}})\), * if \(E\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P}},\Sigma_{\mathcal{T}})\), \(S:\mathcal{V}_{\mathcal{P}}\rightarrow\mathbb{P}^{r}(\Sigma_{\mathcal{P}})\) and \(X\in\mathrm{dom}(S)\) then \(rec_{Act}(X,S,E),rec_{In}(X,S,E)\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P }},\Sigma_{\mathcal{T}})\), and * if \((\textsc{R},n,I)\in\Sigma_{\mathcal{T}}\), \(E_{i}\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P}},\Sigma_{\mathcal{T}})\) for each \(i\in I\), and \(E_{i}\in\mathbb{P}^{r}(\Sigma_{\mathcal{P}})\) for each \(i\in\{1,\ldots,n\}\setminus I\), then \(\textsc{R}(E_{1},\ldots,E_{n})\in\mathbb{T}\mathbb{E}^{r}(\Sigma_{\mathcal{P }},\Sigma_{\mathcal{T}})\). Given a TSS \((\Sigma,\mathcal{R},\textsc{N})\) in De Simone format, each open transition \(\hat{t}\in\mathbb{T}^{r}(\Sigma,\mathcal{R})\) is named by a unique transition expression in \(\mathbb{T}\mathbb{E}^{r}(\Sigma,\Sigma_{\mathcal{T}})\); here \(\Sigma_{\mathcal{T}}=\{(\textsc{N}(r),n,I)\mid r\in\mathcal{R}\text{ is a De Simone rule, }n\text{ is its arity and }I\text{ is its trigger set}\}\): * if the root of \(\hat{t}\) is labelled by \((\mu,tx)\) where \(tx\in\mathcal{V}_{\mathcal{T}}\) then \(\hat{t}\) is named \((tx::\mu)\), * if the root of \(\hat{t}\) is labelled by \((\langle X|S\rangle\xrightarrow{a}q,\textsc{R})\) where \(a\in Act\) then \(\hat{t}\) is named \(rec_{Act}(X,S,E)\) where \(E\) is the name of the direct subtransition of \(\hat{t}\), * if the root of \(\hat{t}\) is labelled by \((\langle X|S\rangle\xrightarrow{\ell}\langle X|S\rangle,\textsc{R})\) where \(\ell\in\mathcal{L}\backslash Act\) then \(\hat{t}\) is named \(rec_{In}(X,S,E)\) where \(E\) is the name of the direct subtransition of \(\hat{t}\), and * if the root of \(\hat{t}\) is labelled by \((Op(p_{1},\ldots,p_{n})\xrightarrow{a}q,\textsc{R})\) then \(\hat{t}\) is named \(\textsc{R}(E_{1},\ldots,E_{n})\) where, letting \(n\) and \(I\) be the arity and the trigger set of the rules named R, \(E_{i}\) for each \(i\in I\) is the name of the direct subtransitions of \(\hat{t}\) corresponding to the index \(i\), and \(E_{i}=p_{i}\) for each \(i\in\{1,\ldots,n\}\setminus I\). We now see that the first requirement for the rule-naming function in Definition 5.1 ensures that every open transition is uniquely identified by its name. **Definition 6.4** (Transition Substitution): Let \((\Sigma,\mathcal{R},\textsc{N})\) be a TSS. 
A \((\Sigma,\mathcal{R})\)_-substitution_ is a partial function \(\sigma_{\mathcal{T}}:(\mathcal{V}_{\mathcal{P}}\rightarrow\mathbb{P}^{r}( \Sigma))\cup(\mathcal{V}_{\mathcal{T}}\rightarrow\mathbb{T}^{r}(\Sigma, \mathcal{R}))\). It is _closed_ if it is a total function \(\sigma_{\mathcal{T}}:(\mathcal{V}_{\mathcal{P}}\rightarrow\mathbb{P}^{r}( \Sigma))\cup(\mathcal{V}_{\mathcal{T}}\rightarrow\mathbb{T}^{r}(\Sigma, \mathcal{R}))\). A \((\Sigma,\mathcal{R})\)-substitution \(\sigma_{\mathcal{T}}\)_matches_ all process expressions. It matches an open transition \(\hat{t}\) whose binding function is \(\beta_{\hat{t}}\) if for all \((tx,\mu)\in\beta_{\hat{t}}\), \(\sigma_{\mathcal{T}}(tx)\) being defined and \(\mu=(p\xrightarrow{a}q)\) implies \(\ell_{\circ}(\sigma_{\mathcal{T}}(tx))=a\) and \(src_{\circ}(\sigma_{\mathcal{T}}(tx)),tar_{\circ}(\sigma_{\mathcal{T}}(tx))\) being the substitution instances of \(p,q\) respectively by applying \(\sigma_{\mathcal{T}}\!\mid\!\mathcal{V}_{\mathcal{P}}\). If \(E\in\mathbb{P}^{r}(\Sigma)\cup\mathbb{T}^{r}(\Sigma,\mathcal{R})\) and \(\sigma_{\mathcal{T}}\) is a \((\Sigma,\mathcal{R})\)-substitution matching \(E\), then \(E[\sigma_{\mathcal{T}}]\) denotes the expression obtained from \(E\) by replacing, for \(tx\in\mathcal{V}_{\mathcal{T}}\) in the domain of \(\sigma_{\mathcal{T}}\), every subexpression of the form \((tx\mathrel{\mathop{:}\mskip * if \(i\in I\) then \(xe_{i}=(tx_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}x_{i}^{\prime})\) and \(ye_{i}=(ty_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}y_{i}^{ \prime})\), * if \(i\notin I\) then \(xe_{i}\) is either \(x_{i}\) or \((tx_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}x_{i}^{ \prime})\), and \(ye_{i}\) is either \(x_{i}\) or \((ty_{i}::x_{i}\stackrel{{ya_{i}}}{{\longrightarrow}}y_{i}^{ \prime})\), * R and S are \(n\)-ary transition constructors such that the open transitions \(\textsc{R}(xe_{1},\ldots,xe_{n})\), \(\textsc{S}(ye_{1},\ldots,ye_{n})\) and \(\hat{v}\) satisfy \[src_{\circ}(\textsc{R}(xe_{1},\ldots,xe_{n}))=src_{\circ}(\textsc{S}(ye_{1}, \ldots,ye_{n}))\] and \(src_{\circ}(\hat{v})=tar_{\circ}(\textsc{S}(ye_{1},\ldots,ye_{n}))\), * \(\hat{v}\) is univariate and contains no other variable expressions than * \(x_{i}\) or \((tz_{i}::x_{i}\stackrel{{za_{i}}}{{\longrightarrow}}z_{i}^{ \prime})\) (\(1\leq i\leq n\wedge xe_{i}=ye_{i}=x_{i}\)), * \((tx_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}x_{i}^{ \prime})\) (\(1\leq i\leq n\wedge xe_{i}\neq x_{i}\wedge ye_{i}=x_{i}\)), * \(y_{i}^{\prime}\) or \((tz_{i}::y_{i}^{\prime}\stackrel{{za_{i}}}{{\longrightarrow}}z_{i}^ {\prime})\) (\(1\leq i\leq n\wedge i\notin I\wedge ye_{i}\neq x_{i}\)), * \((tz_{i}::y_{i}^{\prime}\stackrel{{za_{i}}}{{\longrightarrow}}z_{i}^ {\prime})\) (\(i\in I\)), and * if \(\ell_{\circ}(\textsc{S}(ye_{1},\ldots,ye_{n}))\in\mathscr{L}\backslash Act\) then for \(i\in I\), \(ya_{i}\in\mathscr{L}\backslash Act\); for \(i\notin I\), either \(xe_{i}=x_{i}\) or \(ye_{i}=x_{i}\); and \(\hat{v}=\textsc{R}(ze_{1},\ldots,ze_{n})\), where \[ze_{i}:=\begin{cases}(tz_{i}::y_{i}^{\prime}\stackrel{{za_{i}}}{{ \longrightarrow}}z_{i}^{\prime})&\text{ if }i\in I\\ &\text{ }xe_{i}&\text{ if }i\notin I\text{ and }ye_{i}=x_{i}\\ &\text{ }y_{i}^{\prime}&\text{ otherwise.}\end{cases}\] The last clause above is simply to ensure that if \(t\leadsto_{u}v\) for an indicator transition \(u\), that is, with \(\ell(u)\notin Act\), then \(v=t\). The other conditions of Definition 7.1 are illustrated by the Venn diagram of Figure 1. 
The outer circle depicts the indices \(1,\ldots,n\) numbering the arguments of the operator \(Op\) that is the common type of the De Simone rules named R and S; \(I_{\text{R}}\) and \(I_{\text{S}}\) are the trigger sets of R and S, respectively. In line with Definition 6.3, \(xe=x_{i}\) for \(i\in I_{\text{R}}\), and \(xe=(tx_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}x_{i}^{ \prime})\) for \(i\notin I_{\text{R}}\). Likewise, \(ye=x_{i}\) for \(i\in I_{\text{S}}\), and \(ye=(ty_{i}::x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}y_{i}^{ \prime})\) for \(i\notin I_{\text{S}}\). So the premises of any rule named S are \(\{x_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}y_{i}^{\prime}\mid i \in I_{\text{S}}\}\). By Definition 5.1 the target of such a rule is a univariate process expression \(q\) with no other variables than \(z_{1},\ldots,z_{n}\), where \(z_{i}:=x_{i}\) for \(i\in I_{\text{S}}\) and \(z_{i}:=y_{i}^{\prime}\) for \(i\notin I_{\text{S}}\). Since \(src_{\circ}(\hat{v})=q\), the transition expression \(\hat{v}\) must be univariate, and have no variables other than \(ze_{i}\) for \(i=1,\ldots,n\), where \(ze_{i}\) is either the process variable \(z_{i}\) or a transition variable expression \((tz_{i}::z_{i}\stackrel{{xa_{i}}}{{\longrightarrow}}z_{i}^{ \prime})\). Figure 1: Inclusion between index sets \(I,I_{\text{R}},I_{\text{S}},I_{\text{T}},I_{G}\subseteq\{1,..,n\}\). One has \((I_{\text{R}}\cap I_{G})\backslash I_{\text{S}}\subseteq I_{\text{T}}\). The annotations \(n_{i}\) show the location of index \(i\) (suppressed for unary operators) of rule \(n\). \(I\) is the set of indices \(i\) for which the above successor rule has a premise. Since this premise involves the transition variables \(tx_{i}\) and \(ty_{i}\), necessarily \(I\subseteq I_{\mathbb{R}}\cap I_{\mathbb{S}}\). Let \(I_{G}\) be the set of indices for which \(ze_{i}\) occurs in \(\hat{v}\), and \(I_{\mathbb{T}}\subseteq I_{G}\) be the subset where \(ze_{i}\) is a transition variable. The conditions on \(\hat{v}\) in Definition 7.1 say that \(I\cap I_{G}\subseteq I_{\mathbb{T}}\) and \((I_{\mathbb{R}}\cap I_{G})\backslash I_{\mathbb{S}}\subseteq I_{\mathbb{T}}\). For \(i\in I\cap I_{G}\), the transition variable \(tz_{i}\) is inherited from the premises of the rule, and for \(i\in(I_{\mathbb{R}}\cap I_{G})\backslash I_{\mathbb{S}}\) the transition variable \(tz_{i}\) is inherited from its source. In order to show that most classes of indices allowed by our format are indeed populated, we indicated the positions of the indices of the rules of CCS and (the forthcoming) ABCdE from Tables 2 and 5. Any De Simone language, including CCS, SCCS, ACP and Meije, can trivially be extended to a language with successors, e.g. by setting \(\mathcal{U}=\emptyset\). This would formalise the assumption that the parallel composition operator of these languages is governed by a _scheduler_, scheduling actions from different components in a nondeterministic way. The choice of \(\mathcal{U}\) from Table 2 instead formalises the assumption that parallel components act independently, up to synchronisations between them. We now present the main theorem of this paper, namely that ep-bisimulation is a lean congruence for all languages that can be presented in De Simone format with successors. A lean congruence preserves equivalence when replacing closed subexpressions of a process by equivalent alternatives. Being a lean congruence implies being a congruence for all operators of the language, but also covers the recursion construct. 
**Theorem 7.2** (Lean Congruence): Ep-bisimulation is a lean congruence for all De Simone languages with successors. Formally, fix a TSSS \((\Sigma,\mathcal{R},\aleph,\mathcal{U})\) in De Simone format. If \(p\in\mathbb{P}^{\,r}(\Sigma)\) and \(\rho,\nu\) are two closed \(\Sigma\)-substitutions with \(\forall x\in\mathcal{V}_{\mathcal{P}}\). \(\rho(x)\leftrightarroweq_{ep}\nu(x)\) then \(p[\rho]\leftrightarroweq_{ep}p[\nu]\). The proof can be found in Appendix A of the full version of this paper [16]. In contrast to a lean congruence, a full congruence would also allow replacement within a recursive specification of subexpressions that may contain recursion variables bound outside of these subexpressions. As our proof is already sophisticated, we consider the proof of full congruence to be beyond the scope of the paper. In fact we are only aware of two papers that provide a proof of full congruence via a rule format [22, 10]. We carefully designed our De Simone format with successors and can state the following conjecture. **Conjecture 7.3**: Ep-bisimulation is a full congruence for all De Simone languages with successors. ## 8 A Larger Case Study: The Process Algebra ABCdE The _Algebra of Broadcast Communication with discards and Emissions_ (ABCdE) stems from [14]. It combines CCS [18], its extension with broadcast communication [21, 12, 11], and its extension with signals [5, 7, 8, 11]. Here, we extend CCS as presented in Section 3. ABCdE is parametrised with sets \(\mathcal{C}\) of _handshake communication names_ as used in CCS, \(\mathcal{B}\) of _broadcast communication names_ and \(\mathcal{S}\) of _signals_. \(\bar{\mathcal{S}}\coloneqq\{\bar{s}\mid s\in\mathcal{S}\}\) is the set of signal emissions. The collections \(\mathcal{B}\)!, \(\mathcal{B}\)? and \(\mathcal{B}\): of _broadcast_, _receive_, and _discard_ actions are given by \(\mathcal{B}\sharp\coloneqq\{b\sharp\mid b\in\mathcal{B}\}\) for \(\sharp\in\{!,?,:\}\). \(Act\coloneqq\mathcal{C}\cup\bar{\mathcal{C}}\cup\{\tau\}\cup\mathcal{B}\!\! \cup\!\mathcal{B}\!\!\cup\!\mathcal{S}\) is the set of _actions_, with \(\tau\) the _internal action_, and \(\mathcal{L}\coloneqq Act\cup\mathcal{B}\!\!\!:\cup\bar{\mathcal{S}}\) is the set of _transition labels_. Complementation extends to \(\mathcal{C}\cup\bar{\mathcal{C}}\cup\mathcal{S}\cup\mathcal{S}\cup\bar{ \mathcal{S}}\) by \(\bar{\epsilon}\coloneqq c\). Below, \(c\) ranges over \(\mathcal{C}\cup\bar{\mathcal{C}}\cup\mathcal{S}\cup\bar{\mathcal{S}}\), \(\eta\) over \(\mathcal{C}\cup\bar{\mathcal{C}}\cup\{\tau\}\cup\mathcal{S}\cup\bar{\mathcal{S}}\), \(\alpha\) over \(Act\), \(\ell\) over \(\mathcal{L}\), \(\gamma\) over \(In\coloneqq\mathcal{L}\backslash Act\), \(b\) over \(\mathcal{B}\), \(\sharp,\sharp_{1},\sharp_{2}\) over \(\{!,?,:\}\), \(s\) over \(\mathcal{S}\), \(S\) over recursive specifications and \(X\) over \(V_{S}\). A _relabelling_ is a function \(f:(\mathcal{C}\to\mathcal{C})\cup(\mathcal{B}\to\mathcal{B})\cup(\mathcal{S} \to\mathcal{S})\); it extends to \(\mathcal{L}\) by \(f(\bar{c})=\overline{f(c)}\), \(f(\tau)\coloneqq\tau\) and \(f(b\sharp)=f(b)\sharp\). Next to the constant and operators of CCS, the process signature \(\Sigma\) of ABCdE features a unary _signalling_ operator \(\underline{-}\)'s for each signal \(s\in\mathcal{S}\). The semantics of ABCdE is given by the transition rule templates displayed in Tables 1 and 3. The latter augments CCS with mechanisms for broadcast communication and signalling. 
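Before examining the rules, a small illustrative sketch (invented Haskell names, not from the paper) of the label alphabet just introduced may help: \(Act\) collects all labels except the broadcast discards \(b{:}\) and the signal emissions \(\bar{s}\), which form the indicator labels in \(\mathcal{L}\setminus Act\).

```haskell
-- Illustrative encoding of the ABCdE transition labels described above:
-- Act = C, C-bar, tau, B!, B?, S  and  L = Act together with B: and S-bar.
type Chan   = String   -- handshake names, C
type BChan  = String   -- broadcast names, B
type Signal = String   -- signals, S

data Label
  = Handshake Chan | CoName Chan | Tau      -- c, c-bar, tau
  | Broadcast BChan | Receive BChan         -- b!, b?
  | Read Signal                             -- reading a signal s
  | Discard BChan                           -- b:    (indicator label)
  | Emit Signal                             -- s-bar (indicator label)
  deriving (Eq, Ord, Show)

-- Act consists of all labels except the indicator labels in L \ Act.
isAction :: Label -> Bool
isAction (Discard _) = False
isAction (Emit _)    = False
isAction _           = True
```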
The rule \(|_{\mathrm{C}}\) presents the core of broadcast communication [21], where any broadcast-action \(b!\) performed by a component in a parallel composition needs to synchronise with either a receive action \(b?\) or a discard action \(b{:}\) of any other component. In order to ensure associativity of the parallel composition, rule \(|_{\mathrm{C}}\) also allows receipt actions of both components (\(\sharp_{1}=\sharp_{2}=?\)), or a receipt and a discard, to be combined into a receipt action. A transition \(p\xrightarrow{b:}q\) is derivable only if \(q=p\). It indicates that the process \(p\) is unable to receive a broadcast communication \(b!\) on channel \(b\). The rule \(b{:}\mathbf{0}\) allows the nil process (inaction) to discard any incoming message; in the same spirit \(b{:}\alpha\) allows a message to be discarded by a process that cannot receive it. A process offering a choice can only perform a discard-action if both choice-options can discard the corresponding broadcast (Rule \(+_{\mathrm{C}}\)). Finally, by rule \(rec_{In}\), a recursively defined process \(\langle X|S\rangle\) can discard a broadcast iff \(\langle S_{X}|S\rangle\) can discard it. The variant \(rec_{In}\) of \(rec_{Act}\) is introduced to maintain the property that \(target(\theta)=source(\theta)\) for any indicator transition \(\theta\).

A signalling process \(p\hat{\ }s\) emits the signal \(s\) to be read by another process. A typical example is a traffic light being red. Signal emission is modelled as an indicator transition, which does not change the state of the emitting process. The first of the signalling rules in Table 3 models the emission \(\bar{s}\) of signal \(s\) to the environment. The environment (processes running in parallel) can read the signal by performing a read action \(s\). This action synchronises with the emission \(\bar{s}\), via rule \(|_{\mathrm{C}}\) of Table 1. Reading a signal does not change the state of the emitter. Rules \(+^{\varepsilon}_{\mathrm{L}}\) and \(+^{\varepsilon}_{\mathrm{R}}\) describe the interaction between signal emission and choice. Relabelling and restriction are handled by rules \(\backslash L\) and \([f]\) of Table 1, respectively. These operators do not prevent the emission of a signal, and emitting signals never changes the state of the emitting process. Signal emission \(p\hat{\ }s\) does not block other transitions of \(p\).

It is trivial to check that the TSS of ABCdE is in De Simone format. The transition signature of ABCdE (Table 4) is completely determined by the set of transition rule templates in Tables 1 and 3. We have united the rules for handshaking and broadcast communication by assigning the same name \(|_{\mathrm{C}}\) to all their instances. When expressing transitions in ABCdE as expressions, we use infix notation for the binary transition constructors, and prefix or postfix notation for unary ones. For example, the transition \(b{:}\mathbf{0}()\) is shortened to \(b{:}\mathbf{0}\), \(\xrightarrow{\alpha}(p)\) to \(\xrightarrow{\alpha}p\), \(\backslash L(t)\) to \(t\backslash L\), and \(|_{\mathrm{L}}(t,p)\) to \(t|_{\mathrm{L}}p\).
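The way rule \(|_{\mathrm{C}}\) combines the annotations \(!\), \(?\) and \({:}\) of its two premises can be summarised as a partial composition function. The following Haskell sketch is an illustration under the reading given above, not the paper's own definition.

```haskell
-- Sketch of the composition of broadcast annotations used in rule |C, as
-- described in the text: a broadcast must meet a receive or a discard of the
-- other component; two receipts, or a receipt and a discard, form a receipt;
-- two discards form a discard; two broadcasts do not synchronise.
data Sharp = Bang | Query | Colon          -- !, ?, :
  deriving (Eq, Show)

compose :: Sharp -> Sharp -> Maybe Sharp
compose Bang  Bang  = Nothing              -- b! with b!: undefined
compose Bang  _     = Just Bang            -- b! with b? or b:  yields b!
compose _     Bang  = Just Bang
compose Colon Colon = Just Colon           -- b: with b:  yields b:
compose _     _     = Just Query           -- remaining cases yield b?
```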
\begin{table}
\[
\frac{}{\mathbf{0}\xrightarrow{b:}\mathbf{0}}\;b{:}\mathbf{0}
\qquad
\frac{}{\alpha.x\xrightarrow{b:}\alpha.x}\;b{:}\alpha\;\;(\alpha\neq b?)
\qquad
\frac{x\xrightarrow{b:}x^{\prime},\;y\xrightarrow{b:}y^{\prime}}{x+y\xrightarrow{b:}x^{\prime}+y^{\prime}}\;+_{\mathrm{C}}
\qquad
\frac{x\xrightarrow{b\sharp_{1}}x^{\prime},\;y\xrightarrow{b\sharp_{2}}y^{\prime}}{x|y\xrightarrow{b\sharp}x^{\prime}|y^{\prime}}\;|_{\mathrm{C}}\;\;(\sharp_{1}\circ\sharp_{2}=\sharp)
\]
with \(\sharp_{1}\circ\sharp_{2}\) given by \(!\circ?=!\circ{:}=?\circ!={:}\circ!=!\), \(\;?\circ?=?\circ{:}={:}\circ?=?\) and \({:}\circ{:}={:}\).

Table 3: Structural operational semantics of ABCdE (additional rules)
\end{table}
Table 5 extends the successor relation of CCS (Table 2) to ABCdE. \(P,Q\) are process variables, \(t,v\) transition variables enabled at \(P\), \(u,w\) transition variables enabled at \(Q\), \(P^{\prime},Q^{\prime}\) the targets of \(v,w\), respectively, and \(t^{\prime},u^{\prime}\) transitions enabled at \(P^{\prime},Q^{\prime}\), respectively. To express those rules in the same way as Definition 7.1, we replace the metavariables \(P\), \(Q\), \(t\), \(u\), etc. with variable expressions as indicated on the right. Here \(xa_{i}\), \(ya_{i}\), \(za_{i}\) are label variables that should be instantiated to match the trigger of the rules and side conditions. As ABCdE does not feature operators of arity \(>2\), the index \(i\) from Definition 7.1 can be 1 or 2 only. To save duplication of rules 8b, 8c, 9b, 9c and 10 we have assigned the same name \(|_{\mathrm{C}}\) to the rules for handshaking and broadcast communication. The intuition of the rules of Table 5 is explained in detail in [14]. In the naming convention for transitions from [14] the sub- and superscripts of the transition constructors \(+\), \(|\) and \(\hat{s}\), and of the recursion construct, were suppressed. In most cases that yields no ambiguity, as the difference between \(|_{\mathrm{L}}\) and \(|_{\mathrm{R}}\), for instance, can be detected by checking which of its two arguments are of type transition versus process.
Moreover, it avoids the duplication in rules 3a, 4a, 5, 6, 11c and 11d. The ambiguity between \(+_{\mathrm{L}}\) and \(+_{\mathrm{L}}^{\varepsilon}\) (or \(+_{\mathrm{R}}\) and \(+_{\mathrm{R}}^{\varepsilon}\)) was in [14] resolved by adorning rules 3-6 with a side condition \(\ell(v)\notin\mathcal{S}\) or \(\ell(w)\notin\mathcal{S}\), and the ambiguity between \(rec_{Act}\) and \(rec_{In}\) (or \(\hat{s}_{Act}\) and \(\hat{s}_{In}\)) by adorning rules 11c and 11d with a side condition \(\ell(v)\in Act\); this is not needed here. It is easy to check that all rules are in the newly introduced De Simone format, except Rule 1. However, this rule can be converted in to a collection of De Simone rules by substituting \(R(xe_{1},\ldots,xe_{n})\) for \(\chi\) and \(S(ye_{1},\ldots,ye_{n})\) for \(\zeta\), adding a premise in the form of \(xe_{i}\leadsto_{ye_{i}}(t_{\mathrm{Z}}\bar{\iota}\bar{\iota}\bar{\iota}\bar{ \iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}))\) if \(i\in I_{\mathrm{R}}\cap I_{\mathrm{S}}\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Constructor & \(\xrightarrow{\alpha}\) & \((\rightarrow^{s})\) & \(b\):\(\mathbf{0}\) & \(b\):\(\alpha\) & \(+_{\mathrm{L}}\) & \(+_{\mathrm{R}}\) & \(+_{\mathrm{C}}\) & \(+_{\mathrm{L}}^{\varepsilon}\) & \(+_{\mathrm{R}}^{\varepsilon}\) & \(|_{\mathrm{L}}\) & \(|_{\mathrm{C}}\) & \(|_{\mathrm{R}}\) & \(\backslash L\) & \([f]\) & \(\hat{s}_{Act}\) & \(\hat{s}_{In}\) \\ \hline Arity & 1 & 1 & 0 & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 1 & 1 & 1 & 1 \\ \hline Trigger Set & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\{1\}\) & \(\{2\}\) & \(\{1,2\}\) & \(\{1\}\) & \(\{2\}\) & \(\{1\}\) & \(\{1,2\}\) & \(\{2\}\) & \(\{1\}\) & \(\{1\}\) & \(\{1\}\) \\ \hline \end{tabular} \end{table} Table 4: Transition signature of ABCdE \begin{table} \begin{tabular}{|c|c|} \hline Meta & Variable Expression \\ \hline \(P\) & \(x_{1}\) \\ \(Q\) & \(x_{2}\) \\ \(P^{\prime}\) & \(y_{1}^{\prime}\) \\ \(Q^{\prime}\) & \(y_{2}^{\prime}\) \\ \(t\) & \((x_{1}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{ \iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{ \iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{ \iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{ \iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{ \iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{ \iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} \bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota}\bar{\iota} 
for each pair of rules of the same type named r and s.3 The various occurrences of 1 in Figure 1 refer to these substitution instances. It follows that \(\trianglelefteq_{ep}\) is a congruence for the operators of ABCdE, as well as a lean congruence for recursion. Footnote 3: This yields \(1^{2}+2\cdot 1+5\cdot 3+3\cdot 2+2\cdot 1=26\) rules of types \((\mathbf{0},0)\), \((\alpha.\_,1)\), \((+,2)\), \((\overset{\ast}{s},1)\) and \(\langle X|S\rangle\) not included in Tables 2 and 5. ## 9 Related Work & Conclusion In this paper we have added a successor relation to the well-known De Simone format. This has allowed us to prove the general result that enabling preserving bisimilarity - a finer semantic equivalence relation than strong bisimulation - is a lean congruence for all languages with a structural operational semantics within this format. We do not cover full congruence yet, as proofs for general recursions are incredibly hard and usually excluded from work justifying semantic equivalences. There is ample work on congruence formats in the literature. Good overview papers are [2, 20]. For system description languages that do not capture time, probability or other useful extensions to standard process algebras, all congruence formats target strong bisimilarity, or some semantic equivalence or preorder that is strictly coarser than strong bisimilarity. As far as we know, the present paper is the first to define a congruence format for a semantic equivalence that is finer than strong bisimilarity. Our congruence format also ensures a lean congruence for recursion. So far, the only papers that provide a rule format yielding a congruence property for recursion are [22] and [10], and both of them target strong bisimilarity. In Sections 3 and 8, we have applied our format to show lean congruence of ep-bisimilarity for the process algebras CCS and ABCdE, respectively. This latter process algebra features broadcast communication and signalling. These two features are representative of issues that may arise elsewhere, and help to ensure that our results are as general as possible. Our congruence format can effortlessly be applied to other calculi like CSP [6] or ACP [4]. In order to evaluate ep-bisimilarity on process algebras like CCS, CSP, ACP or ABCdE, their semantics needs to be given in terms of labelled transition systems extended with a successor relation \(\leadsto\). 
This relation models concurrency between transitions enabled in the same state, and also tells what happens to a transition if a concurrent transition is executed first. Without this extra component, labelled transition systems lack the necessary information to capture liveness properties in the sense explained in the introduction. In a previous paper [14] we already gave such a semantics to ABCdE. The rules for the successor relation presented in [14], displayed in Tables 2 and 5, are now seen to fit our congruence format. We can now also conclude that ep-bisimulation is a lean congruence for ABCdE. In [15, Appendix B] we contemplate a very different approach for defining the relation \(\leadsto\). Following [11], we understand each transition as the synchronisation of a number of elementary particles called _synchrons_. Then relations on synchrons are proposed in terms of which the \(\leadsto\)-relation is defined. It is shown that this leads to the same \(\leadsto\)-relation as the operational approach from [14] and Tables 2 and 5.
2309.11189
Increasing Ticketing Allocative Efficiency Using Marginal Price Auction Theory
Most modern ticketing systems rely on a first-come-first-serve or randomized allocation system to determine the allocation of tickets. Such systems have received considerable backlash in recent years due to their inequitable allotment and allocative inefficiency. We analyze a ticketing protocol based on a variation of the marginal price auction system. Users submit bids to the protocol based on their own utilities. The protocol awards tickets to the highest bidders and determines the final ticket price paid by all bidders using the lowest winning submitted bid. A game-theoretic proof is provided to show that the protocol allocates the tickets more efficiently to the bidders with the highest utilities. We also prove that the protocol extracts more economic rents for the event organizers, and that ticket scalping is not optimal under time-invariant bidder utilities.
Boxiang Fu
2023-09-20T10:23:39Z
http://arxiv.org/abs/2309.11189v1
# Increasing Ticketing Allocative Efficiency Using Marginal Price Auction Theory ###### Abstract Most modern ticketing systems rely on a first-come-first-serve or randomized allocation system to determine the allocation of tickets. Such systems have received considerable backlash in recent years due to their inequitable allotment and allocative inefficiency. We analyze a ticketing protocol based on a variation of the marginal price auction system. Users submit bids to the protocol based on their own utilities. The protocol awards tickets to the highest bidders and determines the final ticket price paid by all bidders using the lowest winning submitted bid. A game-theoretic proof is provided to show that the protocol allocates the tickets more efficiently to the bidders with the highest utilities. We also prove that the protocol extracts more economic rents for the event organizers, and that ticket scalping is not optimal under time-invariant bidder utilities. ## 1 Introduction Current ticket allocation systems used by most major ticketing websites operate on a first-come-first-serve or randomized allocation basis. Such a system has caused considerable backlash over recent years due to its opaque criteria for allocation and the need to compete for who can refresh the ticketing webpage the fastest in the milliseconds after tickets are released for sale (see Ref. [1]). Economically, current systems are also largely inefficient in allocating the tickets to the consumers with the highest utility for the tickets, thereby resulting in a loss in total allocative efficiency. We propose a ticketing protocol based on the marginal price auction system. The protocol allocates the tickets to the bidders with the highest bids and the price paid by all bidders is the lowest winning submitted bid. The protocol provably increases the total allocative efficiency compared to current allocation systems by assigning the tickets to the group of consumers with the highest utility. We also prove that the proposed system increases the economic rents extracted for the seller as well as offering a partial solution to the ticket scalping problem by proving that rational bidders with time-invariant utilities will refrain from buying scalped tickets. ## 2 Protocol Description We begin by briefly summarizing ticketing systems based on a first-come-first-serve protocol (see Ref. [2]). Prior to the tickets going on sale, the seller publicly announces a time at which the bulk of the tickets are available for purchase. Users typically enter the ticketing webpage prior to the tickets going on sale and compete on refreshing the webpage immediately after the ticket sale time commences. Users are then served based on their chronological time-stamp registered with the ticketing webpage. The tickets are progressively sold until the allotment has been exhausted or until all users wishing to purchase have been served. Fig. 1 briefly outlines the timeline of a first-come-first-serve ticketing system. Such a system is inefficient both in terms of time and allocation. Most first-come-first-serve systems require the user to be physically on the webpage waiting in the queue to be able to participate in the allocation, with queuing time possibly taking hours for large events (see Ref. [1] for the case of Taylor Swift's 2023 Australian tour). Economically, the system is also not allocatively efficient in most cases. 
In the common case where demand exceeds supply, the first-come-first-serve system allocates tickets based on chronological ordering, and potentially leaves many buyers with higher utility without an allocation (see Fig. 5 and Example 1). We propose an alternative system for ticket allocation based on the marginal price auction system. The system is a multi-unit generalization of the Vickrey auction system (see Ref. [3]). In a marginal price auction, a fixed number of units of a homogeneous commodity is put forward for auction. Bidders submit bids for the units via a (usually) sealed-bid auction. The auctioneer allocates the units to the bidders with the highest bids until the allocation is exhausted. The price paid on each unit for all bidders is the lowest winning submitted bid (see Fig. 2). The marginal price auction system has some particularly useful game theoretic properties that are explored in the next section. For now, we outline our proposed ticket allocation mechanism. The timeline of our proposed marginal price ticket allocation system is outlined in Fig. 3. Instead of publicly announcing a ticket sale commencement time, the seller announces a time window for bid submission. During this window, bidders are free to submit bids for one or more tickets. Collateral may be taken to ensure the bid is genuine. A price floor may also be optionally implemented by the seller so that only bids exceeding the floor are accepted. Once the time window elapses, bidding is closed and all outstanding bids are entered into the auction. A marginal price auction system ranks the bids according to their monetary amount and allocates tickets to the highest bids until the allocation is exhausted. The price paid is determined by the lowest winning submitted bid. Figure 1: Timeline of Key Steps in a First-Come-First-Serve Ticketing System. Tickets are then released to the successful bidders with a requirement to pay the ticket price within a set timeframe, and any excess collateral or rebates are released back to the bidders. The protocol for the marginal price allocation mechanism is summarized in Fig. 4. After the bidding window is opened, users are first required to validate their identities if they have not already done so. This entails signing up to the protocol so that a unique identifier can be attributed to the user (see Ref. [4]). For users wishing to bid for multiple units, multiple identifiers should be provided by the user. These should ideally be the identities of the individuals hoping to attend the event. Such identification is crucial, as it allows us to treat a user submitting multiple bids as a proxy for multiple natural persons submitting multiple one-unit bids. This allows us to ensure the validity of Theorem 1 and also reduce potential malicious activity such as intentional bidding in large quantities by ticket scalpers to reduce overall available supply. Once user identification is validated, bids may be submitted through the protocol and bids exceeding the price floor are entered into the central database. Ideally, collateral equalling 100% of the bid amount should also be posted concurrently with the bid to ensure the bid is genuine. Figure 2: Ticket Allocation and Pricing in a Marginal Price Auction System. Figure 3: Timeline of Key Steps in a Marginal Price Ticketing System. This may be relaxed to cover less than 100% if additional guarantees can be put in place to ensure the bid is honest (e.g. 
the number of times the user has bid, the number of verified identities associated with the user, etc). This step can also provide useful information to the event organizers to gauge the popularity of the event. If the number of submitted bids greatly exceeds capacity, it could allow organizers to schedule additional shows to increase supply. Next, the event organizers may optionally choose to disclose an indicative final price prior to the end of the bidding window to stimulate bidding. This could be as rudimentary as determining the lowest winning bid of all the submitted bids up until this time. However, since the auction is no longer sealed-bid, its dynamics may be affected and the optimal bidding strategy may not be the one proven in Theorem 1. Once the bidding window elapses, the bidding webpage closes and the protocol no longer accepts incoming bids. The protocol then initiates a marginal price auction on all outstanding bids (see Algorithm 1). Bids are ranked in descending price order and tickets are allocated to the highest bids until the ticket allocation is exhausted, and the price of all tickets is determined by the lowest winning bid. In the case of multiple bids at the lowest winning bid price, a randomized lottery or the chronological order of the bids may be used to allocate the remaining tickets. Figure 4: Description of Steps in a Marginal Price Ticketing Protocol. After the auction is executed, the tickets are released to the successful bidders and any excess collateral is released. If the collateral amount is less than the final ticket price, the bidder may be required to pay the remaining amount within a predetermined settlement period. Optionally, a rebate (monetary and/or non-monetary) could be distributed to the winning bidders after the auction should the final settlement price greatly exceed the original price floor ticket price. Its rationale is explained in the next section.
```
Input: b = (b_1, b_2, ..., b_i, ..., b_N)   ▷ Submitted bids in chronological order
Input: m                                    ▷ Price floor
Input: K                                    ▷ Number of available tickets
if N ≤ K then
    return user identifiers of b and ticket price m
else if N > K then
    c ← DescendingSort(b)
    return user identifiers of c_i with i ≤ K and ticket price c_K
end if
```
**Algorithm 1** Marginal Price Ticket Auction ## 3 Properties and Proofs A marginal price auction system has a number of nice game theoretic properties that allow the system to more efficiently allocate tickets based on the users' individual valuations. In essence, the marginal price auction system allocates the tickets to the group with the highest utility for the tickets, as opposed to a first-come-first-serve allocation in conventional ticketing systems. First, we prove that for rational bidders with demand for only one ticket, the optimal strategy for each bidder is to bid their true value of the item. From this, we show that the marginal price auction system extracts economic rents for the seller that are greater than or equal to the rents extracted from the first-come-first-serve system. We also show that the total valuation of successful bidders from the marginal price auction system is greater than or equal to the total valuation of the successful bidders from the first-come-first-serve system. This increases allocative efficiency and allots the limited number of tickets available to the group of bidders with the highest valuations. 
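To make the allocation rule of Algorithm 1 concrete, the following is a minimal Python sketch of the marginal price ticket auction. The function name, the data layout, and the chronological tie-breaking are illustrative assumptions rather than part of the protocol specification; the numbers at the bottom reproduce Example 1 of this section.

```python
def marginal_price_auction(bids, price_floor, capacity):
    """Allocate tickets by a marginal price auction.

    bids: list of (user_id, amount) in chronological order, already
          filtered so that amount >= price_floor.
    Returns (winning_user_ids, ticket_price).
    """
    n = len(bids)
    if n <= capacity:
        # Demand does not exhaust supply: every eligible bidder pays the floor.
        return [user for user, _ in bids], price_floor

    # Rank bids by amount, breaking ties by chronological order (earlier first).
    ranked = sorted(enumerate(bids), key=lambda t: (-t[1][1], t[0]))
    winners = ranked[:capacity]
    # The final price paid by all winners is the lowest winning submitted bid.
    ticket_price = winners[-1][1][1]
    return [user for _, (user, _) in winners], ticket_price


# Example 1 of the paper: six bidders, three tickets, price floor 20.
bids = [("u1", 35), ("u2", 15), ("u3", 40), ("u4", 20), ("u5", 25), ("u6", 20)]
eligible = [(u, b) for u, b in bids if b >= 20]
print(marginal_price_auction(eligible, price_floor=20, capacity=3))
# -> (['u3', 'u1', 'u5'], 25)
```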
Finally, we show that the system offers a partial solution to the ticket scalping problem by proving that it is not optimal to buy from scalpers in the case of time-invariant bidder valuations. The first theorem is a standard result of marginal price auction systems found in most auction theory textbooks. The exposition used here is based on a variation of the proof found in Ref. [5]. Throughout this section we assume that each bidder has demand for one ticket only. This is a valid assumption in the case of event ticketing problems as one person can only maximally enjoy one unit of the ticket by being physically present at the event. We relax this one-ticket assumption in the protocol implementation description by introducing an identity verification mechanism so that a user submitting multiple bids can be regarded as a proxy for multiple natural persons submitting multiple one-unit bids. For ease of exposition we regard users that bid at exactly the final price as losing the bid (i.e. they are left without a ticket). For physical implementation purposes, a randomization procedure may be used so that all bidders who bid at exactly the final price are entered into a lottery and a subset is randomly chosen to be allocated the remaining tickets. **Theorem 1**.: _In a marginal price auction with single-unit bidder demand, the optimal strategy for all bidders is to bid their own true valuation._ Proof.: Let \(N\) denote the number of bidders in the auction and \(K\) denote the number of available units with \(N>K\). Also, let \(v_{i}\) denote bidder \(i\)'s valuation for one unit of the item, \(b_{i}\) denote bidder \(i\)'s submitted single-unit bid for the item, and let \(\mathbf{c}=(c_{1},c_{2},\ldots c_{i},\ldots c_{N})\) denote the \(N\)-vector of submitted bids by the \(N\) bidders arranged in descending price order (similarly \(\mathbf{c}^{-i}\) is the descending order bid vector without bid \(i\)). The final price set by the marginal price auction is given by the lowest winning bid at \[p=c_{K}\] The payoff to bidder \(i\) is given by the payoff function \[P_{i}(v_{i},\mathbf{c})=\begin{cases}v_{i}-p&\text{if }b_{i}>p\\ 0&\text{otherwise}\end{cases}\] We claim that \(b_{i}=v_{i}\). Suppose by contradiction that \(b_{i}>v_{i}\). We have the following cases: _Case 1_: \(p\geq b_{i}>v_{i}\). Bidder \(i\) loses the auction and receives a payoff of 0 regardless of their action. _Case 2_: \(b_{i}>p\geq v_{i}\). The payoff to bidder \(i\) is \(P_{i}(v_{i},\mathbf{c})=v_{i}-p\leq 0\) and is weakly dominated by the alternate strategy \(\tilde{b_{i}}=v_{i}\) with payoff \(P_{i}(v_{i},\mathbf{c}^{-i},\tilde{b_{i}})=0\). _Case 3_: \(b_{i}>v_{i}>p\). Since both \(b_{i}>p=c_{K}\) and \(v_{i}>p=c_{K}\), it makes no difference bidding at \(b_{i}\) or \(v_{i}\) as it only permutes the location of bidder \(i\)'s bid in the first \(K-1\) places of vector \(\mathbf{c}\). So bidder \(i\) wins the bid regardless and pays the same price \(p=c_{K}\). The three exhaustive cases show that the strategy \(b_{i}>v_{i}\) is weakly dominated by the strategy \(\tilde{b_{i}}=v_{i}\). Next, suppose that \(b_{i}<v_{i}\). We have the following cases: _Case 1_: \(p\geq v_{i}>b_{i}\). Bidder \(i\) loses the auction and receives a payoff of 0 regardless of their action. _Case 2_: \(v_{i}>p\geq b_{i}\). 
The payoff to bidder \(i\) is \(P_{i}(v_{i},\mathbf{c})=0\) and is weakly dominated by the alternate strategy \(\tilde{b_{i}}=v_{i}\) with payoff \[P_{i}(v_{i},\mathbf{c}^{-i},\tilde{b_{i}})=\begin{cases}v_{i}-\tilde{p}&\text{if }v_{i}>c_{K-1}\\ 0&\text{otherwise}\end{cases}\] where \(\tilde{p}=c_{K-1}\) is now the lowest winning bid due to the insertion of bid \(\tilde{b_{i}}\) into the first \(K-1\) slots of \(\mathbf{c}\). _Case 3_: \(v_{i}>b_{i}>p\). As with the previous _Case 3_, bidder \(i\) wins the bid regardless and pays the same price \(p=c_{K}\). Thus, both strategies \(b_{i}>v_{i}\) and \(b_{i}<v_{i}\) are weakly dominated by \(\tilde{b}_{i}=v_{i}\). We conclude that the optimal bidding strategy for bidder \(i\) is to bid their own true valuation. The theorem above is not true in general if bidders have demand for more than one unit (see Ref. [5]). Hence, an identity verification mechanism is needed so that we regard a user submitting multiple bids as proxies for multiple natural persons. The mechanism effectively allows the seller to circumvent determining the pricing of the tickets based on imperfect information and instead rely on the marginal price auction mechanism to allow bidders to reveal their own reservation price through the bidding process. The theorem above guarantees that rational bidders will reveal their own willingness-to-pay during the bidding process and disclose this information to the seller. The mechanism also allows the seller to extract more economic rents than the first-come-first-serve system, which we will prove below. We also impose a price floor which bids must exceed to be successful at being allocated a ticket. This is typical in most modern ticketing systems (it is just the ticket price in first-come-first-serve systems). **Theorem 2**.: _In a marginal price auction with single-unit bidder demand and price floor, the economic rents extracted are greater than or equal to the economic rents extracted from a first-come-first-serve system._ Proof.: Let \(\mathbf{c}=(c_{1},c_{2},\ldots c_{i},\ldots c_{N})\) denote the \(N\)-vector of submitted bids by the \(N\) bidders arranged in descending price order. Let the price floor be denoted by \(\$m\) and \(K\) denote the number of units available to bid with \(N>K\). We have the following cases: _Case 1_: \(c_{K}\geq m\). There is enough demand above the price floor to exhaust the supply of \(K\) units available to bid. The economic rents obtained by the first-come-first-serve system are given by \(mK\) (allocated to the first \(K\) bidders with bids exceeding the price floor in chronological order), while the economic rents obtained by the marginal price auction are given by \(c_{K}K\). Since \(c_{K}\geq m\), we have \(c_{K}K\geq mK\). _Case 2_: \(c_{K}<m\). There is not enough demand above the price floor to exhaust the supply of \(K\) units available to bid. The price floor ensures that only the \(k<K\) bidders with \(c_{1},c_{2},\ldots c_{k}\geq m\) are allocated at price \(\$m\) and \(K-k\) units are left unallocated. The economic rents extracted are \(mk\) for both systems. From the two cases, we conclude that the marginal price auction extracts economic rents that are greater than or equal to those extracted from a first-come-first-serve system. Below we provide two simple examples of the different economic rents extracted by both systems. **Example 1**.: Let the number of bidders be \(N=6\) and the number of units available to bid be \(K=3\). 
Let the price floor be \(m=20\) with the chronological bid vector \(\mathbf{b}=(35,15,40,20,25,20)\). The descending price vector is then \(\mathbf{c}=(40,35,25,20,20,15)\). The first-come-first-serve system sets the ticket price at \(m=20\) and the successful bidders are the 1st, 3rd, and 4th entries in the chronological bid vector \(\mathbf{b}\). The economic rents extracted for the seller is \(3\times 20=60\). The marginal price auction system sets the ticket price at \(c_{3}=25\) and the successful bidders entered into the auction in the chronological order of 1st, 3rd, and 5th. The economic rents extracted for the seller is \(3\times 25=75\). The excess economic rents extracted amounts to $15 and the 4th chronologically-ordered bidder would no longer be successful in the auction. **Example 2**.: Let the number of bidders be \(N=6\) and the number of units available to bid be \(K=3\). Let the price floor be \(m=30\) with the chronological bid vector \(\mathbf{b}=(35,15,40,20,25,20)\). The descending price vector is then \(\mathbf{c}=(40,35,25,20,20,15)\). Both systems set the ticket price at the price floor \(m=30\) and the successful bidders are the 1st and 3rd entries in the chronological bid vector \(\mathbf{b}\). The economic rents extracted for both systems is \(2\times 30=60\). In this scenario, the seller may consider lowering the price floor prior to the bidding window closing to allow enough bids to exceed the price floor so that all units are allocated. The next theorem shows that the marginal price auction system has higher allocative efficiency compared to the first-come-first-serve system (see Fig. 5). **Theorem 3**.: _Assuming single-unit bidder demand and price floor, the sum of the valuations of the successful bidders in a marginal price auction system is greater than or equal to the sum of the valuations of successful bidders in a first-come-first-serve system._ Figure 5: Ticket Allocation of a Marginal Price Auction System (L) and a First-Come-First-Serve System (R) Proof.: Let \(N\) denote the number of bidders in the auction. Let the price floor be denoted by \(\$m\) and \(v_{i}\) denote bidder \(i\)'s valuation for one unit of the item. Let \(\mathbf{b}^{b_{i}\geq m}=(b_{1},b_{2},\ldots b_{i},\ldots b_{k})\) denote the \(k\)-vector of submitted bids that exceed the price floor arranged in chronological order with \(k\leq N\), and let \(\mathbf{c}^{b_{i}\geq m}=(c_{1},c_{2},\ldots c_{i},\ldots c_{k})\) be the sorted \(\mathbf{b}^{b_{i}\geq m}\) vector in descending price order. The marginal price auction system allocates the units based on the leading entries of vector \(\mathbf{c}^{b_{i}\geq m}\) while the first-come-first-serve system allocates units based on the leading entries of vector \(\mathbf{b}^{b_{i}\geq m}\). Since \(\mathbf{c}^{b_{i}\geq m}\) is sorted based on descending price order, the sum of its leading entries is greater than or equal to the sum of the leading entries of \(\mathbf{b}^{b_{i}\geq m}\). From Theorem 1, we know that the optimal bidding strategy is \(b_{i}=v_{i}\). Hence the sum of the bids is equal to the sum of the valuations. Thus, the sum of the valuations of the successful bidders in a marginal price auction system is greater than or equal to the sum of the valuations of successful bidders in a first-come-first-serve system. While the marginal price auction system does improve overall allocative efficiency, it nevertheless erodes consumer surplus and redistributes the surplus to the sellers (see Fig. 6). 
To ensure that consumers still enjoy some benefits of switching to the marginal price ticketing system, a welfare transfer in the form of a rebate and/or excess collateral return mechanism may be implemented if the final settlement price greatly exceeds the original price floor ticket price (see Fig. 7). Non-monetary rebates (e.g. merchandise) may also be distributed if there is perceived value by the bidders. It is important to note that this rebate must be done after the auction has taken place, and should not occur frequently enough so as to change the expectations of the bidders. Changing expectations will result in deviations in the optimal strategy of bidders, and could render Theorem 1 invalid. Finally, we prove that it is not optimal to buy from ticket scalpers in the case of time-invariant bidder valuations. **Theorem 4**.: _If individual valuations are time-invariant, then it is not optimal for bidders to buy from ticket scalpers after an unsuccessful bid._ Proof.: Let \(p\) be the final price set by the marginal price auction and let \(v_{i}\) denote bidder \(i\)'s valuation for one ticket. If bidder \(i\) is unsuccessful in the auction, by Theorem 1, the individual valuation is less than the final price (\(v_{i}<p\)). For economically rational ticket scalpers, the scalping price \(\tilde{p}\) is given by \(\tilde{p}\geq p\). Assuming individual valuations are time-invariant, we have \(v_{i}<p\leq\tilde{p}\). So bidder \(i\)'s valuation of the ticket is below the scalping price, and the bidder is better off not buying the ticket from the scalper. Theorem 4 is particularly relevant for event ticketing purposes as it partially solves the ticket scalping problem. Although not necessarily a negative externality in the economics sense as ticket scalpers do serve a purpose to equilibrate limited supply with demand, it is nevertheless regarded as socially unacceptable and banned in most countries due to the erosion of consumer surplus (for the case of Australia, see Ref. [6]). The marginal price auction mechanism partially solves this as bidders with time-invariant valuations will refrain from purchasing tickets from scalpers. Therefore, individuals that could potentially buy from scalpers are restricted to the subset of bidders that have time-varying valuations, new bidders that did not participate in the original auction and/or bidders who may wish to obtain better seating for the event. ## 4 Simulation We provide a simple simulation of the marginal price auction system summarized in Table 1. The simulation assumes three scenarios covering small, medium, and large events with capacity \(K=100\), \(1000\), and \(10000\) respectively. We also assume a price floor of \(m=100\) with the number of bidders equaling \(1.5\times K\) and have valuations according to the normal distribution N(\(\mu=125\), \(\sigma=25\)) (see Ref. [7]). The emphasis here is not on the assumptions and such analysis is best left to the econometricians. Here we focus on key distinctive features of the marginal price ticket allocation system. The simulation substantiates the proofs of Theorem 2 and Theorem 3. We see an increase in both economic rents extracted and total bidder valuation from the marginal price auction system as compared to the first-come-first-serve system. However, we also see an erosion of consumer surplus due to the need to pay a higher ticket price. It may become socially unacceptable for the ticket price to be substantially above the price floor. 
In such cases, a rebate mechanism should be used to redistribute the surplus back to the consumers. Overall, the simulation shows that total allocative efficiency is increased by using the marginal price auction system. Figure 6: Consumer and Seller Surplus of a Marginal Price Auction System (L) and a First-Come-First-Serve System (R). ## 5 Conclusion In this paper, we have analyzed a ticketing protocol based on the marginal price auction system. During the bidding window, bidders can submit bids for the tickets and post collateral. The protocol allocates the tickets to the highest bids and the ticket price is determined by the lowest winning bid. Tickets are then released to the successful bidders with a requirement to pay within a specified timeframe and collateral is given back to all bidders. We also proved that the mechanism allows for a more allocatively efficient ticketing system. Additionally, more economic rents can be obtained by the event organizers, and we also showed that it is not optimal for bidders to buy from ticket scalpers under time-invariant valuations. Finally, we provided a simple simulation to substantiate our proofs. Figure 7: Welfare Transfer from Sellers to Consumers from Rebate
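As a companion to the simulation of Section 4, here is a minimal Python sketch that reproduces the stated setup (price floor of $100, 1.5×K bidders with valuations drawn from N(125, 25), truthful bids as in Theorem 1) and compares the two systems. The function and variable names are our own, and the resulting figures depend on the random draws, so this illustrates the procedure behind Table 1 rather than its exact numbers.

```python
import random

def simulate(capacity, price_floor=100.0, seed=0):
    """Compare the marginal price auction with first-come-first-serve."""
    rng = random.Random(seed)
    # 1.5 * capacity bidders; valuations ~ N(125, 25); truthful bids (Theorem 1).
    bids = [rng.gauss(125.0, 25.0) for _ in range(int(1.5 * capacity))]
    eligible = [b for b in bids if b >= price_floor]   # kept in chronological order

    # First-come-first-serve: the first `capacity` eligible bidders pay the floor.
    fcfs_winners = eligible[:capacity]
    fcfs_rent = price_floor * len(fcfs_winners)

    # Marginal price auction: highest bids win; all pay the lowest winning bid.
    mpa_winners = sorted(eligible, reverse=True)[:capacity]
    price = mpa_winners[-1] if len(mpa_winners) == capacity else price_floor
    mpa_rent = price * len(mpa_winners)

    return {"ticket_price": round(price, 2),
            "fcfs_rent": round(fcfs_rent, 2),
            "mpa_rent": round(mpa_rent, 2),
            "fcfs_total_valuation": round(sum(fcfs_winners), 2),
            "mpa_total_valuation": round(sum(mpa_winners), 2)}

# Small, medium and large events as in Section 4.
for K in (100, 1000, 10000):
    print(K, simulate(K))
```

In line with Theorems 2 and 3, each run shows the marginal price auction extracting rents and total winner valuations at least as large as the first-come-first-serve benchmark, at the cost of a higher ticket price.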
2309.10297
Approximate ultrahomogeneity in $L_pL_q$ lattices
We show that for $1\leq p, q<\infty$ with $p/q \notin \mathbb{N}$, the doubly atomless separable $L_pL_q$ Banach lattice $L_p(L_q)$ is approximately ultrahomogeneous (AUH) over the class of its finitely generated sublattices. The above is not true when $p/q \in \mathbb{N}$. However, for any $p\neq q$, $L_p(L_q)$ is AUH over the finitely generated lattices in the class $BL_pL_q$ of bands of $L_pL_q$ lattices.
Mary Angelica Tursi
2023-09-19T04:01:52Z
http://arxiv.org/abs/2309.10297v1
# Approximate ultrahomogeneity in \(L_{p}L_{q}\) lattices ###### Abstract. We show that for \(1\leq p,q<\infty\) with \(p/q\notin\mathbb{N}\), the doubly atomless separable \(L_{p}L_{q}\) Banach lattice \(L_{p}(L_{q})\) is approximately ultrahomogeneous (AUH) over the class of its finitely generated sublattices. The above is not true when \(p/q\in\mathbb{N}\). However, for any \(p\neq q\), \(L_{p}(L_{q})\) is AUH over the finitely generated lattices in the class \(BL_{p}L_{q}\) of bands of \(L_{p}L_{q}\) lattices. ## 1. Introduction In this paper, we explore the homogeneity properties (or lack thereof) of the class of \(L_{p}L_{q}\) lattices under various conditions. The following is taken from [6]: A Banach lattice \(X\) is an **abstract \(L_{p}L_{q}\) lattice** if there is a measure space \((\Omega,\Sigma,\mu)\) such that \(X\) can be equipped with an \(L_{\infty}(\Omega)\)-module structure and a map \(N:X\to L_{p}(\Omega)_{+}\) such that
* For all \(\phi\in L_{\infty}(\Omega)_{+}\) and \(x\in X_{+}\), \(\phi\cdot x\geq 0\).
* For all \(\phi\in L_{\infty}(\Omega)\) and \(x\in X\), \(N[\phi\cdot x]=|\phi|N[x]\).
* For all \(x,y\in X\), \(N[x+y]\leq N[x]+N[y]\).
* If \(x\) and \(y\) are disjoint, \(N[x+y]^{q}=N[x]^{q}+N[y]^{q}\), and if \(|x|\leq|y|\), then \(N[x]\leq N[y]\).
* For all \(x\in X\), \(\|x\|=\|N[x]\|_{L_{p}}\).
When the abstract \(L_{p}L_{q}\) space is separable, it has a concrete representation: Suppose \((\Omega,\Sigma,\mu)\) and \((\Omega^{\prime},\Sigma^{\prime},\mu^{\prime})\) are measure spaces. Denote by \(L_{p}(\Omega;L_{q}(\Omega^{\prime}))\) the space of Bochner-measurable functions \(f:\Omega\to L_{q}(\Omega^{\prime})\) such that the function \(N[f]\), with \(N[f](\omega)=\|f(\omega)\|_{q}\) for \(\omega\in\Omega\), is in \(L_{p}(\Omega)\). The class of _bands_ in \(L_{p}L_{q}\) lattices, which we denote by \(BL_{p}L_{q}\), has certain analogous properties to those of \(L_{p}\) spaces, particularly with respect to its isometric theory. \(L_{p}L_{q}\) lattices (and their sublattices) have been extensively studied for their model theoretic properties in [6] and [7]. It turns out that while abstract \(L_{p}L_{q}\) lattices themselves are not axiomatizable, the larger class \(BL_{p}L_{q}\) is axiomatizable, with certain properties corresponding to those of \(L_{p}\) spaces. For instance, it is known that the class of atomless \(L_{p}\) lattices is separably categorical, meaning that there exists one unique atomless separable \(L_{p}\) lattice up to lattice isometry. Correspondingly, the class of _doubly atomless_ \(BL_{p}L_{q}\) lattices is also separably categorical; in particular, up to lattice isometry, \(L_{p}([0,1];L_{q}[0,1])\), which throughout will just be referred to as \(L_{p}(L_{q})\), is the unique separable doubly atomless \(BL_{p}L_{q}\) lattice (see [7, Proposition 2.6]). Additionally, when \(p\neq q\), the lattice isometries of \(L_{p}L_{q}\) lattices can be characterized in a manner echoing those of linear isometries over \(L_{p}\) spaces (with \(p\neq 2\)). Recall from [1, Ch. 11 Theorem 5.1] that a map \(T:L_{p}(0,1)\to L_{p}(0,1)\) is a surjective linear isometry iff \(Tf(t)=h(t)f(\phi(t))\), where \(\phi\) is a measure-preserving transformation and \(h\) is related to \(\phi\) through Radon-Nikodym derivatives. If we want \(T\) to be a _lattice_ isometry as well, then we also have \(h\) positive (and the above characterization will also work for \(p=2\)). 
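A minimal concrete instance of this form (our own illustration, not taken from [1]): the measure-preserving flip \(\phi(t)=1-t\) together with \(h\equiv 1\) yields the surjective lattice isometry
\[Tf(t)=f(1-t),\qquad\|Tf\|_{p}^{p}=\int_{0}^{1}|f(1-t)|^{p}\,dt=\int_{0}^{1}|f(s)|^{p}\,ds=\|f\|_{p}^{p}.\]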
In [3] (for the case of \(q=2\)) and [13], a corresponding characterization of linear isometries is found for spaces of the form \(L_{p}(X;Y)\), for certain \(p\) and Banach spaces \(Y\). In particular, for \(L_{p}L_{q}\) lattices with \(p\neq q\): given \(f\in L_{p}(\Omega;L_{q}(\Omega^{\prime}))\), where \(f\) is understood as a map from \(\Omega\) to \(L_{q}\), any surjective linear isometry \(T\) is of the form \[Tf(x)=S(x)\big{(}e(x)\phi f(x)\big{)},\] where \(\phi\) is a set isomorphism (see [3] and [13] for definitions), \(e\) is a measurable function related to \(\phi\) via Radon-Nikodym derivatives, and \(S\) is a Bochner-measurable function from \(\Omega\) to the space of linear maps from \(L_{q}\) to itself such that for each \(x\), \(S(x)\) is a linear isometry over \(L_{q}\). In [11], Raynaud obtained results on linear subspaces of \(L_{p}L_{q}\) spaces, showing that for \(1\leq q\leq p<\infty\), some \(\ell_{r}\) linearly isomorphically embeds into \(L_{p}(L_{q})\) iff it embeds either into \(L_{p}\) or into \(L_{q}\). However, when \(1\leq p\leq q<\infty\), for \(p\leq r\leq q\), the space \(\ell_{r}\) isometrically embeds as a lattice in \(L_{p}(L_{q})\), and for any \(p\)-convex and \(q\)-concave Orlicz function \(\phi\), the lattice \(L_{\phi}\) embeds lattice isomorphically into \(L_{p}(L_{q})\). Thus, unlike with \(L_{p}\) lattices, whose infinite dimensional sublattices are determined up to lattice isometry by the number of atoms, the sublattices of \(L_{p}L_{q}\) are not so simply classifiable. In fact, the lattice isometry classes behave more like the \(L_{p}\) linear isometries, at least along the positive cone, as is evident in certain equimeasurability results for \(L_{p}L_{q}\) lattices. In [11], Raynaud also obtained the following on uniqueness of measures, a variation of a result which will be relevant in this paper: let \(\alpha>0\), \(\alpha\notin\mathbb{N}\), and suppose two probability measures \(\nu_{1}\) and \(\nu_{2}\) on \(\mathbb{R}_{+}\) are such that for all \(s>0\), \[\int_{0}^{\infty}(t+s)^{\alpha}\ d\nu_{1}(t)=\int_{0}^{\infty}(t+s)^{\alpha}\ d\nu_{2}(t).\] Then \(\nu_{1}=\nu_{2}\). Linde gives an alternate proof of this result in [8]. Various versions and expansions of the above result appear in reference to \(L_{p}\) spaces: for instance, an early result from Rudin [12] generalizes the above to equality of integrals over \(\mathbb{R}^{n}\). Assume that \(\alpha>0\) with \(\alpha\notin 2\mathbb{N}\), and suppose that for all \(\mathbf{v}\in\mathbb{R}^{n}\), \[\int_{\mathbb{R}^{n}}|1+\mathbf{v}\cdot z|^{\alpha}\ d\nu_{1}(z)=\int_{\mathbb{R}^{n}}|1+\mathbf{v}\cdot z|^{\alpha}\ d\nu_{2}(z)\] Then \(\nu_{1}=\nu_{2}\). An application of this result is a similar condition by which one can show that one collection \(\mathbf{f}=(f_{1},...,f_{n})\) of measurable functions is equimeasurable with another collection \(\mathbf{g}=(g_{1},...,g_{n})\), by defining \(\nu_{1}\) and \(\nu_{2}\) as the pushforward measures of \(\mathbf{f}\) and \(\mathbf{g}\). In the case of \(L_{p}\) spaces, if \(f\) and \(g\) are corresponding basic sequences whose pushforward measures satisfy the above for \(\alpha=p\), then they generate isometric Banach spaces. Raynaud's result shows the converse is true for \(\alpha\neq 4,6,8,...\). 
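To sketch how the uniqueness of measures translates into equimeasurability (a standard argument, spelled out here under the hypotheses just quoted): if \(f,g\in L_{p}(0,1)_{+}\) and the assignment \(\mathbf{1}\mapsto\mathbf{1}\), \(f\mapsto g\) extends to a linear isometry between their spans, then for every \(s>0\),
\[\int_{0}^{\infty}(t+s)^{p}\ d\nu_{f}(t)=\int_{0}^{1}\big{(}f(x)+s\big{)}^{p}\ dx=\|f+s\mathbf{1}\|_{p}^{p}=\|g+s\mathbf{1}\|_{p}^{p}=\int_{0}^{\infty}(t+s)^{p}\ d\nu_{g}(t),\]
where \(\nu_{f},\nu_{g}\) are the pushforward (distribution) measures of \(f\) and \(g\); whenever the uniqueness statement applies to \(\alpha=p\), this forces \(\nu_{f}=\nu_{g}\), that is, \(f\) and \(g\) are equimeasurable.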
A similar result in \(L_{p}(L_{q})\) from [7] holds for \(\alpha=p/q\notin\mathbb{N}\) under certain conditions, except instead of equimeasurable \(\mathbf{f}\) and \(\mathbf{g}\), when the \(f_{i}\)'s and \(g_{i}\)'s are mutually disjoint and positive and the map \(f_{i}\mapsto g_{i}\) generates a lattice isometry, \((N[f_{1}],...,N[f_{n}])\) and \((N[g_{1}],...,N[g_{n}])\) are equimeasurable. Recall that a space \(X\) is _approximately ultrahomogeneous_ (AUH) over a class \(\mathcal{G}\) of finitely generated spaces if for all appropriate embeddings \(f_{i}:E\hookrightarrow X\) with \(i=1,2\), for all \(E\in\mathcal{G}\) generated by \(e_{1},...,e_{n}\in E\), and for all \(\varepsilon>0\), there exists an automorphism \(\phi:X\to X\) such that for each \(1\leq j\leq n\), \(\|\phi\circ f_{1}(e_{j})-f_{2}(e_{j})\|<\varepsilon\). In the Banach space setting, the embeddings are linear embeddings and the class of finitely generated spaces are finite dimensional spaces. In the lattice setting, the appropriate maps are isometric lattice embeddings, and one can either choose finite dimensional or finitely generated lattices. The equimeasurability results described above can be used to show an approximate ultrahomogeneity of \(L_{p}([0,1])\) over its finite dimensional linear subspaces only so long as \(p\notin 2\mathbb{N}\) (see [10]). Conversely, for \(p\in 2\mathbb{N}\), \(L_{p}\) is not AUH over its finite dimensional linear subspaces, with counterexamples showing linearly isometric spaces whose corresponding basis elements are not equimeasurable. Alternate methods using continuous Fraïssé theory have since been used to give alternate proofs of linear approximate ultrahomogeneity of \(L_{p}\) for \(p\notin 2\mathbb{N}\) (see [5]) as well as lattice homogeneity of \(L_{p}\) for all \(1\leq p<\infty\) (see [2], [5]). This paper is structured as follows: in Section 2, we first establish basic notation and give a characterization of finite dimensional \(BL_{p}L_{q}\) lattices. This characterization is used in subsequent sections for establishing both equimeasurability and ultrahomogeneity results. In Section 3 we show that when \(p\neq q\), \(L_{p}(L_{q}):=L_{p}([0,1];L_{q}[0,1])\) is AUH over the larger class of finite dimensional (and finitely generated) \(BL_{p}L_{q}\) spaces. This is done by characterizing representations of \(BL_{p}L_{q}\) sublattices of \(L_{p}(L_{q})\) in such a way that induces automorphisms over \(L_{p}(L_{q})\) making the homogeneity diagram commute. The results here play a role in subsequent sections as well. In Section 4, we prove that if in addition \(p/q\notin\mathbb{N}\), \(L_{p}(L_{q})\) is also AUH over the class of its finitely generated sublattices. First, we determine the isometric structure of finite dimensional sublattices of \(L_{p}(L_{q})\) lattices by giving an alternate proof of [7, Proposition 3.2] showing that two sublattices \(E\) and \(F\) of \(L_{p}(L_{q})\), with the \(e_{i}\)'s and \(f_{i}\)'s each forming the basis of atoms, are lattice isometric iff \((N[e_{1}],...,N[e_{n}])\) and \((N[f_{1}],...,N[f_{n}])\) are equimeasurable. The equimeasurability result allows us to reduce a homogeneity diagram involving a finite dimensional sublattice of \(L_{p}(L_{q})\) to one with a finite dimensional \(BL_{p}L_{q}\) lattice, from which, in combination with the results in Section 3, the main result follows. Section 5 considers the case of \(p/q\in\mathbb{N}\). 
Here, we provide a counterexample to equimeasurability in the case that \(p/q\in\mathbb{N}\) and use this counterexample to show that in such cases, \(L_{p}(L_{q})\) is not AUH over the class of its finite dimensional lattices. ## 2. Preliminaries We begin with some basic notation and definitions. Given a measurable set \(A\subseteq\mathbb{R}^{n}\), we let \(\mathbf{1}_{A}\) refer to the characteristic function over \(A\). For a lattice \(X\), let \(B(X)\) be the unit ball, and \(S(X)\) be the unit sphere. For elements \(e_{1},...,e_{n}\) in some lattice \(X\), use bracket notation \(<e_{1},...,e_{n}>_{L}\) to refer to the Banach lattice generated by the elements \(e_{1},...,e_{n}\). In addition, we write \(<e_{1},...,e_{n}>\) without the \(L\) subscript to denote that the generating elements \(e_{i}\) are also mutually disjoint positive elements in the unit sphere. Throughout, we will also use boldface notation to designate a finite sequence of elements: for instance, for \(x_{1},...,x_{n}\in\mathbb{R}\) or \(x_{1},...,x_{n}\in X\) for some lattice \(x\), let \(\mathbf{x}=(x_{1},...,x_{n})\). Use the same notation to denote a sequence of functions over corresponding elements: for example, let \((f_{1},...,f_{n})=\mathbf{f}\), or \((f_{1}(x_{1}),...f_{n}(x_{n}))=\mathbf{f}(\mathbf{x})\), or \((f(x_{1}),...,f(x_{n}))=f(\mathbf{x})\). Finally, for any element \(e\) or tuple \(\mathbf{e}\) of elements in some lattice \(X\), let \(\boldsymbol{\beta}(e)\) and \(\boldsymbol{\beta}(\mathbf{e})\) be the band generated by \(e\) and \(\mathbf{e}\) in \(X\), respectively. Recall that Bochner integrable functions are the norm limits of simple functions \(f:\Omega\to L_{q}(\Omega^{\prime})\), with \(f(\omega)=\sum_{1}^{n}r_{i}\mathbf{1}_{A_{i}}(\omega)\mathbf{1}_{B_{i}}\), where \(\mathbf{1}_{A_{i}}\) and \(\mathbf{1}_{B_{i}}\) are the characteristic functions for \(A_{i}\in\Sigma\) and \(B_{i}\in\Sigma^{\prime}\), respectively. One can also consider \(f\in L_{p}(\Omega;L_{q}(\Omega^{\prime}))\) as a \(\Sigma\otimes\Sigma^{\prime}\)-measurable function such that \[\|f\|=\bigg{(}\int_{\Omega}\|f(\omega)\|_{q}^{p}\ d\omega\bigg{)}^{1/p}=\bigg{(} \int_{\Omega}\bigg{(}\int_{\Omega^{\prime}}|f(\omega,\omega^{\prime})|^{q}\ d \omega^{\prime}\bigg{)}^{p/q}\ d\omega\bigg{)}^{1/p}\] Unlike the more familiar \(L_{p}\) lattices, the class of abstract \(L_{p}L_{q}\) lattices is not itself axiomatizable; however, the slightly more general class \(BL_{p}L_{q}\) of bands in \(L_{p}(L_{q})\) lattices is axiomatizable. Additionally, if \(X\) is a separable \(BL_{p}L_{q}\) lattice, it is lattice isometric to a lattice of the form \[\bigg{(}\bigoplus_{p}L_{p}(\Omega_{n};\ell_{q}^{n})\bigg{)}\oplus_{p}L_{p}( \Omega_{\infty};\ell_{q})\] \[\oplus_{p}\bigg{(}\bigoplus_{p}L_{p}(\Omega_{n}^{\prime};L_{q}(0,1)\oplus_{q} \ell_{q}^{n})\bigg{)}\] \[\oplus_{p}L_{p}(\Omega_{\infty}^{\prime};L_{q}(0,1)\oplus_{q}\ell_{q}).\] \(BL_{p}L_{q}\) lattices may also contain what are called _base disjoint_ elements. \(x\) and \(y\) are base disjoint if \(N[x]\perp N[y]\). Based on this, we call \(x\) a _base atom_ if whenever \(0\leq y,z\leq x\) with \(y\) and \(z\) base disjoint, then either \(N[y]=0\) or \(N[z]=0\). Observe this implies that \(N[x]\) is an atom in \(L_{p}\). Alternatively, we call \(x\) a _fiber atom_ if any disjoint \(0\leq y,z\leq x\) are also base disjoint. Finally, we say that \(X\) is _doubly atomless_ if it contains neither base atoms nor fiber atoms. 
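As a quick illustration of the mixed norm, the map \(N\), and base disjointness introduced above (a simple computation of our own, not taken from the references): for the indicator of a measurable rectangle \(A\times B\subseteq\Omega\times\Omega^{\prime}\),
\[N[\mathbf{1}_{A\times B}](\omega)=\bigg{(}\int_{\Omega^{\prime}}\mathbf{1}_{A\times B}(\omega,\omega^{\prime})^{q}\ d\omega^{\prime}\bigg{)}^{1/q}=\mu^{\prime}(B)^{1/q}\,\mathbf{1}_{A}(\omega),\qquad\|\mathbf{1}_{A\times B}\|=\mu(A)^{1/p}\,\mu^{\prime}(B)^{1/q},\]
and, when the second factors have positive measure, two such indicators are base disjoint precisely when their first factors are essentially disjoint.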
Another representation of \(BL_{p}L_{q}\) involves its finite dimensional subspaces. We say that \(X\) is an \((\mathcal{L}_{p}\mathcal{L}_{q})_{\lambda}\) lattice, with \(\lambda\geq 1\) if for all disjoint \(x_{1},...,x_{n}\in X\) and \(\varepsilon>0\), there is a finite dimensional \(F\) of \(X\) that is \((1+\varepsilon)\)-isometric to a finite dimensional \(BL_{p}L_{q}\) space containing \(x_{1}^{\prime},...,x_{n}^{\prime}\) such that for each \(1\leq i\leq n\), \(\|x_{i}-x_{i}^{\prime}\|<\varepsilon\). Henson and Raynaud proved that in fact, any lattice \(X\) is a \(BL_{p}L_{q}\) space iff \(X\) is \((\mathcal{L}_{p}\mathcal{L}_{q})_{1}\) (see [6]). This equivalence can be used to show the following: **Proposition 2.1**.: _(Henson, Raynaud) If \(X\) is a separable \(BL_{p}L_{q}\) lattice, then it is the inductive limit of finite dimensional \(BL_{p}L_{q}\) lattices._ The latter statement is not explicitly in the statement of Lemma 3.5 in [6], but the proof showing that any \(BL_{p}L_{q}\) lattice is \((\mathcal{L}_{p}\mathcal{L}_{q})_{1}\) was demonstrated by proving the statement itself. Throughout this paper, we refer to this class of finite dimensional \(BL_{p}L_{q}\) lattices as \(B\mathcal{K}_{p,q}\). Observe that if \(E\in B\mathcal{K}_{p,q}\), then it is of the form \(\oplus_{p}(\ell_{q}^{m_{i}})_{1}^{N}\) where for \(1\leq k\leq N\), the atoms \(e(1,1),...,e(k,m_{k})\) generate \(\ell_{q}^{m_{k}}\). **Proposition 2.2**.: _Let \(E\) be a \(B\mathcal{K}_{p,q}\) sublattice of \(L_{p}(L_{q})\) with atoms \(e(k,j)\) as described above. Then the following are true:_ 1. _There exist disjoint measurable_ \(A(k)\subseteq[0,1]\) _such that for all_ \(i\)_,_ \(\operatorname{supp}(e(k,j))\subseteq A(k)\times[0,1]\)_,_ 2. _For all_ \(k\) _and for all_ \(j,j^{\prime}\)_,_ \(N[e(k,j)]=N[e(k,j^{\prime})]\)_._ _Conversely, if \(E\) is a finite dimensional sublattice of \(L_{p}(L_{q})\) satisfying properties (1) and (2), then \(E\) is in \(B\mathcal{K}_{p,q}\)._ In order to prove this theorem, we first need the following lemma: **Lemma 2.3**.: _Let \(0<r<\infty\), with \(r\neq 1\). suppose \(x_{1},...,x_{n}\in L_{r}+\) are such that_ \[\|\sum_{1}^{n}x_{k}\|_{r}^{r}=\sum\|x_{k}\|_{r}^{r}\] _Then the \(x_{i}\)'s are mutually disjoint._ Proof.: If \(r<1\), then \[\int x_{i}(t)^{r}+x_{j}(t)^{r}\ dt=\|x_{i}\|_{r}^{r}+\|x_{j}\|_{r}^{r}=\int(x_{ i}(t)+x_{j}(t))^{r}\ dt \tag{1}\] Now observe that for all \(t\), \((x_{i}(t)+x_{j}(t))^{r}\leq x_{i}(t)^{r}+x_{j}(t)^{r}\), with equality iff either \(x_{i}(t)=0\) or \(x_{j}(t)=0\), so \((x_{i}+x_{j})^{r}-x_{i}^{r}-x_{j}^{r}\in L_{1}+\). Combined with the above equality in line (1), since \(\|(x_{i}+x_{j})^{r}-x_{i}^{r}-x_{j}^{r}\|_{1}=0\), it follows that \(x_{i}(t)^{r}+x_{j}(t)^{r}=(x_{i}(t)+x_{j}(t))^{r}\) a.e., so \(x_{i}\) must be disjoint from \(x_{j}\) when \(i\neq j\). If \(r>1\), proceed as in the proof for \(r<1\), but with the inequalities reversed, given that in this instance \(x_{i}(t)^{r}+x_{j}(t)^{r}\leq(x_{i}(t)+x_{j}(t))^{r}\) for all \(t\). **Remark 2.4**.: The above implies that a \(BL_{p}L_{q}\) lattice \(X\) is base atomless if it contains no bands lattice isometric to some \(L_{p}\) or \(L_{q}\) space. Indeed, if there were a base atom \(e\), then any two \(0\leq x\perp y\leq e\) would have to have \(N\)-norms multiple to each other, so \(<x,y>\) is lattice isometric to \(\ell_{q}^{2}\). Resultantly, the band generated by \(e\) is an \(L_{q}\) space. 
Similarly, if \(e\) is a fiber atom, then any \(0\leq x\perp y\leq e\) is also base disjoint, which implies that the band generated by \(e\) is an \(L_{p}\) space. We now conclude with the proof of Proposition 2.2: Proof of Proposition 2.2.: Observe that for each appropriate pair \((k,j)\), \[\bigg{(}\int_{0}^{1}N[e(k,j)]^{p}(s)\ ds\bigg{)}^{q/p}=\|N^{q}[e(k,j)]\|_{p/q}=1\] For notational ease, let \(E(k,j)=N^{q}[e(k,j)]\). Pick \(j_{1},...,j_{n}\) with each \(j_{k}\leq m_{k}\). Then, by disjointness of the \(e(k,j)\)'s, for all \((a_{k})_{k}\geq 0\) and all \(x=\sum_{k}a_{k}e(k,j_{k})\), \[\|\sum a_{k}e(k,j_{k})\|^{q} =\bigg{(}\int_{0}^{1}\bigg{(}\sum_{k}a_{k}^{q}E(k,j_{k})(s)\bigg{)} ^{p/q}\ ds\bigg{)}^{q/p}\] \[=\bigg{|}\bigg{|}\sum a_{k}^{q}E(k,j_{k})\bigg{|}\bigg{|}_{p/q}.\] Now since the \(e(k,j_{k})\)'s are isometric to \(\ell_{p}\), \[\bigg{|}\bigg{|}\sum a_{k}^{q}E(k,j_{k})\bigg{|}\bigg{|}_{p/q}^{p/q}=\sum_{i} a_{k}^{p}=\sum_{k}(a_{k}^{q})^{p/q}=\sum_{k}\|a_{k}^{q}E(k,j_{k})\|_{p/q}^{p/q}.\] Since the \(E(k,j_{k})\)'s are all positive and \(p\neq q\), by Lemma 2.3, the \(E(k,j_{k})\)'s are disjoint, that is, the \(e(k,j_{k})^{\prime}s\) are base disjoint. For \(1\leq k\leq N\), let \(A(1),...,A(n)\) be mutually disjoint measurable sets each supporting each \(E(k,j)\) for \(1\leq j\leq n_{k}\). Then each \(e(k,j)\) is supported by \(A(k)\times[0,1]\). Now we prove (2). Fix \(k\), Then using similar computations as above, and since the \(e(k,j)\)'s for fixed \(k\) generate \(\ell_{q}^{m_{k}}\): \[\|\sum_{j}a_{j}e(k,j)\|^{q}=\bigg{|}\bigg{|}\sum_{j}a_{j}^{q}E(k,j)\bigg{|} \bigg{|}_{p/q}=\sum_{j}a_{j}^{q}=\sum_{j}a_{j}^{q}\|E(k,j)\|_{p/q}\] By Minkowski's inequality, as \(p\neq q\), equality occurs only when \(E(k,j)(s)=E(k,j^{\prime})(s)\) a.e. for all \(1\leq j,j^{\prime}\leq n_{i}\). To show the converse, it is enough to give the computation: \[\|\sum_{k,j}a(k,j)e(k,j)\| =\bigg{(}\int_{0}^{1}\bigg{[}\int\bigg{(}\sum_{k,j}a(k,j)e(k,j)(s,t)\bigg{)}^{q}\ dt\bigg{]}^{p/q}\ ds\bigg{)}^{1/p}\] \[=\bigg{(}\sum_{k}\int_{0}^{1}\bigg{[}\sum_{j=1}^{n_{i}}|a(k,j)|^ {q}E(k,j)(s)\bigg{]}^{p/q}\ ds\bigg{)}^{1/p}\] \[=\bigg{(}\sum_{k}\bigg{[}\sum_{j=1}^{n_{k}}|a(k,j)|^{q}\bigg{]} ^{p/q}\int_{0}^{1}E(k,1)^{p/q}(s)\ ds\bigg{)}^{1/p}\] \[=\bigg{(}\sum_{k}\bigg{[}\sum_{j=1}^{n_{k}}|a(k,j)|^{q}\bigg{]} ^{p/q}\bigg{)}^{1/p}\] The following results will allow us to reduce homogeneity diagrams to those in which the atoms \(e(k,j)\) of some \(E\in B\mathcal{K}_{p,q}\) are mapped by both embeddings to characteristic functions of measurable \(A(k,j)\subseteq[0,1]^{2}\). In fact, we can further simplify such diagrams to cases where \(E\) is generated by such \(e(k,j)\)'s which additionally are _base-simple_, i.e., \(N[e(k,j)]\) is a simple function. **Proposition 2.5**.: _Let \(1\leq p\neq q<\infty\) and let \(e\in S(L_{p}(L_{q}))_{+}\) be an element with full support over \([0,1]^{2}\). Then there exists a lattice automorphism \(\phi\) from \(L_{p}(L_{q})\) to itself such that \(\phi(\mathbf{1})=e\). Furthermore, \(\phi\) can be constructed to bijectively map both simple functions to simple functions and base-simple functions to base-simple functions._ Proof.: The proof is an expansion of the technique used in Lemma 3.3 from [5]. Given a function \(g(y)\in L_{q_{+}}\), define \(\tilde{g}(y)_{q}\) by \(\tilde{g}(y)_{q}=\int_{0}^{y}g(t)^{q}\ dt\), and for notation, use \(e_{x}(y)=e(x,y)\). Since \(e\) has full support, we may assume that for all \(0\leq x\leq 1\), \(N[e](x)>0\). 
From there, Define \(\phi\) by \[\phi(f)(x,y)=f\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\tilde{e}_{x}(y)_{q}}{N^{q }[e](x)}\bigg{)}e(x,y)\] \(e\geq 0\) and the rest of the function definition is a composition, so \(\phi\) is a lattice homomorphism. To show it is also an isometry, simply compute the norm, using substitution in the appropriate places: \[\|\phi(f)\|^{p}= \int_{0}^{1}\bigg{|}\int_{0}^{1}f\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\tilde{e}_{x}(y)_{q}}{N^{q}[e](x)}\bigg{)}^{q}e(x,y)^{q}\ dy\bigg{|}^{p/ q}\ dx\] \[= \int_{0}^{1}\bigg{|}\int_{0}^{1}f(\widetilde{N[e]}(x)_{p},y)^{q} \ dy\bigg{|}^{p/q}N^{p}[e](x)\ dx\] \[= \int_{0}^{1}N[f](\widetilde{N[e]}(x)_{p})^{p}N^{p}[e](x)\ dx\] \[= \int_{0}^{1}N^{p}[f](x)\ dx=\|f\|^{p}.\] To show surjectivity, let \(B\subseteq[0,1]^{2}\) be a measurable set. Note that any \((x^{\prime},y^{\prime})\in[0,1]^{2}\) can be expressed as \((\widetilde{N[e]}(x)_{p},\frac{\tilde{e}_{x}(y)_{q}}{N^{q}[e](x)})\) for some \(x,y\), since \(\widetilde{N[e]}(x)_{p}\) is an increasing continuous function from \(0\) to \(1\), while \(\tilde{e}_{x}(y)_{q}\) is continuously increasing from \(0\) to \(N^{q}[e](x)\). Thus there exists \(B^{\prime}\) such that \(\phi(\mathbf{1}_{B^{\prime}})=\mathbf{1}_{B}\cdot e\), implying that \(\phi\)'s image is dense in the band generated by \(\boldsymbol{\beta}(e)=L_{p}(L_{q})\) since \(e\) has full support. Therefore, \(\phi\) is also surjective. Finally, \(\phi\) consists of function composition into \(f\) multiplied by \(e\), so if \(e\) and \(f\) are simple, then it has a finite image, so if \(f\) is simple, then the product is also simple, \(\phi\) maps simple functions to simple functions, Conversely, if \(\phi(f)\) is simple, then \(\phi(f)/e\) is also simple. Thus \(f\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\tilde{e}_{x}(y)_{q}}{N[e](x)}\bigg{)}\) has a finite image. It follows that \(f\) itself has a finite image. Using similar reasoning, if \(N[e]\) is simple, then whenever \(N[f]\) is simple, \(N[\phi(f)]\) must also be simple, and likewise the converse is true, since by the computation above, \(N[\phi(f)](x)=N[f](\widetilde{N[e]}(x)_{p})\cdot N[e](x)\). ## 3. Approximate Ultrahomogeneity of \(L_{p}(L_{q})\) over \(BL_{p}L_{q}\) spaces In this section, we show that for any \(1\leq p\neq q<\infty\), \(L_{p}(L_{q})\) is AUH over \(B\mathcal{K}_{p,q}\). Let \(\mathbf{f}:=(f_{1},...,f_{n})\) and \(\mathbf{g}:=(g_{1},...,g_{n})\) be sequences of measurable functions and let \(\lambda\) be a measure in \(\mathbb{R}\). Then we say that \(\mathbf{f}\) and \(\mathbf{g}\) are _equimeasurable_ if for all \(\lambda\)-measurable \(B\subseteq\mathbb{R}^{n}\), \[\lambda(t:\mathbf{f}(t)\in B)=\lambda(t:\mathbf{g}(t)\in B)\] We also say that functions \(\mathbf{f}\) and \(\mathbf{g}\) in \(L_{p}(L_{q})\) are _base-equimeasurable_ if \(N(\mathbf{f})\) and \(N(\mathbf{g})\) are equimeasurable. Lusky's main proof in [10] of linear approximate ultrahomogeneity in \(L_{p}(0,1)\) for \(p\neq 4,6,8,...\) hinges on the equimeasurability of generating elements for two copies of some \(E=<e_{1},...,e_{n}>\) in \(L_{p}\) containing \(\mathbf{1}\). But when \(p=4,6,8,...\), there exist finite dimensional \(E\) such that two linearly isometric copies of \(E\) in \(L_{p}\) do not have equimeasurable corresponding basis elements. However, if homogeneity properties are limited to \(E\) with mutually disjoint basis elements, then \(E\) is linearly isometric to \(\ell_{p}^{n}\), and for all \(1\leq p<\infty\), \(L_{p}\) is AUH over all \(\ell_{p}^{n}\) spaces. 
Note that here, an equimeasurability principle (albeit a trivial one) also applies: Any two copies of \(\ell_{p}^{n}=<e_{1},...,e_{n}>\) into \(L_{p}(0,1)\) with \(\sum_{k}e_{k}=n^{1/p}\cdot\mathbf{1}\) have (trivially) equimeasurable corresponding basis elements to each other as well. In the \(L_{p}(L_{q})\) setting, similar results arise, except rather than comparing corresponding basis elements \(f_{i}(e_{1}),...,f_{i}(e_{n})\) of isometric copies \(f_{i}(E)\) of \(E\), equimeasurability results hold in the \(L_{q}\)-norms \(N[f_{i}(e_{j})]\) under similar conditions, with finite dimensional \(BL_{p}L_{q}\) lattices taking on a role like \(\ell_{p}^{n}\) does in \(L_{p}\) spaces. The following shows that equimeasurability plays a strong role in the approximate ultrahomogeneity of \(L_{p}(L_{q})\) by showing that any automorphism fixing \(\mathbf{1}\) preserves base-equimeasurability for characteristic functions: **Proposition 3.1**.: _Suppose \(p\neq q\), and let \(T:L_{p}(L_{q})\) be a lattice automorphism with \(T(\mathbf{1})=\mathbf{1}\). Then there exists a function \(\phi\in L_{p}(L_{q})\) and a measure preserving transformation \(\psi\) over \(L_{p}\) such that for a.e. \(x\in[0,1]\) \(\phi(x,\cdot)\) is also a measure preserving transformation inducing an isometry over \(L_{q}\), and for all \(f\),_ \[Tf(x,y)=f(\psi(x),\phi(x,y)).\] _Furthermore, for all measurable \(B_{1},...,B_{n}\subseteq[0,1]^{2}\) with \(\mathbf{1}_{B_{i}}\)'s mutually disjoint, \((\mathbf{1}_{B_{1}},...,\mathbf{1}_{B_{n}})\) and \((T\mathbf{1}_{B_{1}},...,T\mathbf{1}_{B_{n}})\) are base-equimeasurable._ Proof.: By the main result in [13], there exists a strongly measurable function \(\Phi:[0,1]\to B(L_{q})\), a set isomorphism \(\Psi\) over \(L_{p}\) (see [13] for a definition on set isomorphisms), and some \(e(x)\in L_{p}\) related to the radon-Nikodym derivative of \(\Psi\) such that \[Tf(x)(y)=\Phi(x)(e(x)\Psi f(x))(y),\] and for a.e. \(x\), \(\Phi(x)\) is a linear isometry over \(L_{q}\). Observe first that \(T\) sends any characteristic function \(1_{A\times[0,1]}\in L_{p}(L_{q})\) constant over \(y\) to characteristic function \(\mathbf{1}_{\psi(A)\times[0,1]}\) for some \(\psi(A)\subseteq[0,1]\), so since \(1_{A\times[0,1]}\in L_{p}(L_{q})\) is constant over \(y\), we can just refer to it as \(\mathbf{1}_{A}\). Also, since \(T\) is a lattice isometry, \(\mu(A)=\mu(\psi(A))\), so \(\psi\) is measure preserving. Finally, observe that \(N[\mathbf{1}_{A}]=\mathbf{1}_{A}\). Thus, for any simple function \(g:=\sum c_{i}\mathbf{1}_{A_{i}}\in L_{p}(L_{q})_{+}\) constant over \(y\) with the \(A_{i}\)'s mutually disjoint, we have \(N[g]=g\), and \(Tg=g^{\prime}\). Then for all \(x\), \[N[g^{\prime}](x)=N[Tg](x)=N[\Phi(x)(eg^{\prime})](x)=e(x)N[\Phi(x)(g^{\prime}) ][x]=|e(x)|N[g^{\prime}](x)\] It follows that \(|e(x)|=1\). We can thus adjust \(\Phi\) by multiplying by \(-1\) where \(e(x)=-1\). Note also that \(\Phi\) acts as a lattice isometry over \(L_{p}\) when restricted to elements constant over \(y\), so by Banach's theorem in [1], the map \(\Phi f(x)\) can be interpreted as \(\Phi(x)(\ f(\psi(x))\ )\), where \(\psi\) is a measure preserving transformation over \([0,1]\) inducing \(\Psi\). By Banach's theorem again for \(\Phi(x)\), this \(\Phi\) can be interpreted by \(\Phi f(x,y)=e^{\prime}(x,y)f(\psi(x),\phi(x,y))\), with \(\phi(x,\cdot)\) a measure preserving transformation for a.e. \(x\). But since \(T\mathbf{1}=\mathbf{1}\), this \(e^{\prime}(x,y)=1\) as well. It remains to prove equimeasurability. 
Let \(\mathbf{1}_{\mathbf{B}}=(\mathbf{1}_{B_{1}},...,\mathbf{1}_{B_{n}})\), and observe that since for a.e. \(x\), \(\phi(x,\cdot)\) is a measure preserving transformation inducing a lattice isometry over \(L_{q}\), it follows that \[N^{q}[\mathbf{1}_{B_{i}}](x)=\mu(y:(x,y)\in B_{i})=\mu(y:(x,\phi(x,y))\in B_{i}),\] while \[N^{q}[T\mathbf{1}_{B_{i}}](x)=\mu(y:(\psi(x),\phi(x,y))\in B_{i})\] \[=\mu(y:(\psi(x),y)\in B_{i})=N^{q}[\mathbf{1}_{B_{i}}](\psi(x)).\] Thus for each \(A=\prod_{i}A_{i}\) with \(A_{i}\subseteq[0,1]\) measurable, since \(\psi\) is also a measure preserving transformation, \[\mu(x:N^{q}[\mathbf{1}_{\mathbf{B}}](x)\in A)=\mu(x:N^{q}[\mathbf{1}_{\mathbf{B}}](\psi(x))\in A)=\mu(x:N^{q}[T\mathbf{1}_{\mathbf{B}}](x)\in A),\] and we are done. The following theorem describes a comparable equimeasurability property of certain copies of \(L_{p}L_{q}\) in \(L_{p}(L_{q})\) for any \(1\leq p\neq q<\infty\): **Theorem 3.2**.: _Let \(1\leq p\neq q<\infty\), and suppose that \(f_{i}:E\to L_{p}(L_{q})\) are lattice embeddings with \(E\in B\mathcal{K}_{p,q}\) generated by a \((k,j)\)-indexed collection of atoms \(\mathbf{e}:=(e(k,j))_{k,j}\) with \(1\leq k\leq n\) and \(1\leq j\leq m_{k}\) as described in Proposition 2.2. Suppose also that \(f_{i}(\sum_{k,j}e(k,j))=\mathbf{1}\cdot\|\sum e(k,j)\|\) for \(i=1,2\). Then \((f_{1}(\mathbf{e}))\) and \((f_{2}(\mathbf{e}))\) are base-equimeasurable._ Proof.: Let \(\eta=\|\sum_{k,j}e(k,j)\|\), and note first that each \(\frac{1}{\eta}f_{i}(e(k,j))\) is of the form \(\mathbf{1}_{A_{i}(k,j)}\) for some measurable \(A_{i}(k,j)\subseteq[0,1]^{2}\). Second, \(N^{q}[\mathbf{1}_{A_{i}(k,j)}](s)=\mu(A_{i}(k,j)(s))\) with \(A_{i}(k,j)(s)\subseteq[0,1]\) measurable for a.e. \(s\), so by Proposition 2.2, for each fixed \(k\) and each \(j,j^{\prime}\), \(\mu(A_{i}(k,j)(s))=\mu(A_{i}(k,j^{\prime})(s))=\frac{1}{m_{k}}\mathbf{1}_{A_{i}(k)}(s)\) with \(A_{i}(1),...,A_{i}(n)\subseteq[0,1]\) almost disjoint. It follows that for each appropriate \(k,j\), \(\frac{1}{\eta}=\frac{1}{m_{k}^{1/q}}\mu(A_{i}(k))^{1/p}\), so \(\mu(A_{i}(k))=\left(\frac{m_{k}^{1/q}}{\eta}\right)^{p}\). To show equimeasurability, observe that for a.e. \(s\), we have \(N^{q}[\mathbf{1}_{A_{i}(k,j)}](s)=\frac{1}{m_{k}}\) iff \(s\in A_{i}(k)\), and \(0\) otherwise. Let \(\mathbf{B}\subseteq\prod_{k}\mathbb{R}^{m_{k}}\) be a measurable set. Note then that any \((k,j)\)-indexed sequence \((N[f_{i}(\mathbf{e})](s))\) is of the form \(\mathbf{c_{s}^{i}}\in\prod_{k}\mathbb{R}^{m_{k}}\) with \(c_{s}^{i}(k,j)=\left(\frac{1}{m_{k}}\right)^{1/q}\) for some unique \(k\), and \(c_{s}^{i}(k,j)=0\) otherwise. It follows then that for some \(I\subseteq\{1,...,n\}\), \[\mu(s:\mathbf{c_{s}^{i}}\in\mathbf{B})=\sum_{k\in I}\mu(A_{i}(k))=\sum_{k\in I}\bigg{(}\frac{m_{k}^{1/q}}{\eta}\bigg{)}^{p}.\] Since the above holds independent of our choice of \(i\), we are done. **Remark 3.3**.: The above proof shows much more than base-equimeasurability for copies of \(B\mathcal{K}_{p,q}\) lattices in \(L_{p}(L_{q})\). Indeed, if \(\mathbf{1}\in E=<(e(k,j))_{k,j}>\) with \(E\in B\mathcal{K}_{p,q}\), then each atom is in fact base-simple, and \(\sum e(k,j)=\eta\cdot\mathbf{1}\) where \(\eta=(\sum_{k}m_{k}^{p/q})^{1/p}\). Furthermore, there exist measurable sets \(A(1),...,A(n)\) partitioning \([0,1]\) with \(\mu(A(k))=\frac{m_{k}^{p/q}}{\eta^{p}}\) such that \(N[e(k,j)]=\frac{\eta}{m_{k}^{1/q}}\mathbf{1}_{A(k)}\). 
Based on this, we can come up with a "canonical" representation of \(E\), with \(e(k,j)\mapsto\eta\cdot\mathbf{1}_{W_{k}\times V_{k,j}}\), where \[W_{k}=\big{[}\sum_{l=1}^{k-1}\mu(A(l)),\sum_{l=1}^{k}\mu(A(l))\big{]}\text{, and }V_{k,j}=\bigg{[}\frac{j-1}{m_{k}},\frac{j}{m_{k}}\bigg{]}.\] This canonical representation will become relevant in later results. Having characterized representations of lattices in \(B\mathcal{K}_{p,q}\), we now move towards proving the AUH result. Before the final proof, we use the following perturbation lemma. **Lemma 3.4**.: _Let \(f:E\to L_{p}(L_{q})\) be a lattice embedding of a lattice \(E=<e_{1},...,e_{n}>\). Then for all \(\varepsilon>0\), there exists an embedding \(g:E\to L_{p}(L_{q})\) such that \(g(E)\) fully supports \(L_{p}(L_{q})\) and \(\|f-g\|<\varepsilon\)._ Proof.: Let \(M_{k}=supp\big{(}N[f(e_{k})]\big{)}\backslash supp\big{(}N[f(\sum_{j=1}^{k-1}e_{j})]\big{)}\). For each \(e_{k}\), we will construct \(\eta_{k}\) disjoint from \(f(E)\) with support in \(M_{k}\times[0,1]\). Let \(M^{\prime}\) be the elements in \([0,1]^{2}\) disjoint from \(f(E)\). Observe that \(M^{\prime}\) can be partitioned into the sets \(M^{\prime}_{k}:=M^{\prime}\cap(M_{k}\times[0,1])\). Let \[\eta_{k}(x,y)=\varepsilon^{1/q}\frac{N[f(e_{k})](x)}{\mu(M^{\prime}_{k}(x))^{1/q}}\mathbf{1}_{M^{\prime}_{k}}(x,y).\] When \(\mu(M^{\prime}_{k}(x))=0\), let \(\eta_{k}(x,y)=0\) as well. Now, let \(g^{\prime}:E\to L_{p}(L_{q})\) be the lattice homomorphism induced by \[g^{\prime}(e_{k})=(1-\varepsilon)^{1/q}f(e_{k})\cdot\mathbf{1}_{M_{k}}+\eta_{k}+f(e_{k})\cdot\mathbf{1}_{M^{c}_{k}}.\] First, we show that \(g^{\prime}\) is an embedding. Observe that for each \(k\), \[N^{q}[g^{\prime}(e_{k})](x)= \int\eta^{q}_{k}(x,y)+(1-\varepsilon)f(e_{k})^{q}(x,y)\ dy\] \[= \int\varepsilon\frac{N^{q}[f(e_{k})](x)}{\mu(M^{\prime}_{k}(x))}\cdot\mathbf{1}_{M^{\prime}_{k}}(x,y)+(1-\varepsilon)f(e_{k})^{q}(x,y)\ dy\] \[= \varepsilon N^{q}[f(e_{k})](x)+(1-\varepsilon)\int f(e_{k})^{q}(x,y)\ dy\] \[= \varepsilon N^{q}[f(e_{k})](x)+(1-\varepsilon)N^{q}[f(e_{k})](x)=N^{q}[f(e_{k})](x).\] It easily follows that \(g^{\prime}(E)\) is in fact isometric to \(f(E)\), and thus to \(E\). Furthermore, for every \(k\), \[\|f(e_{k})-g^{\prime}(e_{k})\|= \|\mathbf{1}_{M_{k}}[(1-(1-\varepsilon)^{1/q})f(e_{k})+\eta_{k}]\|\] \[\leq (1-(1-\varepsilon)^{1/q})+\varepsilon.\] The above can get arbitrarily small. Now, if \(supp(N[f(\sum e_{k})])=[0,1]\), let \(g=g^{\prime}\), and we are done. Otherwise, let \(\tilde{M}=\cup_{k}M_{k}\), and observe that \(\sum g^{\prime}(e_{k})\) fully supports \(L_{p}(\tilde{M};L_{q})\). Observe also that \(L_{p}(L_{q})=L_{p}(\tilde{M};L_{q})\oplus_{p}L_{p}(\tilde{M}^{c};L_{q})\). However, both \(L_{p}(\tilde{M};L_{q})\) and \(L_{p}(\tilde{M}^{c};L_{q})\) are lattice isometric to \(L_{p}(L_{q})\) itself. So there exists an isometric copy of \(E\) fully supporting \(L_{p}(\tilde{M}^{c};L_{q})\). Let \(e^{\prime}_{1},...,e^{\prime}_{n}\in L_{p}(\tilde{M}^{c};L_{q})\) be the corresponding basic atoms of this copy, and let \(g(e_{i})=(1-\varepsilon^{p})^{1/p}g^{\prime}(e_{i})+\varepsilon\cdot e^{\prime}_{i}\). Then for \(x\in E\), \[\|g(x)\|^{p}=(1-\varepsilon^{p})\|g^{\prime}(x)\|^{p}+\varepsilon^{p}\|x\|^{p}=\|x\|^{p}.\] Using similar reasoning as in the definition of \(g^{\prime}\), one also gets \(\|g-g^{\prime}\|<(1-(1-\varepsilon^{p})^{1/p})+\varepsilon\), so \(g\) can also arbitrarily approximate \(f\). 
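Before moving on, here is a small worked instance of the canonical representation in Remark 3.3; the parameter values are chosen purely for illustration and are not used elsewhere. Take \(p=2\), \(q=1\), \(n=2\), \(m_{1}=1\), and \(m_{2}=2\). Then \[\eta=\big{(}\sum_{k}m_{k}^{p/q}\big{)}^{1/p}=(1+4)^{1/2}=\sqrt{5},\qquad\mu(A(1))=\frac{m_{1}^{p/q}}{\eta^{p}}=\frac{1}{5},\qquad\mu(A(2))=\frac{4}{5},\] so \(W_{1}=[0,\frac{1}{5}]\), \(W_{2}=[\frac{1}{5},1]\), \(V_{1,1}=[0,1]\), \(V_{2,1}=[0,\frac{1}{2}]\), and \(V_{2,2}=[\frac{1}{2},1]\). The three rectangles \(W_{k}\times V_{k,j}\) partition \([0,1]^{2}\), each atom \(e(k,j)\mapsto\eta\cdot\mathbf{1}_{W_{k}\times V_{k,j}}\) is normalized (for instance, \(\|\eta\cdot\mathbf{1}_{W_{2}\times V_{2,1}}\|^{2}=\int_{1/5}^{1}\big{(}\int_{0}^{1/2}\sqrt{5}\ dy\big{)}^{2}\ dx=\frac{4}{5}\cdot\frac{5}{4}=1\)), and \(\sum_{k,j}\eta\cdot\mathbf{1}_{W_{k}\times V_{k,j}}=\eta\cdot\mathbf{1}\), in agreement with \(N[e(k,j)]=\frac{\eta}{m_{k}^{1/q}}\mathbf{1}_{A(k)}\). 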
Observe that the lemma above allows us to reduce the approximate homogeneity question down to cases where the copies of a \(B\mathcal{K}_{p,q}\) lattice fully support \(L_{p}(L_{q})\). Combined with Proposition 2.5, we can further reduce the possible scenarios to cases where for each \(i\), \(f_{i}(x)=\mathbf{1}\) for some \(x\in E\). It turns out these reductions are sufficient for constructing a lattice automorphism that makes the homogeneity diagram commute as desired: **Theorem 3.5**.: _Suppose \(1\leq p\neq q<\infty\), and for \(i=1,2\), let \(f_{i}:E\to L_{p}(L_{q})\) be a lattice embedding with \(E:=<(e(k,j))_{k,j}>\in B\mathcal{K}_{p,q}\) and \(1\leq k\leq n\) and \(1\leq j\leq m_{k}\). Suppose also that each \(f_{i}(E)\) fully supports \(L_{p}(L_{q})\). Then there exists a lattice automorphism \(\phi\) over \(L_{p}(L_{q})\) such that \(\phi\circ f_{1}=f_{2}\)._ Proof.: Let \(\eta=\|\sum_{k,j}e(k,j)\|\); by Proposition 2.5, we can assume that for both \(i\)'s, we have \(f_{i}(\sum_{k,j}e(k,j))=\eta\cdot\mathbf{1}\). For notation's sake, let \(e_{i}(k,j):=f_{i}(e(k,j))\). By Proposition 2.2, for each \(i\) there exist mutually disjoint sets \(A_{i}(1),...,A_{i}(n)\) partitioning \([0,1]\) such that for each \(1\leq j\leq m_{k}\), \(supp(N[e_{i}(k,j)])=A_{i}(k)\). In addition, the sets \(A_{i}(k,1),...,A_{i}(k,m_{k})\), where \(A_{i}(k,j):=supp(e_{i}(k,j))\), partition \(A_{i}(k)\times[0,1]\). It follows also from the statements in Remark 3.3 that \(\mu(A_{1}(k))=\mu(A_{2}(k))\) for each \(k\) and \(N^{q}[e_{i}(k,j)](x)=\frac{\eta^{q}}{m_{k}}\mathbf{1}_{A_{i}(k)}(x)\). To prove the theorem, it is enough to generate lattice automorphisms \(\phi^{i}\) mapping each band \(\boldsymbol{\beta}(e_{i}(k,j))\) to a corresponding band \(\boldsymbol{\beta}(\mathbf{1}_{W_{k}\times V_{k,j}})\) where \(W_{k}\) and \(V_{k,j}\) are defined as in Remark 3.3, with \(\mathbf{1}_{A_{i}(k,j)}\mapsto\mathbf{1}_{W_{k}\times V_{k,j}}\). To this end, we make a modified version of the argument in [7, Proposition 2.6] and adopt the notation in Proposition 2.5: construct lattice isometries \(\psi^{i}_{k,j}\) from \(L_{p}(A_{i}(k);L_{q}(V_{k,j}))\) to \(\boldsymbol{\beta}(e_{i}(k,j))\) with \[\psi^{i}_{k,j}(f)(x,y)=f\bigg{(}x,\big{(}\widetilde{\mathbf{1}}_{A_{i}(k,j)}\big{)}_{x}(y)_{q}+\frac{j-1}{m_{k}}\bigg{)}\mathbf{1}_{A_{i}(k,j)}(x,y)\] By similar reasoning as in the proof of Proposition 2.5, \(\psi^{i}_{k,j}\) is a lattice embedding. Surjectivity follows as well. Indeed, since \(N^{q}[\mathbf{1}_{A_{i}(k,j)}](x)=\frac{1}{m_{k}}\), for a.e. \(x\in A_{i}(k)\) the function \(\big{(}\widetilde{\mathbf{1}}_{A_{i}(k,j)}\big{)}_{x}(y)_{q}+\frac{j-1}{m_{k}}\) maps \([0,1]\) continuously onto \(V_{k,j}\) with \(supp(e_{i}(k,j)(x,\cdot))\) mapped a.e. surjectively to \(V_{k,j}\). So \(\psi^{i}_{k,j}\)'s image is dense in \(\boldsymbol{\beta}(e_{i}(k,j))\). Observe that \(\psi^{i}_{k,j}\) also preserves the random norm \(N\) along the base (that is, \(N[f]=N[\psi^{i}_{k,j}(f)]\)). Resultantly, the function \(\psi^{i}_{k}:=\oplus_{j}\psi^{i}_{k,j}\) mapping \(L_{p}(A_{i}(k),L_{q}(0,1))\) to \(\oplus_{j}\boldsymbol{\beta}(e_{i}(k,j))\) is also a surjective lattice isometry. 
Indeed, for \(f=\sum_{1}^{m_{k}}f_{j}\) with \(f_{j}\in L_{p}(A_{i}(k);L_{q}(V_{k,j}))\), one gets \[\|\psi_{k}^{i}(f)\| =\left|\left|N[\sum_{j}\psi_{k,j}^{i}(f_{j})]\right|\right|_{p}=\left|\left|\big{(}\sum_{j}N^{q}[\psi_{k,j}^{i}(f_{j})]\big{)}^{1/q}\right|\right|_{p}\] \[=\left|\left|\big{(}\sum_{j}N^{q}[f_{j}]\big{)}^{1/q}\right|\right|_{p}=\left|\left|N[\sum_{j}f_{j}]\right|\right|_{p}=\|f\|\] Now let \(\psi^{i}=\oplus_{k}\psi_{k}^{i}\), and observe that given \(f=\sum_{1}^{n}f_{k}\) with \(f_{k}\in L_{p}(A_{i}(k),L_{q}(0,1))\), since the \(f_{k}\)'s are base disjoint, we have \[\|\psi^{i}f\|^{p}=\sum_{1}^{n}\|\psi_{k}^{i}f_{k}\|^{p}=\sum_{1}^{n}\|f_{k}\|^{p}=\|f\|^{p}.\] Thus \(\psi^{i}\) is a lattice automorphism over \(L_{p}(L_{q})\) mapping each \(1_{A_{i}(k)\times V_{k,j}}\) to \(\mathbf{1}_{A_{i}(k,j)}\). Use [5, Lemma 3.3] to construct a lattice isometry \(\rho_{i}:L_{p}\to L_{p}\) such that for each \(k\), \(\rho_{i}(\mathbf{1}_{W_{k}})=\mathbf{1}_{A_{i}(k)}\). By [1, Ch. 11 Theorem 5.1] this isometry is induced by a measure preserving transformation \(\bar{\rho}_{i}\) from [0,1] to itself such that \(\rho_{i}(f)(x)=f(\bar{\rho}_{i}(x))\). It is easy to show that \(\rho_{i}\) induces a lattice isometry with \(f(x,y)\mapsto f(\bar{\rho}_{i}(x),y)\). In particular, we have \(N[\rho_{i}f](x)=N[f](\bar{\rho}_{i}(x))\), and \(\rho_{i}(\mathbf{1}_{W_{k}\times V_{k,j}})=\mathbf{1}_{A_{i}(k)\times V_{k,j}}\). Now let \(\phi^{i}=\psi^{i}\circ\rho_{i}\), so that \(\phi^{i}(\mathbf{1}_{W_{k}\times V_{k,j}})=\mathbf{1}_{A_{i}(k,j)}\); the automorphism \(\phi:=\phi^{2}\circ(\phi^{1})^{-1}\) then satisfies \(\phi\circ f_{1}=f_{2}\), and we are done. Using the above, we can now show: **Theorem 3.6**.: _For \(1\leq p\neq q<\infty\), the lattice \(L_{p}(L_{q})\) is AUH for the class \(B\mathcal{K}_{p,q}\)._ Proof.: Let \(f_{i}:E\to L_{p}(L_{q})\) as required, and suppose \(\varepsilon>0\). Use Lemma 3.4 to get copies \(E^{\prime}_{i}\) of \(f_{i}(E)\) fully supporting \(L_{p}(L_{q})\) such that for each atom \(e_{k}\in E\) and corresponding atoms \(e_{k}^{i}\in E^{\prime}_{i}\), we have \(\|f_{i}(e_{k})-e_{k}^{i}\|<\varepsilon/2\). Now use Theorem 3.5 to generate a lattice automorphism \(\phi\) from \(L_{p}(L_{q})\) to itself such that \(\phi(e_{k}^{1})=e_{k}^{2}\). Then \[\|\phi(f_{1}(e_{k}))-f_{2}(e_{k})\|\leq\|\phi(f_{1}(e_{k})-e_{k}^{1})\|+\|e_{k}^{2}-f_{2}(e_{k})\|<\varepsilon.\] **Remark 3.7**.: Observe that the doubly atomless \(L_{p}(L_{q})\) space is unique among separable \(BL_{p}L_{q}\) spaces that are AUH over \(B\mathcal{K}_{p,q}\). Indeed, this follows from the fact that such a space must be doubly atomless to begin with: let \(E\) be a one dimensional space generated by an atom \(e\) and suppose \(X\) is not doubly atomless. Suppose also that \(E\) is embedded by some \(f_{1}\) into a part of \(X\) supported by some \(L_{p}\) or \(L_{q}\) band, and on the other hand is embedded by some \(f_{2}\) into \(F:=\ell_{p}^{2}(\ell_{q}^{2})\) with \(f_{2}(e)\) a unit in \(F\). Then \(f_{1}\) cannot be almost extended to a lattice embedding \(g:F\to X\) making the diagram almost commute. One can also expand this approximate ultrahomogeneity to separable sublattices with a weaker condition of almost commutativity in the diagram for generating elements: for any \(BL_{p}L_{q}\) sublattice \(E\) generated by elements \(<e_{1},...,e_{n}>_{L}\), for any \(\varepsilon>0\), and for all lattice embedding pairs \(f_{i}:E\to L_{p}(L_{q})\), there exists a lattice automorphism \(g:L_{p}(L_{q})\to L_{p}(L_{q})\) such that for all \(j=1,...,n\), \(\|g(f_{2}(e_{j}))-f_{1}(e_{j})\|<\varepsilon\). 
**Theorem 3.8**.: _For all \(1\leq p\neq q<\infty\), the lattice \(L_{p}(L_{q})\) is AUH for the class of finitely generated \(BL_{p}L_{q}\) lattices._ Proof.: Let \(E=<e_{1},...,e_{n}>_{L}\), and let \(f_{i}:E\to L_{p}(L_{q})\) be lattice embeddings. We can assume that \(\|e_{k}\|\leq 1\) for each \(1\leq k\leq n\). By Proposition 2.1, \(E\) is the inductive limit of lattices in \(B\mathcal{K}_{p,q}\). Given \(\varepsilon>0\), pick a \(B\mathcal{K}_{p,q}\) lattice \(E^{\prime}=<e^{\prime}_{1},...,e^{\prime}_{m}>\subseteq E\) such that for each \(e_{k}\), there is some \(x_{k}\in B(E^{\prime})\) such that \(\|x_{k}-e_{k}\|<\frac{\varepsilon}{3}\). Each \(f_{i}|_{E^{\prime}}\) is an embedding into \(L_{p}(L_{q})\), so pick an automorphism \(\phi\) over \(L_{p}(L_{q})\) such that \(\|\phi\circ f_{1}|_{E^{\prime}}-f_{2}|_{E^{\prime}}\|<\frac{\varepsilon}{3}\). Then \[\|\phi f_{1}(e_{k})-f_{2}(e_{k})\|\leq\|\phi f_{1}(e_{k}-x_{k})\|+\|\phi f_{1}(x_{k})-f_{2}(x_{k})\|+\|f_{2}(x_{k}-e_{k})\|<\varepsilon.\] We can also expand homogeneity to include not just lattice embeddings but also disjointness preserving linear isometries, that is, if embeddings \(f_{i}:E\to L_{p}(L_{q})\) are not necessarily lattice homomorphisms but preserve disjointness, then there exists a disjointness preserving linear automorphism \(\phi\) over \(L_{p}(L_{q})\) satisfying almost commutativity: **Corollary 3.9**.: \(L_{p}(L_{q})\) _is AUH over finitely generated sublattices in \(BL_{p}L_{q}\) with disjointness preserving embeddings._ Proof.: Use the argument in [5, Proposition 3.2] to show that \(L_{p}(L_{q})\) is disjointness preserving AUH over \(B\mathcal{K}_{p,q}\). From there, proceed as in the argument in Theorem 3.8 to extend homogeneity over \(B\mathcal{K}_{p,q}\) to that over \(BL_{p}L_{q}\). ## 4. Approximate Ultrahomogeneity of \(L_{p}(L_{q})\) when \(p/q\notin\mathbb{N}\) The above results largely focused on approximate ultrahomogeneity over \(BL_{p}L_{q}\) lattices. What can be said, however, of _sublattices_ of \(L_{p}L_{q}\) spaces? The answer to this question is split into two cases: first, the cases where \(p/q\notin\mathbb{N}\), and the second is when \(p/q\in\mathbb{N}\). We address the first case in this section. It turns out that if \(p/q\notin\mathbb{N}\), then \(L_{p}(L_{q})\) is AUH for the class of its finitely generated sublattices. The argument involves certain equimeasurability properties of copies of fixed finite dimensional lattices in \(L_{p}(L_{q})\). Throughout, we will refer to the class of sublattices of spaces in \(B\mathcal{K}_{p,q}\) as simply \(\mathcal{K}_{p,q}\), and let \(\overline{\mathcal{K}_{p,q}}\) be the class of finitely generated sublattices of \(L_{p}(L_{q})\). The following result appeared as [7, Proposition 3.2], which is a multi-dimensional version based on Raynaud's proof for the case of \(n=1\) (see [11, Lemma 18]). The approach taken here is a multi-dimensional version of the proof of Lemma 2 in [8]. **Theorem 4.1**.: _Let \(r=p/q\notin\mathbb{N}\), and suppose \(f_{i}:E\to L_{p}(L_{q})\) are lattice isometric embeddings with \(E=<e_{1},...,e_{n}>\). Suppose also that \(f_{1}(x)=f_{2}(x)=\mathbf{1}\) for some \(x\in E_{+}\). Then \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are base-equimeasurable._ Throughout the proof, let \(\mu\) be a measure in some interval \(I^{n}\subseteq C:=\mathbb{R}_{+}^{n}\). 
To this end, we first show the following: **Lemma 4.2**.: _Suppose \(0<r\notin\mathbb{N}\), and \(\alpha,\beta\) are positive finite Borel measures on \(C\) such that for all \(\mathbf{v}\in C\) with \(v_{0}>0\),_ \[\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d\alpha(\mathbf{z})=\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d\beta(\mathbf{z})<\infty.\] _Then \(\alpha=\beta\)._ Proof.: It is equivalent to prove that the signed measure \(\nu:=\alpha-\beta=0\). First, observe that since \(|\nu|\leq\alpha+\beta\), for any \(\mathbf{v}\geq 0\) we have \(\int|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d|\nu|(\mathbf{z})<\infty\). Now, we show by induction on polynomial degree that for all \(k\in\mathbb{N}\), \(\mathbf{v}\geq 0\), and for all multivariate polynomials \(P(\mathbf{z})\) of degree \(k^{\prime}\leq k\), \[*\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k}P(\mathbf{z})\ d\nu(\mathbf{z})=0.\] This is true for the base case \(k=0\) by assumption. Now assume it is true for \(k\in\mathbb{N}\) and let \(k^{\prime}:=\sum l_{i}\leq k\) with \(\mathbf{l}\in\mathbb{N}^{n}\). For notational ease, let \(\mathbf{z}^{\mathbf{l}}=z_{1}^{l_{1}}...z_{n}^{l_{n}}\). Then for each \(v_{i}\) and \(0<t<1\), \[\int_{\mathbb{R}_{+}^{n}}\mathbf{z}^{\mathbf{l}}\frac{(v_{0}+\mathbf{v}\cdot\mathbf{z}+z_{i}t)^{r-k}-(v_{0}+\mathbf{v}\cdot\mathbf{z})^{r-k}}{t}\ d\nu(\mathbf{z})=0.\] Now, if \(k+1<r\) and \(t\in(0,1)\), then \[\left|\mathbf{z}^{\mathbf{l}}\frac{(v_{0}+\mathbf{v}\cdot\mathbf{z}+z_{i}t)^{r-k}-(v_{0}+\mathbf{v}\cdot\mathbf{z})^{r-k}}{t}\right|\leq\mathbf{z}^{\mathbf{l}}z_{i}(r-k)(v_{0}+\mathbf{v}\cdot\mathbf{z}+v_{i})^{r-k-1}\] \[\leq \frac{r-k}{\mathbf{v}^{\mathbf{l}}v_{i}}(v_{0}+\mathbf{v}\cdot\mathbf{z}+v_{i})^{r}\] Since in this case, \(0<r-k-1<r\) and \(|\nu|<\infty\), the right hand side must also be \(|\nu|\)-integrable. On the other hand, if \(k+1>r\), then we have \[\left|\mathbf{z}^{\mathbf{l}}\frac{(v_{0}+\mathbf{v}\cdot\mathbf{z}+v_{i}t)^{r-k}-(v_{0}+\mathbf{v}\cdot\mathbf{z})^{r-k}}{t}\right|<|r-k|\frac{v_{0}^{r}}{\mathbf{v}^{\mathbf{l}}v_{i}}\] which is also \(|\nu|\)-integrable. So now we apply Lebesgue's dominated convergence theorem, differentiating with respect to \(v_{i}\), to get, for any \(k\in\mathbb{N}\) and for each \(1\leq i\leq n\): \[\int_{C}\mathbf{z}^{\mathbf{l}}z_{i}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k-1}\ d\nu(\mathbf{z})=0,\] since \(r\notin\mathbb{N}\). A similar argument, differentiating with respect to \(v_{0}\), can be made to show that \[\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k-1}\ d\nu(\mathbf{z})=0\] One can take linear combinations of the above, which implies line \(*\). Now for fixed \(\mathbf{v}>0\), \(v_{0}>0\) we define a measure \(\Lambda\) on \(C\), where for measurable \(B\subseteq\mathbb{R}^{n}_{+}\), \[\Lambda(B)=\int_{\phi^{-1}(B)}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d\nu(\mathbf{z}),\] where \(\phi(\mathbf{z})=\frac{1}{v_{0}+\mathbf{v}\cdot\mathbf{z}}\mathbf{z}\). It is sufficient to show that \(\Lambda=0\). Observe first that \(\phi\) is continuous and injective; indeed, if \(\phi(\mathbf{z})=\phi(\mathbf{w})\), then it can be shown that \(\mathbf{v}\cdot\mathbf{w}=\mathbf{v}\cdot\mathbf{z}\). Thus \(\frac{\mathbf{w}}{v_{0}+\mathbf{v}\cdot\mathbf{w}}=\frac{\mathbf{z}}{v_{0}+\mathbf{v}\cdot\mathbf{z}}\), implying that \(\mathbf{w}=\mathbf{z}\). Resultantly, \(\phi(B)\) for any Borel \(B\) is also Borel, hence we will have shown that for any such \(B\), \(\nu(B)=0\) as well, so \(\nu=0\). 
Observe that by the choice of \(\mathbf{v}>0\), and since \((v_{0}+\mathbf{v}\cdot\mathbf{z})>0\) for all \(\mathbf{z}\in\mathbb{R}^{n}_{+}\), we have \[|\Lambda|(B)=\int_{\phi^{-1}(B)}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\ d|\nu|(\mathbf{z}).\] Using simple functions and the definition of \(\Lambda\), one can show both that for each \(i\), we have \[**\ \ \ m_{i}(k):=\int_{C}w^{k}_{i}\ d|\Lambda|(\mathbf{w})=\int_{C}(v_{0}+\mathbf{v}\cdot\mathbf{z})^{r-k}z^{k}_{i}\ d|\nu|(\mathbf{z})<\infty\] and also that \[\int_{C}w^{k}_{i}\ d\Lambda(\mathbf{w})=\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k}z^{k}_{i}\ d\nu(\mathbf{z})=0.\] More generally, if \(k=\sum_{i}l_{i}\), then \[\int_{C}\mathbf{w}^{\mathbf{l}}\ d\Lambda(\mathbf{w})=\int_{C}\mathbf{z}^{\mathbf{l}}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k}\ d\nu(\mathbf{z})=0,\] So it follows that \(\int_{C}P(\mathbf{w})\ d\Lambda(\mathbf{w})=0\) for all polynomials \(P(\mathbf{w})\). Now if \(k>r\) and \(\nu\neq 0\), then since \(v_{i}>0\), we have \[m_{i}(k) =\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r-k}z_{i}^{k}\ d|\nu|(\mathbf{z})\] \[\leq\int_{C}|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}v_{i}^{-k}\ d|\nu|(\mathbf{z})\leq v_{i}^{-k}|\Lambda|(C)<\infty\] so \[m_{i}(k)^{-1/2k}\geq v_{i}^{1/2}|\Lambda|(C)^{-1/2k}.\] Thus for each \(1\leq i\leq n\), \(\sum_{k}m_{i}(k)^{-1/2k}=\infty\). So by [4, Theorem 5.2], \(|\Lambda|\) is the unique positive measure over \(C\) with moment values \(m_{i}(k)\). Since \(\int_{C}P(\mathbf{w})\ d(|\Lambda|+\Lambda)(\mathbf{w})=\int_{C}P(\mathbf{w})\ d|\Lambda|(\mathbf{w})\) for all polynomials \(P\), the positive measure \(|\Lambda|+\Lambda\) yields the same moment values, so it follows that \(\Lambda=0\), and thus \(\nu=0\). Now we are ready to prove Theorem 4.1. Proof.: For simplicity of notation, let \(F_{j}^{i}=N^{q}[f_{i}(e_{j})]\) and \(I=[0,1]\). By definition of \(N\), the support of \(F_{j}^{i}\) as well as of \(\mu\) is the unit interval. Define positive measures \(\alpha_{i}\) by \[\alpha_{i}(B)=\mu(\{t\in I:\mathbf{F}^{i}(t)\in B\})=\mu((\mathbf{F}^{i})^{-1}(B)).\] Now, for any measurable \(B\subseteq C\), we have \[\int_{C}\mathbf{1}_{B}(\mathbf{z})\ d\alpha_{i}(\mathbf{z})=\alpha_{i}(B)=\mu((\mathbf{F}^{i})^{-1}(B))=\int_{I}(\mathbf{1}_{B}\circ\mathbf{F}^{i})(t)\ dt\] so for any simple function \(\sigma\) over \(C\), \[\int_{C}\sigma(\mathbf{z})\ d\alpha_{i}=\int_{0}^{1}\sigma\circ\mathbf{F}^{i}(t)\ dt\] Using simple functions to approximate \(|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\), and given that \(|v_{0}+\mathbf{v}\cdot\mathbf{z}|^{r}\) is in \(L_{1}(C,\mu)\) and the support of \(\mu\) is the unit interval, it follows that \[\int_{C}|1+\mathbf{v}\cdot\mathbf{z}|^{r}\ d\alpha_{i}(\mathbf{z})=\int_{0}^{1}|1+\mathbf{v}\cdot\mathbf{F}^{i}(t)|^{r}\ dt.\] It is sufficient now to show that for all \(\mathbf{v}\in\mathbb{R}_{+}^{n}\), \[\int_{0}^{1}|1+\mathbf{v}\cdot\mathbf{F}^{1}(t)|^{r}\ dt=\int_{0}^{1}|1+\mathbf{v}\cdot\mathbf{F}^{2}(t)|^{r}\ dt.\] For \(i,j\) and \(s\in[0,1]\), let \(M_{i}^{j}=\{(s,t):(s,t)\in supp(f_{i}(e_{j}))\}\), and let \(M_{i}^{j}(s)=\{t:(s,t)\in M_{i}^{j}\}\). By assumption, \(x=\sum_{j}x_{j}e_{j}\) with \(x_{j}>0\), so \(\mathbf{1}=N^{q}[f_{i}(x)]=\sum_{j}x_{j}^{q}F_{j}^{i}\). 
Therefore, since each \(f_{i}\) is an embedding, for all \(\mathbf{c}\in\mathbb{R}_{+}^{n}\), \[\|\sum_{j}c_{j}e_{j}\|= \left|\left|\big{(}\sum_{j}c_{j}^{q}F_{j}^{i}(s)\big{)}^{1/q}\right|\right|_{p}\] \[= \left|\left|\big{(}\mathbf{1}+\sum_{j}(c_{j}^{q}-x_{j}^{q})F_{j}^{i}(s)\big{)}^{1/q}\right|\right|_{p}\] Let \(v_{j}:=c_{j}^{q}-x_{j}^{q}\): then in particular it follows that for all \(\mathbf{v}\geq 0\), we have \[\int_{0}^{1}\left(1+\mathbf{v}\cdot\mathbf{F}^{1}(s)\right)^{p/q}ds=\int_{0}^{1}\left(1+\mathbf{v}\cdot\mathbf{F}^{2}(s)\right)^{p/q}ds.\] By Lemma 4.2, we can conclude that \(\alpha_{1}=\alpha_{2}\), so \(\mathbf{F}^{1}\) and \(\mathbf{F}^{2}\) are equimeasurable. Using Theorem 4.1, we can uniquely characterize lattices in \(\mathcal{K}_{p,q}\) in a way that parallels Proposition 2.2. **Theorem 4.3**.: _Suppose that \(p/q\notin\mathbb{N}\), and let \(E\subseteq L_{p}(L_{q})\) with \(E=<e_{1},...,e_{m}>\). Then the following hold:_ * \(E\in\mathcal{K}_{p,q}\) _iff there exist mutually disjoint measurable functions_ \(\phi(k,j)\in S(L_{p}(L_{q}))_{+}\)_, with_ \(1\leq j\leq m\) _and_ \(1\leq k\leq L\) _such that for each_ \(j\)_,_ \(e_{j}\in<(\phi(k,j))_{k}>=\ell_{p}^{L}\)_, and_ \(<(\phi(k,j))_{k,j}>\in B\mathcal{K}_{p,q}\)_._ * _Suppose_ \(f_{i}:E\to L_{p}(L_{q})\) _is a lattice embedding with_ \(i=1,2\) _and_ \(E\in\mathcal{K}_{p,q}.\) _Then there exist embeddings_ \(f_{i}^{\prime}:E^{\prime}\to L_{p}(L_{q})\) _extending_ \(f_{i}\) _such that_ \(E^{\prime}\in B\mathcal{K}_{p,q}\)_._ Proof.: For part 1, clearly the reverse direction is true. To prove the main direction, we can suppose that \(E\) fully supports \(L_{p}(L_{q})\). If not, recall that the band generated by \(E\) is itself doubly atomless, and hence is lattice isometric to \(L_{p}(L_{q})\) itself. Thus, if under these conditions there is a \(BL_{p}L_{q}\) sublattice extending \(E\) as in the statement of the theorem, then the same holds in general. By Proposition 2.5, we can also suppose that \(\sum_{j}e_{j}=\eta\cdot\mathbf{1}\). Now by assumption, since \(E\in\mathcal{K}_{p,q}\), there is an embedding \(\psi:E\to\widetilde{E}\in B\mathcal{K}_{p,q}\) such that each \(\psi(e_{j})=\sum_{k}x(k,j)\tilde{e}(k,j)\), with \(1\leq k\leq m_{k}^{\prime}\). Without loss of generality we may also drop any \(\tilde{e}(k,j)\)'s disjoint from \(\psi(E)\) and assume that \(\psi(E)\) fully supports \(\widetilde{E}\). Now \(\widetilde{E}\) is a \(B\mathcal{K}_{p,q}\) lattice admitting a canonical representation in \(L_{p}(L_{q})\) as described in Theorem 3.2 and Remark 3.3. So we can assume that \(\psi\) embeds \(E\) into \(L_{p}(L_{q})\) in such a way that \(\psi(E)\) fully supports it and each \(\psi(e_{j})\) is both simple and base-simple. Now, use Proposition 2.5 to adjust \(\psi\) into an automorphism over \(L_{p}(L_{q})\) such that \(\psi(\sum e_{j})=\eta\cdot\mathbf{1}\) in a way that preserves both simplicity and base-simplicity. By Theorem 4.1, \(\psi(\mathbf{e})\) and \(\mathbf{e}\) are base-equimeasurable. Since the \(\psi(e_{j})\)'s are base-simple, there exist tuples \(\mathbf{s^{1}},...,\mathbf{s^{L}}\in\mathbb{R}^{m}\) such that for a.e. \(t\in[0,1]\), there is some \(k\leq L\) such that \(N[\mathbf{e}](t)=\mathbf{s^{k}}\). By equimeasurability, the same is true for \(N[\psi(\mathbf{e})](t)\). Let \(\mathbf{S^{k}}=\{t:N[\mathbf{e}](t)=\mathbf{s^{k}}\}\), and let \(S_{j}^{k}=\mathbf{S^{k}}\times[0,1]\cap supp(e_{j})\). 
Let \(\overline{\mathbf{S^{k}}}=\{t:N[\psi(\mathbf{e})](t)=\mathbf{s^{k}}\}\) with \(\overline{S}_{j}^{k}\) defined similarly. Note that each \(\mathbf{1}_{S_{j}^{k}}\) is also base-characteristic, as \(N[\mathbf{1}_{S_{j}^{k}}]=c_{j}^{k}\mathbf{1}_{\mathbf{S^{k}}}\) for some \(c_{j}^{k}>0\), so for fixed \(k\) and for any \(j,j^{\prime}\leq m\), we must have that \(N[\mathbf{1}_{S_{j}^{k}}]\) and \(N[\mathbf{1}_{S_{j^{\prime}}^{k}}]\) are scalar multiples of each other. Thus for each appropriate pair \((k,j)\) with \(s_{j}^{k}>0\), define \(\phi(k,j)\) by \(\frac{\mathbf{1}_{S_{j}^{k}}}{\|\mathbf{1}_{S_{j}^{k}}\|}\). By definition of \(\mathbf{S^{k}}\), for any \(k\neq k^{\prime}\) and any appropriate \(j,j^{\prime}\), \(\phi(k,j)\) and \(\phi(k^{\prime},j^{\prime})\) are fiber-disjoint, and \(N[\phi(k,j)]=N[\phi(k,j^{\prime})]\). Thus by Proposition 2.2, \(<(\phi(k,j))_{k,j}>\in B\mathcal{K}_{p,q}\). To prove part 2, observe first that we have already essentially proven part 2 in the case that \(f_{1}=Id\) and \(f_{2}=\psi\). To show the general case, we first assume that for each \(i\), \(\sum_{j}f_{i}(e_{j})\) is a scalar multiple of \(\mathbf{1}\). Now, by Theorem 4.1, \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are also base-equimeasurable, but by the procedure for part 1, we also know that each \(f_{i}(e_{j})\) is also base-simple. Define \(\mathbf{s^{1}},...,\mathbf{s^{L}}\) as above, and let \(\mathbf{S^{k}}(i)=\{t:N[f_{i}(\mathbf{e})](t)=\mathbf{s^{k}}\}\). Define similarly \(S_{j}^{k}(i)\) and the associated characteristic functions \(\phi_{i}(k,j)\) for appropriate pairs \(k,j\) such that \(1\leq k\leq L\) and \(s_{j}^{k}:=\|\phi_{i}(k,j)\wedge f_{i}(e_{j})\|>0\). Note first that \[f_{i}(e_{j})=\sum_{k:s_{j}^{k}>0}s_{j}^{k}\phi_{i}(k,j).\] Second, observe that by equimeasurability, the eligible pairs \((k,j)\) are the same for \(i=1,2\). Let \(E_{i}^{\prime}=<(\phi_{i}(k,j))_{k,j}>\). Clearly \(E_{i}^{\prime}\in B\mathcal{K}_{p,q}\), and since the eligible pairs \((k,j)\) are the same, \(E_{1}^{\prime}\) and \(E_{2}^{\prime}\) are isometric to each other. Let \(E^{\prime}\) be one of the \(E_{i}^{\prime}\)'s and let \(f_{i}^{\prime}:E^{\prime}\to L_{p}(L_{q})\) be the expected embedding mapping \(E^{\prime}\) to \(E_{i}^{\prime}\), and we are done. From here, we can now easily extend Theorem 3.5 to lattices in \(\mathcal{K}_{p,q}\): **Corollary 4.4**.: _Suppose \(p/q\notin\mathbb{N}\) and suppose \(f_{i}:E\to L_{p}(L_{q})\) are lattice embeddings from \(E\in\mathcal{K}_{p,q}\) with \(f_{i}(E)\) fully supporting \(L_{p}(L_{q})\). Then there exists a lattice automorphism \(\phi\) over \(L_{p}(L_{q})\) such that \(f_{2}=\phi\circ f_{1}\)._ Proof.: Use Theorem 4.3 to generate a \(B\mathcal{K}_{p,q}\) lattice \(E^{\prime}\) containing \(E\) and lattice embeddings \(f^{\prime}_{i}:E^{\prime}\to L_{p}(L_{q})\) such that \(f^{\prime}_{i}|_{E}=f_{i}\). Clearly each \(f^{\prime}_{i}(E^{\prime})\) fully supports \(L_{p}(L_{q})\). Now apply Theorem 3.5 to generate an automorphism \(\phi\) over \(L_{p}(L_{q})\) with \(\phi\circ f^{\prime}_{1}=f^{\prime}_{2}\). Clearly \(\phi\circ f_{1}=f_{2}\) as well. When \(p/q\notin\mathbb{N}\), using Theorem 4.3, we can thus show that the approximate ultrahomogeneity of Theorem 3.6 also holds for the more general class \(\mathcal{K}_{p,q}\). However, we can make an even stronger claim by showing that homogeneity holds for any finite dimensional sublattice of \(L_{p}(L_{q})\). 
This is done using the following result, which gives a standard way of approximating finite dimensional sublattices of \(L_{p}(L_{q})\) with lattices in \(\mathcal{K}_{p,q}\). **Lemma 4.5**.: _Suppose \(p/q\notin\mathbb{N}\), and let \(f_{i}:E\to L_{p}(L_{q})\) be embeddings with \(E=<e_{1},...,e_{n}>\). Then for all \(\varepsilon>0\), there exists a \(\mathcal{K}_{p,q}\) lattice \(E^{\prime}=<e^{\prime}_{1},...,e^{\prime}_{n}>\) and embeddings \(g_{i}:E^{\prime}\to L_{p}(L_{q})\) such that \(g_{i}(E^{\prime})\) fully supports \(L_{p}(L_{q})\) and for each \(k\leq n\), \(\|f_{i}(e_{k})-g_{i}(e^{\prime}_{k})\|<\varepsilon\)._ Proof.: We can assume each \(f_{i}(E)\) fully supports \(L_{p}(L_{q})\): given \(\varepsilon>0\), use Lemma 3.4 to get copies of \(E\) sufficiently close to each \(f_{i}(E)\) with full support. We then also assume that \(f_{i}(\sum_{1}^{n}e_{k})=\mathbf{1}\) using Proposition 2.5. By Theorem 4.1, \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are base-equimeasurable. In particular, given any measurable \(C\subseteq\mathbb{R}^{n}\), one has \(\mu(t:N[f_{1}(\mathbf{e})](t)\in C)=\mu(t:N[f_{2}(\mathbf{e})](t)\in C)\). Now pick an almost disjoint partition \(C_{1},...,C_{m}\) of \(S(\ell_{1}^{n})\), where each \(C_{i}\) is closed, has non-empty relative interior, and is of diameter less than \(\frac{\varepsilon}{2n}\). Let \(D^{i}_{k}=\{t:N[f_{i}(\mathbf{e})](t)\in C_{k}\backslash\cup_{j=1}^{k-1}C_{j}\}\). Then by equimeasurability, \(\mu(D^{1}_{k})=\mu(D^{2}_{k})\). For each \(k\), pick some \(\mathbf{s}^{k}=(s^{k}_{1},...,s^{k}_{n})\in C_{k}\), and for each \(x\in D^{i}_{k}\), let \[\overline{e}^{i}_{j}(x,y)=\frac{s^{k}_{j}}{N[f_{i}(e_{j})](x)}f_{i}(e_{j})(x,y).\] Observe that \(\|\sum_{j}\overline{e}^{i}_{j}-\sum_{j}f_{i}(e_{j})\|<\varepsilon\), and \(N[\overline{e}^{i}_{j}](x)=s^{k}_{j}\) for \(x\in D^{i}_{k}\). Consider now the lattice \(E^{\prime}=<\overline{e}^{1}_{1},...,\overline{e}^{1}_{n}>\). Now, for any linear combination \(\sum a_{j}\overline{e}^{i}_{j}\), we have, as in the argument in Proposition 2.5, that \[\|\sum_{j}a_{j}\overline{e}^{i}_{j}\|^{p}=\sum_{k=1}^{m}\mu(D^{i}_{k})\big{(}\sum_{j}(a_{j}s^{k}_{j})^{q}\big{)}^{p/q},\] implying that \(\|\sum a_{j}\overline{e}^{1}_{j}\|=\|\sum a_{j}\overline{e}^{2}_{j}\|\). It follows both that \(E^{\prime}\) embeds into \(\ell_{p}^{m}(\ell_{q}^{n})\), implying that it is a \(\mathcal{K}_{p,q}\) lattice, and it is isometric to the lattice generated by the \(\overline{e}^{2}_{j}\)'s. Let \(e^{\prime}_{j}=\overline{e}^{1}_{j}\), and define \(g_{i}:E^{\prime}\to L_{p}(L_{q})\) as the maps generated by \(g_{i}(e^{\prime}_{j})=\overline{e}^{i}_{j}\). Clearly these are lattice embeddings and \(\|f_{i}(e_{j})-g_{i}(e^{\prime}_{j})\|<\varepsilon\). **Theorem 4.6**.: _For all \(1\leq p,q<\infty\) with \(p/q\notin\mathbb{N}\), the lattice \(L_{p}(L_{q})\) is AUH for the class of finite dimensional sublattices of \(L_{p}L_{q}\) lattices._ Proof.: It is sufficient to show that the result holds when \(E\) is generated by basic atoms. Let \(f_{i}:E\to L_{p}(L_{q})\) be two embeddings with \(E=<e_{1},...,e_{n}>\). Use Lemma 4.5 to find \(g_{i}:E^{\prime}\to L_{p}(L_{q})\), with \(E^{\prime}:=<e_{1}^{\prime},...,e_{n}^{\prime}>\in\mathcal{K}_{p,q}\), \(\|g_{i}(e_{k}^{\prime})-f_{i}(e_{k})\|<\varepsilon/2\), and each \(g_{i}(E^{\prime})\) fully supporting \(L_{p}(L_{q})\). Then by Corollary 4.4, there exists an automorphism \(\phi:L_{p}(L_{q})\to L_{p}(L_{q})\) such that \(\phi\circ g_{1}=g_{2}\). 
Note then that \(\|\phi(f_{1}(e_{k}))-f_{2}(e_{k})\|\leq\|\phi(f_{1}(e_{k})-g_{1}(e_{k}^{\prime}))\|+\|f_{2}(e_{k})-g_{2}(e_{k}^{\prime})\|<\varepsilon\). In a manner similar to that of Theorem 3.8, we can also extend the AUH property to finitely generated sublattices of \(L_{p}(L_{q})\): **Theorem 4.7**.: _For all \(1\leq p,q<\infty\) with \(p/q\notin\mathbb{N}\), the lattice \(L_{p}(L_{q})\) is AUH for the class \(\overline{\mathcal{K}_{p,q}}\) of its finitely generated sublattices._ Proof.: Suppose \(E\subseteq L_{p}(L_{q})\) is finitely generated. Then since \(E\) is order continuous and separable, it is the inductive limit of finite dimensional lattices as well, so pick a finite dimensional \(E^{\prime}\) with elements sufficiently approximating the generating elements of \(E\), and proceed with the same proof as in Theorem 3.8. The argument used in Corollary 3.9 can also be used to show: **Corollary 4.8**.: _For \(p/q\notin\mathbb{N}\), \(L_{p}(L_{q})\) is disjointness preserving AUH over \(\overline{\mathcal{K}_{p,q}}\)._ **Remark 4.9**.: \(L_{p}(L_{q})\) for \(p/q\notin\mathbb{N}\) is AUH over the entire class of its finitely generated sublattices, a property which is equivalent to such a class being a metric _Fraisse class_ with \(L_{p}(L_{q})\) as its _Fraisse limit_. Recall that a class \(\mathcal{K}\) of finitely generated lattices is _Fraisse_ if it satisfies the following properties: 1. _Hereditary Property_ (HP): \(\mathcal{K}\) is closed under finitely generated sublattices. 2. _Joint Embedding Property_ (JEP): any two lattices in \(\mathcal{K}\) lattice embed into a third in \(\mathcal{K}\). 3. _Continuity Property_ (CP): the lattice operation symbols are continuous with respect to the Fraisse pseudo-metric \(d^{\mathcal{K}}\) in [2, Definition 2.11]. 4. _Near Amalgamation Property_ (NAP): for any lattices \(E=<e_{1},...,e_{n}>_{L}\), \(F_{1}\) and \(F_{2}\) in \(\mathcal{K}\) with lattice embeddings \(f_{i}:E\to F_{i}\), and for all \(\varepsilon>0\), there exists a \(G\in\mathcal{K}\) and embeddings \(g_{i}:F_{i}\to G\) such that \(\|g_{1}\circ f_{1}(e_{k})-g_{2}\circ f_{2}(e_{k})\|<\varepsilon\). 5. _Polish Property_ (PP): The Fraisse pseudo-metric \(d^{\mathcal{K}}\) is separable and complete in \(\mathcal{K}_{n}\) (the \(\mathcal{K}\)-structures generated by \(n\) many elements). Now clearly the finitely generated sublattices of \(L_{p}(L_{q})\) fulfill the first two properties, and the third follows from the lattice and linear operations having moduli of continuity independent of lattice geometry. In addition, if one can show that the class \(\mathcal{K}\) has the \(NAP\), has some separable \(X\) which is universal for \(\mathcal{K}\), and its NAP amalgamate lattices can be chosen so that they are closed under inductive limits, then one can prove that \(\mathcal{K}\) also has the Polish Property (a technique demonstrated in [14, Theorem 4.1] and more generally described in Section 2.5 of [9]). The main difficulty in proving that a class of lattices \(\mathcal{K}\) is a Fraisse class is in showing that it has the NAP. However, thanks to Theorem 4.7, we have **Corollary 4.10**.: \(\overline{\mathcal{K}_{p,q}}\) _has the NAP._ Theorem 4.7 adds a collection of AUH Banach lattices to the currently known ones, which are \(L_{p}\) for \(1\leq p<\infty\), the Gurarij M-space \(\mathcal{M}\) discovered in [5], and the Gurarij lattice discovered in [14]. 
However, if one considers classes of finite dimensional Banach spaces with Fraisse limits using linear instead of lattice embeddings, the only known separable AUH Banach spaces are the Gurarij space and \(L_{p}\) for \(p\neq 4,6,8,...\), and it is currently unknown if there are other Banach spaces that are AUH over their finite dimensional subspaces with linear embeddings. Certain combinations of \(p\) and \(q\) are also ruled out for \(L_{p}(L_{q})\) as a potential AUH candidate as discussed in Problem 2.9 of [5]: in particular, when \(1\leq p,q<2\), \(L_{p}(L_{q})\) cannot be linearly AUH. ## 5. Failure of homogeneity for \(p/q\in\mathbb{N}\) Recall that when \(E=<e_{1},...,e_{n}>\in B\mathcal{K}_{p,q}\) is embedded into \(L_{p}(L_{q})\) through \(f_{1},f_{2}\), we can achieve almost commutativity for any \(p\neq q\). However, the automorphism in Theorem 3.6 clearly preserves the equimeasurability of the generating basic atoms of \(f_{i}(E)\) as it fixes \(\mathbf{1}\). In this section, we show that the results of Section 4 do not hold when \(p/q\in\mathbb{N}\). The first results in this section show that when some \(e\in L_{p}(L_{q})_{+}\) is sufficiently close to \(\mathbf{1}\), the automorphism originally used in the argument of Proposition 2.5 sending \(\mathbf{1}\) to \(e\) also perturbs selected functions piecewise continuous on their support in a controlled way. Second, we show that Theorem 4.1 no longer holds, and thus we cannot infer equimeasurability for arbitrary finite dimensional sublattices of \(L_{p}(L_{q})\). Finally, we use these results to strengthen the homogeneity property for any \(L_{p}(L_{q})\) lattice assumed to be AUH, and then show that when \(p/q\in\mathbb{N}\), \(L_{p}(L_{q})\) does not fulfill this stronger homogeneity property, and thus cannot be AUH. **Lemma 5.1**.: _Let \(1\leq p\neq q<\infty\), and let \(<f_{1},...,f_{n}>\subseteq L_{p}(L_{q})\) be such that \(\sum f_{i}=\mathbf{1}\). Suppose also that for a.e. \(x\), \(f_{k}(x,\cdot)=\mathbf{1}_{[g_{k}(x),g_{k+1}(x)]}\) where each \(g_{k}\) has finitely many discontinuities. Let \(\varepsilon>0\), and let \(e\in L_{p}(L_{q})_{+}\) fully support \(L_{p}(L_{q})\). Consider_ \[\phi(f)(x,y)=f\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\widetilde{e}_{x}(y)_{q}}{N^{q}[e](x)}\bigg{)}e(x,y)\] _which is the lattice isometry defined in Proposition 2.5 mapping \(\mathbf{1}\) to \(e\)._ _Then there exists \(\delta\) such that if \(\|\mathbf{1}-e\|<\delta\), then for each \(k\), we have that \(\|\phi(f_{k})-f_{k}\|<\varepsilon\)._ Proof.: We can assume \(\varepsilon<1\). Let \(K\subseteq[0,1]\) be a closed set such that for \(1\leq k\leq n+1\), \(g_{k}|_{K}\) is continuous and \(\mu(K)>1-\varepsilon\). Pick \(\delta^{\prime}<\varepsilon\) such that for any \(x,x^{\prime}\in K\), if \(|x-x^{\prime}|<\delta^{\prime}\), then \(|g_{k}(x)-g_{k}(x^{\prime})|<\varepsilon/4\). Now, let \(\delta<{\delta^{\prime}}^{2p}\) be such that \(1-\frac{\delta^{\prime}}{4}\leq(1-\delta)^{p}<(1+\delta)^{p}<1+\frac{\delta^{\prime}}{4}\), and suppose \(\|\mathbf{1}-e\|<\delta\). Observe that for each \(x\), we have \(\widetilde{N[\mathbf{1}-e]}(x)_{p}<\delta\). For each \(1\leq k\leq n\), let \[\widetilde{f}_{k}(x,y)=f_{k}\bigg{(}\widetilde{N[e]}(x)_{p},\frac{\widetilde{e}_{x}(y)_{q}}{N^{q}[e](x)}\bigg{)}.\] Observe that \(\|\widetilde{f}_{k}-\phi(f_{k})\|<\delta<\varepsilon/4\), so it is enough to show that \(\|\widetilde{f}_{k}-f_{k}\|\) is sufficiently small as well. 
To this end, first note that since \(f_{k}\) is being composed with increasing continuous functions in both arguments, each \(\widetilde{f}_{k}(x,\cdot)\) is also the characteristic function of an interval: indeed, we have piecewise continuous \(\widetilde{g}_{1},...,\widetilde{g}_{n+1}\) with \(\widetilde{g}_{k}(x):=g_{k}(\widetilde{N[e]}(x)_{p})\) and \(\widetilde{g}_{n+1}(x)=1\) such that for each \(k\), \(\widetilde{f}_{k}(x,y)=\mathbf{1}_{[\widetilde{g}_{k}(x),\widetilde{g}_{k+1}(x)]}(y)\). Also observe that for \(M:=\{x\in K:N[e-1](x)<\delta\}\), we have \(\mu(M)>1-\delta^{\prime}-\varepsilon\). In addition, we have \[\|f_{k}-\widetilde{f}_{k}\|^{p}=\|N[f_{k}-\widetilde{f}_{k}]\|_{p}^{p}=\int\mu(D_{k}(x))^{p}\ dx,\] where \(D_{k}(x)=\{y:f_{k}(x,y)\neq\widetilde{f}_{k}(x,y)\}\). The above set up, in combination with the triangle inequality properties of \(N\), leads us to the following inequalities: * For all \(0\leq x\leq 1\), \(|\widetilde{N[e]}(x)_{p}-x|<\delta\). * For all \(x\in M\), \(|N[e](x)-1|<\delta\). * For all \(x\in M\) and \(0\leq y\leq 1\), \(|\widetilde{e}_{x}(y)_{q}-y|<\frac{\delta^{\prime}}{2}\). * For all \(x\in M\) and \(0\leq y\leq 1\), if \(y^{\prime}:=\frac{\widetilde{e}_{x}(y)_{q}}{N^{q}[e](x)}\), then \(|y^{\prime}-\widetilde{e}_{x}(y)_{q}|<\frac{\delta^{\prime}}{2}\) (which implies with the above that \(|y-y^{\prime}|<\delta^{\prime}\)). We now show that the above implies that \(\mu(D_{k}(x))<2\varepsilon\) for \(x\in M\). Observe first that for all \(x\in M\), if \(f_{k}(x,y)\neq\widetilde{f}_{k}(x,y)\), it must be because \(y\in[g_{k}(x),g_{k+1}(x)]\) but \(y^{\prime}\notin[\widetilde{g}_{k}(x),\widetilde{g}_{k+1}(x)]\), or vice versa. In either case, it can be shown that either \(|y-g_{k}(x)|<\delta^{\prime}+\frac{\varepsilon}{4}\) or \(|y-g_{k+1}(x)|<\delta^{\prime}+\frac{\varepsilon}{4}\). Suppose \(y\in[g_{k}(x),g_{k+1}(x)]\) and \(y^{\prime}<\widetilde{g}_{k}(x)\) (a similar proof will work in the case that \(y^{\prime}>\widetilde{g}_{k+1}(x)\)). Then since \(y>g_{k}(x)\), \(|y-y^{\prime}|\leq\delta^{\prime}\), and \(|g_{k}(x)-\widetilde{g}_{k}(x)|<\frac{\varepsilon}{4}\), \[0\leq y-g_{k}(x)=(y-y^{\prime})+(y^{\prime}-\widetilde{g}_{k}(x))+(\widetilde{g}_{k}(x)-g_{k}(x))<\delta^{\prime}+\frac{\varepsilon}{4}.\] It follows then that accounting for both ends of the interval \([g_{k}(x),g_{k+1}(x)]\) and for \(x\in M\), we have \(\mu(D_{k}(x))<2\varepsilon\). Resultantly, \[\|f_{k}-\widetilde{f}_{k}\|^{p}=\int_{M}\mu(D_{k}(x))^{p}\ dx+\int_{M^{c}}\mu(D_{k}(x))^{p}\ dx<(2\varepsilon)^{p}+\delta^{p}<3\varepsilon^{p},\] which can be made arbitrarily small. **Theorem 5.2**.: _Let \(1\leq p\neq q<\infty\) and suppose \(L_{p}(L_{q})\) is AUH over its finite dimensional sublattices. Let \(f_{i}:E\to L_{p}(L_{q})\) be lattice embeddings with \(E=<e_{1},...,e_{n}>\) such that \(f_{i}(x)=\mathbf{1}\) for some \(x\in E\). Then for all \(\varepsilon>0\), there exists an automorphism \(\phi\) fixing \(\mathbf{1}\) such that \(\|\phi f_{1}-f_{2}\|<\varepsilon\)._ Proof.: Assume the above, and pick \(E^{\prime}=<e_{1}^{\prime},...,e_{m}^{\prime}>\subseteq L_{p}(L_{q})\), where \(e_{k}^{\prime}=a_{k}\cdot\mathbf{1}_{A_{k}\times B_{k}}\) with \(A_{k}\) and \(B_{k}\) intervals such that \(\sum_{k}\mathbf{1}_{A_{k}\times B_{k}}=\mathbf{1}\) and for each \(e_{k}\) there is \(x_{k}\in S(E^{\prime})_{+}\) such that \(\|x_{k}-f_{2}(e_{k})\|<\frac{\varepsilon}{4n}\). 
Since \(L_{p}(L_{q})\) is AUH, there exists an automorphism \(\psi\) such that \(\|\psi f_{1}-f_{2}\|<\delta\), where \(\delta\) satisfies the conditions of Lemma 5.1 for \(\frac{\varepsilon}{4mn}\) and each of the \(e_{k}^{\prime}\)'s in \(E^{\prime}\). Now pick the automorphism \(\phi^{\prime}\) over \(L_{p}(L_{q})\) mapping \(\mathbf{1}\) to \(\psi f_{1}(x)\) as defined in Lemma 5.1. It follows that for each \(e_{k}^{\prime}\), \(\|\phi^{\prime}(e_{k}^{\prime})-e_{k}^{\prime}\|<\frac{\varepsilon}{4mn}\), so \(\|\phi^{\prime}(x_{k})-x_{k}\|<\frac{\varepsilon}{4n}\). Thus for each \(e_{k}\in E\), \[\|\phi^{\prime}f_{2}(e_{k})-\psi f_{1}(e_{k})\|\leq \|\phi^{\prime}(f_{2}(e_{k})-x_{k})\|+\|\phi^{\prime}(x_{k})-x_{k}\|\] \[+ \|x_{k}-f_{2}(e_{k})\|+\|f_{2}(e_{k})-\psi f_{1}(e_{k})\|<\frac{\varepsilon}{n}.\] Now let \(\phi={\phi^{\prime}}^{-1}\circ\psi\) to obtain the desired automorphism; then \(\|\phi f_{1}-f_{2}\|<\varepsilon\). The above can be used to show that if \(L_{p}(L_{q})\) is AUH and \(f_{i}(E)\) contains \(\mathbf{1}\) for \(i=1,2\), then we can induce almost commutativity with automorphisms fixing \(\mathbf{1}\) as well. This will allow us to reduce possible automorphisms over \(L_{p}(L_{q})\) to those that in particular fix \(\mathbf{1}\). The importance of this result is that these particular automorphisms fixing \(\mathbf{1}\) must always preserve base-equimeasurability for characteristic functions, as shown in Proposition 3.1. Thus a natural approach in disproving that \(L_{p}(L_{q})\) is AUH would involve finding sublattices containing \(\mathbf{1}\) which are lattice isometric but whose generating elements are not base-equimeasurable. The following results do exactly that: **Lemma 5.3**.: _Lemma 4.2 fails when \(r:=p/q\in\mathbb{N}\). In particular, there exists a non-zero measure \(\nu:=\alpha-\beta\), with \(\alpha\) and \(\beta\) positive measures such that for all polynomials \(P\) of degree \(j\leq r\),_ \[\int_{0}^{1}P(x)\ d\nu(x)=0.\] **Remark 5.4**.: It is already known that a counter-example exists for \(L_{r}(0,\infty)\) for all \(r\in\mathbb{N}\), with \[d\nu(u)=e^{-u^{\frac{1}{4}}}\sin(u^{\frac{1}{4}})\ du\] (see [12] and [8] for more details). Here we provide another example over the unit interval: Proof.: Fix such an \(r\), and define a polynomial \(g(x)\) of degree \(r+1\) with \(g(x)=\sum_{0}^{r+1}a_{i}x^{i}\) such that for all \(0\leq j\leq r\), \(\int_{0}^{1}x^{j}g(x)\ dx=0\). This can be done by finding a non-trivial \((a_{0},...,a_{r+1})\) in the null space of the \((r+1)\times(r+2)\) matrix \(A\) with \(A(i,j)=\frac{1}{i+j+1}\). Then let \(d\nu(x)=g(x)\ dx\). Let \(\alpha=\nu_{+}\) and \(\beta=\nu_{-}\). Clearly \(\alpha\) and \(\beta\) are finite positive Borel measures, but since \(g\neq 0\), \(\alpha\neq\beta\). **Lemma 5.5**.: _Let \(p/q\in\mathbb{N}\). Then there exists a two dimensional lattice \(E=<e_{1},e_{2}>\) and lattice embeddings \(g_{i}:E\to L_{p}(L_{q})\) with \(\mathbf{1}\in E\) such that \(g_{1}(\mathbf{e})\) and \(g_{2}(\mathbf{e})\) are not base-equimeasurable._ Proof.: Let \(f(x)\) be a polynomial of degree at least \(r+1\) as defined in Lemma 5.3 such that for all \(0\leq k\leq r\), \(\int_{0}^{1}t^{k}f(t)\ dt=0\), and \(\int_{0}^{1}|f(x)|\ dx=1\). Let \(h_{1}(x)=\frac{1}{2}+f(x)_{+}\), and let \(h_{2}(x)=\frac{1}{2}+f(x)_{-}\). Note that each \(h_{i}(x)>0\), and furthermore that \(\int_{0}^{1}h_{i}(t)\ dt=1\). 
Additionally, each map \(H_{i}(x)=\int_{0}^{x}h_{i}(t)\ dt\) is strictly increasing with \(H_{i}(0)=0\) and \(H_{i}(1)=1\). Now we will construct characteristic functions \(f_{j}^{i}\in L_{p}(L_{q})\) such that the linear map \(f_{j}^{1}\mapsto f_{j}^{2}\) induces an isometry, but \(\mathbf{f}^{1}\) and \(\mathbf{f}^{2}\) are not base-equimeasurable. From there, we let \(e_{j}=\frac{f_{j}^{1}}{\|f_{j}^{1}\|}\), and let \(g_{i}\) be the lattice isometry induced by \(g_{i}(e_{j})=\frac{f_{j}^{i}}{\|f_{j}^{i}\|}\). To this end, let \[F_{1}^{i}(x):=H_{i}^{-1}(x),\text{ and }F_{2}^{i}(x):=1-F_{1}^{i}(x).\] Observe that \(F_{1}^{1}\neq F_{1}^{2}\). Indeed, one can show that the associated push forwards \(dF_{1\#}^{i}\mu\) for each \(F_{1}^{i}\) satisfy \[dF_{1\#}^{i}\mu(x)=h_{i}(x)\ dx.\] So \((F_{1}^{1},F_{2}^{1})\) and \((F_{1}^{2},F_{2}^{2})\) are not equimeasurable. However, for \(0\leq j\leq r\), \(\int_{0}^{1}u^{j}h_{i}(u)\ du=\int_{0}^{1}u^{j}\ dF_{1\#}^{i}\mu(u)=\int_{0}^{1}F_{1}^{i}(x)^{j}\ dx\), so it follows from the construction of the \(h_{i}\)'s that \[\int_{0}^{1}F_{1}^{1}(x)^{j}\ dx=\int_{0}^{1}F_{1}^{2}(x)^{j}\ dx.\] Thus for any \(v_{1},v_{2}>0\), since \(F_{1}^{i}\) and \(F_{2}^{i}\) are both positive, we have \[\int_{0}^{1}|v_{1}F_{1}^{1}(x)+v_{2}F_{2}^{1}(x)|^{r}\ dx=\int_{0}^{1}((v_{1}-v_{2})F_{1}^{1}(x)+v_{2})^{r}\ dx\] \[= \sum_{0}^{r}\binom{r}{j}(v_{1}-v_{2})^{j}v_{2}^{r-j}\int_{0}^{1}F_{1}^{1}(x)^{j}\ dx=\int_{0}^{1}|v_{1}F_{1}^{2}(x)+v_{2}F_{2}^{2}(x)|^{r}\ dx\] To conclude the proof, let \(f_{1}^{i}(x,y)=\mathbf{1}_{[0,F_{1}^{i}(x)]}(y)\), and let \(f_{2}^{i}=\mathbf{1}-f_{1}^{i}\). Clearly \(N^{q}[f_{j}^{i}]=F_{j}^{i}\). **Theorem 5.6**.: _If \(p/q\in\mathbb{N}\) and \(p\neq q\), then \(L_{p}(L_{q})\) is not AUH for the class of its finite dimensional sublattices._ Proof.: Fix \(p/q\in\mathbb{N}\), and let \(E\) be the \(2\)-dimensional lattice generated in Lemma 5.5, with \(f_{i}:E\to L_{p}(L_{q})\) embeddings mapping to copies of \(E=<e_{1},e_{2}>\) such that \(f_{1}(\mathbf{e})\) and \(f_{2}(\mathbf{e})\) are not base-equimeasurable. In addition, by assumption \(\mathbf{1}\in E\). For notational ease, let \(F_{j}^{i}=N[f_{i}(e_{j})]\). Suppose for the sake of contradiction that \(L_{p}(L_{q})\) is AUH. Pick some measurable \(C\subseteq[0,1]^{2}\) and \(\varepsilon>0\) such that \[*\quad\mathbf{F}_{\#}^{2}\mu(C)>\mathbf{F}_{\#}^{1}\mu(C+\varepsilon)+\varepsilon,\] where \[C+\varepsilon=\{\mathbf{t}\in[0,1]^{2}:\|\mathbf{t}-\mathbf{s}\|_{\infty}<\varepsilon\text{ for some }\mathbf{s}\in C\}.\] By Theorem 5.2, there is some lattice automorphism \(\phi:L_{p}(L_{q})\to L_{p}(L_{q})\) fixing \(\mathbf{1}\) such that \(\|\phi\circ f_{1}-f_{2}\|<\varepsilon^{2}\). Let \(\phi F_{j}^{i}=N[\phi f_{i}(e_{j})]\). By Proposition 3.1, \(\phi\) preserves base-equimeasurability, so for any measurable \(B\), \[\phi\mathbf{F}_{\#}^{1}\mu(B)=\mathbf{F}_{\#}^{1}\mu(B).\] By the properties of \(N\), we also have \(\|\phi F_{j}^{1}-F_{j}^{2}\|_{p}\leq\|\phi f_{1}(e_{j})-f_{2}(e_{j})\|\). It also follows that \[\mu(t:\|\phi\mathbf{F}^{1}(t)-\mathbf{F}^{2}(t)\|_{\infty}>\varepsilon)<\varepsilon,\] so \(\phi\mathbf{F}_{\#}^{1}\mu(C+\varepsilon)+\varepsilon>\mathbf{F}_{\#}^{2}\mu(C)\), but this contradicts the assumption (*). So Theorem 5.2 cannot apply, implying that \(L_{p}(L_{q})\) is not AUH, as desired. 
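The construction in the proof of Lemma 5.3 is easy to check numerically. The sketch below is only an illustration (it assumes NumPy and SciPy are available, and the choice \(r=3\) is arbitrary): it builds the matrix \(A(i,j)=\frac{1}{i+j+1}\), extracts a kernel vector \((a_{0},...,a_{r+1})\), and verifies by quadrature that the resulting polynomial \(g\) is nonzero yet orthogonal to \(1,x,...,x^{r}\); the functions \(h_{1}=\frac{1}{2}+f_{+}\) and \(h_{2}=\frac{1}{2}+f_{-}\) of Lemma 5.5 built from such a polynomial then share all moments up to order \(r\).

```python
import numpy as np
from scipy.integrate import quad

r = 3  # illustrative value of p/q; any positive integer works the same way

# A(i, j) = 1/(i + j + 1) = int_0^1 x^i * x^j dx, with i = 0,...,r indexing the
# moment constraints and j = 0,...,r+1 indexing the coefficients of g.
A = np.array([[1.0 / (i + j + 1) for j in range(r + 2)] for i in range(r + 1)])

# A has full row rank (its left block is a Hilbert matrix), so its kernel is one
# dimensional; the last right-singular vector of the SVD spans it.
a = np.linalg.svd(A)[2][-1]

def g(x):
    # g(x) = sum_i a_i x^i, a nonzero polynomial of degree r + 1
    return sum(c * x**i for i, c in enumerate(a))

# Independent check: int_0^1 x^j g(x) dx = 0 for all 0 <= j <= r, while g != 0.
moments = [quad(lambda x: x**j * g(x), 0, 1)[0] for j in range(r + 1)]
print(np.allclose(moments, 0.0), np.max(np.abs(a)) > 0)
```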
**Remark 5.7**.: For \(p/q\in\mathbb{N}\), \(L_{p}(L_{q})\) is the unique separable lattice that is AUH over finitely generated \(BL_{p}L_{q}\) spaces, since up to isometry it is the unique separable doubly atomless \(BL_{p}L_{q}\) space. In light of Theorem 5.6, this implies that the class of finitely generated sublattices of \(L_{p}(L_{q})\) is not a Fraisse class as defined in [2], as \(L_{p}(L_{q})\) is the only possible candidate as a Fraisse limit. In particular, \(\overline{\mathcal{K}_{p,q}}\) lacks the NAP. Indeed, otherwise, one can use the NAP with \(BL_{p}L_{q}\) amalgamate lattices and [7, Proposition 2.8] to realize a \(d^{\mathcal{K}}\)-Cauchy sequence as a Cauchy sequence of generating elements in an ambient separable \(BL_{p}L_{q}\) lattice. Thus \(\overline{\mathcal{K}_{p,q}}\) would also have the Polish Property, implying that \(\overline{\mathcal{K}_{p,q}}\) is a Fraisse class. Since the only possible candidate Fraisse limit space is \(L_{p}(L_{q})\) itself, this would contradict Theorem 5.6.
2309.13946
Observational constraints on interactions between dark energy and dark matter with momentum and energy transfers
We place observational constraints on a dark energy (DE) model in which a quintessence scalar field $\phi$ is coupled to dark matter (DM) through momentum and energy exchanges. The momentum transfer is weighed by an interaction between the field derivative and DM four velocity with a coupling constant $\beta$, whereas the energy exchange is characterized by an exponential scalar-field coupling to the DM density with a coupling constant $Q$. A positive coupling $\beta$ leads to the suppression for the growth of DM density perturbations at low redshifts, whose property offers a possibility for resolving the $\sigma_8$ tension problem. A negative coupling $Q$ gives rise to a $\phi$-matter-dominated epoch, whose presence can reduce the sound horizon around the Cosmic Microwave Background (CMB) decoupling epoch. Using the data of Planck 2018, 12-th Sloan Digital Sky Survey, Phantheon supernovae samples, and 1-year dark energy survey, we find that the two couplings are constrained to be $\beta=0.332^{+1.246}_{-0.237}$ and $Q =-0.0312^{+0.0312}_{-0.0085}$ at 68\,\% confidence level (CL). Thus, there is an interesting observational signature of the momentum exchange ($\beta \neq 0$) between DE and DM, with a peak of the probability distribution of the energy transfer coupling at $Q<0$.
Xiaolin Liu, Shinji Tsujikawa, Kiyotomo Ichiki
2023-09-25T08:26:51Z
http://arxiv.org/abs/2309.13946v2
# Observational constraints on interactions between dark energy and dark matter ###### Abstract We place observational constraints on a dark energy (DE) model in which a quintessence scalar field \(\phi\) is coupled to dark matter (DM) through momentum and energy exchanges. The momentum transfer is weighed by an interaction between the field derivative and DM four velocity with a coupling constant \(\beta\), whereas the energy exchange is characterized by an exponential scalar-field coupling to the DM density with a coupling constant \(Q\). A positive coupling \(\beta\) leads to the suppression for the growth of DM density perturbations at low redshifts, whose property offers a possibility for resolving the \(\sigma_{8}\) tension problem. A negative coupling \(Q\) gives rise to a \(\phi\)-matter-dominated epoch, whose presence can reduce the sound horizon around the Cosmic Microwave Background (CMB) decoupling epoch. Using the data of Planck 2018, 12-th Sloan Digital Sky Survey, Phantheon supernovae samples, and 1-year dark energy survey, we find that the two couplings are constrained to be \(\beta=0.417^{+1.592}_{-0.307}\) and \(Q=-0.036^{+0.036}_{-0.010}\) at 68 % confidence level (CL). Thus, there is an interesting observational signature of the momentum exchange (\(\beta\neq 0\)) between DE and DM, with a peak of the probability distribution of the energy transfer coupling at \(Q<0\). + Footnote †: preprint: WUAP-23-10 ## I Introduction Revealing the origin of the dark sector in our Universe is an important challenge for the modern cosmology [1; 2; 3; 4; 5; 6; 7]. Dark energy (DE) accelerates the current Universe, while cold dark matter (CDM) is the main source for the formation of large-scale structures. The origin of DE can be a cosmological constant \(\Lambda\)[8; 9; 10; 11], but it is theoretically challenging to naturally explain its small value from the vacuum energy arising from particle physics [12; 13]. Instead, there have been many attempts for constructing DE models with dynamical propagating degrees of freedom such as scalar fields, vector fields, and massive gravitons (see Refs. [14; 15; 16; 17; 18; 19] for reviews). Among them, the scalar-field DE, which is dubbed quintessence [20; 21; 22; 23; 24; 25; 26; 27], is one of the simplest models which can be distinguished from the cosmological constant through its time-varying equation of state (EOS) \(w_{\rm DE}\). From the observational side, we have not yet found compelling evidence that quintessence is favored over the cosmological constant. In particular, the joint analysis based on the data of supernovae Ia (SN Ia), baryon acoustic oscillations (BAO), and the cosmic microwave background (CMB) showed that the quintessence EOS needs to be close to \(-1\) at low redshifts [28; 29; 30; 31; 32]. Hence it is difficult to distinguish between quintessence and \(\Lambda\) from the information of \(w_{\rm DE}\) alone. At the level of perturbations, the \(\Lambda\)CDM model has a so-called \(\sigma_{8}\) tension for the amplitude of matter density contrast between the Planck CMB data [31] and low-redshift probes like shear-lensing [33; 34; 35] and redshift-space distortions [36; 37]. For both \(\Lambda\) and quintessence, the effective gravitational coupling \(G_{\rm eff}\) on scales relevant to the growth of large-scale structures is equivalent to the Newton constant \(G\). Then, the problem of the \(\sigma_{8}\) tension cannot be addressed by quintessence either. 
Moreover, for both \(\Lambda\) and quintessence, there is the tension of today's Hubble expansion rate \(H_{0}\) between the CMB data and low-redshift measurements [38; 39; 40; 41; 42; 43; 44; 45]. If we allow for a possibility of interactions between DE and DM, the cosmic expansion and growth histories can be modified in comparison to the \(\Lambda\)CDM model. One example of such couplings corresponds to an energy exchange between DE and DM through an interacting Lagrangian \(L_{\rm E}=-(e^{Q\phi/M_{\rm Pl}}-1)\rho_{c}\)[46; 47; 48; 49], where \(Q\) is a coupling constant, \(M_{\rm Pl}\) is the reduced Planck mass, and \(\rho_{c}\) is the CDM density. The similar type of couplings arises from Brans-Dicke theories [50] after transforming the Jordan-frame action to that in the Einstein frame [51; 52; 53]. In the presence of such an energy transfer, it is possible to realize a so-called \(\phi\)-matter-dominated epoch (\(\phi\)MDE) [47] in which the DE (scalar field) density parameter takes a nonvanishing constant value \(\Omega_{\rm DE}=2Q^{2}/3\). The presence of the \(\phi\)MDE can reduce the sound horizon at CMB decoupling [54; 55; 56], which may offer a possibility for alleviating the \(H_{0}\) tension. On the other hand, the effective gravitational coupling of CDM is given by \(G_{\rm eff}=G(1+2Q^{2})\)[57; 58], which is larger than \(G\). This property is not welcome for reducing the \(\sigma_{8}\) tension, as we require that \(G_{\rm eff}<G\) to address this problem. The scalar field can also mediate the momentum exchange with CDM through a scalar product \(Z=u_{c}^{\mu}\nabla_{\mu}\phi\)[49; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70], where \(u_{c}^{\mu}\) is a CDM four velocity and \(\nabla_{\mu}\phi\) is a covariant derivative of \(\phi\). If we consider an interacting Lagrangian of the form \(L_{\rm M}=\beta Z^{2}\), where \(\beta\) is a coupling constant, the modification to the background equations arises only through a change of the kinetic term \(\dot{\phi}^{2}/2\to(1+2\beta)\dot{\phi}^{2}/2\) in the density and pressure of \(\phi\)[63; 59]. At the level of perturbations, the Euler equation is modified by the momentum transfer, while the continuity equation is not affected. For \(\beta>0\), the conditions for the absence of ghosts and Laplacian instabilities of scalar and tensor perturbations are consistently satisfied [66]. In this case, the effective gravitational coupling of CDM is smaller than \(G\) at low redshifts [66; 63; 49; 69]. Then, there is an intriguing possibility for reducing the \(\sigma_{8}\) tension by the momentum transfer [63; 67; 65; 67]. An interacting model of DE and DM with both momentum and energy transfers was proposed in Ref. [68] as a possible solution to the problems of \(\sigma_{8}\) and \(H_{0}\) tensions. This is described by the interacting Lagrangian \(L_{\rm int}=\beta Z^{2}-(e^{Q\phi/M_{\rm Pl}}-1)\rho_{c}\) with a canonical scalar field \(\phi\) having a potential \(V(\phi)\). Since the model has an explicit Lagrangian, the perturbation equations of motion are unambiguously fixed by varying the corresponding action with respect to the perturbed variables. We would like to stress that this is not the case for many interacting DE and DM models in which the background equations alone are modified by introducing phenomenological couplings [71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86]. 
We note however that there are some other models with concrete Lagrangians or energy-momentum tensors based on interacting fluids of DE and DM [87; 88; 89; 90] or on vector-tensor theories [91]. In Ref. [68], it was anticipated that the momentum transfer associated with the coupling \(\beta\) may address the \(\sigma_{8}\) tension due to the suppression of growth of matter perturbations and that the energy transfer characterized by the coupling \(Q\) may ease the \(H_{0}\) tension by the presence of the \(\phi\)MDE. While the gravitational attraction is enhanced by the energy transfer, the decrease of \(G_{\rm eff}\) induced by the coupling \(\beta\) can overwhelm the increase of \(G_{\rm eff}\) induced by the coupling \(Q\)[68; 69]. We also note that the coupling \(\beta\) does not remove the existence of the \(\phi\)MDE at the background level. These facts already imply that nonvanishing values of couplings may be favored, but we require a statistical analysis with actual observational data to see the signatures of those couplings. In this paper, we perform the Markov chain Monte Carlo (MCMC) analysis of the interacting model of DE and DM with momentum and energy transfers mentioned above. For this purpose, we exploit the recent data of Planck CMB [92], 12-th Sloan Digital Sky Survey (SDSS) [93], Phantheon supernovae samples [94], and 1-year dark energy survey (DES) [95]. We show that the nonvanishing value of \(\beta\) is statistically favoured over the case \(\beta=0\), so there is an interesting signature of the momentum transfer between DE and DM. For the energy transfer, the probability distribution of the coupling has a peak at \(Q<0\). The \(Q=0\) case is also consistent with the data at \(68\,\%\) CL, so the signature of energy transfer is not so significant compared to that of momentum transfer. Today's Hubble constant is constrained to be \(H_{0}=68.22^{+0.58}_{-0.61}\) (\(68\,\%\) CL), which is not much different from the bound derived for the \(\Lambda\)CDM model with the above data sets. Like most of the models proposed in the literature, our coupled DE-DM scenario does not completely resolve the Hubble tension problem present in the current observational data. This paper is organized as follows. In Sec. II, we revisit the background dynamics in our interacting model of DE and DM. In Sec. III, we show the full linear perturbation equations of motion and discuss the stability and the effective gravitational couplings of nonrelativistic matter. In Sec. IV, we explain the methodology of how to implement the background and perturbation equations in the CAMB code. We also discuss the impact of our model on several observables. In Sec. V, we present our MCMC results and interpret constraints on the model parameters. Sec. VI is devoted to conclusions. Throughout the paper, we work in the natural unit system, i.e., \(c=\hbar=k_{B}=1\). ## II Background equations of motion We consider a DE scalar field \(\phi\) interacting with CDM through energy and momentum transfers. We assume that \(\phi\) is a canonical field with the kinetic term \(X=-(1/2)\nabla^{\mu}\phi\nabla_{\mu}\phi\) and the exponential potential \(V(\phi)=V_{0}e^{-\lambda\phi/M_{\rm Pl}}\), where \(V_{0}\) and \(\lambda\) are constants. The choice of the exponential potential is not essential for the purpose of probing the DE-DM couplings, but we can choose other quintessence potentials like the inverse power-law type \(V(\phi)=V_{0}\phi^{-p}\)[54; 55; 56]. 
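For the exponential potential the slope \(-M_{\rm Pl}V_{,\phi}/V=\lambda\) is a field-independent constant, which is what allows the background equations below to close as an autonomous system in a small set of dimensionless variables, whereas for the inverse power-law potential the slope varies with \(\phi\). A quick symbolic check, given purely for illustration (not part of the original analysis), is the following.

```python
# Illustrative check: the slope -M_pl V'(phi)/V(phi) is constant only for the
# exponential potential adopted in the text.
import sympy as sp

phi, Mpl, V0, lam, p = sp.symbols('phi M_pl V_0 lambda p', positive=True)

def slope(V):
    return sp.simplify(-Mpl * sp.diff(V, phi) / V)

V_exp = V0 * sp.exp(-lam * phi / Mpl)   # exponential potential used here
V_pow = V0 * phi**(-p)                  # inverse power-law alternative

print(slope(V_exp))   # -> lambda        (field-independent)
print(slope(V_pow))   # -> M_pl*p/phi    (field-dependent)
```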
The energy transfer is described by the interacting Lagrangian \(L_{\rm E}=-(e^{Q\phi/M_{\rm Pl}}-1)\rho_{c}\), where \(Q\) is a coupling constant and \(\rho_{c}\) is the CDM density. In the limit that \(Q\to 0\), we have \(L_{\rm E}\to 0\). The momentum transfer is weighed by the interacting Lagrangian \(L_{\rm M}=\beta Z^{2}\), where \(\beta\) is a coupling constant and \(Z\) is defined by \[Z=u_{c}^{\mu}\nabla_{\mu}\phi\,, \tag{1}\] where \(u_{c}^{\mu}\) is the CDM four velocity. For the gravity sector, we consider Einstein gravity described by the Lagrangian of a Ricci scalar \(R\). Then, the total action is given by [68] \[\mathcal{S}=\int{\rm d}^{4}x\sqrt{-g}\left[\frac{M_{\rm Pl}^{2}}{2}R+X-V_{0}e^ {-\lambda\phi/M_{\rm Pl}}-\left(e^{Q\phi/M_{\rm Pl}}-1\right)\rho_{c}+\beta Z^ {2}\right]+\mathcal{S}_{m}\,, \tag{2}\] where \(g\) is a determinant of the metric tensor \(g_{\mu\nu}\), \(\mathcal{S}_{m}\) is the matter action containing the contributions of CDM, baryons, and radiation with the energy densities \(\rho_{I}\), EOSs \(w_{I}\), and squared sound speeds \(c_{I}\), which are labeled by \(I=c,b,r\) respectively. We assume that neither baryons nor radiation are coupled to the scalar field. The action \(\mathcal{S}_{m}\) of perfect fluids can be expressed as a form of the Schutz-Sorkin action [96; 97; 98] \[\mathcal{S}_{m}=-\sum_{I=c,b,r}\int\mathrm{d}^{4}x\left[\sqrt{-g}\,\rho_{I}(n_ {I})+J_{I}^{\mu}\partial_{\mu}\ell_{I}\right]\,, \tag{3}\] where \(\rho_{I}\) depends on the number density \(n_{I}\) of each fluid. The current vector field \(J_{I}^{\mu}\) is related to \(n_{I}\) as \(n_{I}=\sqrt{g_{\mu\nu}J_{I}^{\mu}J_{I}^{\nu}/g}\), with \(\ell_{I}\) being the Lagrange multiplier. The fluid four velocity is given by \[u_{I}^{\mu}=\frac{J_{I}^{\mu}}{n_{I}\sqrt{-g}}\,, \tag{4}\] which satisfies the normalization \(u_{I}^{\mu}u_{I\mu}=-1\). Varying the action (2) with respect to \(\ell_{I}\), it follows that \(\partial_{\mu}J_{I}^{\mu}=0\). In terms of the four velocity, this current conservation translates to \[u_{I}^{\mu}\partial_{\mu}\rho_{I}+\left(\rho_{I}+P_{I}\right) \nabla_{\mu}u_{I}^{\mu}=0\,, \tag{5}\] where \(P_{I}=n_{I}\rho_{I,n}-\rho_{I}\) is the pressure of each fluid. We discuss the cosmological dynamics on the spatially-flat Friedmann-Lemaitre-Robertson-Walker (FLRW) background given by the line element \[\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a^{2}(t)\delta_{ij}\mathrm{d}x^{i}\mathrm{d}x ^{j}\,, \tag{6}\] where \(a(t)\) is the time-dependent scale factor. On this background we have \(u_{I}^{\mu}=(1,0,0,0)\) and \(\nabla_{\mu}u_{I}^{\mu}=3H\), where \(H=\dot{a}/a\) is the expansion rate of the Universe and a dot denotes the derivative with respect to the cosmic time \(t\). From Eq. (5), we have \[\dot{\rho}_{I}+3H\left(\rho_{I}+P_{I}\right)=0\,, \tag{7}\] which holds for each \(I=c,b,r\). We consider the cosmological dynamics after the CDM and baryons started to behave as non-relativistic particles. At this epoch, we have \(w_{c}=0\), \(w_{b}=0\), \(c_{c}^{2}=0\), and \(c_{b}^{2}=0\). The radiation has a usual relativistic EOS \(w_{r}=1/3\) with \(c_{r}^{2}=1/3\). 
The gravitational field equations of motion are given by \[3M_{\rm pl}^{2}H^{2}=\rho_{\phi}+e^{Q\phi/M_{\rm Pl}}\rho_{c}+ \rho_{b}+\rho_{r}\,, \tag{8}\] \[M_{\rm pl}^{2}\left(2\dot{H}+3H^{2}\right)=-P_{\phi}-\frac{1}{3} \rho_{r}\,, \tag{9}\] where \(\rho_{\phi}\) and \(P_{\phi}\) are the scalar-field density and pressure defined, respectively, by \[\rho_{\phi}=\frac{1}{2}q_{s}\dot{\phi}^{2}+V_{0}e^{-\lambda\phi/M_{\rm Pl}}\,, \qquad P_{\phi}=\frac{1}{2}q_{s}\dot{\phi}^{2}-V_{0}e^{-\lambda\phi/M_{\rm Pl }}\,, \tag{10}\] with \[q_{s}\equiv 1+2\beta\,. \tag{11}\] We require that \(q_{s}>0\) to have a positive kinetic term in \(\rho_{\phi}\). The scalar-field equation can be expressed in the form \[\dot{\rho}_{\phi}+3H\left(\rho_{\phi}+P_{\phi}\right)=-\frac{Q\dot{\phi}}{M_{ \rm Pl}}\hat{\rho}_{c}\,, \tag{12}\] where \[\hat{\rho}_{c}\equiv e^{Q\phi/M_{\rm Pl}}\rho_{c}\,. \tag{13}\] Note that \(\hat{\rho}_{c}\) is the CDM density containing the effect of an energy transfer, and the energy flows from CDM to \(\phi\) if \(\dot{\phi}>0\) with \(Q<0\). From Eq. (7), CDM obeys the continuity equation \(\dot{\rho}_{c}+3H(\rho_{c}+P_{c})=0\). In terms of \(\hat{\rho}_{c}\), this equation can be expressed as \[\dot{\hat{\rho}}_{c}+3H\hat{\rho}_{c}=+\frac{Q\dot{\phi}}{M_{\rm Pl}}\hat{\rho }_{c}\,. \tag{14}\] From Eqs. (2.12) and (2.14), it is clear that there is the energy transfer between the scalar field and CDM, but the momentum exchange between DE and DM does not occur at the background level. The effect of the coupling \(\beta\) appears only as the modification to the coefficient of \(\dot{\phi}^{2}\). To study the background cosmological dynamics, it is convenient to introduce the following dimensionless variables \[x_{1}=\frac{\dot{\phi}}{\sqrt{6}M_{\rm Pl}H}\,,\qquad x_{2}=\sqrt{\frac{V_{0}}{ 3}}\frac{e^{-\lambda\phi/(2M_{\rm Pl})}}{M_{\rm Pl}H}\,, \tag{2.15}\] and \[\Omega_{\phi}=q_{s}x_{1}^{2}+x_{2}^{2}\,,\qquad\Omega_{c}=\frac{e^{Q\phi/M_{\rm pl }}\rho_{c}}{3M_{\rm Pl}^{2}H^{2}}\,,\qquad\Omega_{b}=\frac{\rho_{b}}{3M_{\rm pl }^{2}H^{2}}\,,\qquad\Omega_{r}=\frac{\rho_{r}}{3M_{\rm pl}^{2}H^{2}}\,. \tag{2.16}\] From Eq. (2.8), the density parameters are subject to the constraint \[\Omega_{c}=1-\Omega_{\phi}-\Omega_{b}-\Omega_{r}\,. \tag{2.17}\] The variables \(x_{1}\), \(x_{2}\), \(\Omega_{b}\), and \(\Omega_{r}\) obey the differential equations \[\frac{{\rm d}x_{1}}{{\rm d}N} = \frac{1}{2}x_{1}\left(6q_{s}x_{1}^{2}-6+3\Omega_{c}+3\Omega_{b}+ 4\Omega_{r}\right)+\frac{\sqrt{6}}{2q_{s}}\left(\lambda x_{2}^{2}-Q\Omega_{c} \right)\,, \tag{2.18}\] \[\frac{{\rm d}x_{2}}{{\rm d}N} = \frac{1}{2}x_{2}\left(6q_{s}x_{1}^{2}-\sqrt{6}\lambda x_{1}+3 \Omega_{c}+3\Omega_{b}+4\Omega_{r}\right)\,,\] (2.19) \[\frac{{\rm d}\Omega_{b}}{{\rm d}N} = \Omega_{b}\left(6q_{s}x_{1}^{2}-3+3\Omega_{c}+3\Omega_{b}+4 \Omega_{r}\right)\,,\] (2.20) \[\frac{{\rm d}\Omega_{r}}{{\rm d}N} = \Omega_{r}\left(6q_{s}x_{1}^{2}-4+3\Omega_{c}+3\Omega_{b}+4 \Omega_{r}\right)\,, \tag{2.21}\] where \(N=\ln a\). The scalar-field EOS \(w_{\phi}=P_{\phi}/\rho_{\phi}\) and effective EOS \(w_{\rm eff}=-1-2\dot{H}/(3H^{2})\) are \[w_{\phi}=\frac{q_{s}x_{1}^{2}-x_{2}^{2}}{q_{s}x_{1}^{2}+x_{2}^{2}}\,,\qquad w_ {\rm eff}=-1+2q_{s}x_{1}^{2}+\Omega_{c}+\Omega_{b}+\frac{4}{3}\Omega_{r}\,. 
\tag{2.22}\] The fixed points with constant values of \(x_{1}\), \(x_{2}\), \(\Omega_{b}\), and \(\Omega_{r}\) relevant to the radiation, matter, and dark-energy dominated epochs are given, respectively, by * Radiation point (A) \[x_{1}=0\,,\quad x_{2}=0\,,\quad\Omega_{b}=0\,,\quad\Omega_{r}=1\,,\quad\Omega_ {\phi}=0\,,\quad w_{\rm eff}=\frac{1}{3}\,.\] (2.23) * \(\phi\)MDE point (B) \[x_{1}=-\frac{\sqrt{6}Q}{3q_{s}}\,,\quad x_{2}=0\,,\quad\Omega_{b}=0\,,\quad \Omega_{r}=0\,,\quad\Omega_{\phi}=w_{\rm eff}=\frac{2Q^{2}}{3q_{s}}\,,\quad w_ {\phi}=1\,.\] (2.24) * Accelerated point (C) \[x_{1}=\frac{\lambda}{\sqrt{6}q_{s}}\,,\quad x_{2}=\sqrt{1-\frac{\lambda^{2}}{ 6q_{s}}}\,,\quad\Omega_{b}=0\,,\quad\Omega_{r}=0\,,\quad\Omega_{\phi}=1\,, \quad w_{\phi}=w_{\rm eff}=-1+\frac{\lambda^{2}}{3q_{s}}\,.\] (2.25) The coupling \(Q\) modifies the standard matter era through the nonvanishing values of \(\Omega_{\phi}\) and \(w_{\rm eff}\). To avoid the dominance of the scalar-field density over the CDM and baryon densities during the \(\phi\)MDE, we require that \(\Omega_{\phi}\ll 1\), i.e., \[Q^{2}\ll\frac{3}{2}(1+2\beta)\,. \tag{2.26}\] To have the epoch of late-time cosmic acceleration driven by point (C), we need the condition \(w_{\rm eff}<-1/3\), i.e., \[\lambda^{2}<2(1+2\beta)\,. \tag{2.27}\] Under this condition, we can show that point (C) is stable against the homogeneous perturbation if [68] \[\lambda(\lambda+Q)<3(1+2\beta)\,. \tag{2.28}\] Provided that the conditions (2.26)-(2.28) hold, the cosmological sequence of fixed points (A) \(\rightarrow\) (B) \(\rightarrow\) (C) can be realized. We refer the reader to Ref. [68] for the numerically integrated background solution. Taking the limits \(Q\to 0\), \(\beta\to 0\), and \(\lambda\to 0\), we recover the background evolution in the \(\Lambda\)CDM model. ## III Perturbation equations of motion In Ref. [68], the scalar perturbation equations of motion were derived without fixing particular gauges. The perturbed line element containing four scalar perturbations \(\alpha\), \(\chi\), \(\zeta\), and \(E\) on the spatially-flat FLRW background is given by \[\mathrm{d}s^{2}=-(1+2\alpha)\mathrm{d}t^{2}+2\partial_{i}\chi\mathrm{d}t \mathrm{d}x^{i}+a^{2}(t)\left[(1+2\zeta)\delta_{ij}+2\partial_{i}\partial_{j}E \right]\mathrm{d}x^{i}\mathrm{d}x^{j}\,. \tag{10}\] Tensor perturbations propagate in the same manner as in the \(\Lambda\)CDM model, so we do not consider them in the following. The scalar field \(\phi\) is decomposed into the background part \(\bar{\phi}(t)\) and the perturbed part \(\delta\phi\), as \[\phi=\bar{\phi}(t)+\delta\phi(t,x^{i})\,, \tag{11}\] where we omit the bar from background quantities in the following. The spatial components of four velocities \(u_{I}=J_{Ii}/(n_{I}\sqrt{-g})\) in perfect fluids are related to the scalar velocity potentials \(v_{I}\), as \[u_{Ii}=-\partial_{i}v_{I}\,. \tag{12}\] The fluid density is given by \(\rho_{I}=\rho_{I}(t)+\delta\rho_{I}(t,x^{i})\), where the perturbed part is [69; 66; 49] \[\delta\rho_{I}=\frac{\rho_{I,n_{I}}}{a^{3}}\left[\delta J_{I}-\mathcal{N}_{I} \left(3\zeta+\partial^{2}E\right)\right]\,, \tag{13}\] where \(\rho_{I,n_{I}}=\partial\rho_{I}/\partial n_{I}\), and \(\mathcal{N}_{I}=n_{I}a^{3}\) is the background particle number of each fluid (which is conserved). 
We can construct the following gauge-invariant combinations \[\delta\phi_{\mathrm{N}}=\delta\phi+\dot{\phi}\left(\chi-a^{2} \dot{E}\right)\,,\qquad\delta\rho_{I\mathrm{N}}=\delta\rho_{I}+\dot{\rho}_{I} \left(\chi-a^{2}\dot{E}\right)\,,\qquad v_{I\mathrm{N}}=v_{I}+\chi-a^{2}\dot{E }\,,\] \[\Psi=\alpha+\frac{\mathrm{d}}{\mathrm{d}t}\left(\chi-a^{2}\dot{E }\right)\,,\qquad\Phi=\zeta+H\left(\chi-a^{2}\dot{E}\right)\,. \tag{14}\] We also introduce the dimensionless variables \[\delta_{I\mathrm{N}}=\frac{\delta\rho_{I\mathrm{N}}}{\rho_{I}}\,,\qquad \delta\varphi_{\mathrm{N}}=\frac{H}{\dot{\phi}}\delta\phi_{\mathrm{N}}\,, \qquad V_{I\mathrm{N}}=Hv_{I\mathrm{N}}\,,\qquad\mathcal{K}=\frac{k}{aH}\,, \tag{15}\] where \(k\) is a comoving wavenumber. In Fourier space, the linear perturbation equations of motion are given by [68] \[6q_{s}x_{1}^{2}\frac{\mathrm{d}\delta\varphi_{\mathrm{N}}}{ \mathrm{d}N}-6\frac{\mathrm{d}\Phi}{\mathrm{d}N}+6\left(1-q_{s}x_{1}^{2} \right)\left(\xi\delta\varphi_{\mathrm{N}}+\Psi\right)-2\mathcal{K}^{2}\Phi+3 \left(3\Omega_{c}+3\Omega_{b}+4\Omega_{r}\right)\delta\varphi_{\mathrm{N}}\] \[+3\left(\Omega_{c}\delta_{c\mathrm{N}}+\Omega_{b}\delta_{b\mathrm{ N}}+\Omega_{r}\delta_{r\mathrm{N}}\right)=0\,, \tag{16}\] \[\frac{\mathrm{d}\Phi}{\mathrm{d}N}-\Psi-\xi\delta\varphi_{ \mathrm{N}}+\frac{3}{2}\left(\Omega_{c}+4\beta x_{1}^{2}\right)\left(V_{c \mathrm{N}}-\delta\varphi_{\mathrm{N}}\right)+\frac{3}{2}\Omega_{b}\left(V_{b \mathrm{N}}-\delta\varphi_{\mathrm{N}}\right)+2\Omega_{r}\left(V_{r\mathrm{N}} -\delta\varphi_{\mathrm{N}}\right)=0\,,\] (17) \[\frac{\mathrm{d}\delta_{I\mathrm{N}}}{\mathrm{d}N}+3\left(c_{I}^{2 }-w_{I}\right)\delta_{I\mathrm{N}}+\left(1+w_{I}\right)\left(\mathcal{K}^{2}V_ {I\mathrm{N}}+3\frac{\mathrm{d}\Phi}{\mathrm{d}N}\right)=0\,,\qquad(\text{for }I=c,b,r),\] (18) \[\left(\Omega_{c}+4\beta x_{1}^{2}\right)\frac{\mathrm{d}V_{c \mathrm{N}}}{\mathrm{d}N}-\left[\xi\left(\Omega_{c}+4\beta x_{1}^{2}\right)-4 \beta x_{1}^{2}(3+2\epsilon_{\phi})-\sqrt{6}Qx_{1}\Omega_{c}\right]V_{c \mathrm{N}}-\Omega_{c}\Psi\] \[-4\beta x_{1}^{2}\frac{\mathrm{d}\delta\varphi_{\mathrm{N}}}{ \mathrm{d}N}+\left[4\beta x_{1}(\xi-3-2\epsilon_{\phi})-\sqrt{6}Q\Omega_{c} \right]x_{1}\delta\varphi_{\mathrm{N}}=0\,,\] (19) \[\frac{\mathrm{d}V_{I\mathrm{N}}}{\mathrm{d}N}-\left(\xi+3c_{I}^{2 }\right)V_{I\mathrm{N}}-\Psi-\frac{c_{I}^{2}}{1+w_{I}}\delta_{I\mathrm{N}}=0\,, \qquad(\text{for }I=b,r),\] (20) \[\frac{\mathrm{d}^{2}\varphi_{\mathrm{N}}}{\mathrm{d}N^{2}}+\left(3 -\xi+2\epsilon_{\phi}\right)\delta\frac{\mathrm{d}\varphi_{\mathrm{N}}}{ \mathrm{d}N}+\left[\hat{c}_{s}^{2}\mathcal{K}^{2}-\frac{\mathrm{d}\xi}{ \mathrm{d}N}-3\xi+\frac{\mathrm{d}\epsilon_{\phi}}{\mathrm{d}N}+\epsilon_{\phi} ^{2}+(3-\xi)\epsilon_{\phi}+\frac{3}{q_{s}}\left(\lambda^{2}x_{2}^{2}+Q^{2} \Omega_{c}\right)\right]\delta\varphi_{\mathrm{N}}\] \[+3\hat{c}_{s}^{2}\frac{\mathrm{d}\Phi}{\mathrm{d}N}-\frac{ \mathrm{d}\Psi}{\mathrm{d}N}-2\left(3+\epsilon_{\phi}\right)\Psi-\frac{2\beta}{q _{s}}\frac{\mathrm{d}\delta_{c\mathrm{N}}}{\mathrm{d}N}+\frac{\sqrt{6}Q\Omega_{c }}{2q_{s}x_{1}}\delta_{c\mathrm{N}}=0\,,\] (21) \[\Psi=-\Phi\,, \tag{22}\] where \[\xi=-3q_{s}x_{1}^{2}-\frac{3}{2}\Omega_{c}-\frac{3}{2}\Omega_{b}-2\Omega_{r}\,, \qquad\epsilon_{\phi}=-3+\frac{\sqrt{6}}{2q_{s}x_{1}}\left(\lambda x_{2}^{2}-Q \Omega_{c}\right)\,,\qquad\hat{c}_{s}^{2}=\frac{1}{q_{s}}\,. \tag{23}\] We can choose any convenient gauges at hand in the perturbation Eqs. (3.7)-(3.13). 
For example, the Newtonian gauge corresponds to \(\chi=0=E\), in which case Eqs. (3.7)-(3.13) can be directly solved for the gravitational potentials \(\Psi\), \(\Phi\) and the scalar-field perturbation \(\delta\varphi_{\rm N}\). For the unitary gauge \(\delta\phi=0=E\), we can introduce the curvature perturbation \({\cal R}=\Phi-\delta\varphi_{\rm N}\) and the CDM density perturbation \(\delta\rho_{\rm cu}=\delta\rho_{c\rm N}-\dot{\rho}_{c}\delta\phi_{\rm N}/\dot{\phi}\) as two propagating degrees of freedom. These dynamical perturbations have neither ghost nor Laplacian instabilities under the following conditions [69; 49; 66] \[q_{s} \equiv 1+2\beta>0\,, \tag{3.15}\] \[q_{c} \equiv 1+\frac{4\beta x_{1}^{2}}{\Omega_{c}}>0\,,\] (3.16) \[c_{s}^{2} \equiv \dot{c}_{s}^{2}+\frac{8\beta^{2}x_{1}^{2}}{q_{s}(4\beta x_{1}^{2 }+\Omega_{c})}>0\,. \tag{3.17}\] Since the CDM effective sound speed vanishes for \(c_{c}^{2}\to+0\), it does not provide an additional Laplacian stability condition. The conditions (3.15)-(3.17) are independent of the gauge choices. The evolution of perturbations after the onset of the \(\phi\)MDE can be analytically estimated for the modes deep inside the sound horizon. Under the quasi-static approximation, the dominant terms in Eqs. (3.7)-(3.13) are those containing \({\cal K}^{2}\), \(\delta_{c\rm N}\), \({\rm d}\delta_{c\rm N}/{\rm d}N\), and \(\delta_{b\rm N}\). From Eqs. (3.7), (3.12), and (3.13), it follows that \[\Psi=-\Phi\simeq-\frac{3}{2{\cal K}^{2}}\left(\Omega_{c}\delta_{c\rm N}+\Omega _{b}\delta_{b\rm N}\right)\,,\qquad\delta\varphi_{\rm N}\simeq\frac{1}{q_{s} \dot{c}_{s}^{2}{\cal K}^{2}}\left(2\beta\frac{{\rm d}\delta_{c\rm N}}{{\rm d}N }-\frac{\sqrt{6}Q\Omega_{c}}{2x_{1}}\delta_{c\rm N}\right)\,. \tag{3.18}\] We differentiate Eq. (3.9) with respect to \(N\) and then use Eqs. (3.10) and (3.11) for CDM and baryons, respectively. On using Eq. (3.18) together with the quasi-static approximation, we obtain the second-order differential equations of CDM and baryons, as [68] \[\frac{{\rm d}^{2}\delta_{c\rm N}}{{\rm d}N^{2}}+\nu\frac{{\rm d} \delta_{c\rm N}}{{\rm d}N}-\frac{3}{2G}\left(G_{cc}\Omega_{c}\delta_{c\rm N}+G _{cb}\Omega_{b}\delta_{b\rm N}\right)\simeq 0\,, \tag{3.19}\] \[\frac{{\rm d}^{2}\delta_{b\rm N}}{{\rm d}N^{2}}+\left(2+\xi\right) \frac{{\rm d}\delta_{b\rm N}}{{\rm d}N}-\frac{3}{2G}\left(G_{bc}\Omega_{c} \delta_{c\rm N}+G_{bb}\Omega_{b}\delta_{b\rm N}\right)\simeq 0\,, \tag{3.20}\] where \[G_{cc}=\frac{1+r_{1}}{1+r_{2}}G\,,\qquad G_{cb}=\frac{1}{1+r_{2}}G\,,\qquad G _{bc}=G_{bb}=G\,, \tag{3.21}\] with \[r_{1}=\frac{2Q[3Q\Omega_{c}+2\sqrt{6}\beta x_{1}(2+\epsilon_{\phi}+\sqrt{6}Qx _{1})]}{3\Omega_{c}}\,,\qquad r_{2}=\frac{4\beta(1+2\beta)x_{1}^{2}}{\Omega_{ c}}\,, \tag{3.22}\] and \[\nu=\frac{4\beta(1+2\beta)(5+\xi+2\epsilon_{\phi})x_{1}^{2}+(2+\xi+\sqrt{6}Qx _{1})\Omega_{c}}{4\beta(1+2\beta)x_{1}^{2}+\Omega_{c}}\,. \tag{3.23}\] Since \(G_{bc}\) and \(G_{bb}\) are equivalent to \(G\), the baryon perturbation is not affected by the DE-DM couplings. On the other hand, \(G_{cc}\) and \(G_{cb}\) are different from \(G\) for nonvanishing values of \(Q\) and \(\beta\). During the \(\phi\)MDE, we obtain \[G_{cc}=\left(1+\frac{2Q^{2}}{1+2\beta}\right)G\,,\qquad G_{cb}=\left[1-\frac{ 8\beta Q^{2}}{3-2Q^{2}+2(3+4Q^{2})\beta}\right]G\,. \tag{3.24}\] Under the no-ghost condition (3.15), we have \(G_{cc}>G\). So long as the coupling \(Q\) is in the range \(Q^{2}\ll 1\), \(G_{cb}\) is smaller than \(G\). 
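To make these expressions concrete, one can integrate the background system (2.18)-(2.21) and evaluate \(G_{cc}/G\) and \(G_{cb}/G\) from Eqs. (3.21) and (3.22) along the trajectory. The following is a minimal sketch for illustration only (it is not the analysis code used for the results in Sec. V, and the initial conditions are illustrative rather than fitted to data), with the couplings \(Q=-0.04\), \(\beta=0.4\), \(\lambda=0.5\) used later for Fig. 1. It reproduces the \(\phi\)MDE value of \(G_{cc}\) in Eq. (3.24) and shows that \(G_{cc}\) drops below \(G\) once the scalar field starts to dominate.

```python
# Illustrative sketch: integrate the dimensionless background system (2.18)-(2.21)
# and evaluate the quasi-static CDM couplings G_cc/G, G_cb/G of Eqs. (3.21)-(3.22).
import numpy as np
from scipy.integrate import solve_ivp

Q, beta, lam = -0.04, 0.4, 0.5      # couplings used for Fig. 1
qs = 1.0 + 2.0 * beta               # no-ghost parameter q_s = 1 + 2*beta

def rhs(N, y):
    x1, x2, Ob, Or = y
    Oc = 1.0 - (qs * x1**2 + x2**2) - Ob - Or             # constraint (2.17)
    S = 6.0 * qs * x1**2 + 3.0 * Oc + 3.0 * Ob + 4.0 * Or
    dx1 = 0.5 * x1 * (S - 6.0) + np.sqrt(6.0) / (2.0 * qs) * (lam * x2**2 - Q * Oc)
    dx2 = 0.5 * x2 * (S - np.sqrt(6.0) * lam * x1)
    return [dx1, dx2, Ob * (S - 3.0), Or * (S - 4.0)]

# illustrative start deep in the radiation era (N = ln a, arbitrary offset)
y0 = [1e-8, 1e-12, 1e-4, 0.999]
N = np.linspace(0.0, 22.0, 2201)
sol = solve_ivp(rhs, (N[0], N[-1]), y0, t_eval=N, rtol=1e-9, atol=1e-14)

x1, x2, Ob, Or = sol.y
Ophi = qs * x1**2 + x2**2
Oc = 1.0 - Ophi - Ob - Or

# epsilon_phi = -3 + sqrt(6)/(2 q_s x1) * (lam x2^2 - Q Omega_c), then r_1, r_2 of (3.22)
eps = -3.0 + np.sqrt(6.0) / (2.0 * qs * x1) * (lam * x2**2 - Q * Oc)
r1 = 2.0 * Q * (3.0 * Q * Oc + 2.0 * np.sqrt(6.0) * beta * x1
                * (2.0 + eps + np.sqrt(6.0) * Q * x1)) / (3.0 * Oc)
r2 = 4.0 * beta * (1.0 + 2.0 * beta) * x1**2 / Oc
Gcc, Gcb = (1.0 + r1) / (1.0 + r2), 1.0 / (1.0 + r2)      # Eq. (3.21)

i_mde = np.argmin(np.abs(Or - 0.01))    # shortly after matter-radiation equality
i_acc = np.argmin(np.abs(Ophi - 0.7))   # an epoch with Omega_phi ~ 0.7
print("phiMDE: Omega_phi =", Ophi[i_mde], " vs 2Q^2/[3(1+2beta)] =", 2 * Q**2 / (3 * qs))
print("phiMDE: G_cc/G    =", Gcc[i_mde], " vs 1 + 2Q^2/(1+2beta) =", 1 + 2 * Q**2 / qs)
print("Omega_phi ~ 0.7: G_cc/G =", Gcc[i_acc], " (< 1: growth of delta_c suppressed)")
```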
After the end of the \(\phi\)MDE, we do not have a simple formula for \(G_{cc}\). However, assuming that \(|\beta|\ll 1\) and \(|Q|\ll 1\), we find \[G_{cc}\simeq\left(1+2Q^{2}-\frac{4\beta x_{1}^{2}}{\Omega_{c}}\right)G\,. \tag{3.25}\] Since \(\Omega_{c}\) decreases and \(x_{1}^{2}\) increases at low redshifts, the third term in the parenthesis of Eq. (3.25) dominates over \(2Q^{2}\) to realize the value of \(G_{cc}\) smaller than \(G\). Indeed, the numerical simulation in Ref. [68] shows that the growth rate of \(\delta_{c\rm N}\) can be less than the value for \(\beta=0\) even in the presence of the coupling \(Q\). This suppressed growth of \(\delta_{c\rm N}\) at low redshifts should allow the possibility of reducing the \(\sigma_{8}\) tension. Methodology We implement our model into the public code CAMB[99] and simulate the evolution of density perturbations with the background equations to compute the CMB and matter power spectra. In this section, we rewrite the background and perturbation equations of motion in the language of the CAMB code. For this purpose, we use the conformal time defined by \(\tau=\int a^{-1}\mathrm{d}t\). The background Eqs. (7), (8), (9), and (12) can be expressed as \[\rho^{\prime}_{I}+3\mathcal{H}\left(\rho_{I}+P_{I}\right)=0\,, \qquad\text{(for \ $I=c,b,r$)}\,, \tag{14}\] \[3M_{\mathrm{Pl}}^{2}\mathcal{H}^{2}=\frac{1}{2}q_{s}\phi^{\prime 2 }+a^{2}\left(V_{0}e^{-\lambda\phi/M_{\mathrm{Pl}}}+e^{Q\phi/M_{\mathrm{Pl}}} \rho_{c}+\rho_{b}+\rho_{r}\right)\,,\] (15) \[2M_{\mathrm{Pl}}^{2}\left(\mathcal{H}^{\prime}-\mathcal{H}^{2} \right)=-q_{s}\phi^{\prime 2}-a^{2}\left(e^{Q\phi/M_{\mathrm{Pl}}}\rho_{c}+ \rho_{b}+\frac{4}{3}\rho_{r}\right)\,,\] (16) \[q_{s}\left(\phi^{\prime\prime}+2\mathcal{H}\phi^{\prime}\right) +\frac{a^{2}}{M_{\mathrm{Pl}}}\left(Q\rho_{c}e^{Q\phi/M_{\mathrm{Pl}}}- \lambda V_{0}e^{-\lambda\phi/M_{\mathrm{Pl}}}\right)=0\,, \tag{17}\] where a prime represents the derivative with respect to \(\tau\), and we have introduced the conformal Hubble parameter \(\mathcal{H}\) as \[\mathcal{H}\equiv aH=\dot{a}=\frac{a^{\prime}}{a}\,. \tag{18}\] For perturbations, we adopt the synchronous gauge conditions \[\alpha=0\,,\qquad\chi=0\,. \tag{19}\] Following Ma and Bertschinger [100], we use the notations \[\zeta=-\eta\,,\qquad E=-\frac{h+6\eta}{2k^{2}}\,,\qquad\theta_{I}=\frac{k^{2}} {a}v_{I}\,. \tag{20}\] Then, some of the gauge-invariant variables defined in Eqs. (3) and (6) reduce to \[\Psi=\frac{1}{2k^{2}}\left(h^{\prime\prime}+\mathcal{H}h^{\prime }+6\eta^{\prime\prime}+6\mathcal{H}\eta^{\prime}\right)\,,\qquad\Phi=-\eta+ \frac{\mathcal{H}}{2k^{2}}\left(h^{\prime}+6\eta^{\prime}\right)\,,\] \[\delta_{I\mathrm{N}}=\delta_{I}-\frac{3\mathcal{H}}{2k^{2}}(1+w_ {I})(h^{\prime}+6\eta^{\prime})\,,\qquad\delta\varphi_{I\mathrm{N}}=\mathcal{ H}\left(\frac{\delta\phi}{\phi^{\prime}}+\frac{h^{\prime}+6\eta^{\prime}}{2k^{2}} \right)\,,\qquad V_{I\mathrm{N}}=\frac{\mathcal{H}}{k^{2}}\left(\theta_{I}+ \frac{1}{2}h^{\prime}+3\eta^{\prime}\right)\,, \tag{21}\] where \(\delta_{I}\equiv\delta\rho_{I}/\rho_{I}\) and \(w_{I}\equiv P_{I}/\rho_{I}\). In the presence of perfect fluids of CDM (\(w_{c}=0=c_{c}^{2}\)), baryons (\(w_{b}=0=c_{b}^{2}\)), and radiation (\(w_{r}=1/3=c_{r}^{2}\)), we can express the perturbation Eqs. 
(3)-(3.13) in the forms \[k^{2}\eta-\frac{\mathcal{H}}{2}h^{\prime}+\frac{a^{2}}{2M_{ \mathrm{Pl}}^{2}}\left[\frac{q_{s}}{a^{2}}\phi^{\prime}\delta\phi^{\prime}+ \frac{1}{M_{\mathrm{Pl}}}\left(Q\rho_{c}e^{Q\phi/M_{\mathrm{Pl}}}-\lambda V_{0 }e^{-\lambda\phi/M_{\mathrm{Pl}}}\right)\delta\phi+e^{Q\phi/M_{\mathrm{Pl}}} \rho_{c}\delta_{c}+\rho_{b}\delta_{b}+\rho_{r}\delta_{r}\right]=0, \tag{22}\] \[k^{2}\eta^{\prime}-\frac{a^{2}}{2M_{\mathrm{Pl}}^{2}}\left[\frac{ k^{2}}{a^{2}}\phi^{\prime}\delta\phi+\left(\rho_{c}e^{Q\phi/M_{\mathrm{Pl}}}+ \frac{2\beta\phi^{\prime 2}}{a^{2}}\right)\theta_{c}+\rho_{b}\theta_{b}+\frac{4}{3}\rho_{r} \theta_{r}\right]=0,\] (23) \[\delta^{\prime}_{c}+\theta_{c}+\frac{1}{2}h^{\prime}=0,\] (24) \[\delta^{\prime}_{b}+\theta_{b}+\frac{1}{2}h^{\prime}=0,\] (25) \[\delta^{\prime}_{r}+\frac{4}{3}\theta_{r}+\frac{2}{3}h^{\prime}=0,\] (26) \[\theta^{\prime}_{c}+\mathcal{H}\theta_{c}-\frac{1}{q_{s}q_{c} \phi^{\prime 2}M_{\mathrm{Pl}}^{2}}\bigg{[}q_{s}(q_{c}-1)\phi^{\prime}M_{ \mathrm{Pl}}k^{2}\delta\phi^{\prime}+\left\{Q\phi^{\prime 2}+a^{2}(q_{c}-1)\lambda V_{0}e^{- \lambda\phi/M_{\mathrm{Pl}}}\right\}k^{2}\delta\phi\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \left\{Q(q_{s}-2)\phi^{\prime 3}+3q_{s}(q_{c}-1)\mathcal{H}\phi^{\prime 2}M_{\mathrm{Pl}}-2a^{2}(q_{c}-1)\phi^{ \prime}\lambda V_{0}e^{-\lambda\phi/M_{\mathrm{Pl}}}\right\}\theta_{c} \bigg{]}=0,\] (27) \[\theta^{\prime}_{b}+\mathcal{H}\theta_{b}=0\,,\] (28) \[\theta^{\prime}_{r}-\frac{k^{2}}{4}\delta_{r}=0\,, \tag{29}\] \[\delta\phi^{\prime\prime}+2{\cal H}\delta\phi^{\prime}+\frac{k^{2}M_{ \rm Pl}^{2}+a^{2}(\lambda^{2}V_{0}e^{-\lambda\phi/M_{\rm Pl}}+Q^{2}\rho_{e}e^{Q \phi/M_{\rm Pl}})}{q_{s}M_{\rm Pl}^{2}}\delta\phi+\frac{\phi^{\prime}}{2}h^{ \prime}+\frac{2\beta}{q_{s}}\phi^{\prime}\theta_{c}+\frac{a^{2}Q\rho_{e}e^{Q \phi/M_{\rm Pl}}}{q_{s}M_{\rm Pl}}\delta_{c}=0, \tag{4.17}\] \[h^{\prime\prime}+6\eta^{\prime\prime}+2{\cal H}(h^{\prime}+6\eta ^{\prime})-2\eta k^{2}=0\,, \tag{4.18}\] where \(q_{s}\) and \(q_{c}\) are defined by Eqs. (3.15) and (3.16), respectively. The perturbation equations of motion for baryons and radiation are the same as those in \(\Lambda\)CDM model. Thus we modify the equations for CDM and gravitational field equations in the CAMB code. We also take into account the background and perturbation equations of motion for the scalar field, i.e., Eqs. (4.4) and (4.17). Note that the CDM velocity is usually set to zero all the time as a result of the gauge fixing condition in CAMB based on the synchronous gauge. In the models considered here, CDM has non-zero velocity due to the coupling to \(\phi\) in the late Universe. However, we will set \(\theta_{c}=0\) as the initial condition to eliminate the gauge degree of freedom, assuming that CDM streams freely in the early Universe (i.e., we neglect the interaction between DE and CDM) as in the standard scenario. In the background Eqs. (4.2)-(4.4), the coupling \(\beta\) appears through the positive no-ghost parameter \(q_{s}=1+2\beta\). In the limit \(q_{s}\to\infty\), Eq. (4.4) shows that \(\phi\) approaches a constant after the onset of the \(\phi\)MDE. This limit corresponds to the \(\Lambda\)CDM model with a constant potential energy. Since the parameter space for large values of \(q_{s}\) spreads widely, the MCMC chains tend to wander in such regions. This actually leads to the loss of information about the evolution of the scalar field itself. 
To avoid this, we introduce a set of new variables \(p_{s},\hat{\lambda},\hat{Q}\) defined by \[p_{s}\equiv q_{s}^{-1/2}=\frac{1}{\sqrt{1+2\beta}}\,,\qquad\hat{ \lambda}\equiv p_{s}\lambda\,,\qquad\hat{Q}\equiv p_{s}Q\,. \tag{4.19}\] As we discussed in Sec. III, the growth of matter perturbations is suppressed for positive values of \(\beta\). In the MCMC analysis, we will set the prior \[\beta\geq 0\,. \tag{4.20}\] In this case, the stability conditions (3.15)-(3.17) are automatically satisfied. Then, the parameter \(p_{s}\) is in the range \(0<p_{s}\leq 1\). For the parameter \(\lambda\), we choose the value \[\lambda>0\,, \tag{4.21}\] without loss of generality. In Eq. (4.4), we observe that, for \(Q>0\), the background scalar field can approach the instantaneous minima characterized by the condition \(Q\rho_{e}e^{Q\phi/M_{\rm Pl}}=\lambda V_{0}e^{-\lambda\phi/M_{\rm Pl}}\) even during the matter era. Since we would like to study the case in which the \(\phi\)MDE is present, we will focus on the coupling range \[Q\leq 0\,. \tag{4.22}\] The same prior was chosen in the MCMC analysis of Refs. [54; 55; 56]1 for the coupled DE-DM model with \(Q\neq 0\) and \(\beta=0\). Footnote 1: In these papers, the sign convention of \(Q\) is opposite to ours. To implement our model in the CAMB code, we use the unit \(M_{\rm Pl}=1\) and replace \(\phi\) and \(\delta\phi\) with the following new variables \[\phi\equiv p_{s}\hat{\phi}\,,\qquad\delta\phi\equiv p_{s}\delta \hat{\phi}\,. \tag{4.23}\] Then, the background scalar-field equation can be expressed as \[\hat{\phi}^{\prime\prime}+2{\cal H}\hat{\phi}^{\prime}+a^{2}\left( \hat{\rho}_{c,\hat{\phi}}+V_{,\hat{\phi}}\right)=0, \tag{4.24}\] where \(\hat{\rho}_{c}=\rho_{e}e^{\hat{Q}\hat{\phi}}\) and \(V_{,\hat{\phi}}={\rm d}V/{\rm d}\hat{\phi}\). The energy density and pressure of \(\hat{\phi}\) read \(\rho_{\phi}=\hat{\phi}^{\prime 2}/(2a^{2})+V_{0}e^{-\hat{\lambda}\hat{\phi}}\) and \(P_{\phi}=\hat{\phi}^{\prime 2}/(2a^{2})-V_{0}e^{-\hat{\lambda}\hat{\phi}}\), respectively. This means that, at the background level, the effect of the momentum transfer can be absorbed into the redefined canonical scalar field \(\hat{\phi}\). We note that \(\hat{\phi}\) mediates the energy with CDM through the term \(a^{2}\hat{\rho}_{c,\hat{\phi}}\) in Eq. (4.24). Using the variables and parameters defined above, the perturbation equations of motion for \(\theta_{c}\) and \(\delta\phi\) are now expressed as \[\theta_{c}^{\prime}+{\cal H}\theta_{c}-\frac{1-p_{s}^{2}}{a^{2} \hat{\rho}_{c}q_{c}}\left[k^{2}\hat{\phi}^{\prime}\delta\hat{\phi}^{\prime}-a ^{2}k^{2}\delta\hat{\phi}V_{,\hat{\phi}}+\left(3{\cal H}\hat{\phi}^{\prime}+2a^ {2}V_{,\hat{\phi}}\right)\hat{\phi}^{\prime}\theta_{c}\right]-\frac{\hat{Q}}{q _{c}}\left[k^{2}p_{s}^{2}\delta\hat{\phi}+(1-2p_{s}^{2})\hat{\phi}^{\prime} \theta_{c}\right]=0\,, \tag{4.25}\] \[\delta\hat{\phi}^{\prime\prime}+2{\cal H}\delta\hat{\phi}^{\prime}+\left[p_{s}^{2}k ^{2}+a^{2}\left(V_{,\phi\hat{\phi}}+\hat{\rho}_{c,\hat{\phi}\hat{\phi}}\right) \right]\delta\hat{\phi}+\left[k{\cal Z}+(1-p_{s}^{2})\theta_{c}\right]\hat{\phi} ^{\prime}+a^{2}\hat{\rho}_{c,\hat{\phi}}\,\delta_{c}=0\,, \tag{4.26}\] where \({\cal Z}\equiv h^{\prime}/(2k)\). We will also express the other perturbation equations of motion in terms of the new variables introduced above and numerically solve them with the background equations. In Fig. 
1, we plot the density parameters \(\Omega_{\phi}\), \(\Omega_{r}\), \(\Omega_{c}\), \(\Omega_{b}\) (left panel) and \(w_{\rm eff}\), \(w_{\phi}\) (right panel) for the model parameters \(Q=-0.04\), \(\lambda=0.5\), and \(\beta=0.4\). We observe that the solution temporally approaches the \(\phi\)MDE characterized by \(\Omega_{\phi}=w_{\rm eff}=2Q^{2}/[3(1+2\beta)]\), which is a distinguished feature compared to the \(\Lambda\)CDM model. The \(\phi\)MDE is followed by the epoch of cosmic acceleration (\(w_{\rm eff}<-1/3\)) driven by the fixed point (C). In left panel of Fig. 2, we show the CMB angular power spectra of temperature anisotropies for several different values of \(Q\) and \(\beta\), with \(\lambda=0.3\). Compared to the uncoupled quintessence, there are two main effects on CMB induced mostly by the coupling \(Q\). The first is the shift of acoustic peaks toward larger multipoles \(\ell\). The multiple \(\ell_{A}\) corresponding to the sound horizon \(r_{s*}\) at decoupling (redshift \(z_{*}\)) is given by \[\ell_{A}=\pi\frac{D_{A}(z_{*})}{r_{s*}}\,, \tag{4.27}\] where \[D_{A}(z_{*})=\int_{0}^{z_{*}}\frac{1}{H(z)}\mathrm{d}z \tag{4.28}\] is the comoving angular diameter distance, and \[r_{s*}=\frac{1}{\sqrt{3}}\int_{0}^{a_{*}}\frac{\mathrm{d}a}{\sqrt{1+R_{s}(a)} \,a^{2}H(a)}\,, \tag{4.29}\] with \(R_{s}(a)=(3\Omega_{b0}/4\Omega_{\gamma 0})a\) and \(a_{*}=(1+z_{*})^{-1}\)[101; 102]. Here, \(\Omega_{b0}\) and \(\Omega_{\gamma 0}\) are today's density parameters of baryons and photons, respectively. In our model, there is the \(\phi\)MDE in which the CDM density grows faster toward the higher redshift (\(\rho_{c}\propto(1+z)^{3+2Q^{2}/(1+2\beta)}\)) in comparison to the uncoupled case (\(Q=0\)). Moreover, the scalar-field density \(\rho_{\phi}\) scales in the same manner as \(\rho_{c}\) during the \(\phi\)MDE. These properties lead to the larger Hubble expansion rate before the decoupling epoch, so that the sound horizon (4.29) gets smaller in comparison to the uncoupled case. The coupling \(Q\) can increase the value of \(H(z)\) from the end of the \(\phi\)MDE toward the decoupling epoch \(z=z_{*}\), which results in the decrease of \(D_{A}(z_{*})\). However, for fixed \(H_{0}\), the increase of \(1/r_{s*}\) induced by the same coupling typically overwhelms the reduction of \(D_{A}(z_{*})\) in the estimation of \(\ell_{A}\) in Eq. (4.27). For the model parameters \(Q=0\) with \(\beta=0\) and \(\lambda=0.5\), we obtain the numerical values \(D_{A}(z_{*})=13.84\) Gpc and \(r_{s*}=144.40\) Mpc. If we change the coupling \(Q\) to \(-0.2\), the two distances change to \(D_{A}(z_{*})=12.95\) Gpc and \(r_{s*}=127.20\) Mpc, respectively. Clearly, the reduction of \(r_{s*}\) induced by the coupling \(Q\) is stronger than the decrease of \(D_{A}(z_{*})\), which leads to the increase of \(\ell_{A}\) from 301.17 (for \(Q=0\)) to 319.85 (for \(Q=-0.2\)). Hence the larger coupling \(|Q|\) leads to the shift of CMB acoustic peaks toward smaller scales. This effect tends to be significant especially for \(|Q|\gtrsim 0.1\). We note that the positive coupling \(\beta\) works to suppress the factor \(2Q^{2}/(1+2\beta)\) in the \((1+z)\)-dependent power of \(\rho_{c}\) during the \(\phi\)MDE. In comparison to the case \(\beta=0\), we need to choose larger values of \(|Q|\) to have the shift of acoustic peaks toward smaller scales. The second effect of the coupling \(Q\) on the CMB temperature spectrum is the suppressed amplitude of acoustic peaks. 
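Before turning to this second effect, we note that the peak shift quoted above can be checked directly from Eq. (4.27) with the stated distances; the short numerical check below (given only as an illustration) reproduces the quoted multipoles up to the rounding of the input distances.

```python
# Quick check of ell_A = pi * D_A(z_*) / r_s* using the distances quoted in the text
import numpy as np

for Q, DA_Gpc, rs_Mpc in [(0.0, 13.84, 144.40), (-0.2, 12.95, 127.20)]:
    ell_A = np.pi * DA_Gpc * 1.0e3 / rs_Mpc
    # prints ~301.1 and ~319.8, consistent with the quoted 301.17 and 319.85
    print(f"Q = {Q:+.1f}:  ell_A = {ell_A:.2f}")
```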
The existence of the \(\phi\)MDE gives rise to the larger CDM density \(\rho_{c}\) at decoupling, while the baryon density \(\rho_{b}\) is hardly affected. Then, the coupling \(Q\) gives rise to a smaller ratio \(\rho_{b}/\rho_{c}\) around \(z=z_{*}\). For \(Q=0\) with \(\beta=0\) and \(\lambda=0.5\), we obtain the numerical value \(\rho_{b}/\rho_{c}=0.186\), while, for \(Q=-0.2\) with the same values of \(\beta\) and \(\lambda\), this ratio decreases to \(\rho_{b}/\rho_{c}=0.116\). This is the main reason for the reduction of the height of CMB acoustic peaks seen in Fig. 2. We note that, in the MCMC analysis performed in Sec. V, the best-fit value of today's density parameter \(\Omega_{c0}\) is slightly smaller than the one in the \(\Lambda\)CDM model. However, for \(Q\neq 0\), the increase of \(\rho_{c}\) toward the past during the \(\phi\)MDE results in the larger CDM density at decoupling in comparison to the uncoupled case, suppressing the early ISW contribution around the first acoustic peak. In the right panel of Fig. 2, we show the evolution of \(f\sigma_{8}\) for several different model parameters, where \(f=\dot{\delta}_{m}/(H\delta_{m})\) is the growth rate of the matter density contrast (incorporating both CDM and baryons) and \(\sigma_{8}\) is the amplitude of matter over-density at the comoving \(8h^{-1}\) Mpc scale (\(h\) is the normalized Hubble constant \(H_{0}=100\,h\) km/s/Mpc). We find that a large coupling \(\beta\) suppresses the growth rate of matter perturbations at low redshifts. This is also the case even in the presence of a coupling \(Q\) of order \(-0.01\). This result is consistent with the analytic estimation for the growth of perturbations discussed in Sec. III. ## V Results and Discussion We now place observational constraints on our model by using the MCMC code CosmoMC [103]. In our analysis, we exploit the following data sets. (i) The CMB data containing TT, TE, EE+lowE from Planck 2018 [92], and the large-scale structure data from the 12-th data release of SDSS [93]. (ii) The Phantheon supernovae samples containing 1048 type Ia supernovae magnitudes with redshift in the range \(0.01<z<2.3\) [94], which are commonly used to constrain the property of late-time cosmic acceleration. (iii) The 1st-year DES results [95], which are the combined analyses of galaxy clustering and weak gravitational lensing. We stop the calculations when the Gelman-Rubin statistic reaches \(R-1\sim 0.01\). In Fig. 3 and Table 1, we present the results of observational constraints on our model parameters.
Figure 3: Triangle plot for the 1-dimensional marginalized distributions of individual parameters and the \(1\sigma\) and \(2\sigma\) 2-dimensional contours. The blue dashed lines represent constraints from the Planck 2018 [104] and 12-th SDSS data sets, which we call (i); the red and green solid lines correspond to constraints when the data sets (ii) and (ii)+(iii) are combined with (i), respectively.
First, let us discuss constraints on the parameter \(\beta\). In Table 1, the bounds on \(\beta\) (68 % CL) obtained with different data sets are presented in terms of the log prior. From the joint analysis based on the data sets (i)+(ii)+(iii), this bound translates to \[\beta=0.417^{+1.592}_{-0.307}\qquad(68\,\%\,\text{CL})\,, \tag{5.1}\] where \(0.417\) is the mean value. Since \(\beta\) is constrained to be larger than \(0.11\) at \(1\sigma\), there is an interesting observational signature of the momentum exchange between DE and DM. Even with the analysis of the data set (i) or with the data sets (i)+(ii), the \(1\sigma\) lower limits on \(\beta\) are close to the value \(0.1\). Hence the Planck CMB data combined with the SDSS data already show the signature of the momentum transfer. We note that this result is consistent with the likelihood analysis of Refs. [63; 65; 67; 70] performed for the model \(Q=0\), where the joint analysis based on the CMB and galaxy clustering data favours nonvanishing values of \(\beta\). With the data sets (i)+(ii)+(iii), we also obtain the following \(2\sigma\) bound \[0.014<\beta<10.756\qquad(95\,\%\,{\rm CL})\,. \tag{5.2}\] Since the lower limit of \(\beta\) is as small as \(0.01\), this value is not significantly distinguished from \(\beta=0\). This means that the evidence for the momentum transfer can be confirmed at \(68\,\%\) CL, but not firmly at \(95\,\%\) CL, with the current observational data. We note that the mean value of \(\sigma_{8}\) constrained by the data sets (i)+(ii)+(iii) is \(0.7996\), which is smaller than the Planck 2018 bound \(\sigma_{8}=0.8111\pm 0.0060\) [31] derived for the \(\Lambda\)CDM model. Thus, in our model, the \(\sigma_{8}\) tension between the CMB and other measurements is alleviated by the momentum transfer. This property is mostly attributed to the fact that the growth rate of \(\delta_{c}\) at low redshifts is suppressed by the positive coupling \(\beta\). The other coupling constant \(Q\), which mediates the energy transfer between DE and DM, is constrained to be \[Q=-0.0355^{+0.0355}_{-0.0097}\qquad(68\,\%\,{\rm CL})\,, \tag{5.3}\] where \(-0.0355\) is the mean value. As we see in Fig. 3, the analysis based on the data sets (i)+(ii) gives rise to a peak in the 1-dimensional probability distribution of \(Q\) around \(-0.04\). This property also holds after adding the data set (iii). Since the vanishing coupling (\(Q=0\)) is within the \(1\sigma\) contour, we do not have strong observational evidence that a nonvanishing value of \(Q\) is favored over the \(Q=0\) case. However, it is interesting to note that the current data give rise to a probability distribution of \(Q\) with a peak at \(Q<0\). In Refs. [54; 55; 56], couplings \(|Q|\) slightly smaller than the mean value of (5.3) were obtained by the MCMC analysis with several data sets for the coupled dark energy model with \(\beta=0\). In our model, we have \(\Omega_{\phi}=w_{\rm eff}=2Q^{2}/[3(1+2\beta)]\) during the \(\phi\)MDE, so both \(\Omega_{\phi}\) and \(w_{\rm eff}\) are suppressed by the positive coupling \(\beta\). This allows larger values of \(|Q|\) in comparison to the case \(\beta=0\). Still, couplings \(|Q|\) exceeding the order \(0.1\) are forbidden by the data because of the significant changes in the heights and positions of the CMB acoustic peaks (see Fig. 2). The parameter \(\lambda\) is related to the slope of the scalar-field potential. To realize a DE equation of state close to \(-1\) at late times, \(\lambda\) cannot be significantly away from \(0\). From the MCMC analysis with the data sets (i)+(ii)+(iii), we obtain the upper limit \[\lambda<0.641\qquad(68\,\%\,{\rm CL})\,. \tag{5.4}\] We also remark that, for larger \(\lambda\), the distance to the CMB last scattering surface is reduced. To compensate for this, we require smaller values of \(H_{0}\). This explains the tendency for blue contours seen in the \(\lambda\)-\(H_{0}\) plane. 
Thus, the smaller values of \(\lambda\) are favored from the viewpoint of increasing \(H_{0}\). In Fig. 3, we find that today's CDM density parameter \(\Omega_{c0}\) is constrained to be smaller than the Planck 2018 bound \(\Omega_{c0}h^{2}=0.120\pm 0.001\) derived for the \(\Lambda\)CDM model [31]. In spite of this decrease of \(\Omega_{c0}\), the CDM density evolves as \(\rho_{c}\propto(1+z)^{3+2Q^{2}/(1+2\beta)}\) during the \(\phi\)MDE and hence \(\Omega_{c}\) at decoupling can be increased by the nonvanishing coupling \(Q\). We note that today's baryon density parameter \(\Omega_{b0}\) is only slightly larger than the Planck 2018 bound \(\Omega_{b0}=0.0224\pm 0.0001\) (see Fig. 3). Then, the nonvanishing coupling \(Q\) hardly modifies the value of \(\Omega_{b}\) at \(z=z_{*}\) in comparison to the case \(Q=0\). Since the ratio \(\Omega_{b}/\Omega_{c}\) at decoupling is decreased by the coupling \(|Q|\) larger than the order \(0.01\), this suppresses the height of CMB acoustic peaks. The MCMC analysis with the CMB data alone already places the bound \(|Q|<0.1\) at \(95\,\%\,{\rm CL}\). \begin{table} \begin{tabular}{c c c c c} \hline \hline Parameters & Priors & mean (best fit) (i) & mean (best fit) (i)+(ii) & mean (best fit) (i)+(ii)+(iii) \\ \hline \(H_{0}\) [km/s/Mpc] & \([20,100]\) & \(67.44(67.26)^{+1.01}_{-0.69}\) & \(67.93(67.66)^{+0.58}_{-0.68}\) & \(68.22(68.41)^{+0.58}_{-0.61}\) \\ \(\Omega_{c0}h^{2}\) & \([0.001,0.99]\) & \(0.11802(0.11958)^{+0.0018}_{-0.0010}\) & \(0.11819(0.11904)^{+0.0014}_{-0.0010}\) & \(0.11712(0.11580)^{+0.0013}_{-0.0009}\) \\ \(\Omega_{b0}h^{2}\) & \([0.005,0.1]\) & \(0.02237(0.02237)^{+0.00014}_{-0.0014}\) & \(0.02237(0.02238)^{+0.00015}_{-0.0014}\) & \(0.02247(0.02248)^{+0.00014}_{-0.00013}\) \\ \(\ln\beta\) & \(*\) & \(-1.0131(-0.1997)^{+1.1754}_{-1.1754}\) & \(-0.7919(-1.5209)^{+1.1593}_{-1.1593}\) & \(-0.8754(-2.6179)^{+0.00014}_{-1.3232}\) \\ \(\lambda\) & \([0.1,\infty]\) & \(0.6028(0.4083)^{+0.1658}_{-0.5928}\) & \(0.4235(0.2467)^{+0.16573}_{-0.088}\) & \(0.4988(0.5269)^{+0.4159}_{-0.4676}\) \\ \(Q\) & \([-\infty,0]\) & \(-0.0396(-0.0072)^{+0.0396}_{-0.0108}\) & \(-0.0422(-0.0096)^{+0.408}_{-0.0125}\) & \(-0.0355(-0.0396)^{+0.0355}_{-0.0097}\) \\ \(\sigma_{8}\) & \(*\) & \(0.8031(0.8057)^{+0.0231}_{-0.0148}\) & \(0.8105(0.8058)^{+0.0169}_{-0.0148}\) & \(0.7996(0.8084)^{+0.0174}_{-0.0120}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Priors, mean values, best-fit values and \(1\sigma\) errors of the model parameters \(\ln\beta\), \(\lambda\), \(Q\) and cosmological parameters \(H_{0}\), \(\Omega_{c0}h^{2}\), \(\Omega_{\Omega0}h^{2}\), \(\sigma_{8}\), where \(\Omega_{c0}\) and \(\Omega_{b0}\) are today’s density parameters of CDM and baryons respectively. The third, fourth, and fifth columns correspond to the constraints derived by the data sets (i), (i)+(ii), and (i)+(ii)+(iii), respectively. As we discussed in Sec. IV, the nonvanishing coupling \(Q\) reduces the sound horizon \(r_{s\ast}\) at \(z=z_{\ast}\). This leads to the shift of CMB acoustic peaks toward smaller scales. To keep the position of the multipole \(\ell_{A}\) corresponding to the sound horizon, we require that the comoving angular diameter distance \(D_{A}(z_{\ast})\) appearing in the numerator of Eq. (4.27) should be also reduced. We can express Eq. (4.28) as \(D_{A}(z_{\ast})=H_{0}^{-1}\int_{0}^{z_{\ast}}E^{-1}(z)\mathrm{d}z\), where \(E(z)=H(z)/H_{0}\). 
In the \(\Lambda\)CDM model we have \(E(z)=[\Omega_{m0}(1+z)^{3}+\Omega_{\Lambda}+\Omega_{r0}(1+z)^{4}]^{1/2}\), where \(\Omega_{m0}=\Omega_{c0}+\Omega_{b0}\). In our model, the CDM density parameter during the \(\phi\)MDE has the dependence \(\Omega_{c0}(1+z)^{3+2Q^{2}/(1+2\beta)}\) instead of \(\Omega_{c0}(1+z)^{3}\), together with the scaling behavior of \(\rho_{\phi}\) with \(\rho_{c}\). Then, the coupling \(Q\) leads to the increase of \(E(z)\) from the end of \(\phi\)MDE to the decoupling epoch, so that the integral \(\int_{0}^{z_{\ast}}E^{-1}(z)\mathrm{d}z\) is decreased. This property is different from the early DE scenario of Ref. [105], where the energy density of early DE quickly decays after the recombination epoch. In our model, increasing the value of \(H_{0}\) also reduces \(D_{A}(z_{\ast})\), so it can compensate the reduction of \(r_{s\ast}\). However, the integral \(\int_{0}^{z_{\ast}}E^{-1}(z)\mathrm{d}z\) is already decreased at some extent by the existence of the \(\phi\)MDE. In this sense, there is the limitation for realizing \(H_{0}\) significantly larger than the value obtained for \(Q=0\). The observational constraint on \(H_{0}\) derived by the data set (i) for the model with \(Q=0\) is consistent with the Planck 2018 bound \(H_{0}=67.27\pm 0.60\) km/s/Mpc. In the presence of the negative coupling \(Q\), the likelihood region in the \(Q\)-\(H_{0}\) plane shown in Fig. 3 shifts toward larger values of \(H_{0}\). With the full data sets (i)+(ii)+(iii), the Hubble constant is constrained to be \[H_{0}=68.22^{+0.58}_{-0.61}\,\,\mathrm{km/s/Mpc}\qquad(68\,\%\,\mathrm{CL})\,, \tag{5.5}\] whose mean value is larger than the one derived for the \(\Lambda\)CDM model with the Planck 2018 data alone. However, it is not possible to reach the region \(H_{0}>70\) km/s/Mpc due to the limitation of reducing \(D_{A}(z_{\ast})\) by increasing the value of \(H_{0}\). We also carried out the MCMC analysis for the \(\Lambda\)CDM model and obtained the bound \(H_{0}=68.19^{+0.37}_{-0.38}\) km/s/Mpc with the full data sets (i)+(ii)+(iii). The \(1\sigma\) upper limit of the constraint (5.5) is only slightly larger than that of the \(\Lambda\)CDM bound. Hence the Hubble tension problem between the Planck 2018 data and those constrained by the direct measurements of \(H_{0}\) still persists in our coupled DE scenario. Albeit the difficulty of resolving the Hubble tension problem, the fact that the probability distribution of \(Q\) has a peak around \(-0.04\) is an interesting property of our model. Moreover, there are observational signatures of the momentum transfer with \(\beta>0\) between DE and DM at \(68\,\%\) CL. The coupling \(\beta\) can alleviate the \(\sigma_{8}\) tension without spoiling the existence of the \(\phi\)MDE. ## VI Conclusions In this paper, we put observational constraints on an interacting model of DE and DM given by the action (2.2). Since our model has a concrete Lagrangian, the background and perturbation equations of motion are unambiguously fixed by the variational principle. This is not the case for many coupled DE-DM models studied in the literature, in which the interacting terms are added to the background equations by hands. In our model, the DE scalar field \(\phi\) and the CDM fluid mediate both energy and momentum transfers, whose coupling strengths are characterized by the constants \(Q\) and \(\beta\), respectively. 
We considered an exponential potential \(V(\phi)=V_{0}e^{-\lambda\phi/M_{\mathrm{Pl}}}\) of the scalar field to derive late-time cosmic acceleration, but the different choice of quintessence potentials should not affect the observational constraints on \(Q\) and \(\beta\) significantly. The coupling \(Q\) can give rise to the \(\phi\)MDE during which the scalar-field density parameter \(\Omega_{\phi}\) and the effective equation of state \(w_{\mathrm{eff}}\) are nonvanishing constants, such that \(\Omega_{\phi}=w_{\mathrm{eff}}=2Q^{2}/[3(1+2\beta)]\). In this epoch, the CDM density grows as \(\rho_{c}\propto(1+z)^{3+2Q^{2}/(1+2\beta)}\) toward the past and hence the value of \(\rho_{c}\) at CMB decoupling can be increased by the coupling \(Q\). Since this enhances the Hubble expansion rate in the past, the sound horizon \(r_{s\ast}\) at decoupling (redshift \(z_{\ast}\)) gets smaller. Moreover, the ratio between the baryon and CDM densities, \(\rho_{b}/\rho_{c}\), is suppressed at \(z=z_{\ast}\) due to the increase of \(\rho_{c}\) induced by the presence of the \(\phi\)MDE. These modifications shift the positions and heights of acoustic peaks of CMB temperature anisotropies, so that the coupling \(Q\) can be tightly constrained from the CMB data. The effect of momentum transfers on the dynamics of perturbations mostly manifests itself for the evolution of CDM density contrast \(\delta_{c}\) at low redshifts. For \(\beta>0\), the growth of \(\delta_{c}\) is suppressed due to the decrease of an effective gravitational coupling \(G_{\mathrm{eff}}\) on scales relevant to the galaxy clustering. The coupling \(Q\) enhances the value of \(G_{\mathrm{eff}}\) through the energy transfer between DE and DM. However, the reduction of \(G_{\mathrm{eff}}\) induced by positive \(\beta\) typically overwhelms the increase of \(G_{\mathrm{eff}}\) for the redshift \(z\lesssim 1\). Hence the growth rate of CDM perturbations is suppressed in comparison to the \(\Lambda\)CDM model. We carried out the MCMC analysis for our model by using the observational data of Planck 2018 [92], 12-th SDSS, Phantheon supernovae samples, and 1-year DES. The coupling \(\beta\) is constrained to be in the range \(\beta=0.417^{+1.592}_{-0.307}\) (\(68\,\%\) CL) by using all the data sets. Since the \(\beta=0\) case is outside the \(1\sigma\) observational contour, there is an interesting observational signature of the momentum transfer between DE and DM. This is an outcome of the suppressed growth of \(\delta_{c}\) at low redshifts, thereby easing the \(\sigma_{8}\) tension. Indeed, we found that the mean value of \(\sigma_{8}\) constrained by the full data is 0.7996, which is smaller than the best-fit value 0.8111 derived for the \(\Lambda\)CDM model with the Planck data alone. For the coupling characterizing the energy transfer, we obtained the bound \(Q=-0.0355^{+0.0355}_{-0.0097}\) (\(68\,\%\,\)CL) by the analysis with full data sets. While the \(Q=0\) case is within the \(1\sigma\) observational contour, there is a peak for the probability distribution of the coupling at a negative value of \(Q\). This result is consistent with the likelihood analysis performed for the model with \(\beta=0\)[54; 55; 56], but now the constrained values of \(|Q|\) get larger. This increase of \(|Q|\) is mostly attributed to the fact that the effective equation of state during the \(\phi\)MDE is modified to \(w_{\rm eff}=2Q^{2}/[3(1+2\beta)]\) through the coupling \(\beta\). 
In comparison to the momentum transfer, we have not yet detected significant observational signatures of the energy transfer, but the future high-precision data will clarify this issue. The presence of the coupling \(Q\) reduces the sound horizon \(r_{s*}\) at decoupling, thereby increasing the multipole \(\ell_{A}\) defined in Eq. (4.27). To keep the position of CMB acoustic peaks, we require that the comoving angular diameter distance \(D_{A}(z_{*})\) from \(z=0\) to \(z=z_{*}\) decreases. During the \(\phi\)MDE, the Hubble expansion rate increases due to the enhancement of \(\rho_{c}\) induced by the energy transfer. Since this leads to the decrease of \(D_{A}(z_{*})\), the further reduction of \(D_{A}(z_{*})\) by the choice of larger values of \(H_{0}\) is quite limited in our model. From the MCMC analysis of full data sets we obtained the bound \(H_{0}=68.22^{+0.58}_{-0.61}\) km/s/Mpc, whose mean value is larger than the one derived for the \(\Lambda\)CDM model with the Planck 2018 data alone. However, the Hubble constant \(H_{0}\) does not exceed the value 70 km/s/Mpc, so the Hubble tension problem is not completely resolved in our scenario. It is still encouraging that the current data support signatures of the interaction between DE and DM. We expect that upcoming observational data like those from the Euclid satellite will place further tight constraints on the couplings \(\beta\) and \(Q\). Along with the \(H_{0}\) tension problem, we hope that we will be able to approach the origins of DE and DM and their possible interactions in the foreseeable future. ## Acknowledgements XL is supported by the National Natural Science Foundation of China under Grants Nos. 11920101003, 12021003 and 11633001, and the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB23000000. ST is supported by the Grant-in-Aid for Scientific Research Fund of the JSPS No. 22K03642 and Waseda University Special Research Project No. 2023C-473. KI is supported by the JSPS grant number 21H04467, JST FOREST Program JPMJFR20352935, and by JSPS Core-to-Core Program (grant number:JPJSCCA20200002, JPJSCCA20200003).
2309.05117
Model discovery for nonautonomous translation-invariant problems
Discovery of mathematical descriptors of physical phenomena from observational and simulated data, as opposed to from the first principles, is a rapidly evolving research area. Two factors, time-dependence of the inputs and hidden translation invariance, are known to complicate this task. To ameliorate these challenges, we combine Lagrangian dynamic mode decomposition with a locally time-invariant approximation of the Koopman operator. The former component of our method yields the best linear estimator of the system's dynamics, while the latter deals with the system's nonlinearity and non-autonomous behavior. We provide theoretical estimators (bounds) of prediction accuracy and perturbation error to guide the selection of both rank truncation and temporal discretization. We demonstrate the performance of our approach on several non-autonomous problems, including two-dimensional Navier-Stokes equations.
Hongli Zhao, Daniel M. Tartakovsky
2023-09-10T19:37:25Z
http://arxiv.org/abs/2309.05117v2
# Model discovery for nonautonomous translation-invariant problems ###### Abstract Discovery of mathematical descriptors of physical phenomena from observational and simulated data, as opposed to from the first principles, is a rapidly evolving research area. Two factors, time-dependence of the inputs and hidden translation invariance, are known to complicate this task. To ameliorate these challenges, we combine Lagrangian dynamic mode decomposition with a locally time-invariant approximation of the Koopman operator. The former component of our method yields the best linear estimator of the system's dynamics, while the latter deals with the system's nonlinearity and non-autonomous behavior. We provide theoretical estimators (bounds) of prediction accuracy and perturbation error to guide the selection of both rank truncation and temporal discretization. We demonstrate the performance of our approach on several non-autonomous problems, including two-dimensional Navier-Stokes equations. Dynamic mode decomposition, reduced-order model, advection-diffusion, Lagrangian framework, time-dependent coefficient. MSC codes: 35K57, 37C60 ## 1 Introduction With the advent of machine learning applications in the engineering sciences, the need for pattern recognition and predictions has become increasingly pronounced in order to assist the study of temporally evolving natural phenomena [29]. Direct-solution methods, which often rely on deep neural networks (DNN) to encode an input-output relationship, are hindered by the high requirement on both quantity and quality of data and are thus sensitive to parametric changes of the underlying system [33]. On the other hand, equation discovery [57] supplements partial knowledge with optimal predictions/parameter inference to reproduce the governing laws. Well-known methods belonging to this class include symbolic regression [58], numerical Gaussian processes [50, 51], sparse identification of nonlinear dynamics (SINDy) [8], physics-informed neural networks (PINN) [13] and Kalman filters [14], along with combinations of these strategies to accommodate different physical scenarios or achieve computational improvements [26, 11, 24, 9]. In the context of system identification with a complete absence of physics, equation-free methods are adopted to reconstruct the observed processes through a purely data-driven surrogate. Instead of prescribing a set of dictionary terms, equation-free methods seek to approximate the flow map/operator that incorporates differential information. Deep neural networks (DNN) and dynamic mode decompositions (DMD) are two prominent classes of methods for operator learning. Including the well-known DeepONet [37], DNN architectures possess high expressiveness and are capable of serving as nonlinear surrogates of PDE-based models to arbitrary accuracy given sufficient training samples [49, 12]. On the other hand, DMD provides an optimal linear approximation of the model and relies on the Koopman operator to account for nonlinearity [31, 41, and references therein]. In the absence of precise error estimators for DNN surrogates, their performance on any given problem cannot be guaranteed _a priori_. In contrast, being a linear surrogate, DMD is better understood and equipped with theoretical estimates of prediction accuracy, e.g., [35]. Its various generalizations are designed to handle advection-dominated phenomena [34], shock waves and discontinuous solutions [36], inhomogeneity of differential operators [37] and a problem's parametric dependence [39]. 
Physical constraints in the PDE model, such as translation invariance and time-dependent coefficients, pose challenges for both DNNs and DMD. For instance, direct-solution DNNs using soft regularization to enforce advection and mass conservation may lead to ill-conditioning during training [30]. Operator DNNs have also been observed to yield poor performance when the finite data samples are not representative of global transport phenomena [61, 59]. Likewise, standard DMD is also not devoid of shortcomings and fails to cope with sharp gradients [4, 34]. Furthermore, its construction is based on the assumption of time homogeneity (e.g., parameters and/or source terms do not vary in time), which is not suitable for nonautonomous problems. A prime example of the twin challenges to model discovery is the class of advection-diffusion problems, which encapsulate conservation of momentum [54, 56], thermal energy [6], and probability mass [53]. In the diffusion-dominated and intermediary regimes, these problems have been successfully treated via standard reduced-order basis methods including DMD [35, 45] and POD [21]. The advection-dominated regime, characterized by, e.g., high Peclet and Reynolds numbers, complicates not only the numerical solution of advection-diffusion equations but also the discovery of these equations (or corresponding equation-free models) from observations. Although its convergence properties have been well-studied [28], standard DMD yields quantitatively and qualitatively incorrect solutions, spurring the development of Lagrangian DMD [34]. Reduced-order surrogate models of nonautonomous dynamical systems require either an appropriate global spatio-temporal basis or a time-dependent parameterization (e.g. via Fourier spectral expansion) [40, 43]. Examples of such modifications of the standard DMD include streaming DMD [22], time-varying DMD [60], and more generally, nonautonomous Koopman operators for (quasi)periodic time-dependent inputs [42]. We build upon these developments to construct a DMD framework for translation-invariant (e.g., advection-dominated) problems with time-dependent inputs. Our approach is to reformulate a governing partial differential equation in the Lagrangian frame of reference and to deploy a piece-wise constant approximation of the nonautonomous Koopman operator in the resulting Lagrangian DMD [34]. In section 2, we formulate a class of parabolic translation-invariant PDEs with time-dependent inputs and, upon spatial discretization, express them as a nonautonomous dynamical system. Section 3 contains a brief description of the Koopman operator theory and the DMD framework for construction of reduced-order representations of PDE-based models. In section 4, we present a local Lagrangian DMD, whose implementation shares relevant features of the time-varying DMD [60] and the Lagrangian DMD [34] to effectively represent translation-invariant PDEs with time-dependent inputs. Upper bounds of both the prediction error of our method and the operator norm error are derived in section 5, as functions of the rank truncation and the number of collected snapshots. This theoretical analysis demonstrates that the local Lagrangian DMD is more accurate than either time-varying DMD or Lagrangian DMD alone. A series of numerical experiments, reported in section 6, serve to demonstrate our approach and to verify the tightness of these error bounds. Main conclusions drawn from our study are summarized in section 7, accompanied by a discussion of the method's limitations and future research. 
## 2 Problem Formulation We are concerned with the following class of partial differential equations (PDE) with variable coefficients for a quantity of interest \(u(t,\mathbf{x})\), with \(\mathbf{x}\in\Omega\subset\mathbb{R}^{d}\): \[\frac{\partial u}{\partial t}+\nabla_{\mathbf{x}}\cdot(G(t,\mathbf{x},u)u)= \nabla_{\mathbf{x}}\cdot(D(t,\mathbf{x},u)\nabla_{\mathbf{x}}u),(t,\mathbf{x}) \in(0,t_{f}]\times\Omega \tag{1}\] \[u(t_{0},\mathbf{x})=u_{0}(\mathbf{x})\] We consider a semi-discrete method to simulate equation (1) by discretizing in the spatial variables \(\mathbf{x}\). For simplicity, we assume the number of gird points is \(n\) for each of the \(d\) spatial dimensions. We arrive at a nonautonomous dynamical system of general form: \[\frac{d\mathbf{u}}{dt}=\mathbf{N}(t,\mathbf{u})\] \[\mathbf{u}(0)=\mathbf{u}_{0} \tag{2}\] whose right-hand side describes the dynamics of the PDE in (1) with an explicit time-dependence. With respect to construction of ROMs, we will be primarily concerned with the discretized equations (2). Let \(\mathbf{u}\in\mathcal{M}\subset\mathbb{R}^{n^{d}}\) denote the numerical solution, and \(\mathbf{N}:\mathbb{R}^{+}\times\mathbb{R}^{n^{d}}\to\mathbb{R}^{n^{d}}\) is the discretized PDE operator. Let the temporal domain \([0,t_{f}]\) be discretized uniformly with step size \(\Delta t\), and define \(t_{i}=i\Delta t\), for \(0=t_{0}<t_{1}<\cdots<t_{m}=t_{f}\). Furthermore, define \(\mathbf{\Phi}_{\Delta t}(\cdot;t_{i}):\mathbb{R}^{n}\to\mathbb{R}^{n}\) as the discrete flow map associated with the system (2), and similarly the continuous flow map is denoted as \(\Phi_{t}(\cdot;s)\), such that for any \(t\leq t^{\prime}\): \[\mathbf{u}(t^{\prime})=\Phi_{t^{\prime}}(\mathbf{u}(t);t):=\mathbf{u}(t)+\int_ {t}^{t^{\prime}}\mathbf{N}(s,\mathbf{u}(s))ds \tag{3}\] Furthermore, \[\mathbf{u}_{i+1}=\mathbf{\Phi}_{\Delta t}(\mathbf{u}_{i};t_{i}):=\mathbf{u}(t _{i})+\int_{t_{i}}^{t_{i+1}}\mathbf{N}(s,\mathbf{u}(s))ds \tag{4}\] where we define \(\mathbf{u}_{i}=\mathbf{u}(t_{i})\). ## 3 Review of DMD Algorithms For the general dynamical system (4), the associated family of Koopman operators evolve a set of observables along its flow. More precisely, given an observable function \(g:\mathbb{R}^{n}\to\mathbb{R}^{N}\), the Koopman operator \(\mathcal{K}_{t}^{t^{\prime}}\) is defined such that: \[\mathcal{K}_{t}^{t^{\prime}}g(\mathbf{u}(t)):=g(\mathbf{u}(t^{\prime})) \tag{5}\] For the discrete-time description (4), we similarly define the associated Koopman operator \(\mathcal{K}_{i}^{\Delta t}\), such that: \[\mathcal{K}_{i}^{\Delta t}g(\mathbf{u}_{i})=g(\mathbf{u}_{i+1}) \tag{6}\] Both \(\mathcal{K}_{t}^{t^{\prime}},\mathcal{K}_{i}^{\Delta t}\) are infinite-dimensional operators on the Hilbert space of all observable functions \(g\). In addition, they are linear maps despite potential nonlinearity of the original system. Dynamic mode decomposition (DMD) is a celebrated algorithm that attempts to approximate the eigenmodes of the Koopman operator to identify dominant frequencies and reconstruct the underlying dynamics from discrete observations. Let a training dataset containing \(m\) collected snapshots be denoted as \(\mathcal{S}=\{(\mathbf{g}_{i},\mathbf{g}_{i+1})\}\}_{i=1}^{m}\), with \(\mathbf{g}_{i}=g(\mathbf{u}_{i})\). In line with (11), we would like to construct a best-fit linear operator \(\mathbf{K}\) such that: \[\mathbf{g}_{i+1}\approx\mathbf{K}\mathbf{g}_{i} \tag{12}\] for all \(i=1,2,\ldots,m\). 
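As a concrete illustration of this least-squares construction, the Python/NumPy sketch below assembles snapshot pairs into data matrices and computes the best-fit operator \(\mathbf{K}\) with a pseudoinverse. The two-dimensional trajectory used to generate the snapshots is a hypothetical placeholder; only the fitting step reflects the formulation above, and the rank-truncated computation used in practice is described in the next subsection.

```python
import numpy as np

# Hypothetical linear trajectory used only to generate snapshots g_1, ..., g_{m+1}.
A_true = np.array([[0.99, -0.10],
                   [0.10,  0.99]])
m = 200
snapshots = np.empty((2, m + 1))
snapshots[:, 0] = [1.0, 0.0]
for i in range(m):
    snapshots[:, i + 1] = A_true @ snapshots[:, i]

# Data matrices: columns of X are g_1..g_m, columns of Y are g_2..g_{m+1}.
X, Y = snapshots[:, :-1], snapshots[:, 1:]

# Best-fit linear operator minimizing the mean squared error ||Y - K X||.
K = Y @ np.linalg.pinv(X)

# One-step consistency check: g_{i+1} ≈ K g_i.
print("max one-step residual:", np.abs(Y - K @ X).max())
```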
### Standard DMD The standard DMD algorithm attempts to reconstruct directly in solution space, i.e. \(g(\mathbf{u}_{i})=\mathbf{u}_{i}\) and \(\mathbf{K}\) is constructed via minimizing the mean squared error (MSE): \[L_{\mathcal{S}}(\mathbf{K})=\frac{1}{m}\sum_{i=1}^{m}||\mathbf{y}_{i}-\mathbf{ K}\mathbf{x}_{i}||_{2}^{2} \tag{13}\] where the pairs \((\mathbf{x}_{i},\mathbf{y}_{i})=(\mathbf{u}_{i},\mathbf{u}_{i+1})\) form the data matrices of size \(n\times m\): \[\mathbf{X}=\left[\begin{array}{c|ccc}&\cdots&\\ \mathbf{u}_{1}&\mathbf{u}_{2}&\cdots&\mathbf{u}_{m}\\ \big{|}&\cdots&\end{array}\right],\mathbf{Y}=\left[\begin{array}{c|ccc}& &\cdots&\\ \mathbf{u}_{2}&\mathbf{u}_{3}&\cdots&\mathbf{u}_{m+1}\\ \big{|}&\cdots&\end{array}\right] \tag{14}\] The minimizer of (13) can be explicitly derived as: \[\mathbf{K}=\mathbf{Y}\mathbf{X}^{\dagger} \tag{15}\] where \(\dagger\) denotes the Moore-Penrose pseudoinverse, \(\mathbf{X}^{\dagger}=(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{*}\). In order to compute \(\mathbf{X}^{\dagger}\) stably and tractably, a truncated singular value decomposition (SVD) is often applied on the data matrix \(\mathbf{X}\): \[\mathbf{X}\approx\mathbf{U}_{r}\mathbf{\Sigma}_{r}\mathbf{V}_{r}^{*} \tag{16}\] where the subscript \(r\) denotes a pre-specified rank typically determined on a Frobenius-norm error threshold, with \(\mathbf{U}_{r}\in\mathbb{R}^{n\times r},\mathbf{V}_{r}\in\mathbb{R}^{m\times r },\mathbf{\Sigma}_{r}\in\mathbb{R}^{r\times r}\) is a diagonal matrix containing the singular values \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{r}\) of \(\mathbf{X}\), in non-increasing order. Furthermore, the columns of \(\mathbf{U}_{r}\) span a \(r\) dimensional subspace of \(\mathbb{R}^{n}\), making it a computationally efficient strategy to first project the observations, compute predictions on the lower-dimensional space, and transform back to the original state space [1]. The procedure is summarized in Algorithm 1, which provides a continuous and fully data-driven model satisfying (13). The standard DMD algorithm serves as the foundation of a wide range of DMD algorithms that incorporate additional control parameters [47, 38]. ### Physics-Aware DMD To account for fundamental physical constraints for problems describing conservative advection (i.e. non-negativity of solutions, mass conservation), reduced-order models in a Lagrangian frame of reference are first discussed in [44] based on principal orthogonal decomposition (POD). In the data-driven Koopman operator formulation, the physics-aware DMD (or Lagrangian DMD) was developed in [34] for advection-dominated phenomena, where standard DMD fails. The main idea is to include the moving Lagrangian grid as observables in addition to a high-fidelity numerical solution. More explicitly, we consider the PDE (2.1) along the characteristic lines: (3.15) with initial conditions: (3.16) where \(\mathcal{X}_{i}\) denotes the \(i\)th point in the Lagrangian moving grid at which the solution to (2.1) is evaluated, denoted as \(\tilde{u}(t,\mathcal{X}(t))\). The starting grid is assumed to be the same spatial discretization as that of (2). In particular, \(\tilde{u}(t,\mathcal{X}(t))\) differs from the solution \(u(t,x)\) of (2), which is in the Eulerian frame of reference. The solution on the Lagrangian grid can be interpolated to the Eularian grid, and vice versa [34]. 
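The interpolation between the two frames mentioned above is a one-dimensional interpolation in each spatial direction; the sketch below, with made-up grid positions and values, maps a solution carried on a moving Lagrangian grid back onto a fixed Eulerian grid.

```python
import numpy as np

# Fixed Eulerian grid and a moving Lagrangian grid (hypothetical example values).
x_eulerian = np.linspace(-10.0, 10.0, 401)
shift = 1.3                                  # displacement accumulated along the characteristics
x_lagrangian = x_eulerian + shift            # current positions of the Lagrangian grid points

# Solution values carried on the Lagrangian grid (a profile transported without deformation).
u_lagrangian = np.exp(-0.5 * x_eulerian**2)

# Interpolate from the (monotone) Lagrangian grid back to the Eulerian grid; the reverse
# direction is analogous. np.interp pads with boundary values outside the Lagrangian grid.
u_eulerian = np.interp(x_eulerian, x_lagrangian, u_lagrangian)
```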
After discretizing (30), the Lagrangian system (30) yields a dynamical system of general form (2) with state variables: \[\mathbf{w}(t)=\begin{bmatrix}\boldsymbol{\mathcal{X}}(t)\\ \mathbf{u}(t)\end{bmatrix}\in\mathbb{R}^{N} \tag{31}\] where the effective state dimension \(N=dn+n^{d}\), including the discretized solution \(u(\mathbf{x}_{i})\) at each spatial grid points and re-ordered into a vector, along with a one-dimensional grid for each of the \(d\) spatial dimensions. The physics-aware DMD then considers the observables defined by \(g(\mathbf{u}_{i})=\mathbf{w}_{i}\), and the associated data matrices are: \[\mathbf{X}=\begin{bmatrix}\big{|}&\big{|}&\cdots&\big{|}\\ \mathbf{w}_{1}&\mathbf{w}_{2}&\cdots&\mathbf{w}_{m}\\ \big{|}&\big{|}&\cdots&\big{|}\end{bmatrix},\mathbf{Y}=\begin{bmatrix} \big{|}&\big{|}&\cdots&\big{|}\\ \mathbf{w}_{2}&\mathbf{w}_{3}&\cdots&\mathbf{w}_{m+1}\\ \big{|}&\big{|}&\cdots&\big{|}\end{bmatrix} \tag{32}\] **Remark 1**: The formulation of state vector \(\mathbf{w}(t)\) in (31) suffers from the so-called curse of dimensionality as the PDE solution is defined on a \(d\)-dimensional spatial grid. Furthermore, the interpolation from \(\tilde{u}(t,\mathbf{x})\) to \(u(t,\mathcal{X}(t))\) requires the formation of meshgrids at each time step \(t\). As observed in [34], the Lagrangian DMD for advection-dominated phenomena is restricted to the use of low-dimensional problems. Although possible model order reduction techniques exist, such as using tensor-network based methods [52, 10], the discussion of high-dimensional PDE solutions is out of the scope of this paper. ### Time-Varying DMD The time-varying DMD algorithm divides the temporal domain \([0,t_{f}]\) into \(p\) sub-intervals, \([t_{0},t_{1}],\ldots,[t_{p-1},t_{p}]\), with \(t_{0}=0,t_{p}=t_{f}\). For simplicity, we assume each sub-interval contains \(r\) snapshots and \(m=pr\). The time-varying DMD model introduces a time dependence to the linear operator, such that: \[\mathbf{g}_{i+1}\approx\mathbf{K}(t_{i})\mathbf{g}_{i} \tag{33}\] which approximates the nonautonomous Koopman operator (18). A common construction of \(\mathbf{K}(t)\) is piecewise constant in time, considered in this work, via solving \(p\) minimization problems: \[\min_{\mathbf{K}_{1},\ldots,\mathbf{K}_{p}}L_{\mathcal{S}}(\mathbf{K}(t))= \min_{\mathbf{K}_{1},\ldots,\mathbf{K}_{p}}\sum_{i=1}^{p}L_{\mathcal{S}_{i}}( \mathbf{K}_{i}) \tag{34}\] with \(\mathcal{S}_{i}\) being the snapshots collected from \([t_{i-1},t_{i}]\), and \(\mathcal{S}=\bigcup_{i=1}^{p}\mathcal{S}_{i}\). The linear operator \(\mathbf{K}^{(i)}\) can be interpreted as a local best-fit given by a standard DMD procedure on interval \([t_{i-1},t_{i}]\). \[\mathbf{K}(t)=\sum_{i=1}^{p}\mathbf{K}^{(i)}\delta_{[t_{i-1},t_{i}]}(t) \tag{35}\] where \(\delta_{[t_{i-1},t_{i}]}\) is the indicator function for time interval \([t_{i-1},t_{i}]\). It is also possible to construct other parameterized models of \(\mathbf{K}(t)\), such as basis polynomials or a universal function approximator [48]. ## 4 Proposed Methodology Both the standard DMD model and the physics-aware DMD model assume the underlying dynamical system (2) is autonomous or periodic, such that the Koopman operator (3) may be captured on a time-invariant manifold given sufficient observations. Furthermore, the standard DMD algorithm tends to perform poorly on phenomena with advective mass and sharp gradients due to oscillatory DMD modes [34]. 
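A minimal Python/NumPy sketch of the windowed, piecewise-constant fit of Section 3.3 is given below; the snapshot array and the number of windows are assumed to be given. Stacking the Lagrangian grid coordinates on top of each solution snapshot, as in the physics-aware formulation, turns this into the observable used by the procedure proposed next.

```python
import numpy as np

def time_varying_dmd(snapshots, p):
    """Piecewise-constant-in-time DMD.

    snapshots : array of shape (N, m+1), columns ordered in time
    p         : number of sub-intervals; each contains r = m // p transitions
    Returns the list of local operators [K_1, ..., K_p].
    """
    m = snapshots.shape[1] - 1
    r = m // p
    operators = []
    for j in range(p):
        lo, hi = j * r, (j + 1) * r
        X = snapshots[:, lo:hi]           # g_i     for the j-th window
        Y = snapshots[:, lo + 1:hi + 1]   # g_{i+1} for the j-th window
        operators.append(Y @ np.linalg.pinv(X))
    return operators

def rollout(operators, g0, r):
    """Advance an initial observable, switching to the next local operator every r steps."""
    g = np.array(g0, dtype=float)
    path = [g.copy()]
    for K in operators:
        for _ in range(r):
            g = K @ g
            path.append(g.copy())
    return np.column_stack(path)
```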
Although the physics-aware DMD is sufficient for prediction of spatially-dependent advection phenomena, the inherent assumption of time homogeneity gives rise to model misspecification and degradation of accuracy for time-dependent advection problems (1). To address the inaccuracies introduced by both standard DMD and physics-aware DMD, we consider the following procedure, summarized in Algorithm 1, which effectively introduces a time-dependence to the Lagrangian reduced-order model. Algorithm 1 provides an elementary implementation of the (temporal) piece-wise constant Koopman operator in (31). Upon appropriate modifications of \(\mathbf{K}(t)\) to allow superpositions of DMD frequencies in each time interval, it is possible to recover other forms of DMD strategies, such as the multi-resolution DMD of [32] or the windowed DMD of [3]. In terms of computational complexity, it is possible to consider incremental SVD updates with adaptive rank truncation to directly update \(\mathbf{K}^{(i)}\) to \(\mathbf{K}^{(i+1)}\) in low-rank format [7]. However, due to the inclusion of Lagrangian moving grids in the formulation of (27), it is assumed that the data matrices have dimensions \(N\gg m\) and are of full column rank. The size constraint is especially true in high-dimensional PDE problems. In our numerical experiments, we did not observe a significant computational advantage of applying incremental SVD updates to computed operators \(\mathbf{K}^{(1)},\ldots,\mathbf{K}^{(i)}\). In particular, a direct pseudoinverse computation in standard DMD involves \(O(m^{2}N)\) runtime complexity, which is asymptotically more expensive than \(p\) separate SVD computations, yielding \(O(pr^{2}N)=O(mrN)\), with \(m=pr\). A small computational saving may be achievable if the highest rank of data matrices during each time interval of collected snapshots is bounded by some \(r^{\prime}<r\), in which case the runtime complexity is \(O(p\cdot rr^{\prime}N)=O(mr^{\prime}N)\), by applying incremental SVD updates. ## 5 Theoretical Analysis The judicious choice of subintervals in the time-varying DMD formulation of Section 3.3 is crucial for prediction accuracy. As a general guideline, we first present in Section 5.1 pointwise and average error upper bounds for the time-varying DMD in (31). In Section 5.2, we compute upper bounds of perturbations to the learned operator in terms of the \(L^{2}\) operator norm under truncation of frequencies and deletion of training data. Furthermore, for classes of linear dynamical systems, the bounds can be refined by analyzing the norm of the time-shifted training data \(\mathbf{Y}\) in relation to that of \(\mathbf{X}\). For general nonlinear dynamical systems, we refer the reader to the analysis given in Section 3 of [48]. ### Prediction Error We first consider the pointwise prediction error of the time-varying DMD strategy: **Proposition 5.1** **(Pointwise error for time-varying DMD)**: Assume the system in equation (2) and the time-varying DMD in (30) satisfy the following properties: 1. \(\mathbf{N}(t,\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is uniformly Lipschitz (in time) with constant \(L>0\). 2. \(\sup_{s\in[t_{0},t_{f}]}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;s)-\mathbf{K}(s)\right\|_{L^{\infty}(\mathbb{R}^{n})}<+\infty\), where \(\mathbf{K}(s)\) is piecewise constant on each interval \(s\in[t_{0},t_{1}],[t_{1},t_{2}],\ldots,[t_{p-1},t_{p}]\). 
\(\mathbf{K}_{1},\mathbf{K}_{2},\ldots,\mathbf{K}_{p}\) are respective solutions of the standard problem of minimizing (4) on each in _interval \([t_{0},t_{1}],[t_{1},t_{2}],\ldots,[t_{p-1},t_{p}]\)._ 3. _All reconstructed solutions_ \(\mathbf{x}_{DMD}\) _belong to the solution manifold, defined as:_ (5.1) \[\mathcal{M}_{\Delta t}=\{\mathbf{x}\in\mathcal{M}:\mathbf{\Phi}_{\Delta t}( \mathbf{x};t_{i})\in\mathcal{M}\}\] _Define the error of incremental DMD at time step \(t_{n}\) to be:_ \[\mathcal{E}^{n}=\|\mathbf{x}_{n}-\widehat{\mathbf{x}}_{n}\|_{2}^{2} \tag{5.2}\] _where \(\mathbf{x}_{n}=\mathbf{x}(t_{n})\) is exact, and \(\widehat{\mathbf{x}}_{n}\) is the approximation given by DMD. Rewritting_ the model expression:_ \[\widehat{\mathbf{x}}_{k+1}=\mathbf{K}(t_{k})\widehat{\mathbf{x}}_{k}=\widehat{ \mathbf{x}}_{k}+\mathbf{A}(t_{k})\widehat{\mathbf{x}}_{k} \tag{10}\] _where:_ \[\mathbf{A}(t):=\mathbf{K}(t)-\mathbf{I}_{N} \tag{11}\] _then the pointwise error of time-vary DMD is:_ \[\mathcal{E}^{n}\leq(1+e^{L\Delta t})^{m}\mathcal{E}_{0}+\sum_{j=1}^{p}\sum_{l= 0}^{r}(1+e^{L\Delta t})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(jr-l)}) -\mathbf{A}_{k}\right\|_{L^{\infty}(\mathcal{M}_{\Delta t})}^{2} \tag{12}\] Proof: For ease of presentation, we omit the time dependence in the flow map and let \(\mathbf{\Phi}_{\Delta t}(\mathbf{x}(t))=\mathbf{\Phi}_{\Delta t}(\mathbf{x}(t);t)\), and \(\left\|\cdot\right\|_{\infty}=\left\|\cdot\right\|_{L^{\infty}(\mathcal{M}_{ \Delta t})}\). By Gronwall's inequality along with Lipschitz continuity, we have for any time \(t\) and solutions \(\mathbf{x},\widehat{\mathbf{x}}\in\mathcal{M}_{\Delta t}\): \[\left\|\mathbf{\Phi}_{\Delta t}(\mathbf{x}(t))-\mathbf{\Phi}_{\Delta t}( \widehat{\mathbf{x}}(t))\right\|_{2}\leq e^{\tau L}\left\|\mathbf{x}(t)- \widehat{\mathbf{x}}(t)\right\|_{2},\tau\in[0,\Delta t] \tag{13}\] Then by repeated applications of triangle inequality: \[\begin{array}{l}\mathcal{E}^{n}=\left\|\mathbf{x}_{n-1}+\mathbf{\Phi}_{ \Delta t}(\mathbf{x}_{n-1})-(\widehat{\mathbf{x}}_{n-1}+\mathbf{A}(t_{n-1}) \widehat{\mathbf{x}}_{n-1})\right\|_{2}^{2}\\ \leq\left\|\mathbf{x}_{n-1}-\widehat{\mathbf{x}}_{n-1}\right\|_{2}^{2}+\left\| \mathbf{\Phi}_{\Delta t}(\mathbf{x}_{n-1})-\mathbf{A}(t_{n-1})\widehat{ \mathbf{x}}_{n-1}\right\|_{2}^{2}\\ =\left\|\mathbf{x}_{n-1}-\widehat{\mathbf{x}}_{n-1}\right\|_{2}^{2}+\left\| \mathbf{\Phi}_{\Delta t}(\mathbf{x}_{n-1})-\mathbf{\Phi}_{\Delta t}(\widehat{ \mathbf{x}}_{n-1})+\mathbf{\Phi}_{\Delta t}(\widehat{\mathbf{x}}_{n-1})- \mathbf{A}(t_{n-1})\widehat{\mathbf{x}}_{n-1}\right\|_{2}^{2}\\ \leq\left\|\mathbf{x}_{n-1}-\widehat{\mathbf{x}}_{n-1}\right\|_{2}^{2}+\left\| \mathbf{\Phi}_{\Delta t}(\mathbf{x}_{n-1})-\mathbf{\Phi}_{\Delta t}(\widehat{ \mathbf{x}}_{n-1})\right\|_{2}^{2}+\left\|\mathbf{\Phi}_{\Delta t}(\widehat{ \mathbf{x}}_{n-1})-\mathbf{A}(t_{n-1})\widehat{\mathbf{x}}_{n-1}\right\|_{2}^ {2}\\ \leq\mathcal{E}^{n-1}+e^{\Delta tL}\mathcal{E}^{n-1}+\left\|\mathbf{\Phi}_{ \Delta t}(\cdot;t_{n})-\mathbf{A}_{p}\right\|_{\infty}^{2}\\ \leq(1+e^{\Delta tL})\mathcal{E}^{n-2}+(1+e^{\Delta tL})\left\|\mathbf{\Phi}_{ \Delta t}(\cdot;t_{n-1})-\mathbf{A}_{p}\right\|_{\infty}^{2}+\left\|\mathbf{ \Phi}_{\Delta t}(\cdot;t_{n})-\mathbf{A}_{p}\right\|_{\infty}^{2}\\ \leq\cdots\leq(1+e^{\Delta tL})^{\mathcal{E}}\mathcal{E}^{n-r}+\sum_{l=0}^{r} (1+e^{\Delta tL})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(r-l)})- \mathbf{A}_{m}\right\|_{\infty}^{2}\\ \leq\cdots\leq(1+e^{\Delta tL})^{2r}\mathcal{E}^{n-2r}+\sum_{l=0}^{r}(1+e^{ \Delta 
tL})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(r-l)})-\mathbf{A}_{ m}\right\|_{\infty}^{2}+\cdots\\ \sum_{l=0}^{r}(1+e^{\Delta tL})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(2 r-l)})-\mathbf{A}_{m-1}\right\|_{\infty}^{2}\\ \leq\cdots\leq(1+e^{\Delta tL})^{m}\mathcal{E}_{0}+\sum_{j=0}^{p}\sum_{l=0}^{w }(1+e^{L\Delta t})^{l}\left\|\mathbf{\Phi}_{\Delta t}(\cdot;t_{n-(jr-l)})- \mathbf{A}_{k}\right\|_{\infty}^{2}\\ \end{array}\] **Remark 5.2**: If \(\mathbf{K}(t)\equiv\mathbf{K}\) is constant in time, we recover the upper bound investigated in Theorem 4.3 of [49] and subsequently that in equation (3.11) of [37]. **Corollary 5.3**: _The time-varying DMD of (3.21) is at least as accurate in the MSE sense as the standard DMD of (13)._ Proof: The property can be intuitively interpreted from the fact that a stepwise constant (in time) approximation is always at least as good on average as a constant approximation. More precisely, let \(\mathbf{K},\mathbf{K}(t)\) denote the solutions of standard DMD and time-varying DMD, respectively, we may rewrite the minimization problem in (11): \[L_{\mathcal{S}}(\mathbf{K})=\min_{\mathbf{K}}\frac{1}{m}\sum_{i=1}^{m}\left\| \mathbf{y}_{i}-\mathbf{Kx}_{i}\right\|_{2}^{2}=\min_{\mathbf{K}}\frac{1}{p} \sum_{i=1}^{p}\frac{1}{w}\sum_{j=1}^{w}\left\|\mathbf{y}_{n-(ir-j)}-\mathbf{Kx }_{n-(ir-j)}\right\|_{2}^{2}\] and by definition of minimum, we conclude: \[L_{\mathcal{S}}(\mathbf{K})\geq\frac{1}{p}\sum_{i=1}^{p}\min_{\mathbf{K}_{i}} \frac{1}{w}\sum_{j=1}^{w}\left\|\mathbf{y}_{n-(iw-j)}-\mathbf{K}_{i}\mathbf{x}_{ n-(iw-j)}\right\|_{2}^{2}=L_{\mathcal{S}}(\mathbf{K}(t))\] ### Perturbation Analysis With the DMD algorithms introduced in Section 3, we provide an operator 2-norm error bound on the DMD solution for two cases of common operations in engineering: (1) truncation of singular value decomposition (SVD) rank in data matrix \(\mathbf{X}\) and, (2) deletion of most recent snapshots in both \(\mathbf{X},\mathbf{Y}\). In particular, we connect the error bound with a case study of nonautonomous linear dynamical system with the following form: \[\begin{cases}\frac{d\mathbf{u}(t)}{dt}=\mathbf{C}(t)\mathbf{u}(t)+\mathbf{f}(t) \\ \mathbf{u}(0)=\mathbf{u}_{0}\end{cases} \tag{10}\] whose solution is provided: \[\mathbf{u}(t)=\Phi_{t}(\mathbf{u}_{0};0)=\exp\bigg{(}\int_{0}^{t}\mathbf{C}(s) ds\bigg{)}\mathbf{u}_{0}+\int_{0}^{t}\exp\bigg{(}\int_{s}^{t}\mathbf{C}(\tau) dr\bigg{)}\mathbf{f}(s)ds \tag{11}\] We first present the results without assumptions on the underlying system. **Proposition 1**: _(Operator norm error under rank truncation) Let the SVD of data matrix \(\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\). \(\mathbf{\Sigma}\) contains the singular values arranged in non-increasing order, i.e. \(\sigma_{1}=\sigma_{\text{max}}\geq\sigma_{2}\geq\cdots\geq\sigma_{\text{min}}= \sigma_{\text{rank}(\mathbf{X})}\). Let a truncated SVD with \(r\leq\text{rank}(\mathbf{X})\) be denoted as \(\mathbf{X}_{r}=\mathbf{U}_{r}\mathbf{\Sigma}_{r}\mathbf{V}_{r}^{T}\) where only the first \(r\) columns are retained in \(\mathbf{U}_{r},\mathbf{V}_{r}\), and the first \(r\) singular values are retained in \(\mathbf{\Sigma}_{r}\). 
Then the operator norm error has the following upper bound:_ \[\left\|\mathbf{A}-\mathbf{A}_{r}\right\|_{2}\leq\frac{\sigma_{\text{max}}( \mathbf{Y})}{\sigma_{\text{min}}(\mathbf{X})} \tag{12}\] \[\|\mathbf{K}-\mathbf{K}_{r}\|_{2}^{2}=\left\|\mathbf{Y}\mathbf{X}^{\dagger}- \mathbf{Y}\mathbf{X}_{r}^{\dagger}\right\|_{2}^{2}\leq\left\|\mathbf{Y}\right\| _{2}^{2}\cdot\left\|\mathbf{X}^{\dagger}-\mathbf{X}_{r}^{\dagger}\right\|_{2} ^{2}=\frac{\sigma_{\text{max}}^{2}(\mathbf{Y})}{\sigma_{\text{min}}^{2}( \mathbf{X})} \tag{13}\] _Remark 2_: The bound presented in Proposition 1 is an upper bound in the sense that it does not depend on the rank-\(r\) due to the pseudoinverse operation. More granular bounds can be derived by analyzing instead the pointwise error for a specific observation \(\mathbf{x}\): \[\left\|\mathbf{K}\mathbf{x}-\mathbf{K}_{r}\mathbf{x}\right\|_{2}^{2}\leq \sigma_{\text{max}}^{2}(\mathbf{Y})\left\|\sum_{k=r}^{\text{rank}(\mathbf{X})} -\frac{1}{\sigma_{k}(\mathbf{X})}(\mathbf{u}_{k}^{T}\mathbf{x})\mathbf{v}_{k} \right\|_{2}^{2} \tag{14}\] \[=\sum_{k=r}^{\text{rank}(\mathbf{X})}\frac{\sigma_{\text{max}}^{2}(\mathbf{Y})} {\sigma_{k}^{2}(\mathbf{X})}(\mathbf{u}_{k}^{T}\mathbf{x})^{2}\] Under different assumptions of \(\mathbf{x}\) in relations to the column space of data matrix \(\mathbf{X}\), the bound (14) can be tightened [55]. To analyze the time-varying DMD strategy in Section 3.3, one may view the individual solutions \(\mathbf{K}_{i}\) on time interval \([t_{i-1},t_{i}]\) as a standard DMD solution with fewer observations. To provide a benchmark on the effect of adding/deleting observations in the training data and investigate dependencies, we illustrate the operator norm perturbation that occurs by deleting the most recent observation. The general case of deleting \(r\) most recent observations can be analogously derived using the Sherman-Morrison-Woodbury update formula. For the pseudoinverse of data matrices, the following result holds: **Lemma 5.6**: _(Updating pseudoinverse) Suppose \(N\geq m\) and \(\mathbf{X}_{m}\in\mathbb{R}^{N\times m}\) has full column rank, Furthermore, let \(\mathbf{u}\in\mathbb{R}^{N}\) be a newly collected snapshot, the pseudoinverse of \(\mathbf{X}=[\mathbf{X}_{m},\mathbf{u}]\in\mathbb{R}^{N\times(m+1)}\) is given by:_ \[\mathbf{X}^{\dagger} =\left[\mathbf{X}_{m}^{\dagger}+c\mathbf{X}_{m}^{\dagger} \mathbf{u}\mathbf{u}^{T}(\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger})^{T}-c \mathbf{X}_{m}^{\dagger}\mathbf{u}\mathbf{u}^{T}\right]\] \[=\left[\mathbf{X}_{m}^{\dagger}\right]-c\left[\mathbf{X}_{m}^{ \dagger}\mathbf{u}\right]((\mathbf{I}-\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger}) \mathbf{u})^{T}\] _where:_ \[c=\frac{1}{\left\|\mathbf{u}\right\|_{2}^{2}-\mathbf{u}^{T}\mathbf{X}_{m}( \mathbf{X}_{m}^{T}\mathbf{X}_{m})^{-1}\mathbf{X}_{m}^{T}\mathbf{u}}\geq\frac{ 1}{\left\|\mathbf{u}\right\|_{2}^{2}} \tag{12}\] _The lower bound is attained if \(\mathbf{u}\) is orthogonal to the range of \(\mathbf{X}_{m}\). 
\({}_{\Box}\)_ Proof: We directly apply the block matrix inverse formula [18] to \((\mathbf{X}^{T}\mathbf{X})^{-1}\): \[(\mathbf{X}^{T}\mathbf{X})^{-1} =\begin{bmatrix}\mathbf{X}_{m}^{T}\mathbf{X}_{m}&\mathbf{X}_{m}^{ T}\mathbf{u}\\ \mathbf{u}^{T}\mathbf{X}_{m}&\left\|\mathbf{u}\right\|_{2}^{2}\end{bmatrix}^{-1}\] \[=\begin{bmatrix}(\mathbf{X}_{m}^{T}\mathbf{X}_{m})^{-1}+c\mathbf{ X}_{m}^{\dagger}\mathbf{u}\mathbf{u}^{T}(\mathbf{X}_{m}^{\dagger})^{T}&-c \mathbf{X}_{m}^{\dagger}\mathbf{u}\\ -c\mathbf{u}^{T}(\mathbf{X}_{m}^{\dagger})^{T}&c\end{bmatrix}\] and multiply the result to \(\mathbf{X}^{T}=\begin{bmatrix}\mathbf{X}_{m}^{T}\\ \mathbf{u}^{T}\end{bmatrix}\). \({}_{\Box}\) **Proposition 5.7**: _(Operator 2-norm perturbation under column deletion) \({}_{\Box}\)_ _Let \(\mathbf{X}=[\mathbf{X}_{m},\mathbf{u}]\in\mathbb{R}^{N\times(m+1)}\), \(\mathbf{Y}=[\mathbf{Y}_{m},\mathbf{v}]\in\mathbb{R}^{N\times(m+1)}\), and \(\mathbf{X}_{m},\mathbf{Y}_{m}\in\mathbb{R}^{N\times m}\), with \(N\geq m\). We further assume that \(\mathbf{X}_{m}\) has full column rank. Then, the operator norm error satisfies the following upper bound:_ \[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}\leq\sqrt{c^{2}\left\|\mathbf{u} \right\|_{2}^{2}\left(1+\frac{\left\|\mathbf{u}\right\|_{2}^{2}}{\sigma_{min}^ {2}(\mathbf{X}_{m})}\right)(\sigma_{max}^{2}(\mathbf{Y}_{m})+\left\|\mathbf{v }\right\|_{2}^{2})+\frac{\left\|\mathbf{v}\right\|_{2}^{2}}{\sigma_{min}^{2}( \mathbf{X}_{m})}} \tag{13}\] _with \(c\) defined in Lemma 5.6. In particular, if \(\mathbf{u}\) is orthogonal to the range of \(\mathbf{X}_{m}\), the bound is tightened to:_ \[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}\leq\sqrt{\frac{\sigma_{max}^{2}( \mathbf{Y}_{m})+\left\|\mathbf{v}\right\|_{2}^{2}}{\left\|\mathbf{u}\right\|_{ 2}^{2}}+\frac{\sigma_{max}^{2}(\mathbf{Y}_{m})+2\left\|\mathbf{v}\right\|_{2}^ {2}}{\sigma_{min}^{2}(\mathbf{X}_{m})}} \tag{14}\] _Proof._ \[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}^{2}=\left\|\mathbf{Y}\mathbf{X}^ {\dagger}-\mathbf{Y}_{m}\mathbf{X}_{m}^{\dagger}\right\|_{2}^{2}=\left\| \mathbf{Y}\mathbf{X}^{\dagger}-\mathbf{Y}\widehat{\mathbf{X}_{m}}^{\dagger}+ \mathbf{Y}\widehat{\mathbf{X}_{m}}^{\dagger}-\widehat{\mathbf{Y}_{m}}\widehat {\mathbf{X}_{m}}^{\dagger}\right\|_{2}^{2}\] where we define: \[\widehat{\mathbf{X}_{m}}^{\top}:=\begin{bmatrix}\mathbf{X}_{m}^{\dagger}\\ \mathbf{0}_{1\times N}\end{bmatrix}\in\mathbb{R}^{(m+1)\times N},\widehat{ \mathbf{Y}_{m}}=\begin{bmatrix}\mathbf{Y}_{m}&\mathbf{0}_{N\times 1}\end{bmatrix}\in \mathbb{R}^{N\times(m+1)} \tag{15}\] then by triangle inequality: \[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}^{2}\leq\left\|\mathbf{Y}\right\|_{ 2}^{2}\left\|\mathbf{X}^{\dagger}-\widehat{\mathbf{X}_{m}}^{\dagger}\right\|_{ 2}^{2}+\left\|\widehat{\mathbf{X}_{m}}^{\dagger}\right\|_{2}^{2}\left\| \mathbf{Y}-\widehat{\mathbf{Y}_{m}}\right\|_{2}^{2}\] where \(\left\|\mathbf{X}^{\dagger}-\widehat{\mathbf{X}_{m}}^{-\dagger}\right\|_{2}\) needs to be further bounded. 
Using Lemma 5.6, we have: \[\mathbf{X}^{\dagger}-\widehat{\mathbf{X}_{m}}^{\dagger}=-c\begin{bmatrix} \mathbf{X}_{m}^{\dagger}\mathbf{u}\\ 1\end{bmatrix}((\mathbf{I}-\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger})\mathbf{u})^ {T} \tag{5.16}\] Furthermore, we have: \[\left|\left|\begin{bmatrix}\mathbf{X}_{m}^{\dagger}\mathbf{u}\\ 1\end{bmatrix}\right|\right|_{2}^{2}\leq 1+\frac{\left\|\mathbf{u}\right\|_{2}^{2 }}{\sigma_{min}^{2}(\mathbf{X}_{m})} \tag{5.17}\] and as a projection matrix: \[\left\|\mathbf{I}-\mathbf{X}_{m}\mathbf{X}_{m}^{\dagger}\right\|_{2}^{2}\leq 1 \tag{5.18}\] Then we may conclude: \[\left\|\mathbf{X}^{\dagger}-\widehat{\mathbf{X}_{m}}^{-\dagger}\right\|_{2}^ {2}\leq c^{2}\left\|\mathbf{u}\right\|_{2}^{2}\left(1+\frac{\left\|\mathbf{u} \right\|_{2}^{2}}{\sigma_{min}^{2}(\mathbf{X}_{m})}\right) \tag{5.19}\] Putting everything together, we conclude that: \[\left\|\mathbf{K}-\mathbf{K}_{m}\right\|_{2}^{2}\leq c^{2}\left\|\mathbf{u} \right\|_{2}^{2}\left(1+\frac{\left\|\mathbf{u}\right\|_{2}^{2}}{\sigma_{min} ^{2}(\mathbf{X}_{m})}\right)(\sigma_{max}^{2}(\mathbf{Y}_{m})+\left\|\mathbf{ v}\right\|_{2}^{2})+\frac{\left\|\mathbf{v}\right\|_{2}^{2}}{\sigma_{min}^{2}( \mathbf{X}_{m})} \tag{5.20}\] Under the assumption of \(\mathbf{u}\) being orthogonal to \(\mathrm{range}(\mathbf{X}_{m})\), the last conclusion follows by the reduction of lower bound for \(c\) presented in Lemma 5.6. Figure 1 provides a verification of the bound in Theorem 5.7 using random Gaussian matrices, averaged over 10 random seeds. The results obtained in Theorem 5.4 and Theorem 5.7 only rely on general linear algebra operations. With explicit form of the dynamical system, such as the system in equation (5.7), more insights can be gained by leveraging the dependence of time-shifted data matrix \(\mathbf{Y}\) on \(\mathbf{X}\) via the flow map \(\mathbf{\Phi}_{\Delta t}\), as we now present in the following proposition: Figure 1: Operator norm error bound (5.7) under deletion of most recent observation, for random Gaussian data matrices. The comparison of true operator norm error and upper bounds are averaged over 10 seeds. **Proposition 5.8**: _(Time-shift norm upper bound, for system (5.7)) Assume that \(\mathbf{C}(t)\) is diagonalizable for all \(t\), and \(\mathbf{C}(t)\), \(\mathbf{f}(t)\) are piecewise continuous on all intervals \([t_{0},t_{1}],\ldots,[t_{m-1},t_{m}]\). 
Then we have that the norm of \(\mathbf{Y}\) is connected with the norm of \(\mathbf{X}\) as the following, with \(f,\gamma\) defined in equation (5.29):_ \[\left\|\mathbf{Y}\right\|_{2}\leq\exp\left(\frac{1}{2}\gamma^{2}\Delta t\right)\sqrt{\frac{mf^{2}}{\gamma^{2}}+\sum_{i=1}^{m}\sigma_{i}^{2}(\mathbf{X})}\] For convenience of notations, define the time-dependent matrix: \[\mathbf{M}_{\Delta t}^{(i)}=\mathbf{M}_{\Delta t}(t_{i}):=\exp\bigg{(}\int_{t_{i}}^{t_{i}+\Delta t}\mathbf{C}(s)ds\bigg{)} \tag{5.21}\] and the time-dependent vector: \[\mathbf{g}_{\Delta t}^{(i)}=\mathbf{g}_{\Delta t}(t_{i})=\int_{t_{i}}^{t_{i}+\Delta t}\exp\bigg{(}\int_{s}^{t_{i}+\Delta t}\mathbf{C}(\tau)d\tau\bigg{)}\mathbf{f}(s)ds \tag{5.22}\] then we have by the explicit flow map (5.7) that: \[\mathbf{v}=\mathbf{M}_{\Delta t}^{(m+1)}\mathbf{u}+\mathbf{g}_{\Delta t}^{(m+1)} \tag{5.23}\] Iteratively applying the recurrence (5.23) to \(\mathbf{Y}\), we have the explicit dependence for each column \(1\leq i\leq m\): \[\mathbf{y}_{i}=\mathbf{M}_{\Delta t}^{(i)}\mathbf{x}_{i}+\mathbf{g}_{\Delta t}^{(i)} \tag{5.24}\] and therefore: \[\mathbf{Y}=\begin{bmatrix}\big{|}&\big{|}&\cdots&\big{|}\\ \mathbf{M}_{\Delta t}^{(1)}\mathbf{x}_{1}+\mathbf{g}_{\Delta t}^{(1)}&\mathbf{M}_{\Delta t}^{(2)}\mathbf{x}_{2}+\mathbf{g}_{\Delta t}^{(2)}&\cdots&\mathbf{M}_{\Delta t}^{(m)}\mathbf{x}_{m}+\mathbf{g}_{\Delta t}^{(m)}\\ \big{|}&\big{|}&\cdots&\big{|}\end{bmatrix} \tag{5.25}\] Each column of \(\mathbf{Y}\) can be bounded by treating the matrix and vector contributions in (5.24) separately. For the matrix exponential factor we use \[\left\|\exp\bigg{(}\int_{s}^{t_{i}+\Delta t}\mathbf{C}(\tau)d\tau\bigg{)}\right\|_{2}^{2}\leq\exp\big{(}\gamma_{i}^{2}(t_{i}+\Delta t-s)\big{)},\qquad s\in[t_{i},t_{i}+\Delta t] \tag{5.26}\] so that, in particular, \(\left\|\mathbf{M}_{\Delta t}^{(i)}\right\|_{2}^{2}\leq\exp(\gamma_{i}^{2}\Delta t)\), where \(\gamma_{i}\) is determined by the spectra of \(\mathbf{C}(s)\) on \([t_{i},t_{i+1}]\). Under the assumption of piecewise continuity on each \([t_{i},t_{i+1}]\), the attainability of \(\gamma_{i}\) is given by considering the spectra of \(\mathbf{C}(t)\) as a continuous map of time [19, 5]. 
Furthermore, \[\left\|\mathbf{g}_{\Delta t}^{(i)}\right\|_{2}^{2}=\left\|\int_{t_{i}}^{t_{i}+ \Delta t}\exp\bigg{(}\int_{s}^{t_{i}+\Delta t}\mathbf{C}(\tau)d\tau\bigg{)} \mathbf{f}(s)ds\right\|_{2}^{2}\] \[\leq f_{i}^{2}\int_{t_{i}}^{t_{i}+\Delta t}\exp\bigg{(}\gamma_{i}^{2}(t_{i}+ \Delta t-s)\bigg{)}ds=\frac{f_{i}^{2}}{\gamma_{i}^{2}}(\exp(\gamma_{i}^{2} \Delta t)-1)\] where we define: \[f_{i}:=\max_{t_{i}\leq s\leq\gamma_{i+1}}\left\|\mathbf{f}(s)\right\|_{2} \tag{5.28}\] which is attainable due to the piecewise continuous assumption of \(\mathbf{f}(t)\). Finally, define: \[\gamma:=\max_{1\leq i\leq m}\gamma_{i},f:=\max_{1\leq i\leq m}f_{i} \tag{5.29}\] We conclude the following result as desired: \[\left\|\mathbf{Y}\right\|_{2}^{2}\leq\exp(\gamma^{2}\Delta t)\sum _{i=1}^{m}\sigma_{i}^{2}(\mathbf{X})+\frac{mf}{\gamma}(\exp(\gamma^{2}\Delta t )-1)\] \[\leq\exp(\gamma^{2}\Delta t)\bigg{(}\frac{mf^{2}}{\gamma^{2}}+ \sum_{i=1}^{m}\sigma_{i}^{2}(\mathbf{X})\bigg{)}\] _Remark 5.9_.: In the special case where \(\mathbf{C}(t)\equiv\mathbf{C}\) with eigendecomposition \(\mathbf{C}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\) and largest eigenvalue \(\lambda_{1}\), and \(\mathbf{f}\equiv\mathbf{0}\). We have that the solution has the form: \[\mathbf{x}(t)=\mathbf{Q}\exp(t\mathbf{\Lambda})\mathbf{Q}^{-1}\mathbf{x}_{0} \tag{5.30}\] Under the same conditions, the upper bound in Proposition 5.8 can be tightened to: \[\left\|\mathbf{Y}\right\|_{2}\leq\kappa_{2}(\mathbf{Q})\exp\big{(}\lambda_{1} \Delta t\big{)}\sigma_{max}(\mathbf{X})\] where \(\kappa_{2}(\cdot)\) denotes the 2-norm condition number. We provide a verification of the upper bounds in Proposition 5.8 in Figure 2 using the time-varying linear system of [60], Example 5.2: \[\frac{d\mathbf{x}(t)}{dt}=\mathbf{C}(t)\mathbf{x}(t) \tag{5.31}\] \[\mathbf{x}(0)=[1,0]^{T}\] where: \[\mathbf{C}(t)=\begin{bmatrix}0&1+\epsilon t\\ -1-\epsilon t&0\end{bmatrix} \tag{5.32}\] with \(\epsilon=0.1\) on the temporal domain \(t\in[0,1]\), with \(\Delta t=10^{-3}\). Furthermore, we also provide the upper bounds for the two advection-dominated examples with the Figure 2: Time shift data matrix 2 norm upper bounds (5.8) compared to actual 2 norms, with respect to number of collected snapshots. Top: linear system (5.31) with \(N=2,\Delta t=10^{-3}\). Middle: time-varying advection in 1d (6.7) with \(N=400,\Delta t=0.01\). Bottom: time-varying advection-diffusion in 2d (6.9) with \(N=2500\) and \(\Delta t=0.01\). parameter setups described in Section 6.2 and Section 6.3. In particular, the example system (13) is especially useful for the consideration of numerical solutions to the linear PDE (1), where the matrix \(\mathbf{C}(t)\) may be seen as the finite difference or finite element stiffness matrix with time-varying coefficients, and \(\mathbf{f}(t)\) as the inhomogeneous source term. The interpretations of the results obtained in Proposition 5.2 and Proposition 5.3 are two-folds. In cases where the learning is agnostic of underlying physics (i.e. with data only available as images and the underlying system is unknown), such as the cases considered in [20], perturbations in the DMD operator will strictly be estimable as the perturbation in collected data snapshots alone. However, with additional information of the underlying system, such as (13), one may incorporate physical knowledge and refine the bound by considering columns of \(\mathbf{X},\mathbf{Y}\) as ordered time-shifts of the initial condition along the flow map. 
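The linear test case (5.31)–(5.32) is easy to reproduce; the sketch below uses a simple forward-Euler integrator (rather than the scheme behind the reported figures, so the numbers are only indicative) to build the data matrices for \(\epsilon=0.1\), \(\Delta t=10^{-3}\), \(t\in[0,1]\) and to compare \(\left\|\mathbf{Y}\right\|_{2}\) with \(\left\|\mathbf{X}\right\|_{2}\), the quantities related by Proposition 5.8.

```python
import numpy as np

# Time-varying linear system (5.31)-(5.32): dx/dt = C(t) x with x(0) = [1, 0]^T.
eps, dt, t_final = 0.1, 1e-3, 1.0
steps = int(round(t_final / dt))

def C(t):
    return np.array([[0.0, 1.0 + eps * t],
                     [-(1.0 + eps * t), 0.0]])

# Generate snapshots with forward Euler (illustrative only).
x = np.array([1.0, 0.0])
snapshots = [x.copy()]
for k in range(steps):
    x = x + dt * (C(k * dt) @ x)
    snapshots.append(x.copy())
snapshots = np.column_stack(snapshots)

X, Y = snapshots[:, :-1], snapshots[:, 1:]
print("||X||_2 =", np.linalg.norm(X, 2), "  ||Y||_2 =", np.linalg.norm(Y, 2))
```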
Nevertheless, both of the results serve as a priori estimates of operator norm perturbation to help guide the selection of hyperparameters in DMD algorithms. ## 6 Numerical Experiments In the following numerical examples, we test the accuracy of Algorithm 1 for a variety of time-varying advection phenomena. In particular, for advection-dominated linear conservation laws (Sections 6.2 and 6.3), we make the procedure fully data-driven by assuming that the advection velocity in equation (11) is unknown, and is estimated from tracking the trajectory of the mode. Given a temporal discretization \(0=t_{0}<t_{1}<\ldots,<t_{n}=t_{f}\), we measure the performance of DMD algorithms via relative prediction error defined as the following: \[\epsilon(t)=\frac{\left\|\mathbf{u}_{\mathrm{DMD}}(t)-\mathbf{u}(t)\right\|_{ 2}}{\left\|\mathbf{u}(t)\right\|_{2}} \tag{14}\] where \(\mathbf{u},\mathbf{u}_{\mathrm{DMD}}\), are respectively the exact solution and the DMD prediction at time \(t\), with the error computed in the \(L^{2}(\mathbb{R}^{d})\) sense. To construct the reduced order model in each experiment, an SVD and projection to POD modes are applied at a prespecified rank determined based on a relative accuracy tolerance level. The exact setup of each numerical simulations is reported separately. We first consider the Navier-Stokes equations to test the accuracy of the base time-varying DMD algorithm in reconstructing complex and nonlinear dynamics without Lagrangian information, presented in Section 6.1. For each experiment of Section 6.2 and 6.3, we compare four different strategies: the standard DMD and time-varying DMD using only \(\mathbf{u}(t)\) as observables, the physics-aware DMD in Section 3.2 without recomputations, and Algorithm 1. ### Incompressible Navier-Stokes equations We consider the flow field of a two-dimensional incompressible fluid with density \(\rho=1\) and dynamic viscosity \(\nu=1/600\) (kg/(m\(\cdot\)s)). With a rectangular domain \(\mathcal{D}=[0,2]\times[0,1]\), the fluid enters from the left boundary with fixed velocity and flows around an impermeable cylinder centered at \(\mathbf{x}_{\mathrm{circ}}=[0.3,0.5]^{T}\). The dynamics of fluid pressure \(p(t,\mathbf{x})\), horizontal velocity component \(u(t,\mathbf{x})\) and vertical velocity component \(v(t,\mathbf{x})\) follow the Navier Stokes (NS) equation: \[\begin{cases}\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{ \partial u}{\partial y}=-\frac{1}{\rho}\frac{\partial p}{\partial x}+\nu\bigg{(} \frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}} \bigg{)}\\ \frac{\partial v}{\partial t}+u\frac{\partial v}{\partial x}+v\frac{ \partial v}{\partial y}=-\frac{1}{\rho}\frac{\partial p}{\partial y}+\nu \bigg{(}\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y ^{2}}\bigg{)}\\ \frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0\end{cases} \tag{10}\] subject to the following initial-boundary conditions: \[p(t,2,y)=0,\frac{\partial p}{\partial\mathbf{n}}\big{|}_{\partial\mathcal{D} \setminus\{x=2\}}=0,\frac{\partial u(t,2,y)}{\partial\mathbf{n}}=0,\frac{ \partial v(t,2,y)}{\partial\mathbf{n}}=0 \tag{11}\] \[u(t,0,y)=1,v(t,0,y)=0, \tag{12}\] \[u(t,x,0)=u(t,x,1)=0,v(t,x,0)=v(t,x,1)=0 \tag{13}\] We define the quantity of interest as the magnitude of our velocity field: \[w(t,x,y):=\sqrt{u(t,x,y)^{2}+v(t,x,y)^{2}} \tag{14}\] and simulate the nonlinear system (10) with a custom MATLAB library. 
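The two post-processing quantities used throughout the experiments, the relative prediction error \(\epsilon(t)\) and the velocity magnitude \(w=\sqrt{u^{2}+v^{2}}\), can be computed directly from stored snapshot arrays; the Python sketch below assumes the snapshots are stored column-wise (one column per time step) and is independent of the solver used to generate them.

```python
import numpy as np

def relative_error(u_dmd, u_ref):
    """Relative L2 prediction error epsilon(t), one value per snapshot (column)."""
    return np.linalg.norm(u_dmd - u_ref, axis=0) / np.linalg.norm(u_ref, axis=0)

def velocity_magnitude(u, v):
    """Quantity of interest w = sqrt(u^2 + v^2) from the two velocity components."""
    return np.sqrt(u**2 + v**2)

# Example usage: eps_t = relative_error(W_dmd, W_ref) for arrays of shape (N, n_t).
```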
The problem is solved in conservative form with finite difference method on a staggered grid [16], with discretization levels \(\Delta x=\Delta y=0.02\), and time step size \(\Delta t=0.001\). Under this setting, we focus on reconstructing the dynamics during formation of vortex street on the time domain \(t\in[0,3.0]\), yielding effective state dimensions \(N=5000\) and \(m=3000\) snapshots. For each DMD strategy, we set the SVD truncation level to \(\epsilon=1.0\times 10^{-2}\). Figures 4 and Figure 3 shows a comparison of predicted solutions between standard DMD and time-varying DMD along with their relative \(L^{2}\)-errors from the reference numerical solution. As expected, standard DMD places an invariant manifold assumption and yields an inaccurate reduced-order model under rapid time changes. The time-varying DMD more accurately represents the solution by updating the operator at different time intervals. Finally, we visualize the dominant frequency variations during the time domain \([0,2.5]\) and observe that standard DMD begins to accumulate errors after \(t=0.05\), failing to capture the rapid frequency changes. ### 1d time-varying advection As a test problem for a comprehensive comparison understanding of standard DMD, time-varying DMD (without Lagrangian information), physics-aware DMD (without temporal updates), and time-varying DMD with Lagrangian moving grid information, we consider the following conservation law under pure advection (\(D\equiv 0\)): \[\begin{cases}\frac{\partial u}{\partial t}+\frac{\partial}{\partial x}[c\sin( \omega t)u]=0\\ u(0,x)=u_{0}(x)=\exp(-0.5x^{2})\end{cases} \tag{15}\] where we choose the advection speed \(c\equiv 2\) and frequency \(\omega\equiv\pi/2\). The snapshots are simulated using an upwind numerical scheme on a temporal grid of \(t\in[0,8]\) with discretization \(\Delta t=0.01\), yielding \(m=800\) training data points. The spatial grid is taken to be \(x\in[-10,10]\) with discretization \(\Delta x=0.05\). By construction, the initial concentration \(u_{0}\) does not change shape, and is advected in a oscillatory manner over time. As a fully data driven model, we consider estimating the velocity as a function of time directly from observations. Figure 5 shows a visualization of the advection velocity as a function of time, estimated from tracking the mode of the solution, defined by viewing the conserved solution \(u\) as a density, and computing the average: \[\overline{x}(t):=\frac{1}{\int_{x_{l}}^{x_{r}}u(t,x)dx}\int_{x_{l}}^{x_{r}}xu(t,x)dx \tag{10}\] where for (11), \(x_{l}=-10,x_{r}=10\). Then the estimated velocity can be computed using a centered difference of \(\overline{x}(t)\) at discrete time points, which was then used as an approximation to the velocity in the Lagrangian reference frame of (15). Figure 4: Reconstructed velocity magnitudes to the 2d Navier-Stokes equation (10) at time steps \(t=0.15,0.25,0.5\). Top row: reference solution from high-fidelity simulation. Middle row: standard DMD predictions. Bottom row: time-varying DMD predictions (\(r=50\)). Figure 3: Left: comparison of standard DMD and time-varying DMD in terms of prediction relative errors. Middle: real part of top 3 dominant frequencies, computed from time-varying DMD modes, as a function of time. Right: imaginary part of top 3 dominant frequencies as a function of time. We present the predicted solutions, compared with the reference numerical solution, at time steps \(t=0,\pi/4,\pi/2,\pi\). 
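The mode-tracking step just described can be written compactly; the sketch below, which assumes the snapshots are stored column-wise on a uniform spatial grid, evaluates the density-weighted mean \(\overline{x}(t)\) defined above with the trapezoidal rule and differentiates it with centered differences to estimate the advection velocity.

```python
import numpy as np

def estimate_advection_velocity(snapshots, x, dt):
    """Track the mode of u and estimate the advection velocity.

    snapshots : array (n_x, n_t) with u(t_k, x_j) stored in column k
    x         : spatial grid of length n_x
    dt        : time step between stored snapshots
    """
    mass = np.trapz(snapshots, x, axis=0)                        # integral of u(t, x) dx
    x_bar = np.trapz(x[:, None] * snapshots, x, axis=0) / mass   # density-weighted mean
    velocity = np.gradient(x_bar, dt)    # centered differences in the interior
    return x_bar, velocity
```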
In this experiment, we set the tolerance for SVD truncation for all DMD strategies to be \(\epsilon=10^{-6}\). Furthermore, for time-varying DMD strategies, the size of the subintervals are chosen to be \(r=5\). Figures 6, 7, 8, and 9 show the behavior of predicted solutions under different DMD strategies. The relative errors are plotted on log scale, presented in Figure 10. In particular, we observe increased error fluctuations for time-homogeneous DMD strategies (i.e. standard DMD and physics-ware DMD) at regions of high velocity speed. The advection of the solution mode is also not captured. This is to be expected as standard DMD and physics-aware DMD are assumed to be constant in time, and would incur larger errors where such dependence is stronger. In the case of the time-varying DMD without Lagrangian information, we observe that the modal information is captured and advects through time. However, unphysical oscillations are still present. Out of the tested DMD strategies, Algorithm 1 provides the most faithful reconstruction of the time-varying advection behavior. ### Advection-dominated Equation in 2d We consider a two-dimensional linear advection-diffusion equation with time-varying velocity components, defined on the spatio-temporal domain: \((t,x,y)\in[0,10]\times[-10,10]\times[-10,10]\). \[\begin{cases}\frac{\partial u}{\partial t}+v_{x}(t)\frac{\partial u}{ \partial x}+v_{y}(t)\frac{\partial u}{\partial y}=D\bigg{(}\frac{\partial^{2 }u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\bigg{)}\\ u(0,x,y)=\exp(-(x^{2}+y^{2}))\\ v_{x}(t)=\frac{1}{2}\cos(t),v_{y}=-\frac{2}{5}\sin(t),D\equiv 0.001\end{cases} \tag{12}\] In this example, we let the number of spatial nodes in each direction be \(N_{x}=N_{y}=50\). The temporal domain is discretized with a step size of \(\Delta t=0.01\). The PDE is numerically solved using a modified centered time, centered space method (Du-Fort Frankel method) presented in [23]. The above discretization yields state dimension \(N=2500\) and number of snapshots \(M=1000\). Similar to the 1-dimensional problem (11), the advection velocity can be estimated in a fully data-driven manner by tracking the mode of the solution snapshots Figure 5: Estimated advection velocity for (11) by tracking the mode of numerical solutions on the time domain \([0,10]\). by defining, analogously to (6.8): \[\mathfrak{X}(t):=\begin{bmatrix}\overline{x}(t)\\ \overline{y}(t)\end{bmatrix}=\frac{1}{\int_{x_{l}}^{x_{r}}\int_{y_{b}}^{y_{t}}u(t,x,y)dxdy}\int_{x_{l}}^{x_{r}}\int_{y_{b}}^{y_{t}}\begin{bmatrix}x\\ y\end{bmatrix}\cdot u(t,x,y)dydx \tag{6.10}\] and numerically differentiating in time with centered difference. We visualize the predicted solutions for three of the DMD strategies in Figures 11 and 12, corresponding respectively to the standard DMD, physics-aware DMD, and time-varying DMD with Lagrangian moving grid, constructed with a subinterval size \(r=30\). We predict the solutions up to \(t=8\) and compare with the baseline numerical solution. Finally, the prediction errors (6.1) for all four DMD strategies are presented in Figure 13. Due to presence of small diffusion, a time-varying DMD strategy without Lagrangian moving grid is able to achieve comparable accuracy to that with Lagrangian information. The standard DMD shows significant degradation in accuracy over time. The physics-aware DMD and time-varying DMD with physics still possess model misspecification that results in a growth of error over time, albeit at a reduced rate than standard DMD. 
In contrast, the results given by Algorithm 4.1 show controlled error growth, similar to the behavior observed in (6.2).

## 7 Conclusions

In this work, we investigated a method for learning time-dependent advection-dominated phenomena using DMD algorithms. In particular, when the PDE parameters vary in time, we demonstrated that the characteristic lines of the PDE are an important observable to include in order to improve the accuracy of reconstructions, as verified with 1d and 2d advection-diffusion equations with time-varying coefficients. We further provided a prediction error guarantee for the time-dependent approximation to the Koopman operator. In addition, we analyzed the effect of SVD truncation and the number of data snapshots on the operator-norm error, and verified such upper bounds in both model-free and model-dependent cases. The method adopted in this work provides a possibility for real-time control in advection-dominated systems. One possible future direction concerns the identification of closures for characterizing the time evolution of a quantity of interest that depends on the states of another dynamical system [15]. Instead of relying on an equation-free model, deriving and learning explicit forms of the reduced-order dynamics provides a principled analysis tool for uncertainty propagation and control system design, as well as extrapolation capabilities. Furthermore, we briefly investigated the possibility of a fully data-driven model by assuming the advection coefficients are unknown and estimating them by mode tracking. Although such a method is effective in capturing the macroscopic behavior, it is far from sufficient for velocities that have a nonlinear dependence on both the spatial variables and the solution itself.
Figure 7: 1d time-varying advection: time-varying DMD predictions (\(r=5\), without Lagrangian grid), at \(t=0\), \(t=\pi/4\), \(t=\pi/2\), \(t=\pi\).
Future explorations will focus on parameterizations for the advection and diffusion coefficients, which are identified simultaneously as the optimal linear operator is constructed. Such a scenario can potentially be considered in a constrained optimization [46] or Bayesian inversion setting [25]. Reducing the computational complexity is another possible path of future exploration, owing to the curse of dimensionality for advection-dominated problems with moderate- to high-dimensional datasets. An added layer of dimensionality reduction must be adopted in cases where storing and operating on the data snapshots and the Lagrangian moving grid are intractable. A potential solution in the DMD setting is to use low-rank tensor networks to approximate multidimensional linear operators [27, 17].

## Acknowledgments

We would like to thank Dr. Hannah Lu and Dr. Tyler Maltba for useful discussions and manuscript review. The research was partially supported by the Air Force Office of Scientific Research under grant FA9550-21-1-0381, by the National Science Foundation under award 2100927, by the Office of Advanced Scientific Computing Research (ASCR) within the Department of Energy Office of Science under award number DE-SC0023163, and by the Strategic Environmental Research and Development Program (SERDP) of the Department of Defense under award RC22-3278.
Figure 8: 1d time-varying advection: physics-aware DMD predictions at \(t=0\), \(t=\pi/4\), \(t=\pi/2\), \(t=\pi\).
2309.03566
P4R-Type: a Verified API for P4 Control Plane Programs (Technical Report)
Software-Defined Networking (SDN) significantly simplifies programming, reconfiguring, and optimizing network devices, such as switches and routers. The de facto standard for programming SDN devices is the P4 language. However, the flexibility and power of P4, and SDN more generally, give rise to important risks. As a number of incidents at major cloud providers have shown, errors in SDN programs can compromise the availability of networks, leaving them in a non-functional state. The focus of this paper is errors in control-plane programs that interact with P4-enabled network devices via the standardized P4Runtime API. For clients of the P4Runtime API it is easy to make mistakes that lead to catastrophic failures, despite the use of Google's Protocol Buffers as an interface definition language. This paper proposes P4R-Type, a novel verified P4Runtime API for Scala that performs static checks for P4 control plane operations, ruling out mismatches between P4 tables, allowed actions, and action parameters. As a formal foundation of P4R-Type, we present the $F_{\text{P4R}}$ calculus and its typing system, which ensure that well-typed programs never get stuck by issuing invalid P4Runtime operations. We evaluate the safety and flexibility of P4R-Type with 3 case studies. To the best of our knowledge, this is the first work that formalises P4Runtime control plane applications, and a typing discipline ensuring the correctness of P4Runtime operations.
Jens Kanstrup Larsen, Roberto Guanciale, Philipp Haller, Alceste Scalas
2023-09-07T08:52:49Z
http://arxiv.org/abs/2309.03566v1
# P4R-Type: a Verified API for P4 Control Plane Programs

###### Abstract.

Software-Defined Networking (SDN) significantly simplifies programming, reconfiguring, and optimizing network devices, such as switches and routers. The _de facto_ standard for programming SDN devices is the P4 language. However, the flexibility and power of P4, and SDN more generally, give rise to important risks. As a number of incidents at major cloud providers have shown, errors in SDN programs can compromise the availability of networks, leaving them in a non-functional state. The focus of this paper is errors in control-plane programs that interact with P4-enabled network devices via the standardized P4Runtime API. For clients of the P4Runtime API it is easy to make mistakes that lead to catastrophic failures, despite the use of Google's Protocol Buffers as an interface definition language. This paper proposes P4R-Type, a novel verified P4Runtime API for Scala that performs static checks for P4 control plane operations, ruling out mismatches between P4 tables, allowed actions, and action parameters. As a formal foundation of P4R-Type, we present the \(F_{\text{P4R}}\) calculus and its typing system, which ensure that well-typed programs never get stuck by issuing invalid P4Runtime operations. We evaluate the safety and flexibility of P4R-Type with 3 case studies. To the best of our knowledge, this is the first work that formalises P4Runtime control plane applications, and a typing discipline ensuring the correctness of P4Runtime operations.

Keywords: Software and its engineering \(\rightarrow\) Formal language definitions; Domain specific languages; Networks \(\rightarrow\) Programming interfaces.
## 1. Introduction

Software-Defined Networking (SDN) decouples the network's _control plane_ (the logic that decides how traffic is handled) from the _data plane_ devices that process and forward packets. This separation simplifies network management and enables network administrators to quickly and easily reconfigure and optimize network traffic flows. The de facto Open Source standard for SDN is P4 [P4.org Working Group 2020a]. In P4, the data plane is programmed by specifying packet processing _tables_ which select the _actions_ to perform when a network packet matches certain patterns. The P4 standard also defines a control plane API (called P4Runtime [P4.org Working Group 2020b]) for writing programs that query or alter the configuration of P4-enabled network devices. Unfortunately, the power and ease of automation of SDN come with risks: a mistake in an SDN program can leave a network in a non-functional state. Indeed, erroneous configuration changes have compromised the availability of entire regions of large cloud providers [Sharwood 2016]. A recent study by Bhardwaj et al. [2021] shows that 38.8% of SDN bugs are triggered when the controller _"attempts to process system configurations"_ -- i.e. read, add, update, delete table entries; the authors add that _"this fact is astounding because a critical motivation for SDN is to move towards automation and eliminate configuration-based errors."_ In this paper, we focus on statically preventing a specific form of P4Runtime controller bug: attempting to read/insert/modify/delete P4 table entries that do not conform to the actual table layout of the P4 data plane. Such erroneous attempts are not statically checked by the official, weakly-typed P4Runtime API, as we explain below. Preventing this form of bug does not avert all possible P4 configuration processing bugs (e.g. a P4Runtime controller may insert a well-formed but incorrect routing table entry, or omit or delete a necessary entry) -- but it provides a baseline correctness guarantee towards more thorough static verification of P4Runtime applications (that we discuss as future work in Section 10).

### The Problem with Weakly-Typed P4Runtime APIs

For a concrete example of how mistakes could happen, consider Figure 1 (left): it is based on the P4 documentation [P4.org Working Group 2023], and shows a control plane program written in Python using the official P4Runtime API. The program is connected to a P4-enabled switch, and inserts a new entry (i.e.
a packet processing rule) into a table called IPv4_table, meaning: _"if a packet has destination address 10.0.1.1, then perform the action IPv6_forward with the given parameters."_ (We provide more details about P4 in Section 2.) The Python program in Figure 1 contains an error: the table IPv4_table in the switch does _not_ allow for an action called IPv6_forward (although that action may be allowed by other tables in the same switch). The P4Runtime Python API detects this discrepancy at run-time, and throws an exception -- which may cause the program to fail half-way during a series of related P4 rule updates, leaving the network configuration in an inconsistent state. The same program may have other problems: e.g. does the intended action for IPv4_table actually take two parameters? Is one of such parameters actually called mac_dst? Again, the official P4Runtime Python API would only spot these issues at run-time, by throwing exceptions.
Figure 1: Example of control plane P4 programs. Left: a Python program using the official P4Runtime API. Right: the equivalent Scala 3 program using the verified API P4R-Type.
As this example shows, it is all too easy to make mistakes when writing control plane programs in scripting languages (like Python) that don't perform static checks to ensure the validity of P4Runtime operations. However, statically detecting such errors is not trivial: to prevent errors without being overly restrictive, the static checks must take into account the actual _dependencies_ between the packet processing tables available in a P4-enabled device, the actions allowed by each specific table, and the parameters expected by each specific action. Our objective is to design and develop a strongly-typed P4Runtime API that addresses the issues above, while satisfying **three key requirements**:
1. the API must have a formal foundation for proving that well-typed programs never get stuck by issuing invalid P4Runtime operations or receiving unexpected responses;
2. the API must be written and usable in an _existing_ programming language -- i.e. the implementation of the formal results (from requirement **(R1)**) must not depend on a bespoke programming language nor type checker;
3. if the API depends on code generation, the amount of generated code must be minimal.

#### Our Proposal: P4R-Type and its Formal Foundation \(F_{\text{P4R}}\)

This paper proposes P4R-Type, a novel verified P4Runtime API for Scala 3 that performs _static_ checks for P4 control plane operations, ruling out mismatches between P4 tables, allowed actions, and action parameters. Programs written with P4R-Type look like the one shown in Figure 1 (right): albeit similar to its Python equivalent, the P4R-Type program does _not_ compile, because (thanks to its type constraints) the off-the-shelf Scala 3 compiler can spot that the action on line 5 is not valid for the table IPv4_Table. The Scala 3 compiler can also similarly spot any discrepancy between a selected action and the supplied parameters. P4R-Type has a formal foundation: \(F_{\text{P4R}}\), a calculus and typing system allowing us to state and prove that _"well-typed \(F_{\text{P4R}}\) programs never perform invalid P4Runtime operations"_ (like the mistake in Figure 1). \(F_{\text{P4R}}\) is specifically designed for implementation as a Scala 3 API, and for enabling the "Python-like" P4Runtime programs shown in Figure 1.
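To give a feel for how such static checks can be expressed with off-the-shelf Scala 3 features, the snippet below is a minimal, hypothetical sketch of the underlying idea -- it is not the actual P4R-Type API, and the names ActionsOf, insertEntry and demo are invented for illustration. A match type maps each table-name singleton type to the union of its allowed action names, so a mismatched table/action pair is rejected at compile time.

```
// Minimal sketch (not the real P4R-Type API): allowed actions per table,
// encoded as a match type over singleton string types.
type ActionsOf[T <: String] = T match
  case "IPv4_table" => "IPv4_forward" | "Drop_action"
  case "IPv6_table" => "IPv6_forward" | "Drop_action"

// The action argument must inhabit ActionsOf[T], i.e. it must be one of the
// actions allowed by the chosen table.
def insertEntry[T <: String & Singleton](table: T, action: ActionsOf[T]): Unit =
  () // stub: a real API would build and send a P4Runtime Insert request here

@main def demo(): Unit =
  insertEntry("IPv4_table", "IPv4_forward")    // compiles
  // insertEntry("IPv4_table", "IPv6_forward") // rejected by the Scala 3 compiler
```

P4R-Type's actual type definitions are generated from P4Info metadata and also constrain match fields and action parameters, as described in the rest of the paper.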
To the best of our knowledge, this is the first work that formalises control plane applications based on P4Runtime, and a typing discipline to ensure the correctness of P4Runtime operations.

#### Contributions and Outline of the Paper

After a background and overview (Section 2), we introduce our main contributions:
1. The first formal model of P4Runtime networks (Section 3) consisting of clients written in our novel formal language \(F_{\text{P4R}}\) (Section 3.1) and servers with different configurations (Section 3.2) interacting with each other (Section 3.3).
2. A typing discipline for \(F_{\text{P4R}}\) (Section 4) ensuring that if a client is well-typed w.r.t. the configuration of the surrounding P4Runtime network servers (under the server-configuration-to-type encoding we introduce in Definition 5.2), then the client will never perform invalid P4Runtime operations nor get stuck (Theorems 6.1 and 6.4). To ensure that these results translate into a verified P4Runtime client API in an _existing_ programming language (as per requirement **(R2)** above), we equip the \(F_{\text{P4R}}\) typing system with a limited form of type-level computation based on _match types_ [1] and _singleton types_, both available in Scala 3. Our development of \(F_{\text{P4R}}\) also contributes a novel combination of _(i)_ match types _without_ default cases, _(ii)_ structural subtyping, and _(iii)_ singleton types: the details and challenges are explained in Remark 4.6. (Besides, our theory and results are not Scala-specific and can be embedded e.g. in dependently-typed languages like Coq.)
3. The first implementation of a verified P4Runtime API, called P4R-Type (Section 7), published as a companion artifact of this paper. P4R-Type is based on the formalisation and results of \(F_{\text{P4R}}\), is written and usable in Scala 3, and only depends on a small amount of autogenerated type definitions (based on our server-configuration-to-type encoding in Definition 5.2): therefore, P4R-Type satisfies the requirements (**R1**), (**R2**), and (**R3**) above.

We demonstrate the features of P4R-Type with 3 case studies (Section 8), and discuss the drawbacks of alternative approaches (Section 8.4). We discuss the related work in Section 9 and conclude in Section 10.

## 2. Background and overview

We now provide an overview of Software Defined Networks (Section 2.1), the P4 data plane (Section 2.2), and P4Runtime (Section 2.3), followed by a bird's eye view of our approach (Section 2.4).

### Software Defined Networks

Software Defined Networking (SDN) is an umbrella term that covers several technologies to support dynamic and programmable network reconfigurations. SDN can be used to improve network performance (e.g. intelligent load balancers [1]), efficiency (e.g. network resource virtualisation and partitioning among customers of a cloud provider [11]), and security (AI-based anomaly detection systems [10]). As mentioned in Section 1, an SDN consists (roughly speaking) of at least two architectural components:
* _data plane_ devices with direct control of packet processing -- e.g. network interface cards, or switches, or a network of such devices; and
* a centralised or distributed _controller_, which is in charge of interacting, via an _interface_, with the data plane devices to manage network flows.

### Programmable Data Plane and the P4 Language

For many years SDN data plane elements were implemented with fixed-function application-specific integrated circuits (ASICs), with very limited programmability.
In fact, programmable switches were two orders of magnitude slower than the corresponding fixed-function ASICs. However, newer programmable switches can run as fast as fixed-function ones. The key to this improvement is the use of dedicated programmable accelerators, called Network Processing Units (NPUs), and FPGAs. Programmable data processing enables the support of customised network protocols, for example VPN-aware data processing and in-line packet inspection. NPUs and FPGAs cannot be programmed using general-purpose languages. Hence, the high-speed data plane must be programmed with dedicated programming languages. Recently, P4 [10] has emerged as the main Domain Specific Language for the data plane. P4 can be compiled to a variety of targets, including NPUs (e.g. Intel Tofino), FPGAs, and software switches. The key form of configuration for a P4 program is its _tables_, which are manipulated by the control plane. The P4 fragment below defines the tables IPv4_table and IPv6_table, with an "if" statement that inspects the header of an incoming network packet and selects one of the two tables. When the program executes IPv4_table.apply(), the P4 system performs 3 steps: * it computes a _key_ value from the network packet being processed. In this case, the key is the IPv4 destination address of the packet;

```
table "IPv4_table" {
  key = { hdr.ip.IPv4_dst_addr: lpm; }
  actions = { Drop_action;
              IPv4_forward; }
}

table "IPv6_table" {
  key = { hdr.ip.IPv6_dst_addr: lpm; }
  actions = { Drop_action;
              IPv6_forward; }
}
...
if (hdr.ip.version == 4w4)
  IPv4_table.apply();
else
  IPv6_table.apply();
```

In this example, the definition of IPv4_table says that a table entry can select one of two possible actions (Drop_action and IPv4_forward, defined below) to execute after a packet match:

```
action Drop_action() {
  outCtrl.outputPort = DROP_PORT;  // forward to the drop port
}
action IPv4_forward(EthernetAddress mac_dst, PortId port) {
  packet.ethernet.dstAddr = mac_dst;
  packet.ip.version = 4w4;
  packet.ip.ttl = packet.ip.ttl - 1;  // decrement the TTL
  outCtrl.outputPort = port;
}
```

Drop_action does not require any arguments and simply forwards the packet to a "port" that drops (i.e. discards) it. The IPv4_forward action requires two arguments: therefore, when a table entry in IPv4_table wants to invoke the action IPv4_forward, the table entry must also specify a destination Ethernet address and a port. In the following section we briefly discuss examples that violate these constraints.

### P4Runtime and P4Info Metadata Files

Today, applications that control P4-enabled devices use a control plane API called P4Runtime [P4.org Working Group 2020b]: the control application (acting as a P4Runtime client) connects to a P4 device (which acts as a P4Runtime server) and issues API calls to query and modify the device configuration. Thanks to a portable design based on Google's Protobuf interface definition language, P4Runtime programs may be written in any programming language with Protobuf support -- and the official P4Runtime API implementation is written in Python. The use of general-purpose programming languages for control plane applications is possible because their performance is less critical than that of the data plane; moreover, general-purpose languages allow for reusing existing software stacks and support a wide range of application domains. In the usual workflow, when a P4 data plane program is compiled, it yields two outputs: 1. a packet-processing "executable" deployed on a P4-enabled device (e.g. on a switch); and 2.
a _P4Info metadata file_, which summarises all the entities defined in the P4 program -- in particular, its tables and actions. Each entity has a numeric identifier. To interact with a P4-enabled device (e.g. to add entries to its tables), a P4Runtime program uses the P4Info metadata corresponding to the P4 program deployed on the device. Figure 2 shows an example of a P4Info metadata file for the P4 program in Section 2.2. From this metadata we can see that a P4 device running that program has two tables: IPv4_table and IPv6_table. Each table has one key that is an address of the corresponding IP protocol version. The entries of IPv4_table and IPv6_table can invoke actions IPv4_forward and IPv6_forward (respectively) and must provide a MAC address and a port as the action's arguments. All table entries can invoke Drop_action, which has no parameters. P4Runtime applications can change the configuration of a P4-enabled device by adding, updating, and deleting table entries. P4Runtime applications can also read the table contents, possibly using _wildcards_ to filter the results. As shown by the Python program in Figure 1, it is easy to make mistakes if the language does not perform static checks on table updates. Specifically, the P4Info metadata in Figure 2 says that any new entry added to IPv4_table cannot use the IPv6_forward action -- which is the mistake highlighted in Figure 1.

### An Overview of Our Approach

To address the issues described above, we propose P4R-Type: a verified P4Runtime API for Scala 3, with a tool to translate P4Info metadata into Scala types. Our approach is depicted below. As usual, a P4 data plane program is compiled and deployed on one or more P4-enabled network devices; the program's P4Info metadata is made available for P4Runtime applications. This is where P4R-Type comes into play: a programmer can write a P4Runtime control application by importing (1) the P4R-Type library, and (2) a set of type definitions automatically generated from P4Info metadata. If the P4R-Type-based application type-checks, then it will be able to connect to a P4 device (acting as a P4Runtime server) and perform P4Runtime operations that never violate the device configuration -- provided that the configuration matches the P4Info metadata. The design and implementation of P4R-Type is based on a formal model allowing us to reason about the behaviour of P4Runtime client applications and P4 devices acting as P4Runtime servers.
Figure 2. Example P4Info metadata file with the tables and actions of the P4 program in Section 2.2. For brevity, we only show action IDs and omit table IDs.
Our formal model is outlined above: a P4Runtime server \(S\) holds tables and actions that are well-formed w.r.t. a configuration \(C\) (which represents P4Info metadata). We define an encoding from a P4Info configuration \(C\) into a set of types for \(F_{\text{P4R}}\): a formal calculus describing P4Runtime client applications. We design the typing discipline of \(F_{\text{P4R}}\) to include match types and singleton types, which are also present in the Scala 3 typing system: this design allows us to implement our results as a Scala 3 API (i.e., P4R-Type) reflecting the typing constraints of \(F_{\text{P4R}}\). Then, we prove our Theorems 6.1 and 6.4: if an \(F_{\text{P4R}}\) program \(t\) type-checks with types encoded from a P4Info configuration \(C\), then \(t\) will interact correctly with any P4Runtime server \(S\) that is well-formed w.r.t. \(C\).
## 3. A Model of P4Runtime Clients, Servers, and Networks

We now illustrate how we model P4Runtime networks consisting of P4-enabled devices (acting as servers), and control applications (the clients) that connect and modify the devices' P4 table entries. In Section 3.1 we introduce the syntax of \(F_{\text{P4R}}\), a formal language for modelling P4Runtime client programs, with the capability of connecting to P4Runtime servers and performing P4Runtime operations. In Section 3.2 we model P4Runtime servers by focusing on their internal configuration, i.e. their P4 tables, packet matching methods, and actions. In Section 3.3 we formalise a P4Runtime network as a parallel composition of P4Runtime clients and servers. We introduce the semantics of \(F_{\text{P4R}}\) programs, servers, and networks later on (in Section 5) after introducing the typing system (in Section 4).

### The \(F_{\text{P4R}}\) Language for P4Runtime Clients

In Definition 3.1 below we introduce the syntax of the \(F_{\text{P4R}}\) language and its types. \(F_{\text{P4R}}\) is designed as an extension of \(F_{<:}\) (System F with subtyping (Cardelli et al., 1994)) augmented with:
* **P4Runtime-specific operations**: primitives and types for server addresses and channels;
* **singleton types**, i.e. types inhabited by exactly one value; and
* **match types**, introducing the capability of performing type-level computations and refining the result type of a pattern matching.
Our match types are based on the work of Blanvillain et al. (2022) (which in turn formalises the corresponding feature of the Scala 3 programming language) -- but our adaptation includes significant differences: we discuss them later on, in Remark 4.6. Definition 3.1 (Syntax of \(F_{\text{P4R}}\)).: The syntax of \(F_{\text{P4R}}\) terms \(t\) and types \(T\) is shown in Figure 3 -- where \(I\) (used to index records and pattern matching terms, and record and match types) represents a finite, non-empty set containing sequential natural numbers \(1,2,\ldots\) Moreover, Figure 4 introduces some frequently-used syntactic abbreviations. Most of Definition 3.1 is based on standard \(F_{<:}\) constructs and extensions (in particular, lists and records). The key deviations are the highlighted constructs in Figure 3:
* a **P4Runtime operation** _op_ allows a client to connect to a P4Runtime server, and query or change the entries in its configuration;
* a **ground value** \(v_{G}\) is a value that does not contain lambda nor type abstractions. A ground value is a "simple" value (e.g. string, integer, \(\ldots\)), or a list or record of ground values. For each ground value \(v_{G}\), there is a **singleton type** \(\underline{v}_{G}\) only inhabited by \(v_{G}\) itself;
* a **byte string** \(\mathbf{b}(\ldots)\) is the byte representation of a sequence of integers, and has type Bytes;
* a **server address** \(a_{T_{m},T_{a},T_{p}}\) represents a handle for connecting to a P4Runtime server (in practice, it represents its IP address and TCP port). A server address \(a_{T_{m},T_{a},T_{p}}\) has a corresponding **server address type** ServerRef\([T_{m},T_{a},T_{p}]\), where the type parameters reflect information available in the server's P4Info file:1 Footnote 1: The instantiation of the type parameters \(T_{m},T_{a},T_{p}\) is detailed later, in Example 4.5, Definition 5.2 and Example 5.3.
  * \(T_{m}\) describes the _matches_ of each table in the server configuration;
  * \(T_{a}\) describes the _actions_ that could be performed after a network packet is matched;
  * \(T_{p}\) describes the _parameters_ of each action.
For brevity, we will often just write \(a\), omitting the subscript;
* a **client-server connection** \(s_{T_{m},T_{a},T_{p}}\) represents a communication channel (created after establishing a connection) that a P4Runtime server and client use to communicate. A connection value has a corresponding **channel type** \(\mathrm{Chan}[T_{m},T_{a},T_{p}]\), whose type arguments have the same meaning outlined above for \(\mathrm{ServerRef}[T_{m},T_{a},T_{p}]\). For brevity, we will often just write \(s\), omitting the subscript;
* a **match type** describes a type-level pattern matching, as illustrated in Example 3 below.
Example 3 (Match Types).: Consider the following match type: \[\mathrm{Int}\ \mathtt{match}\ \{\mathrm{Int}\Rightarrow\mathrm{Bool},\ \mathrm{String}\Rightarrow\mathrm{Unit}\}\] We call the types \(\mathrm{Bool}\) and \(\mathrm{Unit}\) (i.e., the types of the expressions that can be executed after a match case is selected) _continuation types_ of the match type. This match type "reduces" to the continuation type \(\mathrm{Bool}\), because its guard matches the type \(\mathrm{Int}\). (More precisely, in Section 4 we will see that the match type and the selected continuation are subtyping-equivalent.)
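Since match types also exist in Scala 3, the match type of Example 3 can be transcribed directly; the snippet below is only an illustrative rendering (the alias name M is ours, and Scala's Boolean plays the role of Bool).

```
// Illustrative Scala 3 transcription of the match type in Example 3.
type M[T] = T match
  case Int    => Boolean
  case String => Unit

val b: M[Int]    = true  // M[Int] reduces to Boolean
val u: M[String] = ()    // M[String] reduces to Unit
```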
Now consider the following match type abstracted over type \(T\): \[\mathrm{Table\_Actions}\ \triangleq\ \forall T.\ T\ \mathtt{match}\ \{\ \ldots\ \}\]
Definition 3.3 (P4Runtime Server).: A _P4Runtime server_ is a tuple \(S=\langle C,E,a,K\rangle\), where:
* the **configuration** \(C\) is a mapping that contains the following fields:
  * _table_matches_ (abbreviated _tm_), a mapping from table names to **match fields**, which in turn consist of:
    * _name_, the name of the network packet field to inspect;
    * _type_, the type of packet matching algorithm used for this packet field (either Exact, Ternary, LPM, Range, or Optional [P4.org Working Group 2020a]).
  * _table_actions_ (abbreviated _ta_), mapping table names to sets of allowed action names;
  * _action_params_ (abbreviated _ap_), mapping action names to sets of **action parameters**:
    * _name_, the name of the parameter;
    * _bitwidth_, the size of the parameter.
* \(E\) is a set of **P4Runtime entities** \(e\) that can be hosted on a P4Runtime server. The main type of entity is a **table entry**,2 which is a record consisting of:
  * _table_name_, the name of the table which owns the entry;
  * _field_matches_, a set of records describing packet matching rules:
    * _name_, the name of the network packet field to inspect;
    * a set of additional key-value entries, depending on the type of packet matching algorithm used for this field (for example, when using the matching algorithm type Range, the key-value entries are called 'low' and 'high');
  * _action_name_, the name of the action that the entry applies upon network packet match;
  * _action_args_, a set of **action argument** records, which in turn contain:
    * _name_, the name of the associated parameter;
    * _value_, the value provided as the argument;
* \(a\) is the **address** where the server listens for connections;
* \(K\) is a set of **channels**: the active connections between the server and its clients.
Footnote 2: P4Runtime models several other types of entities, but they are out of scope for this work.
As mentioned in the opening of Section 3, the P4Runtime server model formalised in Definition 3.3 focuses on the configuration of a P4Runtime server, by abstracting from other implementation details (e.g.
its implementation language). An example server configuration can be seen in Figure 5.

### P4Runtime Networks

We conclude this section by formalising the syntax of a network of P4Runtime servers and clients. Definition 3.4 (P4Runtime Network).: A _P4Runtime network_ is a parallel composition of clients (i.e. terms \(t\) of the grammar in Definition 3.1) and servers (i.e. tuples \(S\) conforming to Definition 3.3):
Figure 5. Example of a P4Runtime server configuration \(C\) (by Definition 3.3). This JSON-like structure models the P4Info metadata describing the configuration of an actual P4 device (as outlined in Sections 2.2 and 2.3).

## 4. The \(F_{\text{P4R}}\) typing system

Definitions 4.1 and 4.2 formalise the typing system of \(F_{\text{P4R}}\). The typing system is based on System \(F_{<:}\) [Cardelli et al. 1994] extended with singleton types and match types [Blanvillain et al. 2022], plus new typing rules for the P4Runtime-specific operations we introduced in Definition 3.1. Definition 4.1 (Typing Environment).: A _typing environment_ \(\Gamma\) is a mapping from term or type variables to types, that we syntactically represent as follows: \[\begin{array}{rcl}\Gamma&\coloneqq&\emptyset&\text{(Empty typing environment)}\\ &\mid&\Gamma,\,x:T&\text{(Term variable $x$ has type $T$)}\\ &\mid&\Gamma,\,X<:T&\text{(Type variable $X$ has upper bound $T$)}\end{array}\] Definition 4.2 (The \(F_{\text{P4R}}\) Typing System).: The \(F_{\text{P4R}}\) typing system consists of the following mutually-defined, inductive judgements: \[\begin{array}{rcl}\vdash\Gamma\text{ env}&\text{($\Gamma$ is a valid typing environment)}&\text{(Figure 6)}\\ \Gamma\vdash T\text{ type}&\text{($T$ is a valid type in $\Gamma$)}&\text{(Figure 7)}\\ \Gamma\vdash T\circ T^{\prime}&\text{(Types $T$ and $T^{\prime}$ are disjoint in $\Gamma$)}&\text{(Definition 4.3)}\\ \Gamma\vdash T<:T^{\prime}&\text{($T$ is subtype of $T^{\prime}$ in $\Gamma$ -- assuming $\Gamma\vdash T$ type and $\Gamma\vdash T^{\prime}$ type)}&\text{(Figure 8)}\\ \Gamma\vdash T=:=T^{\prime}&\text{($T$ and $T^{\prime}$ are subtyping-equivalent in $\Gamma$, i.e. $\Gamma\vdash T<:T^{\prime}$ and $\Gamma\vdash T^{\prime}<:T$)}\\ \Gamma\vdash t:T&\text{($t$ has type $T$ in $\Gamma$ -- assuming $\Gamma\vdash T$ type)}&\text{(Figure 9)}\end{array}\] Most type validity rules in Figure 7 are standard. The exceptions are highlighted:
Figure 6. Typing environment validity rules.
Figure 7. Type validity rules. Non-standard extensions to \(F_{<:}\) are highlighted.
* by rule Type-Val, any ground value \(v_{G}\) (i.e. any value that does _not_ contain lambda or type abstractions, by Definition 3.1) has a corresponding singleton type \(\underline{v}_{G}\);
* rules Type-SR and Type-Chan say that our new server address and client-server channel types are well-formed if all their type arguments are well-formed;
* by rule Type-Match, a match type is well-formed if the scrutinee type (\(T_{s}\)), the types being matched against (\(T_{i}\)), and the continuation types (\(T^{\prime}_{i}\)) are well-formed.
The subtyping rules in Figure 8 are also standard, with the following highlighted exceptions:
* by rule ST-Val, if a ground value \(v_{G}\) belongs to type \(T\) (by the relation "\(v_{G}\in_{G}T\)" defined in Appendix A.1.4), then the singleton type \(\underline{v}_{G}\) is subtype of \(T\).
For example, since \(42\in\mathrm{Int}\), we have \(\Gamma\vdash\underline{42}<:\mathrm{Int}\) (assuming \(\Gamma\vdash\mathrm{Int}\) type);
* rule ST-Match1 (adapted from [1]) says that a match type is subtyping-equivalent to the continuation type \(T^{\prime}_{k}\) if all match type cases before \(k\) (i.e. all \(T_{i}\) with \(i<k\)) are _disjoint_ from \(T_{k}\), according to Definition 4.3 below;
* rule ST-Match2 (also adapted from [1]) says that match types are covariant in both the type being matched and the continuation types.
The type disjointness judgement \(\Gamma\vdash T_{1}\circ T_{2}\) (used in rule ST-Match1) is formalised in Definition 4.3 below: the intuition is that two types \(T_{1}\) and \(T_{2}\) are disjoint when they have no common subtypes, hence there exists no value that can have both types \(T_{1}\) and \(T_{2}\). Definition 4.3 (Disjointness of Types): Two types \(T_{1}\) and \(T_{2}\) are disjoint in \(\Gamma\), written \(\Gamma\vdash T_{1}\circ T_{2}\), iff: 1. \(\Gamma\vdash T_{1}\) type and \(\Gamma\vdash T_{2}\) type; and 2. \(\nexists T_{3}:\Gamma\vdash T_{3}\) type and \(\Gamma\vdash T_{3}<:T_{1}\) and \(\Gamma\vdash T_{3}<:T_{2}\)
Figure 8. Subtyping rules.
Example 4.4 (Subtyping and Disjointness in Match Types): Consider the following match type: \[\forall X.\ X\ \mathtt{match}\ \{\mathrm{Int}\Rightarrow\underline{42},\ \mathrm{Bool}\Rightarrow\underline{\text{``Hello''}}\}\] The type \((\forall X.\ X\ \mathtt{match}\ \{\mathrm{Int}\Rightarrow\underline{42},\ \mathrm{Bool}\Rightarrow\underline{\text{``Hello''}}\})\ \underline{\mathrm{true}}\) is subtyping-equivalent (i.e. "reduces") to \(\underline{\text{``Hello''}}\), by the subtyping rule ST-Match1 in Figure 8. The rule first checks whether true is a subtype of Int, which it is not. Since it is also disjoint from Int (the two types do not share a common subtype), the rule then proceeds to the next case. Here, true is a subtype of Bool, and so the type "reduces" to the case "Hello". Finally, Figure 9 includes several (highlighted) non-standard typing rules for \(F_{\text{P4R}}\) terms:
* by rule T-Val, a ground value \(v_{G}\) is typed by the singleton type \(\underline{v}_{G}\). E.g. \(42\) has type \(\underline{42}\), hence (via the subsumption rule T-Sub and ST-Val in Figure 8) we also have that \(42\) has type Int;
* by rule T-Match (adapted from [1]), a pattern matching term is typed with a match type of a similar shape. The clause "\(\Gamma\vdash T_{s}<:\cup_{i\in I}T_{i}\)" ensures that pattern matching is exhaustive;
* the typing rule for \(\mathsf{Connect}\) gives the connection operation a channel type with the same type arguments as the server address being connected to -- i.e. the type of the channel returned by the connection maintains type-level information about the server configuration;
Figure 9. Typing rules for \(F_{\text{P4R}}\) terms. Non-standard extensions to \(F_{<:}\) are highlighted.
* by rule T-OpR, the query operation \(\mathsf{Read}(t_{c},t_{e})\) is typed as follows: 1. the query argument \(t_{e}\) has type P4Entity (Figure 4) applied to type parameters that match those of the type of \(t_{c}\) (expected to be a channel). Intuitively, this means that \(t_{e}\) can only be a P4 entity supported by the P4Runtime server connected over \(t_{c}\); and 2. the read operation returns a list of type P4Entity applied to type arguments that match those of the type of \(t_{c}\).
Intuitively, this means that the returned list is expected to only contain entities supported by the P4Runtime server connected via \(t_{c}\);
* rules T-OpI, T-OpM, and T-OpD have type constraints similar to T-OpR above: their argument \(t_{e}\) must be a P4 entity supported by the server connected over channel \(t_{c}\). All these operations return a boolean value (indicating whether the operation had an effect).
Example 4.5 (Typable and Untypable Operations): Consider the following types:3 Footnote 3: You may notice a similarity between the types used in Example 4.5 and the P4Runtime server configuration in Figure 5: indeed, those types capture the constraints of that server configuration. We will reprise the topic in Section 5.2. \[T_{m}\ =\ \forall T.\ T\ \mathtt{match}\ \{\ \ldots\ \}\]
_Remark 4.6_ (Differences with Blanvillain et al. (2022)).: Our formulation of match types differs from the original presentation by Blanvillain et al. (2022) in 3 significant aspects: these differences are non-trivial and interplay with each other in subtle ways, making our formalisation and proofs quite challenging.
1. Blanvillain et al. (2022) use a _nominal_ type system which models class hierarchies, abstracting from class fields and data. Instead, we need data in order to represent P4Runtime tables in \(F_{\text{P4R}}\) and in our results; moreover, our implementation (Section 7) does not make significant use of class hierarchies. Therefore, unlike Blanvillain et al. (2022), we adopt standard data types (records, lists...) with _structural_ typing and subtyping, and we support singleton types -- and consequently, we adapt the match-typing-related rules accordingly.
2. Unlike Blanvillain et al. (2022), our match types do _not_ include a mandatory default case. With the default case, a match type can be "reduced" (i.e. proven subtype-equivalent) to the type in its default case, if the scrutinee type does not match any other case. We removed the mandatory default case because it is not needed (and is actually undesirable) for our modelling of P4Runtime table types. Moreover, the Scala 3 compiler does _not_ require programmers to specify a default case in their match types -- and since our API P4R-Type leverages this feature, we formalised the typing system of \(F_{\text{P4R}}\) accordingly. A default match type case can be obtained (when needed) by adding a branch that matches the top type \(\top\).
3. Correspondingly, our match terms do _not_ include a mandatory default case (unlike Blanvillain et al. (2022)). Consequently, our typing rule T-Match (Figure 9) has an additional constraint w.r.t. Blanvillain et al. (2022): the scrutinee type must be a subtype of the union of all case types, thus ensuring that the pattern matching is exhaustive (the Scala 3 compiler performs similar checks). Notably, match term exhaustiveness is needed to prove progress (Theorem 6.4); instead, Blanvillain et al. (2022) do not check match term exhaustiveness because their default match case ensures that a match term can always be reduced.

## 5. Semantics of \(F_{\text{P4R}}\) Programs and P4Runtime Networks

In this section we formalise the semantics of \(F_{\text{P4R}}\) programs (Section 5.1), P4Runtime servers (Section 5.2), and networks of clients and servers (Section 5.3).

### Semantics of \(F_{\text{P4R}}\) Programs

We introduce the labelled transition system (LTS) semantics of \(F_{\text{P4R}}\).
Definition 5.1 below formalises an _early_ semantics, where each transition label denotes either an internal computation (\(\tau\)), or a possible input/output interaction with the surrounding environment. This style of _early_ LTS semantics is inspired by the \(\pi\)-calculus (Sangiorgi and Walker, 2001), and allows us to formalise and reason about the interactions between \(F_{\text{P4R}}\) programs and P4Runtime servers (formalised later in Definition 5.7) while keeping the respective syntax and semantics decoupled. Definition 5.1 (Semantics of \(F_{\text{P4R}}\)).: Assume a predicate "\(v\in T\)" which holds iff value \(v\) belongs to type \(T\). We define the _labelled transition system (LTS) semantics_ of \(F_{\text{P4R}}\) as a transition relation \(t\xrightarrow{\alpha}t^{\prime}\), where the label \(\alpha\) is defined as: \[\begin{array}{llll}\text{Transition label}&\alpha&\dot{=}&\tau&\text{(Internal transition)}\\ &|&\text{connect}(a)\leadsto s&\text{(Connect to server address $a$, getting channel $s$)}\\ &|&\text{read}(s,v)\leadsto v^{\prime}&\text{(Perform query $v$ on channel $s$, getting result $v^{\prime}$)}\\ &|\text{insert}(s,v)\leadsto v^{\prime}&\text{(Insert $v$ on channel $s$, getting result $v^{\prime}$)}\\ &|&\text{modify}(s,v)\leadsto v^{\prime}&\text{(Modify $v$ on channel $s$, getting result $v^{\prime}$)}\\ &|\text{delete}(s,v)\leadsto v^{\prime}&\text{(Delete $v$ on channel $s$, getting result $v^{\prime}$)}\\ \end{array}\] The transition relation \(t\xrightarrow{\alpha}t^{\prime}\) is defined in Figure 10, where the context transition rule E-\(\mathbb{C}\) uses an _evaluation context_\(\mathbb{C}\) (defined below) which represents a \(F_{\mathrm{P4R}}\) term with one hole [ ]: \[\begin{array}{rcl}\mathbb{C}&\coloneqq&[\,]\mid\mathbb{C}::t\mid v::\mathbb{C }\mid\text{head }\mathbb{C}\mid\text{tail }\mathbb{C}\mid\text{let }x=\mathbb{C}\text{ in }t\\ &\mid&\mathbb{C}\,t\mid v\,\mathbb{C}\mid\,\mathbb{C}\,T\mid\,\mathbb{C}\,f \mid\,\mathbb{C}\text{ match }\{x_{i}:T_{i}\Rightarrow t_{i}\}_{i\in I}\\ &\mid&\{f_{i}=\gamma_{i}\}_{i\in I}\quad\text{where }\,\exists k\in I:\forall i \in I:\begin{cases}i<k\text{ implies }\,\,\gamma_{i}=v_{i}\\ i=k\text{ implies }\,\,\gamma_{i}=\mathbb{C}\\ i>k\text{ implies }\,\,\gamma_{i}=t_{i}\end{cases}\end{array}\] Most rules in Definition 5.1 are standard, except for the ones highlighted in Figure 10: * by rule E-Connect, the term Connect(\(a\)) transitions by producing a channel \(s\), whose type conforms to the type of the server address \(a\). The transition label "connect(\(a\))\(\leadsto s\)" means that the term is trying to interact with the surrounding environment: hence, as we will see in Section 5.2, the client expects a P4Runtime server to emit the dual label "connect(\(a\))\(\leadsto s\)" -- meaning that the server is listening on address \(a\) and can produce channel \(s\); Figure 10. LTS semantics of \(F_{\mathrm{P4R}}\) terms. Non-standard extensions to \(F_{<}\): are highlighted. * by rule E-Read, the term \(\mathsf{Read}(s,v)\) transitions by producing a value \(v^{\prime}\), which is a list of P4Entity instances (Figure 4) whose type conforms to the type of channel \(s\). The transition label means that the term expects to interact with a P4Runtime server on channel \(s\); * rules E-Insert, E-Modify, and E-Delete work similarly, and produce a boolean value describing whether the operation had an effect or not. 
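As a small illustration of how the context rule E-\(\mathbb{C}\) lifts these labelled steps (our own sketch; we assume the standard \(F_{<:}\) let-reduction is an internal \(\tau\)-step), consider the client term \(\text{let }x=\mathsf{Connect}(a)\text{ in }\mathsf{Read}(x,v)\): \[\begin{array}{lll}\text{let }x=\mathsf{Connect}(a)\text{ in }\mathsf{Read}(x,v)&\xrightarrow{\,\text{connect}(a)\leadsto s\,}&\text{let }x=s\text{ in }\mathsf{Read}(x,v)\\ &\xrightarrow{\,\tau\,}&\mathsf{Read}(s,v)\\ &\xrightarrow{\,\text{read}(s,v)\leadsto v^{\prime}\,}&v^{\prime}\end{array}\] The first step applies E-Connect under the evaluation context \(\mathbb{C}=\text{let }x=[\,]\text{ in }\mathsf{Read}(x,v)\) via rule E-\(\mathbb{C}\), and the last step applies E-Read once the channel value \(s\) is in place.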
\(F_{\mathsf{P4R}}\) terms are evaluated from left to right, using the evaluation contexts \(\mathbb{C}\) in Definition 5.1. For instance, the last case "\(\{f_{i}=\gamma_{i}\}_{i\in I}\)" represents a record whose fields \(f_{i}\) are indexed by \(i\in I\), where \(I\) is a set of consecutive natural numbers \(1..n\) (as per Definition 3.1); all fields to the left of \(f_{k}\) (for some \(k\in I\)) are already fully-evaluated into values \(v_{i}\); the field \(f_{k}\) is a context with a hole, which is going to be evaluated next; and all fields to the right of \(f_{k}\) are arbitrary terms \(t_{i}\), which may be evaluated after \(f_{k}\). ### Semantics of P4Runtime Servers To define our P4Runtime server semantics (in Definition 5.6 later on), we need to ensure that a server \(S\) will only answer to well-typed requests from its clients, and that the server entities are well-typed w.r.t. the server configuration \(C\). To this end, we formalise an encoding of a server configuration \(C\) into \(F_{\mathsf{P4R}}\) types (Definition 5.2 below). Intuitively, this describes how to turn the P4Info metadata of a P4 device into a set of types describing the device tables, actions, etc. **Definition 5.2** (Encoding of a Server Configuration into \(F_{\mathsf{P4R}}\) Types).: Given a P4Runtime server configuration \(C\), we define the _encoding_\([\cdots]\) of its entries into \(F_{\mathsf{P4R}}\) types in Figure 11. Figure 11. Definition of the encoding operation \([\![\cdots]\!]\) from P4Runtime configurations to \(F_{\mathsf{P4R}}\) types. _Example 5.3 (Server Configuration Representation as \(F_{\text{P4R}}\) Types)._ Consider the P4Runtime server configuration in Figure 5: by Definition 5.2, its encoding into the \(F_{\text{P4R}}\) types is shown in Figure 12. (The same types are also used in Example 4.5, where they are called \(T_{m},T_{a},T_{p}\).) From now on, we will assume that each P4Runtime server is _well-formed_ by Definition 5.4 below: it means that each entity belongs to the P4Entity type (Figure 4) instantiated with type parameters that correspond to the type-encoded server configuration (by Definition 5.2). _Definition 5.4 (P4Runtime Entity Conformity and Server Well-Formedness)._ A P4Runtime entity \(e\)_conforms_ to a server configuration \(C\) iff: \[\exists X_{n},X_{a}:e\in\text{P4Entity}\;\llbracket C.\mathit{table\_matches} \rrbracket\;\llbracket C.\mathit{table\_actions}\rrbracket\;\llbracket C. \mathit{action\_params}\rrbracket\;X_{n}\;X_{a}\] The predicate \(\mathit{Conforms}(e,C)\) holds iff entity \(e\) conforms to the configuration \(C\). A P4Runtime server \(\langle C,E,a,K\rangle\)_is well-formed_ iff \(\forall e\in E:\mathit{Conforms}(e,C)\). The key insight behind Definition 5.4 is that the instantiation of P4Entity can only reduce to an actual type if the argument \(X_{n}\) is a valid table name in \(C\), and if \(X_{a}\) is a valid action for table \(X_{n}\). Definition 5.4 directly leads to the following property, which will allow us to prove the results in Section 6: if a client sends to the server a well-typed value \(v\), the server will consider it conformant. Proposition 5.5 (Conformance of Well-Typed Values)._For any server \(S=\langle C,E,a,K\rangle\) and any value \(v\), we have:_ \[\mathit{Conforms}(v,C)\iff\emptyset\vdash v:\text{P4Entity}\;\llbracket C. 
\mathit{table\_matches}\rrbracket\;\llbracket C.\mathit{table\_actions} \rrbracket\;\llbracket C.\mathit{action\_params}\rrbracket\;X_{n}\;X_{a}\] _Definition 5.6 (P4Runtime Server Semantics)._ We define the _semantics of a P4Runtime server \(S\)_ as a relation \(S\xrightarrow{\overline{\alpha}}S^{\prime}\) (where \(\alpha\) is from Definition 5.1) inductively defined by the rules in Figure 13. The P4Runtime server semantics in Definition 5.6 show how the internal configuration of a P4Runtime server evolves, and how the server responds to queries from clients. The semantics are based on the P4.org Working Group (2020b). The server semantics focus on checking the conformance of requests from the clients, and computes a response using an abstract evaluation predicate "\(\langle\_,\_,\_\rangle\;\downarrow\;\_\)": the details of this predicate are not crucial -- but we assume that it always yields a well-typed response, i.e. a boolean or an entity that conforms to the server configuration \(C\).4 Footnote 4: For reference, the semantics of the predicate "\(\langle C,E,\text{read}(v)\rangle\;\downarrow\;\varphi^{\ast}\)" are available in the appendix, in Figure 17. Figure 12. The encoding of the server configuration \(C\) in Figure 5 into \(F_{\text{P4R}}\) types. (The same types are also used in Example 4.5, where they are called \(T_{m},T_{a},T_{p}\).) * By rule Sv-Connect, a server listening on address \(a\) can accept a client connection by generating a unique channel instance \(s\), adding \(s\) to the set of established connections \(K\), and producing a transition label \(\overline{\text{connect}(a)\leadsto}\)\(s\). Importantly, the channels \(s\) belongs to a \(F_{\text{P4R}}\) channel type whose type arguments are obtained by encoding the server configuration \(C\) (by the encoding \(\llbracket\cdots\rrbracket\) in Definition 5.2). * By rule Sv-Read, the server can handle a client's read request by emitting a label \(\overline{\text{read}(s,v)\leadsto v^{\prime}}\), provided that the connection \(s\) belongs to the set of established connections \(K\), and the query argument \(v\) conforms to the server configuration (by Definition 5.4); * e.g. by adding or removing P4 table entries. ### Semantics of P4Runtime Networks We now formalise the semantics of the P4Runtime networks introduced in Definition 3.4. Definition 5.7 (P4Runtime Network Semantics): The _LTS semantics of a P4Runtime network_ is defined by the following rules, where \(\alpha\) ranges over the labels introduced in Definition 5.1: (for brevity, we omit the symmetric rules) \[\frac{N\xrightarrow{\alpha}N^{\prime}}{N\,|\,N^{\prime\prime}\,\xrightarrow {\alpha}\,N^{\prime}\,|\,N^{\prime\prime}}\text{\text{\text{\text{\text{\text{ \text{Net-}}}}}}}\alpha\qquad\frac{N_{1}\xrightarrow{\alpha}N_{1}^{\prime}\, \,N_{2}\xrightarrow{\alpha}N_{2}^{\prime}}{N_{1}\,|\,N_{2}\,\xrightarrow{ \tau}\,N_{1}^{\prime}\,|\,N_{2}^{\prime}}\text{\text{\text{\text{\text{Net-}}}}} \alpha\qquad\frac{N_{1}\xrightarrow{\alpha}N_{1}^{\prime}\,\,N_{2}^{\prime}}{N _{1}\,|\,N_{2}\,\xrightarrow{\tau}\,N_{1}^{\prime}\,|\,N_{2}^{\prime}}\text{ \text{\text{\text{Net-}}}}\alpha\] We often write \(N\to N^{\prime}\) instead of \(N\xrightarrow{\tau}N^{\prime}\), and \(\to\)* for the reflexive and transitive closure of \(\to\). By Definition 3.4, a network \(N\) is a parallel composition of any number of P4Runtime clients and servers. 
According to the semantics in Definition 5.7, a network \(N\) can perform a transition \(\alpha\) even when composed with another network \(N^{\prime\prime}\) (rule Net-\(\alpha\)); and if two networks fire dual labels \(\alpha\) and Figure 13. LTS semantics of a P4Runtime server. \(\overline{\alpha}\), then they can synchronise when composed, producing a \(\tau\)-transition: this allows a P4Runtime client and server to interact, as illustrated in Example 5.8 below. Example 5.8 (A Simple P4Runtime Network).: We give a brief example of how a network reduction could look using our semantics. Consider the \(F_{\mathrm{P4R}}\) term: let \[c=\textsc{Connect}(a)\] in \[\textsc{Insert}(c,v)\] This term attempts to connect to a P4Runtime server \(a\) and insert a value \(v\) (a P4 table entry). If we compose this \(F_{\mathrm{P4R}}\) term with a P4Runtime server, the resulting network reduces as: \[\begin{array}{c}\begin{array}{c}\begin{array}{c}\begin{array}{c}\begin{array}{c}s \text{ is a fresh channel }\\ s\in\mathrm{Chan}[\![\overline{C}.t\!m]\!]\!]\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! **Definition 6.2** (_Network Congruence_).: \(\equiv\) is the least congruence between networks such that: \[N_{1}\mid N_{2}\ \equiv\ N_{2}\mid N_{1}\qquad(N_{1}\mid N_{2})\mid N_{3}\ \equiv\ N_{1}\mid(N_{2}\mid N_{3})\] **Definition 6.3** (_Well-typed Network_).: We say that _a network \(N\) is well-typed_ iff for all P4Runtime clients \(t\) such that \(N\equiv t\mid N_{0}\) (for some \(N_{0}\)), we have: 1. \(\emptyset\vdash t:T\) (for some \(T\)); 2. for all server addresses \(a\) occurring in \(t\): * there is exactly one server \(S=\langle C,E,a,K\rangle\) such that \(N_{0}\equiv S\mid N_{1}\); and * \(a\in\text{ServerRef}\big{[}\llbracket C.table\_matches\rrbracket,\llbracket C. table\_actions\rrbracket,\llbracket C.action\_params\rrbracket\big{]}\) 3. for all client-server channels \(s\) occurring in \(t\): * there is exactly one server \(S=\langle C,E,a,K\rangle\) with \(s\in K\), and such that \(N_{0}\equiv S\mid N_{1}\); and * \(s\in\text{Chan}\big{[}\llbracket C.table\_matches\rrbracket,\llbracket C.table\_actions \rrbracket,\llbracket C.action\_params\rrbracket\big{]}\) We now have all ingredients to formalise progress (Theorem 6.4), and the resulting Corollary 6.5: well-typed networks only stop reducing when all P4Runtime clients terminate successfully. **Theorem 6.4** (Progress).: _Take any well-typed network \(N\), and take any P4Runtime client \(t\) such that \(N\equiv t\mid N_{0}\) (for some \(N_{0}\)). 
Then either:_ * \(t\) _is fully-reduced into a value; or_ * \(t\to t^{\prime}\)_, and correspondingly,_ \(N\to N^{\prime}\equiv t^{\prime}\mid N_{0}\) _with_ \(N^{\prime}\) _well-typed; or_ * _there is a server_ \(S\) _such that_ \(N_{0}\equiv S\mid N_{1}\) _and_ \(t\mid S\to t^{\prime}\mid S^{\prime}\)_, and correspondingly,_ \(N\to N^{\prime}\equiv t^{\prime}\mid S^{\prime}\mid N_{1}\) _with_ \(N^{\prime}\) _well-typed._ **Corollary 6.5** (Type soundness).: _Take any well-typed network \(N\). If \(N\to^{*}N^{\prime}\) and \(N^{\prime}\) cannot perform further \(\tau\)-transitions, then all P4Runtime clients in \(N^{\prime}\) are fully-reduced into values._ ## 7. Implementation of P4r-Type: A Scala 3 API Based on \(F_{\tt P4R}\) We now outline the implementation P4R-Type, our verified API for programming P4Runtime client applications, based on our formalisation of \(F_{\tt P4R}\) and its typing system (Sections 3 and 4). P4R-Type is published as companion artifact of this paper, and its latest version is available at: [https://github.com/JensKanstrupLarsen/P4R-Type/](https://github.com/JensKanstrupLarsen/P4R-Type/) Our typing system (Section 4) is designed to take advantage of Scala 3 features (in particular, match types [1]): this naturally leads to implementing P4R-Type as a Scala 3 API. Consequently, the interactions between a client using P4R-Type and one or more P4 devices have the properties presented in Section 6: all read/insert/modify/delete operations are type-safe, and they enjoy progress and preservation (if both client and device use the same P4Info file). The implementation of P4R-Type consists of: _(1)_ a type-parametric API for P4Runtime operations (connect, read, insert, etc.) (Section 7.1), and _(2)_ a software tool that turns a P4Info file into a set of Scala 3 types which constrain the P4R-Type API (Section 7.2). ### Type-Parametric API for P4Runtime Operations The P4R-Type API consists of the five P4Runtime operations detailed in Section 3: connect, read, insert, modify, and delete. We implement these operations as methods equipped with the strict type parameters shown in Figure 9 (rules T-OrC, T-OrI, T-OrM, T-OrD). The operations closely correspond to the operations in the P4Runtime protobuf API [P4.org Working Group 2020b]. Under the hood, these methods use the loosely-typed P4Runtime protobuf specification and RPC,5 with (de-)serialisation from/to Scala objects based on the ScalaPB library:6 Footnote 5: [https://github.com/p4lang/p4runtime](https://github.com/p4lang/p4runtime) Footnote 6: https:scalap.github.io/ * connect uses the StreamChannel RPC to establish a connection; * read uses the Read RPC to read table entries from the server; * insert, modify, and delete use the Write RPC to update the server. The signature of the API methods also align with the formal API: \[\begin{split}\text{let}\ \ \text{read}&=\ \lambda T_{m}.\ \lambda T_{a}.\ \lambda T_{p}.\ \lambda X_{n}<:\text{TableName.}\ \lambda X_{a}<:T_{a}\ X_{n}.\\ &\lambda c:\text{Chain}[T_{m},T_{a},T_{p}].\\ &\lambda x:\{\text{name}:X_{n},\ \text{matches}:T_{m}\ X_{n},\ \text{ action}:X_{a},\ \text{params}:T_{p}\ X_{a}\}.\ \text{Read}(c,x)\\ \text{in}\ \ \ldots\end{split}\] ``` 1defread[TM[_],TA[_],TP[_]] 2(c:FP4Channel[TM,TA,TP],tableEntry:FP4TableEntry[TM,TA,TP,_,_]) 3:Seq[FP4TableEntry[TM,TA,TP,_,_]] 4=... ``` In the code snippet above, the two types FP4Channel and FP4TableEntry are also part of P4R-Type. 
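For comparison, the write operations are implemented with analogous type parameters; the sketch below is our own illustration (not copied from the published P4R-Type sources) of how insert plausibly looks, with the Boolean result mirroring the formal rules T-OpI, T-OpM and T-OpD, and with ??? standing for the elided body:

```scala
// Sketch only (assumed shape, following rules T-OpI/T-OpM/T-OpD):
// insert -- and, analogously, modify and delete -- takes a channel and a
// table entry constrained by the same TM/TA/TP parameters as the channel,
// and returns a Boolean reporting whether the server applied the update.
def insert[TM[_], TA[_], TP[_]]
  (c: FP4Channel[TM, TA, TP], tableEntry: FP4TableEntry[TM, TA, TP, _, _])
  : Boolean
  = ???
```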
Each of these types take the same type parameters as their equivalents in Figures 3 and 4; such type parameters are usually constrained by the context and inferred by the Scala 3 compiler, hence the user does not need to write them explicitly. The FP4Channel type is simply a case class that contains the table entry values (table name, parameters, etc.), while the FP4Channel is an abstract class containing the methods for serialization (toProto) and deserialization (fromProto). ### Translation of P4 Device Configuration Metadata (P4Info) into Scala 3 Types P4R-Type includes a tool that implements the encoding in Definition 5.2: the tool takes a P4Info file (representing a P4 device's tables, actions,...) and generates three Scala 3 types, which can be used to instantiate the type parameters \(T_{m},T_{a},T_{p}\) (see Sections 3 and 4) to guarantee type safety and progress. Such generated types are called TableMatchFields, TableActions, and ActionParams: * type TableMatchFields can instantiate \(T_{m}\), and maps table names to their match fields; * type TableActions can instantiate \(T_{a}\), and maps table names to their action names; * type ActionParams can instantiate \(T_{p}\), and maps action names to their parameter types. A programmer can use P4R-Type to connect to a P4 device and obtain a typed channel constrained by the 3 types above (akin to \(\text{Chain}[T_{m},T_{a},T_{p}]\) in Section 3); when using our type-parametric API (Section 7.1) on this typed channel, only operations compatible with the P4 device can be performed; otherwise, a type error occurs (just like our type system in Section 4 prevents invalid operations). We now illustrate in more detail the P4R-Type-generated types that can instantiate the type parameters \(T_{m},T_{a},T_{p}\), using Figure 12 as an example. **The type parameter \(T_{m}\) (match fields of a P4 table) can be instantiated with the higher-kinded type TableMatchFields, which takes a parameter TN (expected to be a known table name).** ``` 1typeTableMatchFields[TN]=TNmatch 2case"IPv4_table"="("IPv4_dst_addr",P4.LPM) 3case"IPv6_table"="("IPv6_dst_addr",P4.LPM) 4case"*"=">"*" ``` The type above matches a table name TN with one of the known table names (represented as singleton string types) and yields tuple types pairing TN's field names with their type of packet match (P4.Exact, P4.Ternary, P4.LPM,...which are types provided by P4R-Type). As per P4Runtime standard, table fields can be optionally undefined, unless they perform a P4.Exact packet match. **The type parameter \(T_{a}\) (P4 table actions) can be instantiated with type TableAction, that matches a table name TN to yield the valid actions for TN (which may include the wildcard *).** The type parameter \(T_{p}\) (action parameters) can be instantiated with type ActionParams, that matches an action name AN to yield the parameter types for AN. Each parameter type is a tuple with the name of the parameter (as a singleton string type) and the value type. ``` 1typeActionParams[AN]=ANmatch 2case"IPv4_forward"=>(("mac_dst",ByteString),("port",ByteString)) 3case"IPv6_forward"=>(("mac_dst",ByteString),("port",ByteString)) 4case"Drop"=>Unit 5case"=">Unit ``` All three types above also accept a _wildcard_ singleton type "*" as a parameter, representing the request of querying all/any table match fields, actions, or parameters. ## 8. 
Case Studies and Discussion of Alternative Designs In this section we demonstrate the usefulness of having compile-time checked P4Runtime queries, by illustrating three case studies implemented using P4R-Type. We discuss one case study in detail (update of multiple switches, in Section 8.1) and outline two more (port forwarding and load balancing, in Sections 8.2 and 8.3): these applications derive and extend the tunnelling example in the P4Runtime tutorials,7 and are all included in the software artifact that accompanies this paper. Footnote 7: [https://github.com/p4lang/tutorials/tree/master/exercises/p4runtime](https://github.com/p4lang/tutorials/tree/master/exercises/p4runtime) ### Updating a Network with Multiple P4 Switches Figure 14 shows the case study network. It contains four networks (N1-N4) which are connected through the bridge established by the switches (S1-S4). Switch S1 and S2 use the same subnet mask (10.1.0.0), as do switch S3 and S4 (10.2.0.0). Each switch is configured with a general firewall table for all network traffic, as well as a more specific IPv4 forwarding table for its own subnet. For this reason, the switches use different configuration files, shown in Figure 15. All of the switches should share the same entries for the firewall table. Switch S1 is the master switch for forwarding rules related to subnet 10.1.0.0, while switch S3 is the master switch for forwarding rules related to subnet 10.2.0.0, meaning that S2 and S4 should replicate their table entries, respectively. The replication of table entries must be done periodically by an external controller. For this case study, we implement a controller in P4R-Type that performs this replication, which should: 1. Insert a set of firewall table entries into all four switches. 2. Read all entries from the ipv4_lpm table on S1, then insert them into S2. 3. Read all entries from the ipv4_table table on S3, then insert them into S4. When a programmer uses our P4R-Type API, the Scala compiler spots several possible errors that may occur when updating multiple switches with different P4 configurations: Figure 14. Network topology used in the case studies (Section 8): N1–N4 are networks, and S1–S4 are switches. * Using non-existent table or action names (e.g., due to typos) * Inserting the wrong type of entries in a table (e.g., wrong number of match fields) * Using an existing action in an existing table that does not support it (e.g., an entry in firewall referencing ipv4_forward) * Passing the wrong type of arguments to an action (e.g., an entry in ipv4_lpm referencing action ipv4_forward, but passing only one argument) _Generated Types in Scala 3_. Using the types generated by the tool, the replication program written in P4R-Type is shown in Figure 16. Note that the API interface is relatively minimal and similar to the Python API. For instance, compare the insert call in line 9-11 to the Python code in Figure 1. The difference here is that an error like the one shown in Figure 1 would be caught at compile time by the Scala 3 type system. For example, using "Process.forward_packet" instead of "Process.drop" on line 11 would yield a type error: _"a value of type_s.TA[("Process.firewall")] is required"_. On lines 1-4, the connection to each switch is established. Note that the connect methods are specific to each configuration, unlike the other P4Runtime operations which are part of a generic Figure 16. The replication program written in P4R-Type. Figure 15. 
The packet-processing sections of the P4 files of switches S1 and S3 (left) and S2 and S4 (right). package: connect returns an FP4Channel instance with predefined type parameters, which in turn constrain the read/insert/modify/delete operations that can be performed on that channel. Consider e.g. lines 9-11 in Figure 16: in the insert call, the tableEntry parameter is constrained to only accept table entries that satisfy the switch configuration of channel s. Since s ranges over a list of channels having two different types of switches (config1 and config2), such entries must be valid in _both_ switch configurations. Since both configurations share a "Process.firewall" table, the program compiles. Instead, if an otherwise valid entry for e.g. the "Process.ipv4_lpm" table is provided, the code would not compile, as that table is defined in config1 but not in config2. ### Port Forwarding Management We implemented a control plane program for _port forwarding_, which is a Network Address Translations (NAT) service typically offered e.g. by Internet routers. We use the same topology as in Figure 14, but we assume that N1, N2, and N3 are local networks, while N4 is an external network. The goal is to allow external clients to connect to servers hosted in the internal networks. To this end, S4 applies a set of NAT rules saying e.g. that: * each packet received on the external S4 interface, and having destination IP address 1.2.3.4 and port 42, should be translated to have destination IP address 10.1.0.4 and port 1042 (and vice versa for the internal S4 interface). We developed a program (using P4R-Type) that offers a command line interface to connect to S4 and query, add, and delete its NAT rules. The program reads and modifies two P4 tables called nat_ingress and nat_egress containing the translations for incoming and outgoing packets. Translated packets are then forwarded according to the entries of a table called ipv4_forward (similar to the one used in Section 8.1). ### Load Balancing We implemented a control plane program for load balancing packet transfers. We use the same topology as in Figure 14, and the goal is for S1 to equally distribute all packets bound for N4 between its outgoing ports to S2, S3 and S4. To implement this, we use a P4 entity called _counter_,8 which can be incremented by the data plane and read by the control plane. We configure the data plane of S1 with one counter per output port, and rules that increment a counter every time a packet is forwarded through the corresponding port. Our control plane program then periodically reads the counter values (using the P4R-Type method readCounter, similar to read for P4 tables) and updates the packet forwarding rules (using the P4R-Type method modify). Footnote 8: Counters are not modelled in \(F_{\text{P4R}}\); they can be easily added e.g. as a new case to the union type of P4Entity (Figure 4). ### On the Role of Match Types and Singleton Types We now discuss whether our results could be achieved with a different design that, while still satisfying requirements (**R1**), (**R2**), and (**R3**) in Section 1, would not rely on match types nor singleton types, and would be therefore less tied to the Scala 3 programming language. Let us consider the case study in Section 8.1, and how we could address it in a subset of Scala 3 _without_ match nor singleton types. 
To ensure that the table entries described in a P4Info file are constructed correctly, we would need to generate a dedicated data type for each table, with argument types capturing the constraints on actions and parameters. We would also need to constrain channel types to valid table entry types, to ensure that read/insert/modify/delete only use table entries of the correct type. E.g. in the case of the first P4Info metadata in Figure 15 we might generate a set of type definitions like: ``` 1packageconfig1 2 3caseclassActionWildcard() 4caseclassActionDrop() 5caseclassActionForwardPacket(addr:ByteString,port:ByteString) 6 7typeFirewallAction=ActionDrop|ActionWildcard 8caseclassFirewallTableEntry(fields:Option[FP4_LPM],action:FirewallAction) 9 10typeIPV4Action=ActionDrop|ActionForwardPacket|ActionWildcard 11caseclassIPV4TableEntry(fields:(FP4_Exact,Option[FP4_LPM]),action:IPV4Action) 12 13defconnect(...):FP4Channel[FirewallTableEntry|IPV4TableEntry]=... ``` A program written with the resulting API would look like: ``` 1vals1=config1.connect(0,"127.0.0.1",50051) 2insert(s1,config1.FirewallTableEntry(Some(FP4_LPM(...)),config1.ActionDrop)) ``` The type definitions outlined above are roughly as compact as the match types we generate.9 However, the main drawback of such type definitions is that they are substantially more laborious to formalise: we would need to extend the typing system of \(F_{\text{P4R}}\) (Definition 4.2) with a nominal environment to collect type definitions, and the formal encoding from P4Info metadata to types would be significantly more complex than our Definition 5.2. As a consequence, stating and proving results like our Theorems 6.1 and 6.4 would be considerably harder, hampering requirement (**R1**). Footnote 9: These definitions may be more verbose in languages without the type union ”\(|\) ”, going against requirement (**R3**). E.g. in F# or OCaml, FirewallAction and IPV4Action would be rendered as labelled sum types, and each action used in more than one table would result in duplicated label definition (in this example, this would apply to ActionDrop and ActionWildcard). On the practical side, another drawback of the type definitions outlined above is that they would make the API more cumbersome and limited: e.g. it would be hard or impossible to write code like lines 7-11 in Figure 16, where the insert operation works on channels with different P4 configurations config1 and config2. The reason is that channels s1 and s2 would only support table entries of type config1.FirewallTableEntry, whereas channels s3 and s4 would only support config2.FirewallTableEntry: such types would be unrelated and could not be unified, hence a programmer would need to duplicate the code of the insert operations. One might try to mitigate this duplication by leveraging structural typing (available e.g. in TypeScript, or in OCamlstructs) -- but then, the signature of the API method insert would become non-trivial and the feasibility of this approach would require further research. Instead, the match types produced by our encoding in Definition 5.2 allow the Scala compiler to verify that the table entries for "Process.firewall" have the same type under both config1 and config2, hence the code in Figure 16 type-checks. ## 9. Related Work The programmability of SDNs comes at the cost of complexity and attack surfaces of modern networks (Kreutz et al., 2013). 
Several proposals address complementary problems to our work by giving formal semantics to the data plane language (Alshnakat et al., 2022; Doenges et al., 2021; Peterson et al., 2023) and by developing static (Eichholz et al., 2022; Liu et al., 2018; Stoenescu et al., 2018) and dynamic (Notzli et al., 2018; Shukla et al., 2020) analysis tools for the data plane. Several tools have been developed to verify various network properties. Header Space Analysis (Kazemian et al., 2012) is a framework that can analyse reachability and identify loops, among other properties, of dynamic networks. Both the data plane and the control plane must be represented in the abstract framework. NetKAT (Anderson et al., 2014), and the more recent DyNetKat (Caltais et al., 2022), provides a network language which encompasses both data plane and control plane, with formal semantics and develops syntactic techniques for proving network reachability, non-interference, and correctness of program transformation. Batfish (Fogel et al., 2015) uses a declarative approach to define network behavior via logical relations that represent both data and control plane. The framework allows to check if any reachable network configurations (and packets) can violate forwarding properties. The main difference with our proposal is that these models are non-executable specifications and are much more abstract than the languages used to program SDNs. Therefore, they do not directly provide a method to program the control plane. Verifying actual control software using these models requires to map software behavior to these specifications, which is extremely hard when the control plane is developed using a general-purpose language like Python or Scala. Moreover, many of these models assume a configurable, but not programmable, data plane, which supports a limited and predefined set of protocols (e.g., SDNs using OpenFlow (McKeown et al., 2008)). Instead, our proposal provides a programming framework for the control plane that can interact with arbitrary P4 data planes, and that can statically prevent invalid table manipulations. ## 10. Conclusion and Future Work We presented P4R-Type, a novel verified API for P4Runtime programs written in Scala 3. As a foundation for P4R-Type, we presented the first formal model of P4Runtime networks, where servers interact with client applications written in the calculus \(F_{\text{P4R}}\); we also developed a typing system for \(F_{\text{P4R}}\) (including match types and singleton types, inspired by Scala 3) and proved that well-typed \(F_{\text{P4R}}\) clients interact correctly with the surrounding servers (Theorems 6.1 and 6.4). These correctness results are inherited by actual P4 control programs that use our P4R-Type API. This paper is a stepping stones toward a broader objective: a fully-verified P4 development pipeline encompassing the verification of _both_ the control plane and the data plane, ensuring that configuration updates applied by control programs never compromise desired network properties. This objective determines our future work, outlined below. While our type system is sound in the sense that well-typed programs never get stuck, a server may still in some cases reject an update by producing a false response value (for Insert, Modify or Delete). Not all these cases can be statically verified (e.g. trying to insert a table entry that already exists in the server), but some cases may be prevented by further typing constraints. 
For example, instead of using the same P4Entity type for all of the operations that handle table entries, we may adopt distinct formats or restrictions on table entries for distinct operations -- e.g. the Insert operation does not in general accept entries where \(table\_matches="*"\), but the Read operation always does. A solution to this could be to generate a distinct set of match types for each operation: this should not drastically change the formalization nor the proofs. Network properties like reachability of a node, enforcement of access control list, and presence of loops, for systems with programmable data plane cannot be verified by looking only at the control plane. In order to verify these properties, we plan to extend our semantics with P4Runtime stream messages and integrate it with existing semantics of P4. We may also need to formalise more detailed P4Runtime server semantics, e.g. to model P4 network elements that perform delayed table updates, have background processes, or communicate with each other. We expect that, thanks to our adoption of an early LTS semantics for clients and servers (Section 5.1), we will be able to adapt the server semantics, while reusing most of the current proofs and results involving \(F_{\text{P4R}}\) clients. ###### Acknowledgements. This work was partially supported by the DTU Nordic Five Tech Alliance grant "Safe and secure software-defined networks in P4" and the Horizon Europe grant no. 101093006 "TaRDIS."
2309.04209
Computable error bounds for quasi-Monte Carlo using points with non-negative local discrepancy
Let $f:[0,1]^d\to\mathbb{R}$ be a completely monotone integrand as defined by Aistleitner and Dick (2015) and let points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\in[0,1]^d$ have a non-negative local discrepancy (NNLD) everywhere in $[0,1]^d$. We show how to use these properties to get a non-asymptotic and computable upper bound for the integral of $f$ over $[0,1]^d$. An analogous non-positive local discrepancy (NPLD) property provides a computable lower bound. It has been known since Gabai (1967) that the two dimensional Hammersley points in any base $b\ge2$ have non-negative local discrepancy. Using the probabilistic notion of associated random variables, we generalize Gabai's finding to digital nets in any base $b\ge2$ and any dimension $d\ge1$ when the generator matrices are permutation matrices. We show that permutation matrices cannot attain the best values of the digital net quality parameter when $d\ge3$. As a consequence the computable absolutely sure bounds we provide come with less accurate estimates than the usual digital net estimates do in high dimensions. We are also able to construct high dimensional rank one lattice rules that are NNLD. We show that those lattices do not have good discrepancy properties: any lattice rule with the NNLD property in dimension $d\ge2$ either fails to be projection regular or has all its points on the main diagonal. Complete monotonicity is a very strict requirement that for some integrands can be mitigated via a control variate.
Michael Gnewuch, Peter Kritzer, Art B. Owen, Zexin Pan
2023-09-08T08:42:23Z
http://arxiv.org/abs/2309.04209v2
# Computable error bounds for quasi-Monte Carlo using points with non-negative local discrepancy ###### Abstract Let \(f:[0,1]^{d}\to\mathbb{R}\) be a completely monotone integrand as defined by Aistleitner and Dick (2015) and let points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\in[0,1]^{d}\) have a non-negative local discrepancy (NNLD) everywhere in \([0,1]^{d}\). We show how to use these properties to get a non-asymptotic and computable upper bound for the integral of \(f\) over \([0,1]^{d}\). An analogous non-positive local discrepancy (NPLD) property provides a computable lower bound. It has been known since Gabai (1967) that the two dimensional Hammersley points in any base \(b\geqslant 2\) have non-negative local discrepancy. Using the probabilistic notion of associated random variables, we generalize Gabai's finding to digital nets in any base \(b\geqslant 2\) and any dimension \(d\geqslant 1\) when the generator matrices are permutation matrices. We show that permutation matrices cannot attain the best values of the digital net quality parameter when \(d\geqslant 3\). As a consequence the computable absolutely sure bounds we provide come with less accurate estimates than the usual digital net estimates do in high dimensions. We are also able to construct high dimensional rank one lattice rules that are NNLD. We show that those lattices do not have good discrepancy properties: any lattice rule with the NNLD property in dimension \(d\geqslant 2\) either fails to be projection regular or has all its points on the main diagonal. **Keywords:** Associated random variables, Digital nets, Rank one lattices ## 1 Introduction Quasi-Monte Carlo (QMC) sampling [7, 26] can have much better asymptotic accuracy than plain Monte Carlo (MC), but it does not come with the usual statistical error estimates that MC has. Those estimates can be recovered by randomized QMC (RQMC) [21, 29] based on independent replicates of QMC. In this paper we consider an alternative approach to uncertainty quantification for QMC. For some special sampling points with a non-negative local discrepancy (NNLD) property described later and a suitably monotone integrand \(f\), we can compute upper and lower bounds on the integral \(\mu\) of \(f\) over the unit cube in \(d\) dimensions. Methods based on random replication can provide confidence intervals for \(\mu\) that attain a desired level such as 95% or 99% asymptotically, as the number of replicates diverges. The method we consider attains 100% coverage for finite \(n\). Unlike the well-known bounds derived via the Koksma-Hlawka inequality [19], these bounds can be computed by practical algorithms. Convex optimization [2] has the notion of a certificate: a computable bound on the minimum value of the objective function. The methods we present here provide certificates for multidimensional integration of a completely monotone function. This improved uncertainty quantification comes at some cost. Our versions of the method will be more accurate than MC for dimensions \(d\leqslant 3\), as accurate as MC (apart from logarithmic factors) for \(d=4\) and less accurate than MC for \(d\geqslant 5\). They also require some special knowledge of the integrand. The problem is trivial and the solution is well known for \(d=1\). If \(f:[0,1]\to\mathbb{R}\) is nondecreasing then \[\frac{1}{n}\sum_{i=0}^{n-1}f\Big{(}\frac{i}{n}\Big{)}\leqslant\int_{0}^{1}f(x )\,\mathrm{d}x\leqslant\frac{1}{n}\sum_{i=1}^{n}f\Big{(}\frac{i}{n}\Big{)}. 
\tag{1}\] These bracketing inequalities hold even if some of the quantities in them are \(\pm\infty\). This works because \(f\) is nondecreasing, the evaluation points in the left hand side are 'biased low' and those in the right hand side are 'biased high'. To get a multivariate version of (1), we generalize the notion of points biased low to points biased towards the origin in terms of a non-negative local discrepancy (NNLD) property of the points. This property was shown to hold for two dimensional Hammersley points by Gabai [12] in 1967. We couple the NNLD property with a multivariate notion of monotonicity called complete monotonicity [1]. This paper is organized as follows. Section 2 gives some notation and then defines the properties of point sets and functions that we need. Theorem 1 there establishes the bracketing property we need. Section 3 gives fundamental properties of NNLD point sets with an emphasis on projection regular point sets. Only very trivial lattice rules, confined to the diagonal in \([0,1]^{d}\), can be both projection regular and NNLD. Cartesian products preserve the NNLD property as well as an analogous non-positive local discrepancy property. Section 4 compares our bounds to those obtainable from the Koksma-Hlawka inequality. Section 5 shows that digital nets whose generator matrices are permutation matrices produce NNLD point sets. Section 6 gives a construction of rank one lattice rules that are NNLD. We conclude with a discussion and some additional references in Section 7. Definitions and a bound Here we define a non-negative local discrepancy (NNLD) property of the points we use as well as a complete monotonicity criterion for the integrand. We then establish bounds analogous to (1). First we introduce some notation. ### Notation For integer \(b\geqslant 1\), let \(\mathbb{Z}_{b}=\{0,1,\ldots,b-1\}\). The set \(\{1,2,\ldots,d\}\) of variable indices is denoted by \([d]\). For \(u\subseteq[d]\), we use \(|u|\) for the cardinality of \(u\) and \(-u\) for the complement \([d]\setminus u\), especially in subscripts and superscripts. The singleton \(\{j\}\) may be abbreviated to just \(j\) and \(-\{j\}\) to \(-j\). For points \(\mathbf{x},\mathbf{z}\in[0,1]^{d}\) and a set \(u\subseteq[d]=\{1,2,\ldots,d\}\) let \(\mathbf{x}_{u}\colon\mathbf{z}_{-u}\) be the hybrid point with \(j\)'th component \(x_{j}\) for \(j\in u\) and \(j\)'th component \(z_{j}\) for \(j\not\in u\). The points with all coordinates \(0\) or all coordinates \(1\) are denoted by \(\mathbf{0}\) and \(\mathbf{1}\) respectively. When it is necessary to specify their dimension we use \(\mathbf{0}_{d}\) and \(\mathbf{1}_{d}\). The notation \(\mathbb{1}\{A\}\) is for an indicator variable equal to \(1\) when \(A\) is true and \(0\) otherwise. For integer \(d\geqslant 1\) we will use the following precedence notion on \([0,1]^{d}\). For \(\mathbf{x},\mathbf{z}\in\mathbb{R}^{d}\) we say that \(\mathbf{x}\leqslant\mathbf{z}\) when \(x_{j}\leqslant z_{j}\) holds for all \(j=1,\ldots,d\). ### Non-negative local discrepancy A QMC rule is given by a list of points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\in[0,1]^{d}\) and it yields the estimate \[\hat{\mu}=\frac{1}{n}\sum_{i=0}^{n-1}f(\mathbf{x}_{i})\] of \(\mu\). We refer to these points as a point set, \(P_{n}\), though in any setting where some \(\mathbf{x}_{i}\) are duplicated we actually treat \(P_{n}\) as a multiset, counting multiplicity of the points. 
The local discrepancy of \(P_{n}\) at \(\mathbf{z}\in[0,1]^{d}\) is given by \[\delta(\mathbf{z})=\delta(\mathbf{z};P_{n})=\widehat{\mathrm{VOL}}([\mathbf{0},\mathbf{z}))- \mathrm{VOL}([\mathbf{0},\mathbf{z}))\] where \(\mathrm{VOL}\) is Lebesgue measure and \(\widehat{\mathrm{VOL}}\) is the empirical measure with \[\widehat{\mathrm{VOL}}([\mathbf{0},\mathbf{z}))=\frac{1}{n}\sum_{i=0}^{n-1}1_{\mathbf{x}_{ i}\in[\mathbf{0},\mathbf{z})}.\] That is, \(\mathrm{VOL}\) is \(\mathbb{U}[0,1]^{d}\) while \(\widehat{\mathrm{VOL}}\) is \(\mathbb{U}(P_{n})\). The quantity \(D_{n}^{*}=\sup_{\mathbf{z}\in[0,1]^{d}}|\delta(\mathbf{z})|\) is called the star discrepancy of the point set \(P_{n}\). **Definition 1**.: The point set \(P_{n}\) with points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\) has non-negative local discrepancy (NNLD) if \[\delta(\mathbf{z})\geqslant 0 \tag{2}\] for all \(\mathbf{z}\in[0,1]^{d}\). A distribution for \(\mathbf{x}\in\mathbb{R}^{d}\) is positively lower orthant dependent [32] if \[\Pr(\mathbf{x}\leqslant\mathbf{z})\geqslant\prod_{j=1}^{d}\Pr(x_{j}\leqslant z_{j})\] for all \(\mathbf{z}\in\mathbb{R}^{d}\). A sufficient condition for NNLD is that the \(\mathbb{U}(P_{n})\) distribution on \([0,1]^{d}\) is positively lower orthant dependent and that the marginal distributions \(\mathbb{U}\{x_{0,j},\ldots,x_{n-1,j}\}\) for each \(j=1,\ldots,d\) are stochastically smaller than \(\mathbb{U}[0,1]\). The random variable \(X\) is stochastically smaller than the random variable \(Y\) if \(\Pr(X\leqslant z)\geqslant\Pr(Y\leqslant z)\) for all \(z\in\mathbb{R}\) and in that case we also say that the distribution of \(X\) is stochastically smaller than that of \(Y\). There is a related notion of positive upper orthant dependence as well as two related notions of negative orthant dependence, both upper and lower. In one dimension, the points \(0,1/n,\ldots,(n-1)/n\) are NNLD. As mentioned earlier, \(n=b^{m}\) Hammersley points in base \(b\geqslant 2\) and dimension \(d=2\) are NNLD [12]. Those Hammersley points are constructed as follows. For \(0\leqslant i<n\) write \(i=\sum_{k=1}^{m}a_{i}(k)b^{k-1}\) for digits \(a_{i}(k)\in\{0,1,\ldots,b-1\}\) and set \(i^{\prime}=\sum_{k=1}^{m}a_{i}(m-k+1)b^{k-1}\). Then the \(i\)'th such Hammersley point is \(\mathbf{x}_{i}=\big{(}i/n,i^{\prime}/n\big{)}\) for \(i=0,1,\ldots,n-1\). Some further properties of the Hammersley points, related to the work of [12], are given by [3]. We will also make use of a complementary property: non-positive local discrepancy. **Definition 2**.: The point set \(P_{n}\) with points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\) has non-positive local discrepancy (NPLD) if \[\delta(\mathbf{z})\leqslant 0 \tag{3}\] for all \(\mathbf{z}\in[0,1]^{d}\). One of our techniques is to take NNLD points \(\mathbf{x}_{i}\) and reflect them to \(\mathbf{1}-\mathbf{x}_{i}\) to get points that oversample rectangular regions near \(\mathbf{1}\). In doing so we will need to take care of two issues. One is that for \(d\geqslant 2\), the complement of a hyperrectangle \([\mathbf{0},\mathbf{a})\) under this transformation is not another hyperrectangle. The other is that even for \(d=1\), the complement of a half open interval \([0,a)\) is a closed interval \([a,1]\). To handle these issues we make two observations below. 
First, for an \(n\)-point set \(P_{n}\subset[0,1]^{d}\) let us additionally define the local discrepancy with respect to closed boxes: \[\overline{\delta}(\mathbf{z})=\overline{\delta}(\mathbf{z};P_{n})=\widehat{\mathrm{ VOL}}([\mathbf{0},\mathbf{z}])-\mathrm{VOL}([\mathbf{0},\mathbf{z}]).\] **Observation 1**.: _The point set \(P_{n}\) has the NNLD property if and only if_ \[\overline{\delta}(\mathbf{z})\geqslant 0\quad\text{ for all }\mathbf{z}\in[0,1]^{d}. \tag{4}\] _This is due to the following reasoning: First, we always have \(\overline{\delta}(\mathbf{z})\geqslant\delta(\mathbf{z})\) for all \(\mathbf{z}\in[0,1]^{d}\). Thus the NNLD property of \(P_{n}\) implies (4). For the converse, we _assume that \(P_{n}\) satisfies (4) and consider two cases. If \(z_{j}=0\) for some \(j\in[d]\) then \(\delta(\mathbf{z})=0\). If instead \(\min_{j\in[d]}z_{j}>0\) then_ \[\delta(\mathbf{z})=\lim_{\varepsilon\downarrow 0}\overline{\delta}(\mathbf{z}- \varepsilon\mathbf{1}).\] _Either way, (2) holds, i.e., \(P_{n}\) is NNLD._ **Observation 2**.: _The condition_ \[\overline{\delta}(\mathbf{z})\leqslant 0\quad\text{ for all }\mathbf{z}\in[0,1]^{d} \tag{5}\] _implies that \(P_{n}\) has the NPLD property, since \(\delta(\mathbf{z})\leqslant\overline{\delta}(\mathbf{z})\) for all \(\mathbf{z}\in[0,1]^{d}\). As a partial converse, if \(P_{n}\subset[0,1)^{d}\cup\{\mathbf{1}\}\), then the NPLD property also implies condition (5). Indeed, in that case we have \(\overline{\delta}(\mathbf{1})=0\) and_ \[\overline{\delta}(\mathbf{z})=\lim_{\varepsilon\downarrow 0}\delta(\mathbf{z}+ \varepsilon\mathbf{1})\leqslant 0\quad\text{ for all }\mathbf{z}\in[0,1)^{d}.\] _Now consider for any \(\mathbf{z}\in[0,1)^{d}\) and any \(\varnothing\neq u\subsetneq[d]\) the closed anchored box \([\mathbf{0},(\mathbf{z}_{u}{:}\mathbf{1}_{-u})]\). Due to \(P_{n}\subset[0,1)^{d}\cup\{\mathbf{1}\}\), it contains exactly the same number of points from \(P_{n}\) as the anchored box \([\mathbf{0},(\mathbf{z}_{u}{:}\mathbf{z}_{-u}^{*})]\), where \(\mathbf{z}^{*}\) is defined by \(z_{j}^{*}:=\max(\{x_{0,j},\ldots,x_{n-1,j}\}\setminus\{1\})\) for \(j=1,\ldots,d\) taking \(z_{j}^{*}=0\) in case it is \(\max(\varnothing)\). Consequently, we have_ \[\overline{\delta}(\mathbf{z}_{u}{:}\mathbf{1}_{-u})\leqslant\overline{ \delta}(\mathbf{z}_{u}{:}\mathbf{z}_{-u}^{*})\leqslant 0.\] _Hence for \(d=1\) we have equivalence of (5) and NPLD for all \(P_{n}\subset[0,1]\). But if \(d\geqslant 2\), then for arbitrary \(P_{n}\subset[0,1]^{d}\) not contained in \([0,1)^{d}\cup\{\mathbf{1}\}\) the NPLD property does not necessarily imply condition (5), as a trivial example with \(d=2\), \(n=1\), \(P_{n}=\{(1,1/2)\}\) shows: \(\delta(\mathbf{z})=-\mathrm{VOL}([\mathbf{0},\mathbf{z}))\leqslant 0\) for all \(\mathbf{z}\in[0,1]^{d}\), but \(\overline{\delta}((1,1/2))=1-1/2=1/2>0\)._ For \(d=1\) if the points in \(\tilde{P}_{n}\) are \(1-x_{i}\) for the points \(x_{i}\) of \(P_{n}\), then \[\overline{\delta}(z;P_{n})+\delta(1-z;\tilde{P}_{n})=0,\] i.e., \(\overline{\delta}(z;P_{n})=-\delta(1-z;\tilde{P}_{n})\) for all \(z\in[0,1]\). Then due to Observations 1 and 2, reflections of NNLD points are NPLD points and vice versa for \(d=1\). In addition to reflection, we consider another useful transformation. Let \(\tilde{\mathbf{x}}_{i}\) be the base \(b\) Hammersley points for \(i=0,\ldots,n-1\) where \(n=b^{m}\) and \(d=2\). Then [4] show that \[\mathbf{x}_{i}=(1/n+\tilde{x}_{i,1},1-\tilde{x}_{i,2}) \tag{6}\] are NPLD. 
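To make the Hammersley construction and the transformation (6) concrete, here is a small worked instance (computed directly from the definitions above, not taken from [4] or [12]) with \(b=2\), \(m=2\) and \(n=4\): the Hammersley points and their images under (6) are \[\begin{array}{l}(\tilde{\mathbf{x}}_{0},\tilde{\mathbf{x}}_{1},\tilde{\mathbf{x}}_{2},\tilde{\mathbf{x}}_{3})=\bigl((0,0),\,(\tfrac{1}{4},\tfrac{1}{2}),\,(\tfrac{1}{2},\tfrac{1}{4}),\,(\tfrac{3}{4},\tfrac{3}{4})\bigr),\\ (\mathbf{x}_{0},\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3})=\bigl((\tfrac{1}{4},1),\,(\tfrac{1}{2},\tfrac{1}{2}),\,(\tfrac{3}{4},\tfrac{3}{4}),\,(1,\tfrac{1}{4})\bigr).\end{array}\] A spot check at \(\mathbf{z}=(3/4,3/4)\) gives local discrepancy \(3/4-9/16=3/16\geqslant 0\) for the Hammersley points, in line with their NNLD property, and \(1/4-9/16=-5/16\leqslant 0\) for the transformed points, in line with (6) being NPLD.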
### Completely monotone functions Here we define completely monotone functions, describing them in words before giving the formal definition. If \(\mathbf{x}\leqslant\mathbf{z}\), then a completely monotone function can increase but not decrease if any \(x_{j}\) is replaced by \(z_{j}\). That is \(f(\mathbf{x}_{-j}{:}\mathbf{z}_{j})-f(\mathbf{x})\geqslant 0\) always holds. Next, the size of this difference can only be increasing as some other component \(x_{k}\) is increased to \(z_{k}\), so certain differences of differences must also be non-negative. This condition must hold for anywhere from \(1\) to \(d\) applications of differencing. The \(|u|\)-fold differences of differences are alternating sums of the form \[\Delta_{u}(\mathbf{x},\mathbf{z})=\sum_{v\subseteq u}(-1)^{|u-v|}f(\mathbf{x}_{-v}{:}\mathbf{z }_{v}).\] Note that the coefficient of \(f(\mathbf{x}_{-u}{:}\mathbf{z}_{u})\) in \(\Delta_{u}(\mathbf{x},\mathbf{z})\) is positive. **Definition 3**.: The function \(f:[0,1]^{d}\to\mathbb{R}\) is completely monotone if \(\Delta_{u}(\mathbf{x},\mathbf{z})\geqslant 0\) for all non-empty \(u\) and all \(\mathbf{x},\mathbf{z}\in[0,1]^{d}\) with \(\mathbf{x}_{u}\leqslant\mathbf{z}_{u}\). In [1], Aistleitner and Dick use completely monotone functions to analyze the total variation of \(f\) in the sense of Hardy and Krause, denoted by \(V_{\rm HK}(f)\). See [28] for an account. From Theorem 2 of [1], if \(V_{\rm HK}(f)<\infty\) then we can write \[f(\mathbf{x})=f(\mathbf{0})+f^{+}(\mathbf{x})-f^{-}(\mathbf{x})\] where \(f^{+}\) and \(f^{-}\) are completely monotone functions with \(f^{+}(\mathbf{0})=f^{-}(\mathbf{0})=0\). They call \(f^{+}-f^{-}\) the Jordan decomposition of \(f\). The functions \(f^{\pm}\) are uniquely determined. If \(f\) is right-continuous and \(V_{\rm HK}(f)<\infty\) then \(f(\mathbf{x})=\nu([\mathbf{0},\mathbf{x}])\) for a uniquely determined signed Borel measure \(\nu\), by Theorem 3 of [1]. Let this signed measure have Jordan decomposition \(\nu=\nu^{+}-\nu^{-}\) for ordinary (unsigned) Borel measures \(\nu^{\pm}\). Then \(f^{\pm}(\mathbf{x})=\nu^{\pm}([\mathbf{0},\mathbf{x}]\setminus\{\mathbf{0}\})\). The completely monotone functions that we study take the form \[f(\mathbf{x})=f(\mathbf{0})+\lambda\,\nu([\mathbf{0},\mathbf{x}]) \tag{7}\] where \(\nu\) is an arbitrary probability measure on \([0,1]^{d}\) (or, more precisely, on the Borel \(\sigma\)-algebra of \([0,1]^{d}\)) and \(\lambda\geqslant 0\). Note that every right-continuous completely monotone function \(f\) on \([0,1]^{d}\) can be represented in that way, see, e.g., [10, II.5.11 Korrespondenzsatz, p. 67]. If \(\nu\) is absolutely continuous with respect to the Lebesgue measure, then we may represent \(f\), due to the Radon-Nikodym theorem, as \[f(\mathbf{x})=f(\mathbf{0})+\lambda\int_{[\mathbf{0},\mathbf{x}]}g(\mathbf{z})\,\mathrm{d}\mathbf{z} \tag{8}\] where \(g\) is a probability density on \([0,1]^{d}\), i.e., a non-negative Lebesgue integrable function on \([0,1]^{d}\) with integral equal to one. ### Basic result Here we present the basic integration bounds. To bracket \(\mu\) we use up to \(2n\) function evaluations using \(n\) each for the lower and upper limits. For some constructions it is possible that some function evaluations might be usable in both limits, reducing the cost of computation. For \(d=1\) we only need \(n+1\) evaluations. **Theorem 1**.: _Let \(f\) be a completely monotone function of the form (7). 
Let \(P_{n}=\{\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\}\subset[0,1]^{d}\), and put \(\widetilde{P}_{n}=\{\mathbf{1}-\mathbf{x}_{0},\ldots,\mathbf{1}-\mathbf{x}_{n-1}\}\)._ 1. _Let_ \(\widetilde{P}_{n}\) _have non-negative local discrepancy. Then_ \[\overline{\mu}=\hat{\mu}=\frac{1}{n}\sum_{i=0}^{n-1}f(\mathbf{x}_{i})\geqslant \int_{[0,1]^{d}}f(\mathbf{x})\,\mathrm{d}\mathbf{x}.\] (9) 2. _Let_ \(P_{n}\) _have non-positive local discrepancy. If additionally either_ \(P_{n}\subset[0,1)^{d}\cup\{\mathbf{1}\}\) _or_ \(\nu\) _is absolutely continuous with respect to the Lebesgue measure, then_ \[\underline{\mu}=\frac{1}{n}\sum_{i=0}^{n-1}f(\mathbf{1}-\mathbf{x}_{i})\leqslant\int_{ [0,1]^{d}}f(\mathbf{x})\,\mathrm{d}\mathbf{x}.\] (10) Proof.: Without loss of generality take \(f(\mathbf{0})=0\) and \(\lambda=1\). Consequently, \(f(\mathbf{x})=\nu([\mathbf{0},\mathbf{x}])\) for all \(\mathbf{x}\in[0,1]^{d}\). We obtain \[\mu=\int_{[0,1]^{d}}\nu([\mathbf{0},\mathbf{x}])\,\mathrm{d}\mathbf{x}=\int_{[0,1]^{d}} \int_{[0,1]^{d}}1_{\mathbf{z}\leqslant\mathbf{x}}\,\mathrm{d}\nu(\mathbf{z})\,\mathrm{d} \mathbf{x}.\] Reversing the order of integration, \[\mu=\int_{[0,1]^{d}}\int_{[0,1]^{d}}1_{\mathbf{z}\leqslant\mathbf{x}}\, \mathrm{d}\mathbf{x}\,\mathrm{d}\nu(\mathbf{z})=\int_{[0,1]^{d}}\mathrm{VOL}([\mathbf{z}, \mathbf{1}])\,\mathrm{d}\nu(\mathbf{z}). \tag{11}\] Similarly, \[\hat{\mu}=\frac{1}{n}\sum_{i=0}^{n-1}\nu([\mathbf{0},\mathbf{x}_{i}])=\frac{1}{n}\sum_ {i=0}^{n-1}\int_{[0,1]^{d}}1_{\mathbf{z}\leqslant\mathbf{x}_{i}}\,\mathrm{d}\nu(\mathbf{z})\] from which \[\hat{\mu}=\int_{[0,1]^{d}}\frac{1}{n}\sum_{i=0}^{n-1}1_{\mathbf{z} \leqslant\mathbf{x}_{i}}\,\mathrm{d}\nu(\mathbf{z})=\int_{[0,1]^{d}}\widehat{\mathrm{ VOL}}([\mathbf{z},\mathbf{1}])\,\mathrm{d}\nu(\mathbf{z}). \tag{12}\] Combining (11) and (12) the integration error now satisfies \[\hat{\mu}-\mu =\int_{[0,1]^{d}}\Bigl{(}\widehat{\mathrm{VOL}}([\mathbf{z},\mathbf{1}]) -\mathrm{VOL}([\mathbf{z},\mathbf{1}]\Bigr{)}\,\mathrm{d}\nu(\mathbf{z})\] \[=\int_{[0,1]^{d}}\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n })\,\mathrm{d}\nu(\mathbf{z}), \tag{13}\] where \(\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\) is the local discrepancy of \(\widetilde{P}_{n}\) with respect to the anchored closed box \([\mathbf{0},\mathbf{1}-\mathbf{z}]\). Recall that \(\nu\) is a positive measure. For part (i), let \(\widetilde{P}_{n}\) have the NNLD property. Due to Observation 1 we have \(\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\geqslant 0\) for all \(\mathbf{z}\in[0,1]^{d}\). Hence \(\hat{\mu}\geqslant\mu\), establishing (9). For part (ii), let \(\widetilde{P}_{n}\) have the NPLD property. If additionally \(\widetilde{P}_{n}\subset[0,1)^{d}\cup\{\mathbf{1}\}\), then Observation 2 ensures that \(\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\leqslant 0\) for all \(\mathbf{z}\in[0,1]^{d}\), establishing \(\hat{\mu}\leqslant\mu\). If instead \(\nu\) is absolutely continuous with respect to the Lebesgue measure, then we can replace \(\overline{\delta}(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\) in (13) by \(\delta(\mathbf{1}-\mathbf{z};\widetilde{P}_{n})\) without changing the integral. Hence we get again \(\hat{\mu}\leqslant\mu\). In any case, exchanging the roles of \(P_{n}\) and \(\widetilde{P}_{n}\) establishes (10). Theorem 1 provides an upper bound for \(\mu\) when sampling from reflected NNLD points. 
The upper bound in Theorem 1 will approach \(\mu\) as \(n\to\infty\) if those points also satisfy \(D_{n}^{*}\to 0\) as \(n\to\infty\). To get a lower bound we can use reflected NPLD points, provided that either \(\nu\) is absolutely continuous or those points all belong to \([0,1)^{d}\cup\{\mathbf{1}\}\). The NPLD points could be those given by equation (6). We find in Section 5 that NPLD points are not as simple to construct as NNLD points.

### Example

Here is a simple example to illustrate these bounds. The integrand is known to be completely monotone because it is a multivariate cumulative distribution function (CDF). For \(\mathbf{x}\in[0,1]^{2}\) we take \[f(\mathbf{x})=\Pr(X_{1}\leqslant x_{1},X_{2}\leqslant x_{2}) \tag{14}\] for \(\mathbf{X}\sim\mathcal{N}(0,\Sigma)\) with \(\Sigma=\left(\begin{smallmatrix}1&\rho\\ \rho&1\end{smallmatrix}\right)\) using \(\rho=0.7\). Due to (9), we can compute an upper bound for \(\mu=\int_{[0,1]^{2}}f(\mathbf{x})\,\mathrm{d}\mathbf{x}\) by sampling at points \(\mathbf{1}-\mathbf{x}_{i}\) where \(\mathbf{x}_{i}\in[0,1]^{2}\) are the first \(n=2^{m}\) Hammersley points in any base \(b\geqslant 2\). We can compute a lower bound for \(\mu\) by first transforming Hammersley points via (6) to get NPLD points \(\mathbf{x}_{i}\) and then sampling at \(\mathbf{1}-\mathbf{x}_{i}\). Note that the point sets in these bounds are not extensible in that the points for \(n=b^{m}\) are not necessarily reused for \(n=b^{m+1}\).

Figure 1 shows the results for \(n=2^{m}\) and \(1\leqslant m\leqslant 13\). Over the given range, \(n(\overline{\mu}-\underline{\mu})\) increases with \(n\) while \(n(\overline{\mu}-\underline{\mu})/\log(n)\) decreases with \(n\). The computed upper and lower bounds for \(n=2^{13}\) show that \[0.5618735\leqslant\mu\leqslant 0.5619890.\]

Figure 1: The top panel shows upper and lower bounds for \(\mu=\int_{[0,1]^{2}}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}\) using transformations of the Hammersley points and \(n=2^{m}\) for \(1\leqslant m\leqslant 13\). The bottom panel plots the difference between those upper and lower bounds versus \(n\), on a logarithmic scale.

This function is so smooth and the dimension is so small that comparable accuracy could be attained by standard low dimensional integration methods with many fewer function evaluations. However, these computations took approximately five seconds in R on a MacBook Air M2 laptop, using the mvtnorm package [13, 14] to compute \(f\). A more efficient integration could save only about five seconds and it would not come with guaranteed bounds.

## 3 More about NNLD points

Here we collect some observations about properties that any \(n\geqslant 1\) NNLD points in \([0,1]^{d}\) must necessarily have. Then we use those properties to describe constraints that the NNLD property imposes on customary QMC constructions (lattices and digital nets). Finally we show that the NNLD and NPLD properties are preserved by tensor products.

The first and most obvious property of NNLD points is that \(\mathbf{0}\) must be one of those points or else there is a box \(B=[\mathbf{0},\boldsymbol{a})\) with \(0=\widehat{\mathrm{VOL}}(B)<\mathrm{VOL}(B)\) so that \(\delta(\boldsymbol{a})<0\). Next it must be true that all \(n\) points belong to \([0,1-1/n]^{d}\). Suppose to the contrary that \(x_{i1}>1-1/n\) for some \(0\leqslant i<n\). Then for some \(\epsilon>0\) there exists \(B=[0,1-1/n+\epsilon)\times[0,1]^{d-1}\) with \(\widehat{\mathrm{VOL}}(B)\leqslant(n-1)/n<\mathrm{VOL}(B)\) so that \(\mathbf{x}_{i}\) are not NNLD.
The same argument applies if \(x_{ij}>1-1/n\) for any \(i\) and any \(j\). Trivial constructions of NNLD points have \(\mathbf{x}_{i}=(i/n)\mathbf{1}\in[0,1]^{d}\) for \(0\leqslant i<n\). We observe that these points as well as the Hammersley points for \(d=2\) have variables that are positively correlated. We will use a general positive dependence property in Sections 5 and 6 to construct more NNLD point sets. The NPLD construction in (6) creates a negative lower orthant dependence property for the components of \(\mathbf{x}_{i}\in[0,1]^{2}\). Many of the constructions \(P_{n}\) we consider are projection regular by which we mean that the projections of \(P_{n}\) onto each single coordinate are equal to the full set \(\{0,1/n,2/n,\ldots,(n-1)/n\}\). Projection regularity is usually considered advantageous in QMC, as it guarantees a certain structure and even distribution of the integration node set, and simplifies the derivation of error bounds. However, combined with the NNLD property, it imposes a constraint on the point set that we will use to rule out certain constructions. **Proposition 1**.: _Let \(P_{n}\) be a point set with \(n\) points in \([0,1)^{d}\) that is projection regular. If \(P_{n}\) has the NNLD property, then \(P_{n}\) must contain the point_ \[\mathbf{x}_{*}=\left(\frac{n-1}{n},\frac{n-1}{n},\ldots,\frac{n-1}{n}\right).\] Proof.: Suppose that \(P_{n}\) is projection regular and does not contain \(\mathbf{x}_{*}\). Then there must exist at least one two dimensional projection \(Q_{n}\) of \(P_{n}\) which does not contain the point \(\mathbf{y}_{*}:=(\frac{n-1}{n},\frac{n-1}{n})\). Without loss of generality, assume that \(Q_{n}\) is the projection of \(P_{n}\) onto the first and second coordinates. This implies, due to projection regularity, that at least two points of \(Q_{n}\) do not lie in the box \([\mathbf{0},\mathbf{y}_{*})\). Thus, \[\delta(\mathbf{y}_{*})=\widehat{\mathrm{VOL}}([\mathbf{0},\mathbf{y}_{*}))-\mathrm{VOL}( [\mathbf{0},\mathbf{y}_{*}))\leqslant\frac{n-2}{n}-\frac{(n-1)^{2}}{n^{2}}=-\frac{1}{ n^{2}}.\] Therefore, \(P_{n}\) has negative local discrepancy for the box \([\mathbf{0},\mathbf{y}_{*})\times[0,1)^{d-2}\). Proposition 1 has some consequences for well known QMC points. We will consider digital nets and integration lattices. The most widely used and studied integration lattices are rank one lattices. Given a generating vector \(\mathbf{g}=(g_{1},\ldots,g_{d})\in\mathbb{N}^{d}\) and a sample size \(n\geqslant 1\), a rank one lattice uses points \[\mathbf{x}_{i}=\left(\frac{g_{1}i}{n},\frac{g_{2}i}{n},\ldots,\frac{g_{d}i}{n} \right)\,\mathrm{mod}\ 1\] for \(0\leqslant i<n\) where the modulus operation above takes the fractional part of its argument. These \(n\) points form a group under addition modulo \(1\). More general integration lattices having ranks between \(1\) and \(d\) can also be constructed [6, 26, 33]. Lattice rules with ranks larger than \(1\) are seldom used. They also have the group structure. **Corollary 1**.: _For fixed \(d,n\geqslant 1\) there is only one projection regular lattice point set in \([0,1)^{d}\) that consists of \(n\) points and has the NNLD property, namely the lattice point set_ \[\left\{\mathbf{0},\frac{1}{n}\mathbf{1},\frac{2}{n}\mathbf{1},\ldots,\frac{n- 1}{n}\mathbf{1}\right\},\] _whose points all lie on the main diagonal of the \(d\)-dimensional unit cube \([0,1)^{d}\)._ Proof.: Let \(P_{n}\) be a projection regular lattice point set, consisting of \(n\) points in \([0,1)^{d}\), that has NNLD. 
Due to Proposition 1, \(P_{n}\) has to contain the point \(\boldsymbol{x}_{*}=\frac{n-1}{n}\mathbf{1}\). Due to the additive group structure of \(P_{n}\), we have \[k\boldsymbol{x}_{*}\bmod 1=\frac{n-k}{n}\mathbf{1}\in P_{n}\quad\text{ for }k=0,1, \ldots,n-1.\] The set above has \(n\) distinct points, so they must be all of \(P_{n}\). From Corollary 1 we see, in particular, that the only projection regular rank one lattices that are NNLD are trivial, and equivalent to taking all \(g_{j}=1\). If we also consider lattices that are not projection regular, then we can find constructions that are NNLD and do not only consist of points on the main diagonal of the unit cube \([0,1)^{d}\). See Theorem 3. Now we look at \((t,m,d)\)-nets [7, 26]. The most widely used \((t,m,d)\)-nets are those of Sobol' in base \(b=2\). Sobol' points require one to choose parameters known as direction numbers, with those of [20] being especially prominent. By considering the point \(\boldsymbol{x}_{*}=\mathbf{1}(1-1/n)\), we often find that such Sobol' points cannot be NNLD. The first and third components of \(\boldsymbol{x}_{i}\in[0,1]^{d}\) for \(d\geqslant 3\) are projection regular but, for \(2\leqslant m\leqslant 20\) they fail to contain \((1-1/n,1-1/n)\). Therefore the projection of the Sobol' points onto those two dimensions fails to be NNLD and hence the \(d\) dimensional point set is not NNLD either. Like lattice point sets, digital \((t,m,d)\)-nets in base \(b\geqslant 2\) have a group structure; this time it is based on the digitwise addition modulo \(b\), which is performed in each component separately. Using this group structure and Proposition 1, we obtain a corollary with a similar flavor to Corollary 1, although with less dramatic consequences. **Corollary 2**.: _Let \(d,m\geqslant 1\) and \(b\geqslant 2\). Let_ \[\alpha_{b,m}=\sum_{\nu=1}^{m}b^{-\nu}=\frac{1-b^{-m}}{b-1}.\] _On the one hand, any digital \((t,m,d)\)-net in base \(b\geqslant 2\) that is projection regular and has the NNLD property contains the cyclic subgroup_ \[\{\mathbf{0},\alpha_{b,m}\mathbf{1},2\alpha_{b,m}\mathbf{1},\ldots,(b-1)\alpha _{b,m}\mathbf{1}\},\] _which consists of \(b\) points on the main diagonal._ _On the other hand, any \((t,m,d)\)-net in base \(b\geqslant 2\) has at most \(b^{t+\lceil\frac{m-t}{d}\rceil}\) points on the main diagonal._ Proof.: Let \(n=b^{m}\), and let \(P_{n}\) be a projection regular digital \((t,m,d)\)-net, consisting of \(n\) points in \([0,1)^{d}\), that has NNLD. Due to Proposition 1, \(P_{n}\) has to contain the point \(\mathbf{x}_{*}=\frac{n-1}{n}\mathbf{1}=(b-1)\alpha_{b,m}\mathbf{1}\). Using the specific commutative group addition of \(P_{n}\), we see that adding up \(\mathbf{x}_{*}\)\(k\) times yields \[k\mathbf{x}_{*}=(b-k)\alpha_{b,m}\mathbf{1}\in P_{n}\] for \(k=0,1,\ldots,b-1\). Now let \(P_{n}\) be an arbitrary \((t,m,d)\)-net in base \(b\). Put \(k:=\lceil\frac{m-t}{d}\rceil\). We may partition the half-open unit cube \([0,1)^{d}\) into \(b^{m-t}\) half-open axis-parallel boxes (of the same shape and of volume \(b^{t-m}\)) with side length \(b^{-k}\) and, possibly, side length \(b^{1-k}\). Due to the net property, each of these boxes contains exactly \(b^{t}\) points of \(P_{n}\), and at most \(b^{k}\) of the boxes have a non-trivial intersection with the main diagonal. The next result shows that Cartesian products of finitely many NNLD (or NPLD) point sets are also NNLD (respectively NPLD). 
**Lemma 1**.: _For positive integers \(d_{1}\), \(d_{2}\), \(n_{1}\) and \(n_{2}\), let \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n_{1}-1}\in[0,1]^{d_{1}}\) and \(\tilde{\mathbf{x}}_{0},\ldots,\tilde{\mathbf{x}}_{n_{2}-1}\in[0,1]^{d_{2}}\) be NNLD point sets. Let \(\mathbf{z}_{0},\ldots,\mathbf{z}_{N-1}\in[0,1]^{d_{1}+d_{2}}\) for \(N=n_{1}n_{2}\) be the Cartesian product of those two point sets. Then \(\mathbf{z}_{0},\ldots,\mathbf{z}_{N-1}\) are NNLD points. If both \(\mathbf{x}_{i}\) and \(\tilde{\mathbf{x}}_{i}\) are NPLD then \(\mathbf{z}_{i}\) are also NPLD._ Proof.: For any \(\mathbf{z}\in[0,1]^{d_{1}+d_{2}}\) define \(\mathbf{x}=\mathbf{z}_{[d_{1}]}\) and \(\tilde{\mathbf{x}}=\mathbf{z}_{-[d_{1}]}\). Let \(\mathrm{VOL}_{1}\), \(\mathrm{VOL}_{2}\) and \(\mathrm{VOL}\) denote Lebesgue measure on \([0,1]^{d_{1}}\), \([0,1]^{d_{2}}\) and \([0,1]^{d}\) for \(d=d_{1}+d_{2}\), respectively. Let \(\widehat{\mathrm{VOL}}_{1}\), \(\widehat{\mathrm{VOL}}_{2}\) and \(\widehat{\mathrm{VOL}}\) be empirical measures for \(\mathbf{x}_{i}\), \(\tilde{\mathbf{x}}_{i}\) and \(\mathbf{z}_{i}\) respectively. If \(\mathbf{x}_{i}\) and \(\tilde{\mathbf{x}}_{i}\) are NNLD then \[\widehat{\mathrm{VOL}}([\mathbf{0}_{d},\mathbf{z})) =\widehat{\mathrm{VOL}}_{1}([\mathbf{0}_{d_{1}},\mathbf{x}))\widehat{ \mathrm{VOL}}_{2}([\mathbf{0}_{d_{2}},\tilde{\mathbf{x}}))\] \[\geqslant\mathrm{VOL}_{1}([\mathbf{0}_{d_{1}},\mathbf{x}))\mathrm{VOL}_{2 }([\mathbf{0}_{d_{2}},\tilde{\mathbf{x}}))\] \[=\mathrm{VOL}([\mathbf{0}_{d},\mathbf{z})).\] Therefore \(\delta(\mathbf{z})\geqslant 0\) and \(\mathbf{z}_{i}\) are NNLD. The same argument, with the inequalities reversed, applies to the NPLD case. ## 4 Comparison to Koksma-Hlawka bounds The Koksma-Hlawka inequality is \[|\hat{\mu}-\mu|\leqslant D_{n}^{*}V_{\mathrm{HK}}(f) \tag{15}\] where \(D_{n}^{*}\) denotes again the star discrepancy and \(V_{\mathrm{HK}}(f)\) is the total variation of \(f\) in the sense of Hardy and Krause. We can be sure that \[\hat{\mu}-D_{n}^{*}V_{\mathrm{HK}}(f)\leqslant\mu\leqslant\hat{\mu}+D_{n}^{* }V_{\mathrm{HK}}(f)\] but the endpoints of this interval are in general far harder to compute than \(\mu\) is. One difficulty is that \(V_{\mathrm{HK}}(f)\) is a sum of \(2^{d}-1\) Vitali variations (see [28]) that in general are harder to compute than \(f\) itself is. However when \(\tilde{f}\), defined by \(\tilde{f}(\boldsymbol{x})=f(\boldsymbol{1}-\boldsymbol{x})\) for every \(\boldsymbol{x}\), is completely monotone then it is useful to work with an alternative definition of total variation \(V_{\mathrm{HK}\boldsymbol{0}}\) (see [1]). For this definition, \(V_{\mathrm{HK}\boldsymbol{0}}(\tilde{f})=V_{\mathrm{HK}}(f)\), and \(V_{\mathrm{HK}\boldsymbol{0}}(\tilde{f})=\tilde{f}(\boldsymbol{1})-\tilde{f}( \boldsymbol{0})=f(\boldsymbol{0})-f(\boldsymbol{1})\), see [1]. With an expression for total variation we still need a value or a bound for \(D_{n}^{*}\). The computation of \(D_{n}^{*}\) is expensive, but in some instances it might be worth doing, and for a given set of points we could pre-compute \(D_{n}^{*}\). It is possible to compute \(D_{n}^{*}\) exactly at cost \(O(n^{d/2+1})\) for fixed \(d\) as \(n\to\infty\), see [8]. The cost to compute \(D_{n}^{*}\) is exponential in the dimension \(d\). If \(n=d\to\infty\) together then computation of \(D_{n}^{*}\) is NP-complete, see [16, 15]. 
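For a small point set in low dimension, however, \(D_{n}^{*}\) can be computed directly: only boxes whose upper corners lie on the grid built from the point coordinates together with \(1\) need to be checked. The sketch below is our own illustration of that brute-force enumeration (it is not the \(O(n^{d/2+1})\) algorithm of [8]) and makes the exponential cost in \(d\) explicit.

```python
import itertools
import numpy as np

def star_discrepancy(pts):
    """Exact D_n^* of n points in [0,1]^d, checking boxes anchored at grid
    corners built from the point coordinates and 1; cost grows like n^d."""
    pts = np.asarray(pts, dtype=float)
    n, d = pts.shape
    grids = [np.unique(np.concatenate([pts[:, j], [1.0]])) for j in range(d)]
    worst = 0.0
    for corner in itertools.product(*grids):
        y = np.array(corner)
        vol = float(np.prod(y))
        open_count = int(np.all(pts < y, axis=1).sum())     # points in [0, y)
        closed_count = int(np.all(pts <= y, axis=1).sum())  # points in [0, y]
        worst = max(worst, vol - open_count / n, closed_count / n - vol)
    return worst

# Example call: n equispaced points on the diagonal of [0,1]^2.
n = 8
diag = np.array([[i / n, i / n] for i in range(n)])
print(star_discrepancy(diag))
```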
Nevertheless, there are algorithms known that provide either upper and lower bounds for \(D_{n}^{*}\) in moderate dimension, see [34], or lower bounds for \(D_{n}^{*}\) even in high dimensions, see [17]. For these and other facts about computing \(D_{n}^{*}\), cf. [9]. Then, if we have computed a value \(\varepsilon\geqslant D_{n}^{*}(P_{n})\) we then get an interval \[\hat{\mu}\pm\varepsilon(f(\boldsymbol{0})-f(\boldsymbol{1}))\] that is sure to contain \(\mu\), when \(f(\boldsymbol{1}-\boldsymbol{x})\) is completely monotone, whether or not \(P_{n}\) is NNLD. ## 5 Digital net constructions The NNLD points of [3, 12] are two dimensional Hammersley points which are a special kind of digital nets [7] in which the generator matrices are permutation matrices. In this section we show that digital nets constructed with permutation matrices can be used to get NNLD points with \(n=b^{m}\) points for any integer base \(b\geqslant 2\) in any dimension \(d\geqslant 1\). This generalizes the result of [3, 12] which holds for \(d=2\). We obtain this generalization by a probabilistic argument using the notion of associated random variables from reliability theory [11]. We also show that there is a limit to how good digital nets can be when their generator matrices are permutation matrices. ### Permutation digital nets Here we describe how permutation digital nets are constructed. We won't need the more general definition of digital nets until we study them more closely in Section 5.3. For a dimension \(d\geqslant 1\), an integer base \(b\geqslant 2\) and an integer \(m\geqslant 1\) we choose \(d\) matrices \(C^{(j)}\in\mathbb{Z}_{b}^{m\times m}\). For \(n=b^{m}\) and indices \(i=0,1,\ldots,n-1\), write \(i=\sum_{k=1}^{m}a_{i,k}b^{k-1}\) for \(a_{i,k}\in\mathbb{Z}_{b}\) and put \(\vec{i}=(a_{i,1},\ldots,a_{i,k})^{\mathsf{T}}\). Now let \[\vec{x}_{ij}=C^{(j)}\vec{i}\mod b\] have components \(\vec{x}_{ij}(k)\in\mathbb{Z}_{b}\). Then \(\boldsymbol{x}_{i}\) has j'th component \[x_{ij}=\sum_{k=1}^{m}\vec{x}_{ij}(k)b^{-k}\in[0,1).\] Here we use arithmetic modulo \(b\) to define the digital nets. It is customary to only use arithmetic modulo \(b\) when \(b\) is a prime number and to use a generalization based on finite fields when \(b=p^{r}\) for a prime number \(p\) and some power \(r\geqslant 2\). Our proofs of NNLD properties exploit a monotonicity of integers modulo \(b\) whether or not \(b\) is a prime. As an illustration, the first 16 Hammersley points in base \(b\geqslant 2\) for \(d=2\) are constructed this way with \[C^{(1)}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\quad\text{and}\quad C^{(2)}=\begin{pmatrix}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix}. \tag{16}\] Hammersley points for \(d=2\) and general \(m\geqslant 1\) are constructed similarly, with \(C^{(1)}=I_{m}\) and \(C^{(2)}\) a'reversed' identity matrix as in (16). The Hammersley points for \(d\geqslant 3\) are constructed using different bases for different components [18]. ### Associated random variables The settings with \(d=1\) or with \(n=1\) are trivial so we work with \(d\geqslant 2\) and \(n>1\). The key ingredient in constructing a short proof of the NNLD property is the notion of associated random variables [11] that originated in reliability theory. 
**Definition 4**.: Random variables \(T_{1},\ldots,T_{m}\) are associated if, for \(\boldsymbol{T}=(T_{1},\ldots,T_{m})\) we have \(\operatorname{Cov}(g_{1}(\boldsymbol{T}),g_{2}(\boldsymbol{T}))\geqslant 0\) for all pairs of functions \(g_{1},g_{2}:\mathbb{R}^{m}\to\mathbb{R}\) that are nondecreasing in each argument individually and for which \(\mathbb{E}(g_{1}(\boldsymbol{T}))\), \(\mathbb{E}(g_{2}(\boldsymbol{T}))\) and \(\mathbb{E}(g_{1}(\boldsymbol{T})g_{2}(\boldsymbol{T}))\) all exist. The next theorem uses points that are a digital net with permutation matrix generators, followed by shifting every component of each point to the right by a distance \(1/n\). It shows that they oversample sets of the form \((\boldsymbol{z},\boldsymbol{1}]\). **Theorem 2**.: _For integers \(m\geqslant 1\), \(b\geqslant 2\) and \(d\geqslant 2\), let \(\pi_{1},\ldots,\pi_{d}\) be permutations of \(\{1,\ldots,m\}\), not necessarily distinct. For \(n=b^{m}\) and \(i=0,\ldots,n-1\) and \(k=1,\ldots,m\) define \(a_{i}(k)\in\mathbb{Z}_{b}\) via \(i=\sum_{k=1}^{m}a_{i}(k)b^{k-1}\). If \(\boldsymbol{x}_{i}\in(0,1]^{d}\) has components_ \[x_{ij}=\frac{1}{n}+\sum_{k=1}^{m}b^{-k}a_{i}(\pi_{j}(k)),\quad j=1,\ldots,d \tag{17}\] _then for any \(\boldsymbol{z}\in[0,1]^{d}\)_ \[\frac{1}{n}\sum_{i=0}^{n-1}\prod_{j=1}^{d}\mathbbm{1}\left\{x_{ij}>1-z_{j} \right\}\geqslant\prod_{j=1}^{d}z_{j}. \tag{18}\] Proof.: We define a random index \(i\sim\mathbb{U}\{0,1,\ldots,n-1\}\) which then implies that for each index \(j\) the digits \(a_{i}(\pi_{j}(k))\sim\mathbb{U}(\mathbb{Z}_{b})\) independently for \(k=1,\ldots,m\). For each \(j=1,\ldots,d\) we have \(x_{ij}\sim\mathbb{U}\{1/n,2/n,\ldots,1\}\). Therefore for any \(z_{j}\in[0,1]\), \(\Pr(x_{ij}>1-z_{j})\geqslant z_{j}\). Let \(T_{j}\) be the value of the random variable \(x_{ij}\) where \(i\) is random and \(j\) is not. Letting \(\gamma_{j}\) be the inverse of the permutation \(\pi_{j}\), we may write \[T_{j}=x_{ij}=\frac{1}{n}+\sum_{k=1}^{m}b^{-\gamma_{j}(k)}a_{i}(k).\] Independent random variables \(a_{i}(k)\) are associated by Theorem 2.1 of [11]. Then \(T_{1},\ldots,T_{d}\) are associated by result P4 of [11] because they are nondecreasing functions of \(a_{i}(1),\ldots,a_{i}(m)\). For \(d=2\), let \(g_{1}(\mathbf{T})=\mathbb{1}\{x_{i1}>1-z_{1}\}\) and \(g_{2}(\mathbf{T})=\mathbb{1}\{x_{i2}>1-z_{2}\}\). These are nondecreasing functions of associated random variables and so by the definition of associated random variables \[\Pr(x_{i1}>1-z_{1},x_{i2}>1-z_{2})\geqslant\Pr(x_{i1}>1-z_{1})\Pr(x_{i2}>1-z_ {2}).\] Next, for \(2<r\leqslant d\) let \(g_{1}(\mathbf{T})=\prod_{j=1}^{r-1}\mathbb{1}\{x_{ij}>1-z_{j}\}\) and \(g_{2}(\mathbf{T})=\mathbb{1}\{x_{ir}>1-z_{r}\}\). Using induction we conclude that with our random \(i\), \[\Pr(x_{ij}>1-z_{j},\ j=1,\ldots,d)\geqslant\prod_{j=1}^{d}\Pr(x_{ij}>1-z_{j}) \geqslant\prod_{j=1}^{d}z_{j}\] which is equivalent to (18). **Corollary 3**.: _For integer \(b\geqslant 2\) and dimension \(d\geqslant 2\) let \(\tilde{\mathbf{x}}_{0},\ldots,\tilde{\mathbf{x}}_{n-1}\in[0,1]^{d}\) be points of a digital net constructed in base \(b\) using permutation matrices as generators. Then the points \(\mathbf{x}_{0},\ldots,\mathbf{x}_{n-1}\in[0,1]^{d}\) with \(x_{ij}=1-(1/n+\tilde{x}_{ij})\) are NNLD._ Proof.: Pick \(\mathbf{z}\in[0,1]^{d}\). 
Now \(\mathbb{1}\{x_{ij}<z_{j}\}=\mathbb{1}\{\tilde{x}_{ij}+1/n>1-z_{j}\}\) and so \[\widehat{\mathrm{VOL}}([\mathbf{0},\mathbf{z}))=\frac{1}{n}\sum_{i=0}^{n-1}\prod_{j=1} ^{d}\mathbb{1}\{x_{ij}<z_{j}\}=\frac{1}{n}\sum_{i=0}^{n-1}\prod_{j=1}^{d} \mathbb{1}\{\tilde{x}_{ij}+1/n>1-z_{j}\}\geqslant\prod_{j=1}^{d}z_{j}\] by Theorem 2. For \(d=2\) it was possible to turn an NNLD point set into an NPLD point set in (6) which includes a reflection \(x_{i,2}=1-\tilde{x}_{i,2}\). If we were to reflect two or more components of an NNLD point set, then those components would take on a positive upper orthant dependence, which does not generally provide the negative lower orthant dependence we want for NPLD points. For projection regular NNLD points the reflection of \(s\geqslant 2\) components will contain \(\mathbf{1}_{s}/n\) and there will be a box \(B=[\mathbf{0}_{s},\mathbf{1}_{s}(1/n+\epsilon))\) with \(\delta(B)=1/n-(1/n+\epsilon)^{s}>0\) for small enough \(\epsilon>0\). ### Quality of permutation digital nets It is clear on elementary grounds that a permutation digital net with two identical permutations among \(\pi_{1},\ldots,\pi_{d}\) would be very bad. The resulting points would satisfy \(x_{ij}=x_{ij^{\prime}}\) for \(0\leqslant i<n\) and some \(1\leqslant j<j^{\prime}\leqslant d\). Here we show that our restriction to permutation digital nets rules out the best digital nets when \(d\geqslant 3\). We begin with the definitions of these nets. **Definition 5**.: For integers \(d\geqslant 1\), \(b\geqslant 2\), and vectors \(\boldsymbol{k},\boldsymbol{a}\in\mathbb{N}^{d}\) with \(a_{j}\in\mathbb{Z}_{b^{k_{j}}}\) for \(j=1,\ldots,d\) the Cartesian product \[\mathcal{E}(\boldsymbol{k},\boldsymbol{a})=\prod_{j=1}^{d}\Bigl{[}\frac{a_{j} }{b^{k_{j}}},\frac{a_{j}+1}{b^{k_{j}}}\Bigr{)}\] is an elementary interval in base \(b\). **Definition 6**.: For integers \(b\geqslant 2\), \(d\geqslant 1\) and \(0\leqslant t\leqslant m\), the \(n\) points \(\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{n-1}\) are a \((t,m,d)\)-net in base \(b\) if \[\widehat{\mathrm{VOL}}(\mathcal{E}(\boldsymbol{k},\boldsymbol{a}))=\mathrm{VOL }(\mathcal{E}(\boldsymbol{k},\boldsymbol{a}))\] holds for all elementary intervals in base \(b\) for which \(\sum_{j=1}^{d}k_{j}\leqslant m-t\). Digital nets are \((t,m,d)\)-nets. Other things being equal, smaller values of \(t\) denote better equidistribution of the points \(\boldsymbol{x}_{i}\) which translates into a lower bound on \(D_{n}^{*}\) and hence a smaller upper bound in the Koksma-Hlawka inequality. From Theorem 4.10 of [26] \[D_{n}^{*}=O\Bigl{(}\frac{b^{t}\log(n)^{d-1}}{n}\Bigr{)}+O\Bigl{(}\frac{\log(n) ^{d-2}}{n}\Bigr{)} \tag{19}\] where the implied constants depend only on \(d\) and \(b\). The powers of \(\log(n)\) are not negligible but they are also not seen in examples of integration errors [30]. The quality parameter of a permutation digital net can be very bad. For \(d=2\), taking the Hammersley construction yields \(t=0\) which is the best possible value. Here we show that for \(d\geqslant 3\), the best available values of \(t\) are far from optimal. The following definition and result are based on [24, Sect. 2.3]. **Construction 1** (Digital Construction of \((t,m,d)\)-Nets).: _For prime \(b\), and \(C^{(1)},\ldots,C^{(d)}\in(\mathbb{F}_{b})^{m\times m}\), let \(\mathcal{C}=\{C^{(1)},\ldots,C^{(d)}\}\). 
For \(h\in\mathbb{F}_{b}^{m}\) define \(p(h)\in[0,1)^{d}\) componentwise by its \(b\)-adic digit expansion_ \[p(h)_{j}=\delta_{1}^{(j)}(h)b^{-1}+\delta_{2}^{(j)}(h)b^{-2}+\cdots+\delta_{m }^{(j)}(h)b^{-m}\in[0,1),\ \ \ \ j=1,\ldots,d,\] _where \(\delta^{(j)}(h)=(\delta_{1}^{(j)}(h),\ldots,\delta_{m}^{(j)}(h))\) is simply the vector \(C^{(j)}h\in\mathbb{F}_{b}^{m}\). We define the point set_ \[P(\mathcal{C})=(p(h))_{h\in\mathbb{F}_{b}^{m}}. \tag{20}\] _Clearly, \(|P(\mathcal{C})|=b^{m}\)._ _To assess the quality of \(P(\mathcal{C})\), we define the quality criterion \(\rho(\mathcal{C})\): For \(\mathbf{m}=(m_{1},m_{2},\ldots,m_{d})\in\{0,1,\ldots,m\}^{d}\) with \(|\mathbf{m}|=\sum_{j=1}^{d}m_{j}\) let_ \[\mathcal{C}^{(\mathbf{m})}=\begin{pmatrix}C^{(1)}(1{:}m_{1},\cdot)\\ C^{(2)}(1{:}m_{2},\cdot)\\ \vdots\\ C^{(d)}(1{:}m_{d},\cdot)\end{pmatrix}\in\mathbb{F}_{b}^{|\mathbf{m}|\times d}\] _where \(C^{(j)}(1{:}m_{j},\cdot)\in\mathbb{F}_{b}^{m_{j}\times d}\) represents the first \(m_{j}\) rows of \(C^{(j)}\). Now \(\rho(\mathcal{C})\) is the maximum number \(\rho\in\{0,1,\ldots,m\}\) such that for all \(\mathbf{m}\in\{0,1,\ldots,m\}^{d}\) with \(|\mathbf{m}|=\rho\) we have \(\operatorname{rank}(\mathcal{C}^{(\mathbf{m})})=\rho\)._ **Proposition 2**.: _Let \(b,m,\mathcal{C}\), and \(P(\mathcal{C})\) be as in Construction 1. Then \(P(\mathcal{C})\) is a \((t,m,d)\)-net for \(t=m-\rho(\mathcal{C})\)._ **Observation 3**.: _The proposition shows that the best possible \(t\)-value \(t(\mathcal{C})\) of \(P(\mathcal{C})\) is at most \(m-\rho(\mathcal{C})\). But similar arguments as in the corresponding proof of [24, Proposition 2.7] show that actually_ \[t(\mathcal{C})=m-\rho(\mathcal{C}).\] **Proposition 3**.: _Let \(V:=\{v_{1},\ldots,v_{m}\}\) be a set of linearly independent vectors in \(\mathbb{F}_{b}^{m}\). Let \(m=\ell d+r\), where \(\ell\in\mathbb{N}_{0}\) and \(0\leqslant r<d\). If the rows \(C_{k}^{(j)}\), \(k=1,\ldots,m\), of the matrices \(C^{(j)}\), \(j=1,\ldots,d\), are all contained in \(V\), then \(\rho(\mathcal{C})\leqslant 2\lfloor m/d\rfloor+1\). Therefore, the smallest \(t\)-value \(t(\mathcal{C})\) of \(P(\mathcal{C})\) satisfies_ \[t(\mathcal{C})\geqslant(d-2)\lfloor m/d\rfloor+r-1.\] Proof.: Consider the \(m\) row vectors \[C_{1}^{(1)},C_{1}^{(2)},\ldots,C_{1}^{(d)},\quad C_{2}^{(1)},C_{2}^{(2)}, \ldots,C_{2}^{(d)},\quad\ldots\quad,C_{\ell+1}^{(1)},C_{\ell+1}^{(2)},\ldots,C_ {\ell+1}^{(r)}.\] _Case 1_: Two of these row vectors are equal. Assume these rows are \(C_{k}^{(j)}\) and \(C_{k^{\prime}}^{(j^{\prime})}\). If \(j=j^{\prime}\), then we consider the matrix \(C:=\mathcal{C}^{(\mathbf{m})}\) with \(m_{j}=\max\{k,k^{\prime}\}\) and \(m_{\nu}=0\) for all \(\nu\neq j\). Obviously, \(\operatorname{rank}(C)\leqslant\max\{k,k^{\prime}\}-1\). Hence it follows that \(\rho(\mathcal{C})\leqslant\max\{k,k^{\prime}\}-1\leqslant\lceil m/d\rceil-1\). If \(j\neq j^{\prime}\), then we consider the matrix \(C:=\mathcal{C}^{(\mathbf{m})}\) with \(m_{j}=k\), \(m_{j^{\prime}}=k^{\prime}\), and \(m_{\nu}=0\) for all \(\nu\notin\{j,j^{\prime}\}\). Obviously, \(\operatorname{rank}(C)\leqslant k+k^{\prime}-1\). Hence it follows that \(\rho(\mathcal{C})\leqslant k+k^{\prime}-1\leqslant 2\lceil m/d\rceil-1\). _Case 2_: All of these row vectors are different. Consider \(C_{\ell+1}^{(d)}\). Then there exist \(1\leqslant j<d\) and \(1\leqslant h\leqslant\ell+1\) or \(j=d\) and \(1\leqslant h\leqslant\ell\) such that \(C_{\ell+1}^{(d)}=C_{h}^{(j)}\). 
Now we argue similarly as in case 1: If \(j=d\), then it is easy to see that \(\rho(\mathcal{C})\leqslant\ell=\lfloor m/d\rfloor\). If \(j\neq j^{\prime}\), then \(\rho(\mathcal{C})\leqslant h+\ell\leqslant 2\ell+1\leqslant 2\lfloor m/d\rfloor+1\). In any case, we have shown that \(\rho(\mathcal{C})\leqslant 2\lfloor m/d\rfloor+1\). **Corollary 4**.: _Let \(m=\ell d+r\), where \(\ell\in\mathbb{N}\) and \(0\leqslant r<d\). If \(C^{(1)},\ldots,C^{(d)}\in\mathbb{F}_{b}^{m\times m}\) are all permutation matrices, then the smallest \(t\)-value \(t(\mathcal{C})\) of \(P(\mathcal{C})\) satisfies_ \[t(\mathcal{C})\geqslant(d-2)\lfloor m/d\rfloor+r-1.\] Proof.: This follows directly from Proposition 3, since the rows of the matrices \(C^{(1)},\ldots,C^{(d)}\) are all in \(\{e_{1},\ldots,e_{m}\}\), where \(e_{i}\) denotes the \(i\)-th standard unit vector of \(\mathbb{F}_{b}^{m}\). Let us represent the permutation matrix where row \(k\) has a one in column \(\pi(k)\) as simply the column vector with entries \(\pi(k)\). Then we can represent our permutation nets with an \(m\times d\) matrix \(\Pi\) with \(j\)'th column \(\pi_{j}\). For example the Hammersley points with generator matrices \(I_{m}\) and reversed \(I_{m}\) are represented this way by \[\Pi=\begin{pmatrix}1&m\\ 2&m-1\\ \vdots&\vdots\\ m&1\end{pmatrix}. \tag{21}\] For \(d=3\) we want \(\Pi\in\{1,\ldots,m\}^{m\times 3}\) with the largest possible value of \[\rho=\min\bigl{\{}k+k^{\prime}\mid\Pi_{k,j}=\Pi_{k^{\prime},j^{\prime}},1 \leqslant j<j^{\prime}\leqslant 3\bigr{\}}-1.\] Then we get quality parameter \(t=m-\rho\). If we simply adjoin a third column to \(\Pi\) in (21) the best \(\rho\) we can get is \(m/2\) if \(m\) is even and \((m+1)/2\) if \(m\) is odd. These lead to \(t\geqslant m/2\) if \(m\) is even and \(t\geqslant(m-1)/2\) if \(m\) is odd, which is much worse than the bound in Corollary 4. For \(t=m/2\) the first term in (19) is \(O(b^{m/2}\log(n)^{2}/n)=O(\log(n)^{2}/\sqrt{n})\) because \(b=n^{1/m}\). If \(m=3\ell\), then we can choose the first \(\ell\) rows of \(\Pi\) to be \[\begin{pmatrix}1&2&3\\ 4&5&6\\ \vdots&\vdots&\vdots\\ 3\ell-2&3\ell-1&3\ell\end{pmatrix}.\] Let us label these first \(\ell\) rows of \(\Pi\) by \(\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{\ell}\in\mathbb{N}^{3}\). Now, for \(\mathbf{r}=(a,b,c)\) let \(\mathbf{r}^{\prime}=(b,c,a)\) and \(\mathbf{r}^{\prime\prime}=(c,a,b)\) be one and two rotations of the elements of \(\mathbf{r}\) to the left with wraparound. By taking the rows of \(\Pi\) in this order \[\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{\ell},\ \ \mathbf{r}^{\prime}_{\ell},\mathbf{r}^{ \prime}_{\ell-1},\ldots,\mathbf{r}^{\prime}_{1},\ \ \mathbf{r}^{\prime\prime}_{\ell},\mathbf{r}^{\prime\prime}_{\ell-1},\ldots,\mathbf{r}^{ \prime\prime}_{1}\] we get \(\rho=2\ell\) and hence \(t=m/3\). This is very close to the bound \(\lfloor m/d\rfloor+0-1=m/3-1\) from Corollary 4. We prefer the ordering \[\mathbf{r}_{1},\mathbf{r}_{2},\ldots,\mathbf{r}_{\ell},\ \ \mathbf{r}^{\prime}_{\ell},\mathbf{r}^{ \prime\prime}_{\ell},\ \ \mathbf{r}^{\prime}_{\ell-1},\mathbf{r}^{\prime\prime}_{\ell-1},\ \ \mathbf{r}^{\prime}_{\ell-2},\mathbf{r}^{\prime\prime}_{\ell-2},\ \ \ \ldots\ \ \mathbf{r}^{\prime}_{2},\mathbf{r}^{\prime\prime}_{2},\ \ \mathbf{r}^{ \prime}_{1},\mathbf{r}^{\prime\prime}_{1}\] because while it attains the same value of \(t\) it has fewer pairs of columns for which \(k+k^{\prime}=2\ell+1\). 
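These orderings are easy to generate and check mechanically. The following sketch is our own illustration: it builds the preferred ordering for \(d=3\) and \(m=3\ell\) and evaluates \(\rho\) from the displayed formula, hence \(t=m-\rho\).

```python
def preferred_pi(ell):
    """Rows of the m x 3 matrix Pi for m = 3*ell: r_1,...,r_ell followed by
    r'_ell, r''_ell, r'_{ell-1}, r''_{ell-1}, ..., r'_1, r''_1."""
    base = [(3 * i + 1, 3 * i + 2, 3 * i + 3) for i in range(ell)]
    rot1 = lambda r: (r[1], r[2], r[0])      # one left rotation, r'
    rot2 = lambda r: (r[2], r[0], r[1])      # two left rotations, r''
    rows = list(base)
    for r in reversed(base):                 # ell, ell-1, ..., 1
        rows.extend([rot1(r), rot2(r)])
    return rows

def rho(rows):
    """rho = min{k + k' : Pi[k][j] == Pi[k'][j'] for some j < j'} - 1, with 1-based k, k'."""
    m, best = len(rows), None
    for k in range(m):
        for kp in range(m):
            for j in range(3):
                for jp in range(j + 1, 3):
                    if rows[k][j] == rows[kp][jp]:
                        s = (k + 1) + (kp + 1)
                        best = s if best is None else min(best, s)
    return (best - 1) if best is not None else m

ell = 3                                      # m = 9
rows = preferred_pi(ell)
print(3 * ell - rho(rows))                   # t = m - rho = ell = m/3 for this ordering
```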
With \(t=m/3\) for \(d=3\) the first term in (19) is \(O(b^{t}\log(n)^{2}/n)=O(n^{-2/3}\log(n)^{2})\). Using the same method for \(d=4\) and \(m=4\ell\) we can get \(\rho=2\ell=m/2\), implying that \(t=m/2\), and yielding a rate of \(O(b^{t}\log(n)^{3}/n)=O(n^{-1/2}\log(n)^{3})\). This result for \(d=4\) matches the rate for plain MC apart from the power of \(\log(n)\). So the \(100\%\) error bounds available from NNLD sampling come with a logarithmic accuracy penalty in comparison to plain MC. A second choice for \(d=4\) is to use a Cartesian product of two Hammersley point sets with \(\sqrt{n}\) points each. The error of such a Cartesian product would ordinarily be the same as that of the individual Hammersley rules in two dimensions with their reduced sample sizes. That is \(O(n^{-1/2}\log(n))\) which is then a better logarithmic factor than the 4 dimensional permutation nets attain. For \(d=3\) we could also use a Cartesian product of Hammersley points with \(n=b^{2}\) points and a one dimensional grid \(\{0,1/n,\ldots,1-1/n\}\). This then uses \(N=n^{2}\) points and we expect an error of \(O(\log(n)/n)=O(\log(N)/N^{1/2})\) which is a worse rate than we can get with the permutation net in \([0,1]^{3}\). ### Other generator matrices Permutation matrices are not the only generator matrices that can produce points with the NNLD property. For digital nets in base 2, we know from Proposition 1 that if \(C^{(1)}=I_{m}\) then we must have \(C^{(j)}\mathbf{1}_{m}=\mathbf{1}_{m}\bmod 2\). This in turn implies that every row of \(C^{(j)}\) must have an odd number of 1s in it. A numerical search shows there are 221 choice of nonsingular \(C^{(2)}\) when \(m=4\) and \(C^{(1)}=I_{4}\). Below are some examples: \[C^{(2)}=\begin{pmatrix}1&0&0&0\\ 1&1&0&1\\ 0&1&1&1\\ 1&1&1&0\end{pmatrix}\quad\text{or}\quad\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 1&0&1&1\\ 1&1&1&0\end{pmatrix}\quad\text{or}\quad\begin{pmatrix}0&0&1&0\\ 1&0&0&0\\ 1&1&0&1\\ 0&1&0&0\end{pmatrix}.\] Nevertheless, it is hard to find an example where non-permutation matrices perform better than permutation matrices with respect to the \(t\)-value. When \(d=3\), one can verify, either by lengthy reasoning or brute-force enumeration, that NNLD digital nets constructed by non-permutation matrices cannot attain a better t-value than those constructed by permutation matrices for \(m\leqslant 7\) and \(b=2\). ## 6 Non-trivial Rank 1 lattices that are NNLD Here we consider special cases of rank-1 lattice rules that are suboptimal in terms of discrepancy, but produce NNLD points. While they can be defined in any dimension \(d\geqslant 2\) it is only for dimension 1 that they are projection regular. Therefore the conclusions from Proposition 1 and Corollary 1 do not hold for them when \(d>1\). **Theorem 3**.: _For integers \(m\geqslant d\) and \(b\geqslant 2\) and \(0\leqslant i<n=b^{m}\), let_ \[\boldsymbol{x}_{i}=\Big{(}\frac{i}{n},\frac{ib}{n},\ldots,\frac{ib^{j-1}}{n}, \ldots,\frac{ib^{d-1}}{n}\Big{)}\mod 1.\] _Then points \(\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{n-1}\) are NNLD._ Before proving this theorem we note that these points are quite poor for integration; however, the structure of the points can be useful for showing good integration bounds in suitably weighted spaces, see [5]. There are only \(b^{d-j+1}\) unique values of \(x_{ij}\). Further, when \(|j-j^{\prime}|\) is small the points \((x_{ij},x_{ij^{\prime}})\) lie within at most \(b^{|j-j^{\prime}|}\) lines in \([0,1)^{2}\) and have a large discrepancy. 
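Whether a given finite point set is NNLD can be checked exhaustively, since the empirical measure of anchored boxes only changes at coordinates of the points: it suffices to test corners built from those coordinates together with \(1\). The sketch below is our own illustration of such a test (usable, for instance, in the kind of numerical search mentioned in Section 5.4), applied here to spot-check the lattice points of Theorem 3.

```python
import itertools
import numpy as np

def is_nnld(pts, tol=1e-12):
    """Check non-negative local discrepancy of finite pts in [0,1]^d by testing
    every box [0, y) whose corner y uses point coordinates or 1 in each axis."""
    pts = np.asarray(pts, dtype=float)
    n, d = pts.shape
    grids = [np.unique(np.concatenate([pts[:, j], [1.0]])) for j in range(d)]
    for corner in itertools.product(*grids):
        y = np.array(corner)
        inside = int(np.all(pts < y, axis=1).sum())   # strict: points in [0, y)
        if inside / n < np.prod(y) - tol:
            return False
    return True

# Spot check of Theorem 3: x_i = (i/n, i*b/n, ..., i*b^{d-1}/n) mod 1 with n = b^m.
b, m, d = 2, 4, 3
n = b ** m
lattice = np.array([[(i * b ** j / n) % 1.0 for j in range(d)] for i in range(n)])
print(is_nnld(lattice))    # True, as the theorem asserts
```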
Proof.: We write \(i=\sum_{k=1}^{m}a_{i}(k)b^{k-1}\) and then \[nx_{ij}=b^{j-1}\sum_{k=1}^{m}a_{i}(k)b^{k-1}\mod b^{m}=\sum_{k=1}^{m+1-j}a_{i} (k)b^{j+k-2}.\] For \(i\sim\mathbb{U}\{0,1,\ldots,n-1\}\) the digits \(a_{i}(1),\ldots,a_{i}(m)\) are independent \(\mathbb{U}(\mathbb{Z}_{b})\) random variables. Hence they are associated random variables which makes \(nx_{i1},\ldots,nx_{id}\) and hence \(x_{i1},\ldots,x_{id}\) into associated random variables. Finally, \(x_{ij}\) has the uniform distribution on \(\{0,1/n_{j},2/n_{j},\ldots,1-1/n_{j}\}\) where \(n_{j}=n/b^{j-1}\). This distribution is stochastically smaller than \(\mathbb{U}[0,1]\) and so \(\boldsymbol{x}_{i}\) are NNLD. The values \(x_{ij}\) for \(0\leqslant i<b^{m}\) in these lattices take \(n_{j}=b^{d-j+1}\) distinct values \(\ell/n_{j}\) for \(0\leqslant\ell<n_{j}\) with each of those values appearing \(n/n_{j}\) times. As such they constitute a left endpoint integration rule on \(n_{j}\) points and so for nonperiodic smooth integrands we anticipate an error rate of \(O(n_{j}^{-1})\). For this to be better than plain MC we require \(n_{j}\geqslant\sqrt{n}\) or \(j\leqslant m/2\). While a better rate is available for periodic integrands, those cannot be completely monotone unless they are constant. ## 7 Discussion and further references We find that it is possible to get computable bounds on some integrals by using points with a suitable bias property (non-negative local discrepancy (NNLD)) on integrands with a suitable monotonicity property (complete monotonicity). The method of associated random variables is useful for showing that a given point set is NNLD. There are several generalizations of multivariate monotonicity in [25]. They include the complete monotonicity discussed here as well as the more commonly considered monotonicity in each of the \(d\) inputs one at a time. The complexity of integrating coordinate-wise monotone functions has been studied by [27, 31]. Scrambled \((t,m,d)\)-nets have been shown to be negatively orthant dependent if and only if \(t=0\)[35]. Similarly, it was shown in [36] that randomly shifted and jittered (RSJ) rank-1 lattices based on a random generator are also negatively orthant dependent and that, in some sense, one cannot achieve this result by employing less randomness. Using the NLOD property of the distribution of these RQMC points, it follows from [23] that for functions which are monotone in each variable scrambled nets and RSJ rank-1 lattices cannot increase variance over plain Monte Carlo in any dimension \(d\). While complete monotonicity is a very special property, its applicability can be widened by the method of control variates. If \(h(\cdot)\) is completely monotone with known integral \(\theta\), we will in some settings be able to find \(\lambda_{+}>0\) for which \(f+\lambda_{+}h\) is a completely monotone function of \(\mathbf{x}\). Then by Theorem 1 we can compute an upper bound \(B_{+}\geqslant\mu+\lambda_{+}\theta\) and conclude that \(\mu\leqslant B_{+}-\lambda_{+}\theta\). Similarly a lower bound can be found by choosing \(\lambda_{-}\) such that \(\lambda_{-}h-f\) is a completely monotone function of \(\mathbf{x}\), using Theorem 1 to get an upper bound \(\lambda_{-}\theta-\mu\leqslant B_{-}\) and then concluding that \(\mu\geqslant\lambda_{-}\theta-B_{-}\). Details on how to choose \(h\) and find \(\lambda_{\pm}\) are beyond the scope of this article. 
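To illustrate the control variate idea mechanically, suppose suitable \(\lambda_{\pm}\) have already been found and \(\theta\) is known. The sketch below is our own illustration (choosing \(h\) and finding \(\lambda_{\pm}\) are, as noted, beyond the scope of the article); it simply applies Theorem 1 to \(f+\lambda_{+}h\) and to \(\lambda_{-}h-f\).

```python
import numpy as np

def bracket_with_control_variate(f, h, theta, lam_plus, lam_minus, nnld_pts):
    """Bounds for mu = int f over [0,1]^d, assuming f + lam_plus*h and
    lam_minus*h - f are completely monotone of the form (7), h has known
    integral theta, and nnld_pts has non-negative local discrepancy."""
    refl = 1.0 - np.asarray(nnld_pts, dtype=float)
    # Theorem 1(i) for f + lam_plus*h:  B_plus >= mu + lam_plus*theta.
    B_plus = float(np.mean([f(x) + lam_plus * h(x) for x in refl]))
    # Theorem 1(i) for lam_minus*h - f: B_minus >= lam_minus*theta - mu.
    B_minus = float(np.mean([lam_minus * h(x) - f(x) for x in refl]))
    return lam_minus * theta - B_minus, B_plus - lam_plus * theta   # (lower, upper)
```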
The customary way to quantify uncertainty in QMC is to use RQMC replicates with statistically derived asymptotic confidence intervals. For a recent thorough empirical evaluation of RQMC, see [22], who found the usual confidence intervals based on the central limit theorem to be even more reliable than sophisticated bootstrap methods. Here we have found an alternative computable non-asymptotic approach with 100% coverage, but so far it does not give very good accuracy for high dimensions. ## Acknowledgments We thank Josef Dick, David Krieg, Frances Kuo, Dirk Nuyens and Ian Sloan for discussions. Much of this work took place at the MATRIX Institute's location in Creswick Australia as part of their research program on 'Computational Mathematics for High-Dimensional Data in Statistical Learning', in February 2023, and the paper was finalized during the Dagstuhl Seminar 23351 'Algorithms and Complexity for Continuous Problems', in Schloss Dagstuhl, Wadern, Germany, in August 2023. We are grateful to MATRIX and to the Leibniz Center Schloss Dagstuhl. The contributions of ABO and ZP were supported by the U.S. National Science Foundation under grant DMS-2152780. Peter Kritzer is supported by the Austrian Science Fund (FWF) Project P34808. For the purpose of open access, the authors have applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission.
2302.14368
Enhanced Controllability of Diffusion Models via Feature Disentanglement and Realism-Enhanced Sampling Methods
As Diffusion Models have shown promising performance, a lot of efforts have been made to improve the controllability of Diffusion Models. However, how to train Diffusion Models to have the disentangled latent spaces and how to naturally incorporate the disentangled conditions during the sampling process have been underexplored. In this paper, we present a training framework for feature disentanglement of Diffusion Models (FDiff). We further propose two sampling methods that can boost the realism of our Diffusion Models and also enhance the controllability. Concisely, we train Diffusion Models conditioned on two latent features, a spatial content mask, and a flattened style embedding. We rely on the inductive bias of the denoising process of Diffusion Models to encode pose/layout information in the content feature and semantic/style information in the style feature. Regarding the sampling methods, we first generalize Composable Diffusion Models (GCDM) by breaking the conditional independence assumption to allow for some dependence between conditional inputs, which is shown to be effective in realistic generation in our experiments. Second, we propose timestep-dependent weight scheduling for content and style features to further improve the performance. We also observe better controllability of our proposed methods compared to existing methods in image manipulation and image translation.
Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Khuc, Krishna Kumar Singh, Jingwan Lu, David I. Inouye, Ajinkya Kale
2023-02-28T07:43:00Z
http://arxiv.org/abs/2302.14368v3
# Towards Enhanced Controllability of Diffusion Models ###### Abstract _Denoising Diffusion models have shown remarkable capabilities in generating realistic, high-quality and diverse images. However, the extent of controllability during generation is underexplored. Inspired by techniques based on GAN latent space for image manipulation, we train a diffusion model conditioned on two latent codes, a spatial content mask and a flattened style embedding. We rely on the inductive bias of the progressive denoising process of diffusion models to encode pose/layout information in the spatial structure mask and semantic/style information in the style code. We propose two generic sampling techniques for improving controllability. We extend composable diffusion models to allow for some dependence between conditional inputs, to improve the quality of generations while also providing control over the amount of guidance from each latent code and their joint distribution. We also propose timestep dependent weight scheduling for content and style latents to further improve the translations. We observe better controllability compared to existing methods and show that without explicit training objectives, diffusion models can be used for effective image manipulation and image translation._ ## 1 Introduction Diffusion Models [46, 18] (DM) have gained much attention due to their impressive performance in image generation [8, 41, 42] and likelihood estimation [38]. While many efforts have concentrated on improving image generation quality [38, 45, 53] and sampling speed [47, 28, 36], relatively less attention has focused on enhancing controllability of diffusion models. Improving editability and controllability in various other forms of generative models (e.g., GANs [14, 15, 50], VAE [27, 2] and Flow-based Models [10, 11]) has been one of the most prominent research topics in the past few years. GANs such as StyleGAN-v2 [22] have been shown to inherently learn smooth and regular latent spaces [15, 50] that enable meaningful edits and manipulations on a real or generated image. The enhanced controls are useful for many practical applications such as Image Synthesis [39], Domain Adaptation [20], Style Transfer [21, 32] and Interpretability [31] to name a few. Despite high quality and diverse image generations, it is less clear how to manipulate the latent space of diffusion models that is composed of a sequence of gradually denoised 2d samples. An alternative to using the inherent latent space of GANs for manipulation is to learn multiple external disentangled latent spaces to condition the generation [39, 21, 32, 29]. A common theme across such methods is to learn a structure/content code to capture the underlying structure (e.g., facial shape and pose in face images) and a texture/style code to capture global semantic information (e.g. visual appearance, color, hair style etc.). Similar approaches have been tried in diffusion models in [30, 40], however these techniques do not learn multiple controllable latent spaces. Other inference time editing techniques such as [16, 24, 33, 49, 34] either require computationally expensive optimization (of the conditional embeddings and/or the model) for each sample during inference or do not provide fine-grained controllability. Composable Diffusion Models [34] (CDM) proposes a way to compose multiple conditional inputs but assumes the inputs are independent, which may not always be true (Section 3.3). In this paper, we propose a novel framework as shown in Fig. 
2 to effectively learn two latent spaces to enhance controllability in diffusion models. Inspired by [39, 29] we add a _Content Encoder_ that learns a spatial layout mask and a _Style Encoder_ that outputs a flattened semantic code to condition the diffusion model during training (Section 3.1). The content and style codes are injected differently into the UNet [43] to ensure they encode different semantic factors of an image. Though decomposing content and style information from an image enables better controllability, enforcing independence between the codes may not always be ideal. For example, _face structure_ (e.g. square or round face), which is ideally encoded in the content code, and _gender_ (e.g. male or female), an attribute encoded in the style code [39], may not be independent, and treating them as such might lead to unnatural compositions (Fig. 3). Similarly, CDM [34] assumes conditioning inputs are independent and hence shows unnatural compositions for certain prompts like 'a flower' and 'a bird' (see Fig. 6 in [34]). We extend the formulation in [34] and propose _Generalized Composable Diffusion Models_ (GCDM) to support compositions during inference when the conditional inputs are not necessarily independent (Section 3.3). This also provides the ability to control the amount of information from content, style and their joint distribution separately during sampling. We observe significantly better translations with GCDM and also show improved compositions in Stable Diffusion compared to CDM (Fig. 5).

In addition, we leverage the inductive bias [1, 5, 6] of diffusion models that learns low frequency layout information in earlier steps and high frequency or imperceptible details in the later steps of the reverse diffusion process, to further improve results. We use a predefined controllable timestep dependent weight schedule to compose the content and style codes during generation. This simulates the mixture of denoising experts proposed in [1] by virtue of varying the conditional information (instead of the entire model) at different timesteps during inference. Some examples generated using the proposed model are shown in Fig. 1.

Moreover, we also show that the learned latent spaces are manipulatable. We apply PCA on the style and content latent spaces and identify meaningful attribute specific manipulation directions similar to [15] as shown in Fig. 1 (c). We also observe that the proposed setup learns latent spaces that support smooth interpolations (Fig. 1 (b)). To the best of our knowledge, there is no existing work that trains diffusion models with multiple latent spaces, generalizes composable diffusion models and leverages timestep scheduling for image translation and manipulation.

Figure 2: (Top) overview of our proposed framework. We first obtain \(z_{0}\) from the pretrained Autoencoder [12], which is the actual input for the LDM [42]. The external encoders \(E_{c}(\psi)\) and \(E_{s}(\phi)\) and the denoising UNet \(\epsilon(\theta)\) are trained together without any additional objectives. (Bottom) shows the details of injecting style and content information into the denoising UNet at the \(\ell\)-th layer as described in Section 3.1.

## 2 Preliminaries and Related Works

In this section, we describe the preliminaries that our approach builds on and related works in the literature.

### Diffusion Models

Diffusion Models [46] like DDPM [18] showed impressive image generation and likelihood estimation but had a computationally expensive sampling procedure.
DDIM [47] reduced the sampling time by deriving a non-Markovian variant of DDPM. Similarly, Improved-DDPM [38] also improved sampling speed and proposed to learn the variance schedule that was fixed in previous works to enhance mode coverage. DPM-solver [35] and DPM-solver++ [36] proposed high-order ODE solvers for faster sampling. DDGAN [51] combined the best of GANs and diffusion models to retain the mode coverage and quality of diffusion models while making it faster like GANs. LDM [42] used a pretrained autoencoder [12] to learn a lower capacity latent space and trained a diffusion model on the learned latent space (in contrast to pixel space in previous works), reducing time and memory complexity significantly without loss in quality. More descriptions are provided in Section C in the supplementary. ### Controllability in Diffusion Models **Guidance:** Some recent works have explored modeling the conditional density \(p(x_{t}|c)\) for controllability. Dhariwal et al. [8] proposed to use a pretrained classifier but finetuning a classifier that estimates gradients from noisy images, which increases the complexity of the overall process [19]. Ho et al. [19] proposed to use an implicit classifier while Composable Diffusion Models [34] (CDM) extend the classifier free guidance approach to work with multiple conditions assuming conditional independence. Though guidance approaches help control the generation, they do not offer fine grained controllability or support applications such as reference based image translation. **Conditional DMs:** Conditional Diffusion Models have been explored in diverse applications showing state-of-the-art performance in text to image generation (DALLE2 [41], Imagen [45], Parti [53]). These methods use pretrained CLIP or similar embeddings that support interpolation but not further editability. DiffAE [40] proposed to learn a semantic space that has nice properties making it suitable image manipulation. However, a single latent space capturing all the information makes it difficult to isolate attributes to manipulate. **Inference only Editing:** Several works have proposed inference-time editing techniques on top of pretrained diffusion models. SDEdit [37] enables structure preserving edits while Prompt-to-prompt [16] modifies the attention maps from cross-attention layers to add, remove or reweigh importance of an object in an image. DiffusionCLIP [25], Imagic [24] and Unitune [49] propose optimization based techniques for text based image editing. Textual Inversion [13] and Dream-Booth [44] finetunes pretrained models using few reference images to get personalized models. Though the above techniques are helpful with editing, most of these methods require computationally expensive optimization, modify the weights of pretrained model for each sample, and/or doesn't support fine-grained controllability for reference based image translation. The closest related work to ours is DiffuseIT [30]. They enabled reference and text guided image translation by leveraging Dino-VIT [3] to encode content and style. However, their approach requires costly optimization during inference and doesn't support controlling the final generation. **Inductive Bias of Diffusion Models:** On top of the inductive bias [5, 6] of Diffusion Models, eDiffi [1] proposed to train models specialized to a subset of the timesteps to improve generations drastically. MagicMix [33] interpolates noise maps while providing different embeddings at different timesteps. 
Though these approaches show the advantages of the inductive bias, it hasn't been used to provide more controllability for image manipulation. ### Controllability in GANs MUNIT [21], DRIT [32] and SAE [39] propose frameworks for reference-based image translation by learning disentangled latent spaces. StarGAN v2 [7] uses domain labels to support image to image translation whereas DAG [29] adds an extra content space on top of the style space of StyleGAN v2 [23] for disentanglement. Though these techniques achieve impressive results for translation, they suffer the same limitations as GANs such as mode coverage and difficulty in training. To overcome the limitations, we use similar techniques and build on top of diffusion models, that has shown to have better mode coverage and higher quality generations [9] compared to GANs. ## 3 Proposed Method Our framework is based on the LDM [42] architecture as it is faster to train and sample from, compared to pixel-based diffusion models. Let \(x\) be an input image and \(E_{LDM}\) and \(D_{LDM}\) be the pretrained and fixed encoder and decoder respectively. The actual input space for our diffusion model is the low-dimensional latent space \(z=E_{LDM}(x)\). The output of the reverse diffusion process is the low dimensional latent \(\hat{z}_{0}\) which is then passed through the pretrained decoder as \(x=D_{LDM}(\hat{z}_{0})\) to get the final image \(\hat{x}_{0}\). ### Learning Content and Style Latent spaces Inspired by DiffAE [40] and similar approaches in GANs [29], we introduce a content encoder \(E_{c}(\,\cdot\,;\psi)\) and a style encoder \(E_{s}(\,\cdot\,;\phi)\) in our framework as shown in Fig. 2. The objective for training is formulated as: \[\min_{\theta,\psi,\phi}\mathbb{E}_{z_{0},\epsilon_{t}}\left[\|\epsilon_{t}- \epsilon(z_{t},t,E_{c}(z_{0};\psi),E_{s}(z_{0};\phi);\theta)\|_{2}^{2}\right],\] where \(z_{t}\) is from the forward process, i.e., \(z_{t}=q(z_{t}|z_{0})\). To ensure that the encoders capture different semantic factors of an image, we design the shape of \(z_{s}\) and \(z_{c}\) asymmetrically as done in [39, 48, 21, 32, 29, 4]. The content encoder \(E_{c}(z_{0};\psi)\) outputs a spatial layout mask \(z_{c}\in\mathbb{R}^{1\times\frac{h}{h}\times\frac{w}{h}}\) where \(w\) and \(h\) are the width and height of \(z_{0}\) latent. In contrast, \(E_{s}(z_{0};\phi)\) outputs \(z_{s}\in\mathbb{R}^{512\times 1\times 1}\) after global average pool layer to capture global high-level semantics. At each layer of the denoising UNet \(\epsilon(\,\cdot\,;\theta)\), the style code \(z_{s}\) is applied using channel-wise affine transformation along with timestep information (\(t_{1}\), \(t_{2}\), and \(t_{3}\)) while the content code \(z_{c}\) is applied in a spatial manner as shown below. \[\underbrace{t_{1}(1+\varphi^{\ell}(z_{c}))}_{\text{spatial-wise}}\odot \underbrace{(1+\zeta^{\ell}(z_{s}))\cdot(t_{2}(h^{\ell}+t_{3}))}_{\text{ channel-wise}}, \tag{1}\] where \(\varphi^{\ell}\) is a down or upsampling operation at \(\ell\)-th layer to make the dimensions of \(\varphi^{\ell}(z_{c})\) and \(h^{\ell}\) match, and \(\zeta^{\ell}\) is a MLP layer to optimize \(z_{s}\) particularly for \(\ell\)-th layer. \(h^{\ell}\) denotes the group-normalized feature map at \(\ell\)-th layer from the denoising networks \(\epsilon(\,\cdot\,;\theta)\), and \(t_{1}\), \(t_{2}\) and \(t_{3}\) are timestep information from \(\text{MLP}(\text{enc}(t))\) following sinusoidal embedding layer. Group Normalization is used, following the prior work [40]. 
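To make the injection of Eq. (1) concrete, here is a minimal PyTorch-style sketch of the modulation at one UNet layer. The shapes, the use of bilinear interpolation for \(\varphi^{\ell}\), and the way \(t_{1},t_{2},t_{3}\) are split from the timestep embedding are our assumptions for illustration; the paper specifies only the functional form of Eq. (1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentStyleModulation(nn.Module):
    """Applies Eq. (1): spatial modulation by the content mask z_c and
    channel-wise affine modulation by the style code z_s at one UNet layer."""
    def __init__(self, channels, style_dim=512, time_dim=512):
        super().__init__()
        self.norm = nn.GroupNorm(32, channels)         # group-normalized h^l (channels % 32 == 0 assumed)
        self.zeta = nn.Linear(style_dim, channels)     # zeta^l(z_s)
        self.to_t = nn.Linear(time_dim, 3 * channels)  # t_1, t_2, t_3 from MLP(enc(t))

    def forward(self, h, z_c, z_s, t_emb):
        b, c, hh, ww = h.shape
        t1, t2, t3 = self.to_t(t_emb).chunk(3, dim=1)  # each (b, c)
        t1, t2, t3 = (x.view(b, c, 1, 1) for x in (t1, t2, t3))
        h = self.norm(h)
        # phi^l: resize the 1-channel content mask to this layer's resolution.
        zc = F.interpolate(z_c, size=(hh, ww), mode="bilinear", align_corners=False)
        zs = self.zeta(z_s.view(b, -1)).view(b, c, 1, 1)   # the (1 + zeta^l(z_s)) term
        spatial = t1 * (1.0 + zc)                          # spatial-wise factor
        channel = (1.0 + zs) * (t2 * (h + t3))             # channel-wise factor
        return spatial * channel
```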
### Timestep Scheduling for Conditioning

It has been observed in [5, 6, 1] that low-frequency information, i.e., coarse features such as pose and facial shape, is learned in the earlier timesteps (e.g., \(0<\text{SNR(t)}<10^{-2}\)) while high-frequency information such as fine-grained features and imperceptible details is encoded in later timesteps (e.g., \(10^{0}<\text{SNR(t)}<10^{4}\)) in the reverse diffusion process. Here, SNR(t) stands for the signal-to-noise ratio per timestep [26]. Inspired by this, we introduce a weight scheduler for \(z_{c}\) and \(z_{s}\) that determines how strongly the content and the style conditions are applied to the denoising networks. We use the following schedule: \[w_{c}(t)=\frac{1}{1+\exp{(-a(t-b))}} \tag{2}\] \[w_{s}(t)=\frac{1}{1+\exp{(-a(-t+b))}}, \tag{3}\] where \(a\) is a coefficient determining over how many timesteps content and style are jointly provided while \(b\) indicates the timestep at which \(w_{s}(t)\geq w_{c}(t)\). We also tried a simple linear weighting schedule (decreasing for content and increasing for style with every timestep during the reverse diffusion process) and a constant schedule but observed that the proposed schedule gave consistently better results (examples are provided in Section F.2 in the supplementary). We additionally evaluate using timestep scheduling during training. It is a promising future direction showing better decomposition between the factors controlled by content and style (Section F.1 in the supplementary).

### Generalized Composable Diffusion Models

As mentioned in Section 1, generalizing CDM by introducing the joint component can potentially improve the composition of multiple conditions and enhance controllability over the generation.

Figure 3: Conceptual illustration of Composable DMs and our proposed sampling method. (a) shows the effects of leveraging the inductive bias during training and sampling. Leveraging the inductive bias during the training may more disentangle the feature representation. On the other hand, the inductive bias can be used for balancing the amount of the content and the style during the sampling. (b) compares CDM and the joint guidance. The result based on CDM can be outside of the manifold while the joint guidance stays on the manifold. (c) shows the proposed GCDM. GCDM trades off between the independent guidance provided by CDM (stronger effects of the condition) and the joint guidance (more realistic). Corresponding experiment results can be found in Fig. 5, 6 (main paper) and Fig. 25 (supplementary).

Fig. 3 shows a conceptual illustration of the possible benefit of GCDM over CDM. Let \(z_{s}^{*}\) and \(z_{c}^{*}\) be the (unobserved) ground-truth style and content features in Fig. 3 (a). The approximated content and style features \(\hat{z}_{c}\) and \(\hat{z}_{s}\) can be better separated by leveraging the inductive bias during training. Using the inductive bias only during sampling would represent scaling due to the variation in their magnitude across timesteps. Note that the approximated \(\hat{z}_{c}\) and \(\hat{z}_{s}\) are used as the axes in (b) and (c). Fig. 3 (b) shows an example in which the content and style guidances from CDM generate unrealistic samples because the combined guidance is outside the manifold. On
GCDM has the added advantage of enabling separate controls for style, content and realism. Moreover, CDM and the joint guidance are special cases of GCDM. Hence, we argue that it is helpful to derive a generalized composing method without constraining the style and content to be conditionally independent as done in [34]. We would like to sample images given multiple conditions (i.e., style and content in our case), which we formulate as sampling from \(\tilde{p}(x_{t}|c_{1},c_{2})\propto p(x_{t})[p(c_{1},c_{2}|x_{t})^{\lambda}(p(c_{1}|x_{t})^{\beta_{1}}p(c_{2}|x_{t})^{\beta_{2}})^{1-\lambda}]^{\alpha}\), where \(\alpha\geq 0\) controls the overall strength of conditioning, \(\lambda\in[0,1]\) controls the trade-off between the dependent and independent conditional information, and \(\beta_{1}\) and \(\beta_{2}\) control the weights for the style and content information. The guidance gradient in terms of the denoising network \(\epsilon\) (which may depend on zero, one or both conditions) is as follows: \[\nabla_{x_{t}}\log\tilde{p}(x_{t}|c_{1},c_{2})= \tag{4}\] \[\underbrace{\epsilon(x_{t},t)}_{\nabla\log p(x_{t})}+\alpha\Big[\lambda\underbrace{(\epsilon(x_{t},t,c_{1},c_{2})-\epsilon(x_{t},t))}_{\nabla\log p(c_{1},c_{2}|x_{t})} \tag{5}\] \[+(1-\lambda)\underbrace{\sum_{i\in\{1,2\}}\beta_{i}\underbrace{(\epsilon(x_{t},t,c_{i})-\epsilon(x_{t},t))}_{\nabla\log p(c_{i}|x_{t})}}_{\nabla\log p(c_{1}|x_{t})p(c_{2}|x_{t})}\Big], \tag{6}\] If \(\lambda=0\), this simplifies to CDM [34], and GCDM can thus be seen as a generalization of it. In the following experiments on image translation, \(\beta_{1}\) and \(\beta_{2}\) denote \(\beta_{s}\) and \(\beta_{c}\), respectively. The detailed derivation and the effect of various hyperparameters are in Sections B and D.1 in the supplementary. Note that GCDM and timestep scheduling are generic sampling techniques for diffusion models that can also be applied to other tasks beyond image translation (Fig. 5). ## 4 Experiments We comprehensively evaluate the proposed model on image to image translation and additionally show qualitative examples of GCDM and CDM on text to image composition with Stable Diffusion. Implementation details are provided in Section D. For sampling, we use the _reverse DDIM_ [40] approach conditioned on the content image and its corresponding content and style codes to get \(x_{T}\), instead of sampling random noise, unless otherwise mentioned. This helps with better identity preservation for faces. Analysis on the effects of _reverse DDIM_ is provided in Section D.2. ### Experimental Setup #### Datasets We train different models on commonly used datasets such as AFHQ [7], FFHQ [22] and LSUN-church [52]. #### Baselines **DiffuseIT:** The most similar work to ours based on diffusion models is DiffuseIT [30], which tackles the same problem formulation. We compare our results with DiffuseIT using their pretrained model and default parameters. **DiffAE+SDEdit:** Since Diffusion Autoencoder [40] does not directly support image-to-image translation, we combine it with SDEdit [37]. The input image for the reverse process is \(x_{600}\) (chosen empirically) obtained as \(q(x_{600}|x_{c})\) by running the forward process on the content image. The semantic feature \(z_{sem}\) from the semantic encoder of DiffAE is used given the style image \(x_{s}\). **DiffAE+MagicMix:** We also combine MagicMix [33] with DiffAE. Similar to DiffAE+SDEdit, this model takes \(x_{600}\) from \(x_{c}\) as input and \(z_{sem}\) from \(x_{s}\) as conditioning.
Additionally, at each timestep, the approximated previous timestep \(\hat{x}_{t-1}\) is combined with \(x_{t-1}\) from the content image \(x_{c}\), i.e., \(\hat{x}_{t-1}=v\hat{x}_{t-1}+(1-v)q(x_{t-1}|x_{c})\). For this experiment, \(v=0.5\) is used and the noise mixing technique is applied between \(t=[600,300]\). **SAE:** Swapping Autoencoder [39] based on GAN [14] is also evaluated. Since the available pretrained model is on the resolution of 512, we resize the generated results to 256 for fair comparison. **Evaluation Metrics** **FID:** We use the commonly used Frechet inception distance (FID) [17] to ensure the generated samples are realistic. We follow the protocol proposed in [7] for reference based image translation. To obtain statistics from generated images, 2000 test samples are used as the content images and five randomly chosen images from the rest of the test set are used as style images for each content image to generate 10000 synthetic images. **LPIPS:** Even though FID evaluates realism of the generations, the model could use just content and ignore style (or vice versa) and still get good FID. Following [7], we use LPIPS score obtained by measuring the feature distances between pairs of synthetic images generated from the same content image but with different style images. **Higher LPIPS indicates more diverse results**. It is ideal for the model to tradeoff between LPIPS and FID, i.e incorporate enough style information from different style images for the same content image (increasing LPIPS) but without going out of the real distribution (decreasing FID). ### Comparison with Existing Works In this section, we compare the reference-based image translation performance of the proposed model with baseline models on FFHQ dataset. **Qualitative Results.** Fig. 4 visually shows example generations from different techniques. We observe that DiffAE+SDEdit loses content information while DiffAE+MagicMix generates unnatural images that naively combine the two images. This indicates that a single latent space even with additional techniques such as SDEdit and MagicMix is not suitable for reference based image translation. DiffuseIT and SAE models maintain more content information but does not transfer enough information from the style image and have no control over the amount of information transferred from style. An important benefit of the proposed method is better controllability. By manipulating \(\lambda\), we can control how much guidance is applied. In Fig. 4, decreasing \(\lambda\) increases the effect of style from the style image when \(\beta_{c}=0\) and \(\beta_{s}=1\), where \(\beta_{c}\) and \(\beta_{s}\) are the weights for each conditional guidance (Eq. 4-6). For example, the man on the second row has more wrinkles and beard as \(\lambda\) decreases. Visualizations on the behavior of each hyperparameter are provided in Fig. 10 in the supplementary. **Quantitative Results.** Table 1 shows quantitative comparison in terms of FID and LPIPS metrics on FFHQ dataset. Our variants generate images that are realistic as indicated by the lowest FID scores compared with other models while also performing better on diversity as measured by the highest LPIPS except for DiffAE+SDEdit method. However, DiffAE+SDEdit does not show meaningful translation of style onto the content image. DiffAE+MagicMix shows the worst performance because of its unrealistic generation. 
SAE and DiffuseIT show lower LPIPS score than ours, indicating that it translates very little information from the style image onto the content image. We can also observe that increasing \(\lambda\) (when \(\beta_{c}=0\) and \(\beta_{s}=1\)) makes LPIPS worse while improving FID. In other words, the stronger the joint guidance is, the more realistic but less diverse the generations. This verifies our assumption in Fig. 3 that the joint component has an effect of pushing the generations into the real manifold. ### Effect of GCDM and Timestep Scheduling **CDM vs GCDM.** The key benefit of leveraging GCDM is that the guidance by GCDM would help keep the sample on the real manifold and thereby generate more realistic images. We compare SAE [39] (the best performing baseline) and ours on AFHQ dataset in Table 2. The joint guidance (\(\lambda=1\)) gets the lowest FID indicating that the generations are more realistic as it pulls the guided results to be within the real data manifold. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & DiffusIT [30] & SAE [39] & DiffAE+SDEdit [40, 37] & DiffAE+MagicMix [33] & Ours(\(\lambda=0.9\)) & Ours(\(\lambda=0.6\)) & Ours(\(\lambda=0.3\)) \\ \hline FID & 29.99 & 25.06 & 26.63 & 84.55 & **11.99** & 13.40 & 15.45 \\ LPIPS & 0.47 & 0.39 & 0.64 & 0.41 & 0.34 & 0.42 & 0.49 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison between the proposed and baseline models using FID and LPIPS on FFHQ dataset. Figure 4: Comparison of the proposed model with baselines for reference based image translation on FFHQ dataset. Our method generates more plausible, realistic combinations of the content and style images with better controllability. Other models either show poorer performance or lack sufficient controllability. \begin{table} \begin{tabular}{c c c c c} \hline \hline & SAE & CDM & GCDM & GCDM \\ & & & (\(\lambda=0.9\)) & (\(\lambda=1.0\)) \\ \hline FID & 9.29 & 10.57 & 9.75 & 8.58 \\ LPIPS & 0.45 & 0.59 & 0.59 & 0.57 \\ \hline \hline \end{tabular} \end{table} Table 2: FID comparisons between SAE and our model with CDM and GCDM on AFHQ dataset. We can also see that GCDM can be thought of as interpolating between CDM and the joint guidance term, since FID for GCDM (\(\lambda=0.9\)) is in between the joint and CDM. By comparing LPIPS and FID of the variants of GCDM, we can see that the outputs become less diverse as realism is increased. SAE shows worse performance than ours in terms of both diversity and realism. The qualitative comparisons can be found in Fig. 25 in the supplementary. **Generalizability of GCDM.** We also compare the performance of CDM and GCDM in composing text prompts for text to image generation using Stable Diffusion V2[42] in Fig. 5. The phrases before and after 'and' are used as the first and the second guidance terms in Eq. 6. The full sentence is used to represent the joint conditioning. As shown in Fig. 5, CDM tends to fail in composing multiple conditions if both conditions contain object information. For example, _the red bird_ and _the yellow flower_ are merged in two out of three generations using CDM. On the other hand, GCDM consistently shows better compositions in the generated images. This emphasizes that GCDM is a generalized formulation for composing multiple conditioning inputs providing more control to the user in terms of realism and diversity as illustrated in Fig. 3. Additional results and the used GCDM hyperparameters can be found in Fig. 28. 
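To make the sampling rule concrete, the following is a minimal sketch of how the guidance in Eqs. (4)-(6) could be assembled from noise predictions. The `eps_model` signature is an assumption, and the default weights mirror the \(\beta_{c}=0\), \(\beta_{s}=1\) setting discussed above rather than a prescribed configuration; setting `lam=0` recovers CDM, while `lam=1` keeps only the joint guidance.

```python
def gcdm_epsilon(eps_model, x_t, t, c_style, c_content,
                 alpha=1.0, lam=0.9, beta_s=1.0, beta_c=0.0):
    """Combine unconditional, joint and per-condition noise predictions (Eqs. 4-6)."""
    eps_uncond = eps_model(x_t, t)                                   # eps(x_t, t)
    eps_joint = eps_model(x_t, t, style=c_style, content=c_content)  # both conditions
    eps_style = eps_model(x_t, t, style=c_style)                     # style only
    eps_content = eps_model(x_t, t, content=c_content)               # content only

    joint_guidance = eps_joint - eps_uncond
    indep_guidance = (beta_s * (eps_style - eps_uncond)
                      + beta_c * (eps_content - eps_uncond))
    # GCDM interpolates between the joint term and the independent (CDM) term
    return eps_uncond + alpha * (lam * joint_guidance + (1.0 - lam) * indep_guidance)
```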
**Effect of Timestep Scheduling.** To more carefully analyze the effect of time-step scheduling when combined with GCDM or CDM, we alter the time-step scheduling so that there is at least a 0.1 weight on style or content. Specifically, we change the upper and lower bounds of the sigmoid to be 0.1 and 0.9 in Eqs. 2 and 3, e.g., \(w^{\prime}_{c}(t)=0.8w_{c}(t)+0.1\). The results can be seen in Table 3 and Fig. 6. Without timestep scheduling, GCDM shows better performance in both FID (realism) and LPIPS (diversity). Combined with timestep scheduling, both CDM and GCDM show meaningful improvements in FID in exchange for losing diversity. This is because timestep scheduling improves content identity preservation, e.g., pose and facial structure, causing fewer variations in structural information and consequently lower LPIPS/diversity. Additionally, timestep scheduling with GCDM variants shows better FID or LPIPS than CDM depending on the strength of the guidance terms, offering varied control over the generations. ### Analysis and Discussion In this section, we analyze the importance of each of the components of our framework using the AFHQ and LSUN-church datasets and aim to better understand the content and style latent spaces. Further analysis and results based on PCA, latent interpolation and K-Nearest Neighbors are provided in Sections E.1, E.2 and E.3 respectively in the supplementary. Figure 5: GCDM vs CDM for text-to-image generation with Stable Diffusion. We can observe that CDM generates unnatural images (e.g., blending two objects) that may be out of the real manifold while GCDM ensures realistic generations (e.g., combining two objects in a realistic way). Figure 6: Effect of timestep scheduling in CDM and GCDM. Timestep scheduling improved the results of both CDM and GCDM and gives the best results when combined with GCDM. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{w/o schedule} & \multicolumn{3}{c}{w/ schedule} \\ \cline{2-3} \cline{4-6} & CDM & GCDM & CDM & GCDM (\(\beta_{c}=1\)) & GCDM (\(\beta_{s}=1\)) \\ FID & 21.43 & **14.46** & 10.50 & 10.21* & 10.61 \\ LPIPS & 0.47 & **0.51** & 0.31 & 0.28 & 0.33* \\ \hline \hline \end{tabular} \end{table} Table 3: FID and LPIPS comparisons between CDM and GCDM with and without timestep scheduling on the FFHQ dataset. The best method without timestep scheduling is highlighted in bold and the best with timestep scheduling is highlighted with a *. Figure 7: Visualization of the effect of each guidance term (described in Eq. 4-6) on generation. \(x_{T}\) is randomly sampled. **Visualization of Each Guidance Term.** The proposed GCDM in Section 3.3 has guidance from three terms: the joint distribution of style and content, and the style and content codes separately. Fig. 7 shows a comparison of the effects of these terms. Column 3 shows images generated using only guidance from the content image. It can be seen that the generated animals are not the same as the content image but have the exact same structure and pose. Similarly, column 4 shows generations when only style guidance is used. Since content information is not used at all, the pose is random while the style, such as color, fur, etc., corresponds to the style image. Column 5 shows the result of the joint guidance whereas the last column shows generations using GCDM. It can be observed that GCDM with \(\beta_{s}=1.0\) has more semantic information from the style than the joint guidance.
**Classifier-based comparisons.** To further understand what kind of attributes are encoded in the style and content latent spaces, we use pretrained classifiers to predict the attributes of translated images and compare them with the original style and content images. We sample 2000 random images from the test set to use as \(x_{c}\) and another 2000 as \(x_{s}\) to form 2000 content-style pairs. Next, we acquire the translated output \(x_{o}\) and corresponding pseudo labels \(y_{c}\), \(y_{s}\) and \(y_{o}\) by leveraging an off-the-shelf pretrained attribute classifier (EasyFace). In Table 4, we show the probabilities that the final generated image \(x_{o}\) has an attribute from the content image, \(p(y_{c}^{att}=y_{o}^{att})\), and likewise for the style image. Both ours and SAE are designed to make \(z_{s}\) encode global high-level semantics, e.g., Gender, Age, etc. Thus, methods would show ideal performance if \(y_{o}^{att}=y_{s}^{att}\neq y_{c}^{att}\). We see that most global attributes come from the content image for SAE, indicating conservative translations from the style image (as seen in Fig. 4 and the lower LPIPS in Table 1). In contrast, ours has a controllable way of deciding the strength of attributes from the style image through \(\lambda\). The lower the value of \(\lambda\), the more disentangled and consistent the attributes will be in the generations. \begin{table} \begin{tabular}{c c c c c c c} \hline \multicolumn{1}{c}{\multirow{2}{*}{ \begin{tabular}{c} Probability \\ Att. is Equal (\%) \\ \end{tabular} }} & \multicolumn{3}{c}{\(x_{c}\)} & \multicolumn{3}{c}{\(x_{s}\)} \\ \cline{2-7} & Gender & Age & Race & Gender & Age & Race \\ SAE & 65.95 & 62.36 & 50.40 & 34.05 & 26.40 & 27.91 \\ Ours (\(\lambda=0.9\)) & 65.14 & 53.79 & 53.31 & 34.86 & 31.60 & 28.51 \\ Ours (\(\lambda=0.25\)) & 26.61 & 25.94 & 31.73 & 73.39 & 56.77 & 44.48 \\ \hline \end{tabular} \end{table} Table 4: Classifier-based comparisons in FFHQ. **Information Encoded in Each Latent Space.** We analyze the role of the denoising network \(\epsilon_{\theta}\) and the encoders \(E_{c}\) and \(E_{s}\) by examining what information is encoded in the respective latent spaces. Fig. 8 and Fig. 9 show the role of \(\epsilon_{\theta}\) in the reverse process evaluated on the LSUN-church dataset. Fig. 8 shows the results of fixing the content while varying the style images (and vice versa). \(x_{T}\) is fixed as well to reduce the stochasticity. The remaining stochasticity comes from the white noise at each timestep during the reverse process. From the results, we can see that the structure information is maintained while the style information changes according to the style image (and vice versa), as intended. Similarly, in Fig. 9 we forward the same image to the content and style encoders while the generation starts from different random noise \(x_{T}\). The images show that the denoising network plays a role in stochasticity, since the outputs have consistent shape, color and texture information while minor details of the buildings or clouds are changed. ## 5 Conclusion We propose a novel framework for enhancing controllability in image conditioned diffusion models for reference based image translation and image manipulation. Our content and style encoders trained along with the diffusion model do not require additional objectives or labels to learn to decompose style and content from images. The proposed generalized composable diffusion model extends
CDM to a more general scenario. It shows significantly better performance when compared with CDM for translation as well as for composing text prompts. We also build on the inductive bias and show that timestep-dependent weight schedules for conditioning inputs can help improve overall results and controllability. Additionally, the learned latent spaces are observed to have desirable properties like PCA-based attribute manipulation and smooth interpolations. Quantitative and qualitative evaluation shows the benefits of the proposed sampling techniques. Figure 8: Example generations on the LSUN-church dataset showing that the content and style codes are robust to changes. \(x_{T}\) is randomly sampled. Figure 9: Example showing the role of the denoising network during sampling when content and style codes are unchanged. \(x_{T}\) is randomly sampled.
2309.17275
Utility-based Adaptive Teaching Strategies using Bayesian Theory of Mind
Good teachers always tailor their explanations to the learners. Cognitive scientists model this process under the rationality principle: teachers try to maximise the learner's utility while minimising teaching costs. To this end, human teachers seem to build mental models of the learner's internal state, a capacity known as Theory of Mind (ToM). Inspired by cognitive science, we build on Bayesian ToM mechanisms to design teacher agents that, like humans, tailor their teaching strategies to the learners. Our ToM-equipped teachers construct models of learners' internal states from observations and leverage them to select demonstrations that maximise the learners' rewards while minimising teaching costs. Our experiments in simulated environments demonstrate that learners taught this way are more efficient than those taught in a learner-agnostic way. This effect gets stronger when the teacher's model of the learner better aligns with the actual learner's state, either using a more accurate prior or after accumulating observations of the learner's behaviour. This work is a first step towards social machines that teach us and each other, see https://teacher-with-tom.github.io.
Clémence Grislain, Hugo Caselles-Dupré, Olivier Sigaud, Mohamed Chetouani
2023-09-29T14:27:53Z
http://arxiv.org/abs/2309.17275v1
# Utility-based Adaptive Teaching Strategies using Bayesian Theory of Mind ###### Abstract Good teachers always tailor their explanations to the learners. Cognitive scientists model this process under the rationality principle: teachers try to maximise the learner's utility while minimising teaching costs. To this end, human teachers seem to build mental models of the learner's internal state, a capacity known as Theory of Mind (ToM). Inspired by cognitive science, we build on Bayesian ToM mechanisms to design teacher agents that, like humans, tailor their teaching strategies to the learners. Our ToM-equipped teachers construct models of learners' internal states from observations and leverage them to select demonstrations that maximise the learners' rewards while minimising teaching costs. Our experiments in simulated environments demonstrate that learners taught this way are more efficient than those taught in a learner-agnostic way. This effect gets stronger when the teacher's model of the learner better aligns with the actual learner's state, either using a more accurate prior or after accumulating observations of the learner's behaviour. This work is a first step towards social machines that teach us and each other, see [https://teacher-with-tom.github.io](https://teacher-with-tom.github.io). ## 1 Introduction When tasked with imparting an understanding of the solar system, a physics teacher tailors their explanation based on the audience. The approach taken for a 10-year-old astrophysics enthusiast differs significantly from that employed for an advanced master's student. In fact, the teacher provides an explanation that maximises the likelihood of the listener understanding the concept. This pedagogical sampling phenomenon has been explored in cognitive science notably in Gweon et al. (2018). This study involves children being asked to demonstrate the use of a toy to knowledgeable or ignorant children learners. It shows that the behaviour of the teacher-child depends on prior observations of the learner-child. Specifically, if the learner has previously interacted with a similar toy in the presence of the teacher, the teacher only exhibits partial functionality of the toy. Conversely, when no prior interaction is observed, the teacher demonstrates the complete use of the toy. By definition, the aim of a teacher is to ensure the learner's understanding. An option for the teacher would be to demonstrate the full functionality of the toy each time, but this comes with a cost. Rather, the teacher strikes a balance between the learner's understanding, reflected in its subsequent behaviour, and the costs of teaching. Assuming the teacher is rational, we can thus consider that this trade-off is the teacher's _utility_(Goodman and Frank, 2016). Importantly, teachers who solely provide the missing information for the learner to achieve the task are also perceived as more trustworthy than over-informative ones (Gweon et al., 2018). More generally, human teachers choose how to teach based on a prediction of how their guidance signal will be received, as outlined in the Inferential Social Learning (ISL) framework (Gweon, 2021). In this framework, humans acquire knowledge by making inferences from observing others' behaviour and leverage this knowledge to help others learn. 
More precisely, ISL is grounded on a set of cognitive mechanisms constituting the Theory of Mind (ToM), which refers to the human ability to understand and predict the actions of others by inferring their mental states, such as prior knowledge, goals, intentions, beliefs etc. (Baker and Saxe, 2011). ToM can be understood as the inverse planning of an intuitive behavioural model predicting what others would do given their mental state (Baker et al., 2009). To be efficient, human pedagogical interventions such as selection of examples (Shafto et al., 2014) or demonstrations (Ho et al., 2021) require ToM. ISL is considered a key component to humans mutual understanding as well as a foundation of humans' powerful capacity to efficiently learn from others. Therefore, incorporating ISL mechanisms into AI systems is a promising way to make human-machine interactions more informative, productive, and beneficial to humans (Gweon et al., 2023; Sigaud et al., 2022). In this paper, we introduce teacher agents equipped with a ToM model of the learner agent's internal state, including its goal, intention, belief, and sensory capacity. The goal of this work is to study whether learner-specific teachers who model the learner's internal state are more efficient than learner-agnostic ones. In particular, we explore the limitations of ToM models not being able to recover the learner actual internal state from its behaviour, either due to inaccurate priors or limited observation, in a context where providing guidance incurs a cost proportional to its informativeness. To achieve this, as depicted in Figure 1, we define _ToM-teachers_ able to 1. update a _belief_ about the internal state (i.e. goal, intention, belief, sensory capacity) of an unknown learner through Bayesian inference based on observations of its behaviour in a simple environment, see Figure 1(A), and 2. leverage this belief to estimate the utility of different demonstrations in a more complex environment, similarly to human planning as described in Ho et al. (2022), in order to select the most effective one for the specific observed learner, see Figure 1(B). To conduct our experiments, we present two environments: a toy environment reminiscent of Gweon's study mentioned above (Gweon et al., 2018), and a more challenging gridworld environment for goal-conditioned 2D navigation, see Figure 1. Depending on its sensory capacity, the learner might require the help of a teacher agent providing a demonstration showing the locations of the objects needed to complete the task. However, the teacher ignores the goal of the learner and its sensory capacity, but can infer them from a past trajectory of the learner in a simpler environment. In this setup, the teacher must select the most useful demonstration providing enough information to help the learner reach its goal, but at a minimal teaching cost. The demonstration utility is optimal if it contains the necessary and sufficient amount of information for the learner to reach its goal. In this context, we show that the teacher must display accurate ISL abilities, inferring the learner's goal Figure 1: (A) The teacher observes a learner with a particular internal state behaving in a simple environment \(\mathcal{M}^{\text{obs}}\) and infers a ToM model of this learner. 
(B) In a more complex environment \(\mathcal{M}^{\text{demo}}\), the teacher uses this ToM model to predict the usefulness for the observed learner of each demonstration of a provided dataset \(\mathcal{D}\), out of which it selects the utility-optimal demonstration \(d^{*}\). The learner observes \(d^{*}\) and updates its knowledge about \(\mathcal{M}^{\text{demo}}\). (C) The learner behaves in \(\mathcal{M}^{\text{demo}}\) and receives a reward. The teacher is evaluated on the utility of \(d^{*}\), which is the learner’s reward minus the cost incurred by the teacher in delivering that demonstration. and sensory capacity from the past trajectory to effectively assist the learner. However, we find that this depends on the accuracy of the ToM-teacher's behavioural model of the learner as well as the amount of observation of its behaviour. ## 2 Related work In addition to cognitive science researches on human pedagogy (Shafto et al., 2014; Gweon, 2021; Ho et al., 2021), this work is related to the following interconnected research areas: **Theory of Mind (ToM):** Observer agents capable of inferring the internal state, including the goal, of another agent have been developed based on Bayesian Inference (Ying et al., 2023; Reddy et al., 2019) and neural networks (Rabinowitz et al., 2018; Nguyen et al., 2022). However, these works do not explore how to leverage these models of ToM to assist the learner in achieving its goal, as humans do, as explained in Ho et al. (2022). Our teacher agent is capable of both modelling the learner's internal state, including its goal as well as sensory capacity, and leveraging this model to assist the learner through adapted demonstration selection. **Machine teaching:** Machine Teaching is formalised as the problem of identifying the minimal teaching signal maximising the learner's reward (Zhu et al., 2018; Brown and Niekum, 2019). The teacher possesses knowledge of the learner's goal and aims to either generate the teaching data (Zhu, 2013) or to extract it from a dataset (Yang and Shafto, 2017), helping the learner agent achieve its goal. A teaching signal is considered optimally useful if it maximises utility, that is it enables the learner to achieve its goal while minimising the teaching cost (Zhu et al., 2018). In our framework as in Machine Teaching, the teacher must select the most helpful demonstration from a given set. However, in contrast to these previous works, our teacher assists various learners with different goals and sensory capacities, and thus different optimal demonstrations. Furthermore, when teaching, the teacher is unaware of the learner's goal and infers it from past interactions, hence the introduction of a ToM model of the learner. The demonstration selection strategy of our teacher is similar to the one used in cognitive science to model human's strategy as described in Ho et al. (2022): it uses the learner's ToM model to predict the outcomes of different possible demonstrations for the learner, in order to select the demonstration of optimal utility. While our work uses communication through demonstrations as sequences of actions, enhancing teaching by incorporating ToM model of the learner has already been investigated in the context of language-based teacher-learner communication in Zhao et al. (2023); Zhou et al. (2023). 
**Bayesian Inference:** Bayesian Inference is a widely used mechanism for inferring the goals of other agents by computing posterior probabilities based on their actions and policies (Baker et al., 2009; Baker and Saxe, 2011; Zhi-Xuan et al., 2020; Ying et al., 2023). In our work, we employ it as a tool to infer the internal state of the learner, including its goal and sensory capacity. Additionally, similarly to Zhu (2013); Ho et al. (2022), we assume a Bayesian learner to ensure direct communication from the teacher to the learner, as the demonstration selected by the teacher modifies the belief of the learner about the environment. ## 3 Methods Our general framework is depicted in Figure 1. Below we describe the components in more detail. ### Environment We introduce our environment as a Goal-Conditioned Partially Observable Markov Decision Problem (GC-POMDP), which is a combination of a Goal-Conditioned Markov Decision Problem (GC-MDP) and a Partially Observable Markov Decision Problem (POMDP). In GC-POMDPs, agents aim at achieving different goals with limited information on the current state of the environment. An instance \(\mathcal{M}^{j}\) of a GC-POMDP is defined by: \(\bullet\) A set of states \(\mathcal{S}^{j}\), a set of possible actions \(\mathcal{A}^{j}\), a transition function \(\mathcal{T}^{j}:\mathcal{S}^{j}\times\mathcal{A}^{j}\rightarrow\mathcal{S}^{j}\), \(\bullet\) A set of possible goals \(\mathcal{G}^{j}\), \(\bullet\) A history-dependent goal-conditioned reward function \(R^{j}:\mathcal{H}^{j}\times\mathcal{G}^{j}\rightarrow\mathbb{R}\), where \(\mathcal{H}^{j}\) is the space of histories. We define a _history_ as a sequence of state-action pairs over time, which can be formulated as \(\mathcal{H}^{j}=\bigcup_{t}\mathcal{H}^{j}_{t}\), in which \(\mathcal{H}^{j}_{t}=\{(s_{0},a_{0},\ldots,s_{t-1},a_{t-1})\}=\big(\mathcal{S}^{j}\times\mathcal{A}^{j}\big)^{t}\). We consider that all GC-POMDPs share their action and goal spaces, denoted \(\mathcal{A}\) and \(\mathcal{G}\). In summary, a GC-POMDP is defined as \(\mathcal{M}^{j}=(\mathcal{S}^{j},\mathcal{A},\mathcal{T}^{j},\mathcal{G},R^{j})\). In practice, our GC-POMDPs are different instances of similar gridworld environments constructed from the MiniGrid library (Chevalier-Boisvert et al., 2023). Another example with a toy environment is described in Appendix A. ### Learner We consider a finite family of agents \(\mathcal{L}=\{L_{i},i\in I\}\) that we call _learners_. A learner \(L_{i}\) is defined by a goal \(g_{i}\in\mathcal{G}\) and an observation function \(v_{i}\), i.e. \(L_{i}=(g_{i},v_{i})\). In an environment \(\mathcal{M}^{j}=(\mathcal{S}^{j},\mathcal{A},\mathcal{T}^{j},\mathcal{G},R^{j})\), the observation function maps the state space to an observation space \(\Omega_{i}\), \(v_{i}:\mathcal{S}^{j}\rightarrow\Omega_{i}\). The set of observation functions is denoted \(\mathcal{V}\) and is assumed to be identical for all the considered GC-POMDPs. The aim of the learner is to maximise the reward function \(R^{j}\), conditioned on the learner's goal \(g_{i}\). In practice, the learner must achieve its goal in minimum time to maximise its reward. We characterise the behaviour of a learner \(L_{i}\) on \(\mathcal{M}^{j}\) as a trajectory \(\tau_{i}=\{(s_{t},a^{i}_{t})\in\mathcal{S}^{j}\times\mathcal{A}\}_{t=0}^{T}\).
For the same trajectory, two learners \(L_{i}\) and \(L_{i^{\prime}}\) with different observation functions \(v_{i}\neq v_{i^{\prime}}\) acquire different knowledge about the environment, and two learners with different goals \(g_{i}\neq g_{i^{\prime}}\) receive different rewards. As shown in Kaelbling et al. (1998); Ross et al. (2007), a POMDP, and by extension a GC-POMDP, can be defined as a Bayes Adaptive Partially Observable Markov Decision Problem (BAPOMDP). In this formulation, the observation is augmented by a belief of the agent about uncertain aspects of the environment, such as the reward function, transition function, or state. In our context, from the learner's point of view, the uncertainty is limited to the state of the environment. To model learner's \(L_{i}\) policy, we thus consider at every step \(t\) its _belief_\(b^{i,j}_{t}\), which is a probability distribution over a set of possible states \(\mathcal{S}^{j}_{B}\) of environment \(\mathcal{M}^{j}\). We assume that the support of the belief contains the real state space, \(\mathcal{S}^{j}\subset\mathcal{S}^{j}_{B}\) and note \(\mathcal{B}^{j}\) the continuous space of beliefs. At every step \(t\), the environment being in a state \(s_{t}\in\mathcal{S}^{j}\) and the observation being \(o^{i}_{t}=v_{i}(s_{t})\), the belief of learner \(L_{i}\) about the state \(s\in\mathcal{S}^{j}_{B}\) of the environment is updated using Bayesian update: \[\forall s\in\mathcal{S}^{j}_{B},\quad b^{i,j}_{t+1}(s)=\frac{b^{i,j}_{t}(s) \times\mathbb{P}(o^{i}_{t}|s)}{\int_{s^{\prime}\in\mathcal{S}^{j}_{B}}b^{i,j}_ {t}(s^{\prime})\times\mathbb{P}(o^{i}_{t}|s^{\prime})}. \tag{1}\] Unless mentioned otherwise, we assume that the learner's initial belief \(b^{i,j}_{0}\) on the state of \(\mathcal{M}^{j}\) is uniform over the set of possible states \(\mathcal{S}^{j}_{B}\). In the experiments presented below, we additionally assume that all learners share a policy on the environment \(\mathcal{M}^{j}\) conditioned by a goal, an observation function and a belief: \[\pi^{j}(.|g,v,b^{L}):\cup_{i}\Omega_{i}\times\mathcal{A}\rightarrow[0,1],\quad \text{with }(g,v,b^{L})\in\mathcal{G}\times\mathcal{V}\times\mathcal{B}^{j}. \tag{2}\] To simulate a trajectory \(\tau^{i}\) of learner \(L_{i}\) on \(\mathcal{M}^{j}\), one only needs to know the tuple \((\pi^{j},g_{i},v_{i},b^{i,j}_{0})\). In practice, the learners use a single policy denoted \(\pi\) for all the considered GC-POMDPs. Moreover, within MiniGrid environments, the observation functions \(v_{i}\) are defined by a square area of size \(v_{i}\times v_{i}\) cells, known as the _receptive field_ of learner \(L_{i}\). This receptive field defines the localised region in front of the learner, mimicking visual sensory capacities and a larger receptive field size helps the learner reach its goal faster. We denote \(C^{i}_{t}\) the set of visible cells in observation \(o^{i}_{t}\) at time \(t\). The probability \(\mathbb{P}(o^{i}_{t}|s)\) in Equation 1 is then computed as \(\mathbb{P}(o^{i}_{t}|s)=\prod_{c\in C^{i}_{t}}\mathds{1}(o^{i}_{t}[c_{o}]=s[c])\), where \(c_{o}\) corresponds to the cell in the observation matching cell \(c\). ### Teacher We introduce an agent called _teacher_ whose aim is to optimally help the learner maximise its reward on a GC-POMDP \(\mathcal{M}^{\text{demo}}=(\mathcal{S}^{\text{demo}},\mathcal{A},\mathcal{T}^{ \text{demo}},\mathcal{G},R^{\text{demo}})\) by providing a demonstration. 
#### 3.3.1 Utility-based demonstration selection strategy We define a demonstration of length \(n\in\mathbb{N}\) on \(\mathcal{M}^{\text{demo}}\) as a sequence of actions \(d=(a_{0}^{\text{demo}},\dots,a_{n-1}^{\text{demo}})\in(\mathcal{A})^{n}\). We consider the demonstration to be provided as if the teacher were _teleoperating_ the learner, as described in Silva and Costa (2019). Thus, at step \(t\) of the demonstration, learner \(L_{i}\) observes \(\bar{o}^{i}_{t+1}=v_{i}\left(\mathcal{T}^{\text{demo}}(s_{t},a_{t}^{\text{demo}})\right)\). The learner's belief about the new environment \(\mathcal{M}^{\text{demo}}\) is updated based on the observations \((\bar{o}_{1}^{i},\dots,\bar{o}_{n}^{i})\) resulting from the demonstration, as in Equation 1 and depicted in Figure 1(B). This updated belief is then used as the initial belief \(b_{0}^{i,\text{demo}}\) by the learner. In other words, the aim of the demonstration is to provide the learner with prior knowledge about the new environment. The environment is then reset to its initial state, and the learner behaves following a policy \(\pi^{\text{demo}}\) defined in Equation 2, starting with belief \(b_{0}^{i,\text{demo}}\). As shown in Figure 1(C), the execution of this policy produces a trajectory \(\tau^{\text{demo}}=\{(s_{t}^{\text{demo}},a_{t}^{\text{demo}})\}_{t=0}^{T}\), where \(T\in\mathbb{N}\), and the learner receives a reward \(R^{\text{demo}}(\tau^{\text{demo}},g_{i})\), denoted \(R^{\text{demo}}(L_{i}|d)\), which represents the reward of learner \(L_{i}\) on environment \(\mathcal{M}^{\text{demo}}\) after having observed demonstration \(d\). We assume that the teacher knows the environment \(\mathcal{M}^{\text{demo}}\) and has access to a set of potential demonstrations \(\mathcal{D}\) to be shown on \(\mathcal{M}^{\text{demo}}\), as well as a teaching cost function \(c_{\alpha}:\mathcal{D}\rightarrow\mathbb{R}\) parameterised by \(\alpha\in\mathbb{R}_{+}\). For a given parameter \(\alpha\), the cost of a demonstration \(d\in\mathcal{D}\), denoted \(c_{\alpha}(d)\), represents the cost for the teacher of showing demonstration \(d\) to a learner. In our context, this function increases with the length of the demonstration. We introduce on the environment \(\mathcal{M}^{\text{demo}}\) the _utility_ of a demonstration \(d\) for a learner \(L_{i}\) as the reward of the learner after having observed the demonstration \(d\) on \(\mathcal{M}^{\text{demo}}\) minus the cost for the teacher of showing this demonstration: \(u_{\alpha}(d,L_{i})=R^{\text{demo}}(L_{i}|d)-c_{\alpha}(d)\). The aim of the teacher is to select the demonstration \(d_{i}^{*}\) that maximises the utility for the learner \(L_{i}\): \[d_{i}^{*}=\arg\max_{d\in\mathcal{D}}\underbrace{u_{\alpha}(d,L_{i})}_{R^{\text{demo}}(L_{i}|d)-c_{\alpha}(d)}. \tag{3}\] However, the teacher knows neither the learner's goal \(g_{i}\) nor its observation function \(v_{i}\). Instead, it can only access a past trajectory \(\tau^{\text{obs}}\) of the same learner \(L_{i}\), but in a different environment \(\mathcal{M}^{\text{obs}}=(\mathcal{S}^{\text{obs}},\mathcal{A},\mathcal{T}^{\text{obs}},\mathcal{G},R^{\text{obs}})\), see Figure 1(A). Therefore, in order to approximate Equation 3, the teacher should estimate the utility of each demonstration \(d\) in \(\mathcal{D}\) for this learner, see Figure 1(B). As the teacher knows the teaching cost function, this is equivalent to estimating the learner's reward.
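For concreteness, a minimal sketch of the utility-optimal selection in Equation (3); the function names are illustrative, `reward_after_demo` stands for the oracle reward \(R^{\text{demo}}(L_{i}|d)\) that only an omniscient teacher could evaluate, and in practice it is replaced by the ToM-based estimate described next.

```python
def select_demonstration(demos, reward_after_demo, teaching_cost):
    """Eq. (3): pick d* maximising the learner's reward minus the teaching cost.

    demos             : list of candidate demonstrations (sequences of actions)
    reward_after_demo : callable d -> R_demo(L_i | d)
    teaching_cost     : callable d -> c_alpha(d), increasing with the length of d
    """
    utilities = [reward_after_demo(d) - teaching_cost(d) for d in demos]
    best = max(range(len(demos)), key=utilities.__getitem__)
    return demos[best], utilities[best]

# Example cost: the linear, length-based cost used in the experiments,
# c_alpha(d) = alpha * len(d) / l_max.
def linear_cost(d, alpha=0.6, l_max=100):   # l_max value here is illustrative
    return alpha * len(d) / l_max
```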
#### 3.3.2 Bayesian ToM-teacher To estimate the utility of a demonstration \(d\) for an unknown learner \(L\), we introduce a teacher equipped with a Theory of Mind (ToM) model that we refer to as _ToM-teacher_. In our case, the ToM model is used to predict the learner's behaviour on \(\mathcal{M}^{\text{demo}}\) after having observed demonstration \(d\), leading to the estimation of the demonstration's utility. We present a ToM-teacher using Bayesian inference, called _Bayesian ToM-teacher_. We assume that the teacher has knowledge of the learner's uniform initial belief and has access to a behavioural model of the learner - that is an approximation of its policy \(\hat{\pi}\) - along with sets of possible goals \(\mathcal{G}_{B}\) and observation functions \(\mathcal{V}_{B}\). These spaces are assumed discrete. In practice, the latter set represents a range of possible sizes of receptive fields. We assume that both sets contain the real sets of goals and observation functions (\(\mathcal{G}\subset\mathcal{G}_{B}\) and \(\mathcal{V}\subset\mathcal{V}_{B}\)). In this context, from the teacher's perspective, the uncertainty relies solely on the goals and observation functions of the learners. Therefore a teacher considers learner \(L_{i}\) as the tuple \((\hat{\pi},g_{i},v_{i})\). From a past trajectory \(\tau^{\text{obs}}=\{(s_{k},a_{k}^{\text{obs}})\}_{k=0}^{K-1}\) of an unknown learner \(L\) on the first environment \(\mathcal{M}^{\text{obs}}\), the Bayesian ToM-teacher computes a probability distribution over the joint space \(\mathcal{G}_{B}\times\mathcal{V}_{B}\) that is its belief \(b^{T}\) about the goal and observation function of the learner. At step \(k\in[0,K-1]\) of the observed trajectory \(\tau^{\text{obs}}\), for every pair \((g,v)\in\mathcal{G}_{B}\times\mathcal{V}_{B}\), it derives from Equation 1 the belief that a learner would have with observation function \(v\) after producing the trajectory \(\tau^{\text{obs}}[0:k-1]\), denoted \(b^{v}_{k}\). It then updates its own belief about the learner goal and observation function based on the Bayesian update rule: \[\forall(g,v)\in\mathcal{G}_{B}\times\mathcal{V}_{B},\quad b^{T}_{k+1}(g,v)= \frac{b^{T}_{k}(g,v)\times\hat{\pi}\left(v(s_{k-1}),a^{\text{obs}}_{k}|g,b^{v} _{k}\right)}{\sum_{g^{\prime}\times v^{\prime}\in\mathcal{G}_{B}\times \mathcal{V}_{B}}b^{T}_{k}(g^{\prime},v^{\prime})\times\hat{\pi}\left(v^{\prime }(s_{k-1}),a^{\text{obs}}_{k}|g^{\prime},b^{v}_{k}\right)}. \tag{4}\] The quantity \(b^{T}_{k}(g,v)\) represents the probability of the learner having a goal \(g\) and an observation function \(v\), given that it produced trajectory \(\tau^{\text{obs}}[0:k-1]\), under the assumption that, to generate \(\tau^{\text{obs}}[0:k-1]\), the learner follows policy \(\hat{\pi}\). After having observed the entire trajectory, the teacher estimates the utility of a demonstration \(d\in\mathcal{D}\) on a second environment \(\mathcal{M}^{\text{demo}}\) for the observed learner by computing the expected value: \[\hat{u}_{\alpha}(d)=\sum_{(g,v)\in\mathcal{G}_{B}\times\mathcal{V}_{B}}\hat{u} _{\alpha}\left(d,L=(g,v)\right)\times b^{T}_{K}(g,v), \tag{5}\] where \(\hat{u}_{\alpha}(d,L)\) is the estimated utility of demonstration \(d\) for learner \(L=(\hat{\pi},g,v)\). To compute this quantity, the teacher computes the initial belief \(b^{v,\text{demo}}_{0}\) of the learner \(L=(g,v)\) on the environment \(\mathcal{M}^{\text{demo}}\) after having observed demonstration \(d\), based on Equation 1. 
From the tuple \((\hat{\pi},g,v,b^{v,\text{demo}}_{0})\), the teacher simulates a trajectory \(\hat{\tau}^{\text{demo}}\) and computes the associated estimated reward \(\hat{R}^{\text{demo}}(L|d)=R^{\text{demo}}(\hat{\tau}^{\text{demo}},g)\), leading to the estimated utility \(\hat{u}_{\alpha}(d,L)=\hat{R}^{\text{demo}}(L|d)-c_{\alpha}(d)\). The expected utility can be expressed as the expected reward of the observed learner after following demonstration \(d\) minus the cost of the demonstration: \[\hat{u}_{\alpha}(d)=\underbrace{\left(\sum_{(g,v)\in\mathcal{G}_{B}\times\mathcal{V}_{B}}\hat{R}^{\text{demo}}(L=(g,v)|d)\times b^{T}_{K}(g,v)\right)}_{\text{Expected reward}}-c_{\alpha}(d). \tag{6}\] The teacher selects the utility-optimal demonstration \(d^{*}\), approximating Equation 3 with \(d^{*}=\arg\max_{d\in\mathcal{D}}\hat{u}_{\alpha}(d)\). We define two ToM-teachers which differ in their prior model of the learner's policy \(\hat{\pi}\): \(\bullet\) The _aligned ToM-teacher_ possesses exact knowledge of the learner's policy, \(\hat{\pi}=\pi\). \(\bullet\) The _rational ToM-teacher (with parameter \(\lambda\))_ only assumes that the learner is rational, meaning it tries to reach the goal in minimum time, but its approximate policy \(\hat{\pi}\neq\pi\) is based on a Boltzmann policy that considers the expected distance between the learner and the goal after taking different actions. The temperature parameter \(\lambda\) of the Boltzmann policy represents the assumed degree of rationality of the learner in terms of how much the learner favours actions towards its goal; see Appendix B.3 for more details. ## 4 Experiments **Environments:** The observation environment \(\mathcal{M}^{\text{obs}}\) is an \(11\times 11\) MiniGrid gridworld (Chevalier-Boisvert et al., 2023) and is enclosed by walls along its borders. The environment contains four door-key pairs of colours in the set \(\mathcal{G}=\{green,blue,purple,yellow\}\). To open a door, an agent has to possess the key of the same colour. The demonstration environment \(\mathcal{M}^{\text{demo}}\) contains the same objects as the observation environment but over \(33\times 33\) cells. It is composed of nine rooms of \(11\times 11\) cells, separated by walls. In both environments, a trajectory stops either when the learner opens its goal door or when the maximum number of actions is reached. **Learner:** The learner's goal is to open a door as fast as possible. To model this, we use the default goal-conditioned trajectory reward function of the MiniGrid environments: \(R(\tau,g)=1-\frac{\text{length}(\tau)}{\text{max\_steps}}\) if the door of colour \(g\in\mathcal{G}\) is open at the end of trajectory \(\tau\), and \(R(\tau,g)=0\) otherwise. In \(\mathcal{M}^{\text{obs}}\), we set \(\text{max\_steps}=11^{2}=121\), and in \(\mathcal{M}^{\text{demo}}\), we use \(\text{max\_steps}=\frac{33^{2}}{2}=544\). The learner possesses either a view with dimensions \(v\times v\) cells, with \(v\in\{3,5\}\), or full observability (\(v=full\_obs\)) of the environment. We define the learner's policy as a decision tree depicted in Appendix B.1. We assume that the learner attempts to reach the corresponding key before trying to open the door, and that it acts greedily when it knows the location of the object to reach and actively explores otherwise. The greedy policy follows the shortest path to the object, computed by the \(A^{*}\) algorithm (Hart et al., 1968) within the parts of the environment that have been discovered.
The active exploration policy selects the actions that best reduce the uncertainty about the environment state. **Teachers:** As defined above in Section 3.3, we consider two teachers equipped with a ToM model of the learner, an _aligned ToM-teacher_ and a _rational ToM-teacher_ with a parameter \(\lambda\). We compare the utilities of their demonstrations to those of 5 baseline teachers: one providing an upper bound and four learner-agnostic teachers which do not leverage the past observations of the learner in their strategies for demonstration selection. The _omniscient teacher_ knows the actual goal, observation function and policy of the learner and provides the utility-optimal demonstration. It sets an upper bound on the teachers' utilities. The _reward-optimal non-adaptive teacher_ selects the demonstration in \(\mathcal{D}\) maximising the mean reward over all the possible learners without considering the teaching cost. In practice, this teacher provides the demonstration showing all the objects (keys and doors) of the environment. The _utility-optimal non-adaptive teacher_ selects the demonstration in \(\mathcal{D}\) maximising the mean utility over all possible learners. The _uniform modelling teacher_ uniformly samples a learner in \(\mathcal{L}\): it uniformly samples a goal \(g\) and a receptive field size \(v\) for the observed learner and provides the demonstration maximising the utility for \(L=(g,v)\). The _uniform sampling teacher_ selects a demonstration uniformly among the set \(\mathcal{D}\) of available demonstrations. This teacher does not have any model of the learner. **Demonstration set:** The demonstration set \(\mathcal{D}\) contains the shortest demonstrations for each goal-observation function pair \((g,v)\in\mathcal{G}\times\mathcal{V}\), showing the learner's key and door goal at a distance of at least \(v\). In addition, we generate demonstrations showing \(N\in[3,8]\) random objects (key or door) of the environment, see Appendix B.2 for details. We use a linear teaching cost with parameter \(\alpha=0.6\), normalised by the size \(l_{max}\) of the longest demonstration of \(\mathcal{D}\). For a demonstration of length \(l_{d}\), the teaching cost is \(c_{\alpha}(l_{d})=\alpha\times\frac{l_{d}}{l_{max}}\). In practice, the longest demonstration is the one showing all objects, \(N=8\). **Metrics:** A teacher is evaluated based on the measured utility of the demonstration it has selected for the observed learner \(L\), given by \(u_{\alpha}(d^{*},L)=R^{\text{demo}}(L|d^{*})-c_{\alpha}(d^{*})\). **Experiments**: We conducted \(100\) experiments for each pair \((g,v)\in\mathcal{G}\times\mathcal{V}\). The mean utilities of the demonstrations selected by the teachers for learners with a fixed receptive field size \(v\) are displayed in Figure 2 and detailed in Appendix C Table 1. They are computed over \(400\) trials with a \(95\%\) confidence interval, and we perform Student's t-tests to assess significant differences between the mean utilities of two teachers. In each trial, both the observation and demonstration environments are randomly generated, and all teachers are evaluated within the same environment pair (\(\mathcal{M}^{\text{obs}},\mathcal{M}^{\text{demo}}\)) - all teachers select a demonstration from the same demonstration set \(\mathcal{D}\), and the ToM-teachers observe the same trajectory of the learner on \(\mathcal{M}^{\text{obs}}\).
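To summarise the ToM-teacher's procedure in code, the following is a minimal sketch of the Bayesian update over candidate \((g,v)\) pairs (Equation 4) and of the expected-utility selection (Equations 5-6); `policy_prob` and `estimated_reward` are placeholders for the behavioural model \(\hat{\pi}\) and the simulated rollouts described in Section 3.3.2, and the sketch is not part of the actual implementation.

```python
import numpy as np

def tom_teacher_select(trajectory, demos, goals, views,
                       policy_prob, estimated_reward, cost):
    """Sketch of the Bayesian ToM-teacher: infer (g, v), then pick d*.

    trajectory       : observed (state, action) pairs of the learner on M_obs
    policy_prob      : callable (g, v, k, state, action) -> probability that the
                       behavioural model pi_hat takes `action` at step k for a
                       learner (g, v), given the belief it would hold so far
    estimated_reward : callable (d, g, v) -> reward of a simulated learner (g, v)
                       on M_demo after observing demonstration d
    cost             : callable d -> teaching cost c_alpha(d)
    """
    pairs = [(g, v) for g in goals for v in views]
    belief = np.full(len(pairs), 1.0 / len(pairs))      # uniform prior over (g, v)
    for k, (state, action) in enumerate(trajectory):    # Eq. (4)
        lik = np.array([policy_prob(g, v, k, state, action) for g, v in pairs])
        belief *= lik
        belief /= belief.sum()
    utilities = [                                       # Eqs. (5)-(6)
        sum(b * estimated_reward(d, g, v) for (g, v), b in zip(pairs, belief)) - cost(d)
        for d in demos
    ]
    return demos[int(np.argmax(utilities))], belief
```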
## 5 Results We provide results when the learners are observed under two conditions: for a full episode or for only their \(10\) first actions, leading to more uncertain inference about their goals and sensory capacities. ### Observing a full trajectory of the learner Figure 2 illustrates the mean utility of the demonstrations selected by each teacher, for learners with varying receptive field sizes acting in \(\mathcal{M}^{\text{obs}}\) during a full episode. Across all the considered learners with varying receptive field sizes, the demonstrations chosen by the ToM-teachers outperform those of learner-agnostic baseline teachers. As the task difficulty increases for the learner (i.e., when its receptive field size decreases), the learner requires both more informative and more specific demonstrations to achieve its goal. Consequently, having an accurate model of the learner becomes necessary to ensure the selection of helpful demonstrations. The mean utility of aligned ToM-teachers is not significantly different from that of the omniscient demonstrations (p-values \(>0.3\))1 for learners with receptive field of sizes \(3\) and \(5\). In contrast, uniform teachers select demonstrations with close-to-null mean utility for learners with a receptive field size of \(3\) and demonstrations that are four times less useful than those of the ToM-teachers for learners with receptive field size of \(5\). The utility-optimal and reward-optimal non-adaptive teachers perform at most half as well as the ToM-teachers for these learners, see Appendix C Table 1. Footnote 1: A t-test with null hypothesis \(H_{0}\): there is no significant difference between the utilities of both teachers. On the contrary, as the task becomes easier for the learners (with wider sensory capacities), the mean utilities of the demonstrations selected by learner-agnostic teachers get closer to those of the ToM and omniscient teachers' demonstrations, as the need for selecting a specific demonstration based on an accurate model of the learner decreases. In fact, with full observability, any demonstration from the demonstration set suffices for the learner to reach the goal. With a teaching cost of \(\alpha=0.6\) it is worth noting that the utility-optimal non-adaptive teacher tends to select less informative demonstrations (with low teaching cost) leading to higher mean utility for learners with full observability and lower mean utility for learners with a limited view. Selecting the demonstration maximising the mean reward over the learners proves to be too expensive and consistently results in poor utility. We further discuss the teaching cost parameter in Appendix F. The precision of the ToM-teacher's behavioural model of the learner (i.e. its policy) directly impacts the utility of the selected demonstrations. The aligned ToM-teacher selects more beneficial demonstrations on average than the rational ToM-teacher which relies on an approximation of the learner's policy, for learners with receptive field of sizes \(3\) and \(5\) (p-values \(<0.01\)) and their utilities are not significantly different for learner with full observability (p-value \(>0.15\)), see Appendix C Table 1. A high degree of accuracy of the ToM-teacher's model of the learner's behavioural policy enhances belief updates of Equation 4, resulting in more accurate modelling of the learner's internal state. 
To illustrate this, we derive in Appendix D explicit inferences regarding the learner's goal and receptive field size from ToM-teachers beliefs featuring varying degrees of accuracy. Figure 2: Mean utilities and 95% confidence interval of ToM-teachers (rational teacher with parameter \(\lambda=0.01\)) and baseline teachers for learners with varying receptive field sizes of \([3,5,full\_obs]\) observed on \(\mathcal{M}^{\text{obs}}\) during a full episode. ### Limited observation of the learner Now, instead of having access to the entire trajectory \(\tau^{\text{obs}}\) of the learner in \(\mathcal{M}^{\text{obs}}\), the teacher only has access to its first \(10\) actions, that is the partial trajectory \(\tau^{\text{obs}}[:10]\). As expected, with limited information about the learner, both ToM-teachers select demonstrations achieving mean utilities that are further away from the utility of the omniscient teacher's demonstrations. Nonetheless, the aligned ToM-teacher still outperforms the learner-agnostic teachers on average for all the considered learners, as depicted in Figure 3. However, relying solely on the hypothesis that the learner is highly rational is not enough to accurately model its internal state when having access to limited observation of its behaviour. In fact, the utility of the demonstration selected by the rational ToM-teacher with low temperature parameter \(\lambda=0.01\) decreases approximately by \(100\%\), \(75\%\) and \(25\%\) for learners with receptive field sizes of 3, \(5\) and full observability, see Appendix C Table 2. As detailed in Appendix F E, with the approximate learner's policy, the rational ToM-teacher misinterprets the learner's behaviour. This leads to incorrect conclusions about the learner's internal state and, consequently, inaccurate demonstration selection. As a result, the performance of the rational teacher is not significantly different from that of the uniform modelling teacher for learners with limited view (p-values \(>0.15\)) but significantly lower for learners with full observability (p-value \(<0.01\)). Furthermore, in this limited information context, providing the demonstration maximising the mean utility on all the learners proves to be more useful that relying on an imprecise behavioural model of the learner. For all considered learners, the utility-optimal non-adaptive teacher significantly outperforms the rational ToM-teacher (p-values \(<0.01\)), see Appendix C Table 2. ## 6 Conclusion and future works In this work, we have studied the integration of ISL mechanism for teaching learners with different goals, beliefs or sensory capacities. We integrated a Theory of Mind model using Bayesian inference into a teacher agent to infer the learner's internal state and adapt its teaching strategy. We demonstrated that leveraging this ToM model, combined with a behavioural model of the learner, is more efficient than adopting learner-agnostic teaching strategies. We also explored the limitations of ToM models with limited observation of the learner and approximate behavioural models. In summary, we have shown that machine ISL can enhance knowledge transmission between AI systems, and we are convinced that it represents a pathway toward richer and more trustworthy knowledge exchange between AI systems and humans (Gweon et al., 2023; Sigaud et al., 2022). 
There are many exciting directions for future work, particularly towards more tractable models of ToM mechanisms in higher-dimensional environments, for example, using variational methods (Zintgraf et al., 2020) or ensembling to approximate Bayesian inference. Another direction for future research is to employ reinforcement learning to train the teacher to generate the appropriate demonstration, as done in Caselles-Dupre et al. (2022), rather than selecting demonstrations from a provided set. Finally, the prior information introduced in the teacher's Bayesian ToM model of the learners, particularly through belief supports, could be reduced by employing deep neural network-based ToM models as in Rabinowitz et al. (2018). Figure 3: Mean utilities and 95% confidence interval of teachers as in Figure 2 observed on \(\mathcal{M}^{\text{obs}}\) during the first \(10\) steps of an episode (\(\tau^{\text{obs}}[:10]\)). ## Acknowledgements We thank Cedric Colas for useful discussions and feedback. This work has received funding from the European Commission's Horizon Europe Framework Programme under grant agreements \(N^{o}\) 101070381 (PILLAR-robots) and \(N^{o}\) 101070596 (euRobin), and from the European Union's Horizon 2020 ICT-48 research and innovation actions under grant agreement No 952026 (HumanE-AI-Net). This work was performed using HPC resources from GENCI-IDRIS (Grant 2022-[A0131013011]).
2301.13631
TopoBERT: Plug and Play Toponym Recognition Module Harnessing Fine-tuned BERT
Extracting precise geographical information from textual contents is crucial in a plethora of applications. For example, during hazardous events, a robust and unbiased toponym extraction framework can provide an avenue to tie the location concerned to the topic discussed by news media posts and pinpoint humanitarian help requests or damage reports from social media. Early studies have leveraged rule-based, gazetteer-based, deep learning, and hybrid approaches to address this problem. However, the performance of existing tools is deficient in supporting operations like emergency rescue, which relies on fine-grained, accurate geographic information. The emerging pretrained language models can better capture the underlying characteristics of text information, including place names, offering a promising pathway to optimize toponym recognition to underpin practical applications. In this paper, TopoBERT, a toponym recognition module based on a one dimensional Convolutional Neural Network (CNN1D) and Bidirectional Encoder Representation from Transformers (BERT), is proposed and fine-tuned. Three datasets (CoNLL2003-Train, Wikipedia3000, WNUT2017) are leveraged to tune the hyperparameters, discover the best training strategy, and train the model. Another two datasets (CoNLL2003-Test and Harvey2017) are used to evaluate the performance. Three distinguished classifiers, linear, multi-layer perceptron, and CNN1D, are benchmarked to determine the optimal model architecture. TopoBERT achieves state-of-the-art performance (f1-score=0.865) compared to the other five baseline models and can be applied to diverse toponym recognition tasks without additional training.
Bing Zhou, Lei Zou, Yingjie Hu, Yi Qiang, Daniel Goldberg
2023-01-31T13:44:34Z
http://arxiv.org/abs/2301.13631v2
# TopoBERT: Plug and Play Toponym Recognition Module Harnessing Fine-tuned BERT* ###### Abstract Extracting precise geographical information from textual contents is crucial in a plethora of applications. For example, during hazardous events, a robust and unbiased toponym extraction framework can provide an avenue to tie the location concerned to the topic discussed by news media posts and pinpoint humanitarian help requests or damage reports from social media. Early studies have leveraged rule-based, gazetteer-based, deep learning, and hybrid approaches to address this problem. However, the performance of existing tools is deficient in supporting operations like emergency rescue, which relies on fine-grained, accurate geographic information. The emerging pretrained language models can better capture the underlying characteristics of text information, including place names, offering a promising pathway to optimize toponym recognition to underpin practical applications. In this paper, TopoBERT, a toponym recognition module based on a one-dimensional Convolutional Neural Network (CNN1D) and Bidirectional Encoder Representation from Transformers (BERT), is proposed and fine-tuned. Three datasets (CoNLL2003-Train, Wikipedia3000, WNUT2017) are leveraged to tune the hyperparameters, discover the best training strategy, and train the model. Another two datasets (CoNLL2003-Test and Harvey2017) are used to evaluate the performance. Three distinguished classifiers, linear, multi-layer perceptron, and CNN1D, are benchmarked to determine the optimal model architecture. TopoBERT achieves state-of-the-art performance (f1-score=0.865) compared to the other five baseline models and can be applied to diverse toponym recognition tasks without additional training. Natural Language Processing; Geoparser; Convolutional Neural Network; Toponym Recognition; BERT ## 1 Introduction Since the emergence of social sensing, scholars have been endeavoring to sense the pulse of society with the help of satellite images, sensor networks from IoT and various forms of textual information from the Internet. Extra attention has been paid to mining knowledge from social media because people nowadays are consciously or unconsciously sharing their views towards ongoing events online, which propels social media to become one of the few agents that reflects the real-time societal awareness, reactions and impacts of particular events. This trait is a rare feature seldom shared by other forms of data sources. In the light of this feature, Avvenuti et al. presented an early earthquake detecting and warning system using Twitter data, which offers prompt detection of events [1]. Several case studies processed social media data with geocoding and sentiment analysis tools to analyze the spatial patterns of changing public awareness and emotions toward hurricanes in different phases of the disaster management cycle [2, 3]. Huang et al. scrutinized the human mobility patterns during the COVID-19 pandemic at multiple scales based on geotagged Twitter data [4]. Zhou et al. proposed VictimFinder which is capable of harvesting social media help requests during hurricanes [5]. Let alone the fact that geographical information being one of the key elements of knowledge generation, the aforementioned studies and other similar spatial analysis and modeling are highly dependent on the location information of the social media data. 
However, social media users start to pay more attention to user privacy, which results in a significant drop of the number of geotagged tweets. Simultaneously, Twitter published policies forbidding users to attach precise longitudes and latitudes to tweets. Moreover, the geographical information bound up with the social media posts might not necessarily be equivalent to the place names described in the textual content of the post. Thus, extracting location information from the textual content of social media data has inevitably become an issue that needs to be addressed. This breeds the process of geoparsing, a two-step approach which includes toponym recognition (identifying place names from texts) and toponym resolution (transforming location names to geographical coordinates). This paper focuses on the first component of geoparsing. Existing studies on toponym recognition can be categorized into four parties based on the character of the solutions, namely rule-based, gazetteer-based, statistical learning-based, and hybrid approaches. In general, statistical learning and hybrid methods that incorporate deep learning techniques render better performance than methods that solely rely on rules or gazetteers [6, 7, 8, 9]. Based on Bidirectional Long Short-Term Memory (BiLSTM), Wang et al. introduced NeuroTPR to extract place names [6]. Qi et al. extended CoreNLP and brought about an open-sourced named entity recognition python toolkit called Stanza, which is able to detect place names and support multiple languages [7]. SAVITR is a system that combines both NLP techniques and gazetteers for real-time location extraction [8]. Hu et al. addressed the incompleteness of gazetteers and fused gazetteers, rules, and deep learning to render a reliable place name extractor, GazPNE [9]. However, those studies suffer from several limitations. First, some models do not focus only on place names, so their prediction of location name extraction might be disturbed. Second, recurrent neural network based deep learning models might suffer from information vanishing problems when the input sequence gets larger and network deeper. Third, complicated deep neural networks frequently require large, annotated datasets and are time-consuming to train to achieve promising results. To address the aforementioned latent flaws, this paper proposes TopoBERT, a toponym recognition module based on a one-dimensional Convolutional Neural Network (CNN) and Bidirectional Encoder Representation from Transformers (BERT). It contributes in the following directions. First, several classifiers were tested and one feasible model and classifier combination based on the evaluation result of a standard dataset is determined. Second, TopoBERT was tested by an unseen dataset together with some other existing tools to verify its generalizability. Third, the tool is ready-to-use and the dataset we generated in this study can be used by other scholars to train, test, and compare different toponym recognition models and tools. The remainder of this paper is structured as follows. The datasets involved in fine-tuning and testing the framework, a concise introduction of the holistic design of the framework, the implementation of the framework, and the parameters used in fine-tuning the framework are detailed in section 2. The results of the experiments conducted are documented in section 3. Section 4 illustrates the potential limitations of this work and lists several future research directions. 
Section 5 summarizes the findings of this paper and presents the implications of this study. ## 2 Methodology ### Datasets In total, four datasets were used to train the module and evaluate its performance. CoNLL2003 is a shared task that concerns named entity recognition and has been widely applied to training deep learning models [10]. The data contains five label groups: persons (PER), organizations (ORG), locations (LOC), miscellaneous names (MISC), and other words that do not belong to any of these four entity groups (O). The prefixes "B-" and "I-" are used to tag the beginning of a named entity and the words that fall inside a named entity [10]. The dataset is originally divided into training, validation, and test data, denoted as CoNLL2003-Train, CoNLL2003-Validation and CoNLL2003-Test. Training data is used to train a deep learning model, validation data is used to tune the hyperparameters of the model, and test data is used to evaluate the performance of the trained model. The data distribution of each label type in the three datasets is depicted in Figures 1(a), 1(b), and 1(c), respectively. The dataset is later modified to suit the purpose of this study by relabeling all named entities as "O" except for the location entities. Around 4.1% of the tags in these datasets are location entities. WNUT2017 is a relatively smaller dataset collected from Twitter and manually annotated; its objective is to tackle the issues caused by novel, emerging, singleton named entities in noisy text [11]. It aims to support sustainable named entity recognition systems. This dataset contains seven different groups: person, location, corporation, product, creative work, group, and none of the above. Considering the main focus of this paper and the different tags used to label the dataset, it is preprocessed to retain only the location entity tags and to unify the tag symbols with CoNLL2003 (location entities are tagged with "B-LOC" or "I-LOC" while the rest are tagged with "O"). The distribution of data under each label type in the modified dataset is shown in Figure 2(a). The total number of location names in this dataset is 1140. Wiki3000 is a dataset automatically generated from Wikipedia articles using a data-production workflow proposed by Wang et al. [6]. This auto-annotation approach uses the first paragraph of Wikipedia articles, which usually contains various entities presented as hyperlinks. Each hyperlink is checked to determine whether it is associated with a geographical location; if so, the hyperlinked word is labeled as a toponym. The Wikipedia article is then divided into multiple short sentences within 280 characters, with additional strategies such as random flipping to mimic the general patterns of Twitter posts [6]. The distribution of data under each label type is shown in Figure 2(b). Harvey2017 is a dataset originally collected from the North Texas University repository (https://digital.library.unt.edu/ark:/67531/metadc993940/), which contains 7,041,866 tweets collected based on hashtag queries. It was pruned, randomly subsampled and manually annotated by Wang et al. to form a new dataset with 1000 tweets for evaluating NeuroTPR [6]. This dataset is adopted in this paper to test the performance of TopoBERT. The distribution of data under each label type is shown in Figure 2(c). Figure 1: Data distribution of the CoNLL2003 dataset. Figure 2: Data distribution of the WNUT2017, Wiki3000 and Harvey2017 datasets.
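The label remapping described above (all non-location entities collapsed to "O", and WNUT2017 tag symbols unified with CoNLL2003) is simple enough to sketch directly. The snippet below is a minimal illustration, not the authors' preprocessing code; the raw WNUT2017 tag strings ("B-location"/"I-location") are an assumption about that dataset's exact label names.

```python
def to_location_only(tags):
    """Keep only the location tags of a CoNLL-style tag sequence; map the rest to 'O'."""
    keep = {"B-LOC", "I-LOC"}
    return [t if t in keep else "O" for t in tags]

def unify_wnut_tags(tags):
    """Map WNUT2017-style location tags onto the CoNLL2003 symbols, then drop other entities."""
    mapping = {"B-location": "B-LOC", "I-location": "I-LOC"}
    return to_location_only([mapping.get(t, t) for t in tags])

# Example: a sentence whose PER/ORG entities are collapsed to 'O'.
print(to_location_only(["B-PER", "O", "B-ORG", "O", "B-LOC", "I-LOC"]))
# ['O', 'O', 'O', 'O', 'B-LOC', 'I-LOC']
```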
### Framework Design and Implementation As mentioned in Section 1, there is an acute conflict between the need for robust spatial analysis of social media or news media and the diminishing availability of geolocated textual content. Additionally, the location mentioned in the textual content of a tweet might differ from the attached geotag. A reliable and ready-to-use geoparser can mediate such conflicts. Therefore, we present a general location extractor that can be applied to social media and news media. The workflow is shown in Figure 3. The existing geotags of the data are retained, and the textual contents go through a rule-based data preprocessing module before they are fed to a zip code extractor and a place name extractor. Once the place names are pulled out, a geocoding service is applied to transform them into precise coordinates. The place name extractor is marked with an orange dashed rectangle in Figure 3 and serves as the crucial backbone of the entire workflow. Identifying location names in input sentences is a token classification task (Figure 4), which involves two parts: a language model and a classifier. It behaves similarly to how humans decide whether given words are place names. First, the language model transforms the tokenized input into a higher-dimensional space that captures the meaning of each word in its sentence context; then the classifier makes predictions based on the transformed vectors and determines whether the input word belongs to a location entity. The heart of the proposed toponym recognition module, TopoBERT, is the Bidirectional Encoder Representation from Transformers (BERT). It is structured by stacking the encoder components of the Transformer architecture and is designed to be pretrained in an unsupervised manner. BERT takes advantage of the Attention mechanism [25], which resolves the information vanishing issue that often affects recurrent neural networks such as Long Short-Term Memory [26] and Gated Recurrent Neural Networks [27] when the input sequence gets longer. Moreover, distinguished from many other bidirectional language models, such as ELMo designed by Peters et al. [28], in which the contextual representation of every word is the concatenation or summation of forward and backward representations, BERT reads the entire sequence of words at once and is trained using a Masked Language Model (MLM) approach and a Next Sentence Prediction (NSP) approach, which genuinely implements the bidirectional concept rather than combining two unidirectional ones. These two features combined facilitate better language understanding and brought BERT to the top of a number of NLP tasks under the General Language Understanding Evaluation (GLUE) benchmark [12]. Off-the-shelf pretrained BERT model weights can be separated into several categories based on the size of the model, whether upper and lower cases are taken into consideration, the targeted language, and unique training strategies (https://huggingface.co/transformers/v3.3.1/pretrained_models.html). Since place names are highly case sensitive and only the English language is involved in this study, 'bert-base-cased' and 'bert-large-cased' are selected as the candidate pretrained models to be evaluated. Figure 3: Holistic Design of Location Extraction Framework for Textual Contents. Figure 4: Demonstration of token classification workflow.
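To make the two-part token classification setup concrete, the sketch below loads a cased BERT encoder from the Hugging Face Transformers library and produces one contextual vector per word piece; the classifier discussed next operates on exactly these vectors. The example sentence and variable names are illustrative only.

```python
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-large-cased")
encoder = BertModel.from_pretrained("bert-large-cased")

# A made-up example sentence, already split into words as in the CoNLL format.
words = ["Flooding", "reported", "near", "Buffalo", "Bayou", "in", "Houston"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    hidden = encoder(**enc).last_hidden_state   # shape (1, n_word_pieces, 1024)

# word_ids() maps every word piece back to its source word (None for [CLS]/[SEP]),
# which is how word-level B-LOC / I-LOC / O labels are propagated to each piece.
print(enc.word_ids())
print(hidden.shape)
```

The `word_ids()` mapping is also what makes the word-piece label alignment described in the Training and Evaluation subsection straightforward to implement.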
The 'bert-base-cased' model comprises 12 layers, each hidden layer has 768 nodes, with 12 self-attention heads and a total of 110 million parameters. The 'bert-large-cased' model consists of 24 layers, each hidden layer has 1024 nodes, with 16 self-attention heads and 340 million parameters. The parameters are pretrained on English text from BooksCorpus (800 million words) and English Wikipedia (2,500 million words). By stacking a classifier on top of BERT, the combined model can be fine-tuned to accomplish this downstream task. A recent study showed that model performance can be enhanced by applying classifiers more complex than a simple linear classifier or a Conditional Random Field (Zhou et al., 2022). Therefore, three classifiers were examined in this study, namely a linear classifier, a multi-layer perceptron (MLP, Figure 5) and a one-dimensional CNN (CNN1D, Figure 6). The simple linear classifier connects the output of the language model to the final prediction with a softmax activation function. The MLP applied in this study contains three fully connected layers: its input layer size matches the language model's output vector size, the hidden layer has 256 nodes, and the output layer size equals the number of distinct labels in the training dataset. CNN models are competent at detecting underlying features (Zhou et al., 2022), and one-dimensional CNNs have been successfully applied to natural language processing (Xu et al., 2019; Chen et al., 2020). Since location names may share common characteristics, a CNN1D is adopted. The vector output of the language model is treated as a one-dimensional signal, to which a CNN1D with kernel size 3 and 16 output channels is applied, followed by a max pooling layer of size 2 that further generalizes the features and reduces model complexity. All channels of the max pooling output are concatenated into a single vector and fed to a fully connected MLP with a hidden layer of size 128. All model combinations were implemented in Python with pertinent packages. The dataset splitting relied on the scikit-learn library, the BERT models were implemented with the Hugging Face Transformers library (https://huggingface.co/transformers/), and the model fine-tuning pipeline was built using PyTorch functions. ### Training and Evaluation TopoBERT is envisioned to be a ready-to-use module that renders optimal performance in toponym recognition. Models with different architectures were trained and evaluated with the six datasets specified in Section 2.1 to determine the best model architecture and training strategy. The training process used CoNLL2003-Train by default and was compared against a larger dataset fusing CoNLL2003, Wiki3000, and WNUT2017. The original datasets are labelled at the word level and cannot be input to BERT directly because of BERT's word-piece encoding; doing so would otherwise lead to large numbers of out-of-vocabulary words. To tackle this issue, we first split the input data at the word level and applied the BERT word-piece tokenizer to each word. The same label was assigned to each word piece of a single word. The labeled word pieces are then merged to form the new input data that can be processed by BERT.
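Before turning to the training details, the CNN1D classification head described above can be sketched in PyTorch. This is a minimal, hedged reading of the textual description (each token embedding treated as a single-channel 1-D signal; Conv1d with kernel size 3 and 16 output channels; max pooling of size 2; a 128-unit fully connected layer), not the released TopoBERT implementation; the class name and default arguments are illustrative.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class TopoBERTCNN1D(nn.Module):
    """Sketch of a BERT encoder with a per-token CNN1D classification head."""

    def __init__(self, num_labels=3, bert_name="bert-large-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size              # 1024 for bert-large-cased
        self.conv = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3)
        self.pool = nn.MaxPool1d(kernel_size=2)
        flat = 16 * ((hidden - 2) // 2)                     # channels x pooled signal length
        self.mlp = nn.Sequential(nn.Linear(flat, 128), nn.ReLU(),
                                 nn.Linear(128, num_labels))

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        b, n, d = h.shape
        x = h.reshape(b * n, 1, d)                          # each token embedding as a 1-D signal
        x = self.pool(torch.relu(self.conv(x)))
        x = x.flatten(start_dim=1)
        return self.mlp(x).view(b, n, -1)                   # per-token logits (O, B-LOC, I-LOC)
```

A linear or MLP head can be obtained by swapping the convolution and pooling layers for fully connected layers on the token embedding, which is how the three classifier variants are compared.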
This experiment aimed to measure the performance fluctuations caused by training data size and heterogeneity. CoNLL2003-Validation was used during the training process to tune several fundamental hyperparameters such as the number of training epochs and the learning rate. The CoNLL2003-Test and Harvey2017 datasets were used to evaluate model performance. The Harvey2017 dataset was also used to benchmark TopoBERT against five prevailing toponym recognition models, namely Stanford NLP (Xu et al., 2019), spaCy (https://spacy.io/), Bidirectional LSTM-CRF (Xu et al., 2019), DM_NLP (Xu et al., 2019), and NeuroTPR (Xu et al., 2019). The parameters of the classifier component of the module were initialized with random non-zero numbers, and the BERT component was initialized with pre-trained parameters. Figure 5: TopoBERT Architecture with Multi-layer Perceptron as Classifier. Figure 6: TopoBERT Architecture with One-Dimensional Convolutional Neural Network as Classifier. The entire module was trained with the fine-tuning approach [12], and the parameters were updated using mini-batch gradient descent with early stopping. The maximum length of the input sequence was limited to 128 in this paper. The maximum number of training epochs was set to 50. As recommended by the original BERT paper, the initial learning rate and the training batch size were set to 2e-5 and 32, respectively [12]. The most commonly used loss function for multi-class classification, cross-entropy loss, was employed. AdamW, which adjusts the learning rate dynamically to accelerate parameter convergence and applies weight decay to lower the chance of overfitting, was selected as the optimizer. Warm-up steps, which use a very low learning rate for the first several weight updates, were also introduced during training to prevent sudden exposure to unseen data from drastically deviating the model. Three commonly used evaluation metrics, precision, recall, and F1-score (Equations 1-3), were applied to gauge the performance and bias of the models. Precision is the percentage of correctly identified location names (True Positives, TP) among all the location names predicted by the model, which comprise both TP and False Positives (FP). Recall is the percentage of correctly identified location names among all ground-truth location names, which comprise TP and False Negatives (FN). F1-score is the harmonic mean of precision and recall, providing a comprehensive metric of model performance. \[Precision=\frac{TP}{TP+FP}\] (Equation 1) \[Recall=\frac{TP}{TP+FN}\] (Equation 2) \[F1\text{-}score=2\times\frac{Precision\times Recall}{Precision+Recall}\] (Equation 3) The outputs of the BERT models are at the word-piece level; word pieces are concatenated using the special prefix '\(\#\#\)', and word-level labels are assigned based on the starting word piece of each word. The evaluation metrics are based on per-token scores. Additionally, location name entities consist of two label types (B-LOC and I-LOC). To gauge the comprehensive performance of the model on toponym recognition, the evaluation metrics were calculated using a micro-average approach, which computes a global average of precision, recall, and F1-score by counting the total numbers of TP, FP and FN over both classes, namely "B-LOC" and "I-LOC".
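The micro-averaged, per-token scoring just described can be written down in a few lines. The function below is an illustrative sketch that pools TP/FP/FN over the B-LOC and I-LOC classes as in Equations 1-3; it is not the evaluation script used by the authors.

```python
def micro_prf(gold, pred, classes=("B-LOC", "I-LOC")):
    """Per-token micro-averaged precision/recall/F1 over the location classes,
    pooling TP/FP/FN across B-LOC and I-LOC as in Equations 1-3."""
    tp = fp = fn = 0
    for c in classes:
        for g, p in zip(gold, pred):
            if p == c and g == c:
                tp += 1
            elif p == c and g != c:
                fp += 1
            elif g == c and p != c:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: one correct B-LOC, one missed I-LOC, one spurious B-LOC.
print(micro_prf(["B-LOC", "I-LOC", "O", "O"], ["B-LOC", "O", "O", "B-LOC"]))
# (0.5, 0.5, 0.5)
```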
## 3 Results and Analysis The first step of the experiments aimed at determining the optimal pretrained parameters for the BERT model. We hypothesize that larger models outperform smaller models. To verify this hypothesis, the performance of the models initialized with 'bert-base-cased' and 'bert-large-cased', each with a linear classifier stacked on top, was tested. The results are displayed in Table 1. These two models were trained with CoNLL2003-Train and evaluated with CoNLL2003-Test. Compared to 'bert-base-cased', the precision of the prediction increased from 0.900 to 0.934 by using 'bert-large-cased', while the recall remained almost static. The F1-scores show that 'bert-large-cased' rendered better results, which is consistent with the original BERT paper [12] and validates our initial hypothesis. Therefore, 'bert-large-cased' was used in all follow-up experiments.

Table 1: Evaluation results for testing on different pretrained parameters.

| BERT Model | Classifier | Precision | Recall | F1-score |
| --- | --- | --- | --- | --- |
| bert-base-cased | Linear | 0.900 | **0.904** | 0.902 |
| bert-large-cased | Linear | **0.934** | 0.901 | **0.917** |

The second step of the experiments aimed to measure the influence of the training data and determine the optimal classifier. The model performances were evaluated using two different datasets, CoNLL2003-Test and Harvey2017. We hypothesize that (a) the model with the CNN1D classifier yields better results and (b) models trained with larger datasets perform better in place name recognition. Table 2 and Table 3 list the evaluation metrics of all the tests.

Table 3: Evaluation results with the Harvey2017 dataset for testing on training data variation and classifier types.

| Training Data | Classifier | Precision | Recall | F1-score |
| --- | --- | --- | --- | --- |
| CoNLL2003 | Linear | 0.895 | 0.804 | 0.847 |
| CoNLL2003 | MLP | 0.885 | 0.811 | 0.846 |
| CoNLL2003 | CNN1D | **0.898** | **0.835** | **0.865** |
| Combined | Linear | 0.872 | 0.589 | 0.703 |
| Combined | MLP | 0.932 | 0.541 | 0.685 |
| Combined | CNN1D | **0.941** | **0.668** | **0.781** |

"CoNLL2003" under the Training Data column refers to the CoNLL2003-Train dataset, and "Combined" refers to the dataset merging CoNLL2003, Wiki3000 and WNUT2017.

In Table 2, when models were trained with CoNLL2003-Train, the one with a simple linear classifier produced the best precision (0.934), and the one with CNN1D produced the best recall (0.920) and F1-score (0.921). MLP performed the worst among the three classifiers. When models were trained with the combined dataset, the model with CNN1D outperformed the rest in all three metrics, with precision of 0.942, recall of 0.916, and F1-score of 0.929. The one with a linear classifier produced the worst results, with an F1-score of 0.866. In Table 3, when models were trained with CoNLL2003-Train, the one with the CNN1D classifier outperformed the rest with precision of 0.898, recall of 0.835, and F1-score of 0.865. When models were trained with the combined dataset, the model with CNN1D again performed best, with precision of 0.941, recall of 0.668, and F1-score of 0.781.
The models with MLP worked slightly worse than the ones with linear classifiers. These results support hypothesis (a), that models with CNN1D achieve the best performance, and show that more complicated classifiers such as a multi-layer perceptron do not necessarily render better results. However, when viewing Tables 2 and 3 together, the metrics indicate that the models trained with the combined dataset generally performed worse than the ones trained with only CoNLL2003-Train. This contradicts hypothesis (b), that models trained with larger datasets perform better. After scrutinizing the datasets used for training, we noticed some inconsistencies in their labeling criteria. Some examples are listed in Table 4, and the unexpected result can be attributed to the heterogeneity of the datasets. For the application illustrated in Figure 7, tweets are collected through the Twitter developer API. The locations of those tweets without geotags are retrieved by running TopoBERT and the Google geocoding service. The module also has the potential to be used for location name detection in news media, to pinpoint the discussed topics [14; 15] and help identify fake news [16]. This paper concentrates mainly on designing a novel architecture for a reliable and versatile toponym recognition module. However, performance can be further enhanced by addressing the following issues. First, the models are trained and evaluated on well-prepared datasets. This can be regarded as a best-case scenario compared to real-life situations. Place name usage can be highly ambiguous and irregular, especially on social media platforms. Typos are extremely common and might cause out-of-vocabulary words in language models. Place name abbreviations such as "Boulevard" and "blvd", "Drive" and "Dr.", "Street" and "St." and so forth are frequently used interchangeably. People might unconsciously ignore correct upper-case and lower-case usage, such as "college station" versus "College Station" or "mexico" versus "MEXICO". Meticulous data preprocessing methods can be incorporated to tackle this problem and achieve better overall performance. Second, several rule-based approaches can be leveraged to further boost performance. Inspired by the success of hybrid models [9], sets of grammar rules based on the composition of nouns, determiners, adjectives, conjunctions, numbers and possessive endings can be designed [17]. Additionally, commonly used gazetteers such as OpenStreetMap and GeoNames can be used as extra named entity matching criteria, which will increase the True Positives of the model. Regional criteria can be appended to the model while identifying place names by taking country names, state names, county names, or bounding boxes as input variables, allowing the model to add constraints during processing. The top-N words from word embedding models [9; 35] that are not place names can be applied to filter words during data preprocessing, which will to some extent reduce the False Positives of the prediction.
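As an illustration of the kind of rule-based preprocessing suggested above, the snippet below expands a few common street-type abbreviations and restores naive title casing. The abbreviation table and example tweet are made up for illustration; a production system would need a much larger, carefully disambiguated rule set (e.g., "Dr." as Drive versus Doctor).

```python
import re

# Illustrative, hand-written rules only.
ABBREVIATIONS = {
    r"\bblvd\.?\b": "Boulevard",
    r"\bst\.?\b": "Street",
    r"\bave\.?\b": "Avenue",
    r"\bhwy\.?\b": "Highway",
}

def normalize_place_text(text):
    """Expand common street-type abbreviations and title-case lower-cased tokens."""
    out = text
    for pattern, full in ABBREVIATIONS.items():
        out = re.sub(pattern, full, out, flags=re.IGNORECASE)
    # Naive recasing: capitalize tokens that are entirely lower case.
    return " ".join(w.capitalize() if w.islower() else w for w in out.split())

print(normalize_place_text("flooding on buffalo speedway & bissonnet st"))
# Flooding On Buffalo Speedway & Bissonnet Street
```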
Third, due to the data-hungry nature of deep learning, data availability and quality are inevitable topics whenever large, complicated deep learning models are involved. It is common knowledge in the deep learning community that larger datasets lead to better generalizability and performance. However, this statement fails to hold in this paper because the larger dataset is derived from several distinct smaller datasets, each labeled under its own annotation scheme. Therefore, there is an urgent need to define criteria and build unified datasets for toponym recognition model training, evaluation and benchmarking. Such a dataset can be manually curated from existing datasets and augmented using rule-based methods, gazetteers or Generative Adversarial Networks [18; 19; 20]. Fourth, fine-tuned language models can be few-shot or zero-shot learners, which means that the models can be applied directly to certain downstream tasks with very little or even no further training [21; 22; 23]. This is because advanced language models better capture the meaning of text, a claim also underpinned by the results of this paper, which leverages BERT to boost the module's capability. Therefore, incorporating very large models such as GPT-3 [24] might lead to another round of performance enhancement. ## 5 Conclusion To further enhance the performance of toponym recognition through better natural language understanding, TopoBERT, which incorporates the pretrained language model BERT, is introduced. Experiments on the pretrained parameters, training dataset combinations, and model architecture reveal the following findings. First, toponym recognition performance is sensitive to the architecture of the pre-trained language model and the classifier. Models initialized with the larger BERT model ('bert-large-cased') show an advantage over models initialized with the basic BERT model ('bert-base-cased'). More complicated classifiers like MLP do not necessarily win over simple linear classifiers. Second, increasing the training data size produces worse results, especially for recall, due to data heterogeneity. The model trained on the single CoNLL2003-Train dataset and topped with a CNN1D classifier yields the best results on both the CoNLL2003-Test and Harvey2017 datasets. Finally, the developed TopoBERT module outperforms existing models in recognizing place names in texts. The final TopoBERT, with the optimal model architecture and training strategy, produces reliable toponym predictions and achieves an F1-score of 0.865 on the Harvey2017 dataset, surpassing other prevailing models or tools by at least 18%. In a nutshell, the findings of this paper help determine the optimal model structure for toponym recognition tasks and call for a large standardized dataset labeled under a unified scheme to support model training and benchmarking. A plug-and-play module is implemented and open-sourced to support pertinent applications and similar research. Figure 7: Toponym recognition applied to locate Twitter posts during disasters. ## Acknowledgments The research is supported by a project funded by the U.S. National Science Foundation: Reducing the Human Impacts of Flash Floods - Development of Microdata and Causal Model to Inform Mitigation and Preparedness (Award No. 1931301).
2309.07066
CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion Prediction
Human motion prediction is important for mobile service robots and intelligent vehicles to operate safely and smoothly around people. The more accurate predictions are, particularly over extended periods of time, the better a system can, e.g., assess collision risks and plan ahead. In this paper, we propose to exploit maps of dynamics (MoDs, a class of general representations of place-dependent spatial motion patterns, learned from prior observations) for long-term human motion prediction (LHMP). We present a new MoD-informed human motion prediction approach, named CLiFF-LHMP, which is data efficient, explainable, and insensitive to errors from an upstream tracking system. Our approach uses CLiFF-map, a specific MoD trained with human motion data recorded in the same environment. We bias a constant velocity prediction with samples from the CLiFF-map to generate multi-modal trajectory predictions. In two public datasets we show that this algorithm outperforms the state of the art for predictions over very extended periods of time, achieving 45% more accurate prediction performance at 50s compared to the baseline.
Yufei Zhu, Andrey Rudenko, Tomasz P. Kucner, Luigi Palmieri, Kai O. Arras, Achim J. Lilienthal, Martin Magnusson
2023-09-13T16:26:48Z
http://arxiv.org/abs/2309.07066v1
# CliFF-LHMP: Using Spatial Dynamics Patterns for ###### Abstract Human motion prediction is important for mobile service robots and intelligent vehicles to operate safely and smoothly around people. The more accurate predictions are, particularly over extended periods of time, the better a system can, e.g., assess collision risks and plan ahead. In this paper, we propose to exploit _maps of dynamics_ (MoDs, a class of general representations of place-dependent spatial motion patterns, learned from prior observations) for long-term human motion prediction (LHMP). We present a new MoD-informed human motion prediction approach, named CliFF-LHMP, which is data efficient, explainable, and insensitive to errors from an upstream tracking system. Our approach uses CliFF-map, a specific MoD trained with human motion data recorded in the same environment. We bias a constant velocity prediction with samples from the CLiFF-map to generate multi-modal trajectory predictions. In two public datasets we show that this algorithm outperforms the state of the art for predictions over very extended periods of time, achieving 45% more accurate prediction performance at 50s compared to the baseline. ## I Introduction Accounting for long-term human motion prediction (LHMP) is an important task for autonomous robots and vehicles to operate safely in populated environments [1]. Accurate prediction of future trajectories of surrounding people over longer periods of time is a key skill to improve motion planning, tracking, automated driving, human-robot interaction, and surveillance. Long-term predictions are useful to associate observed tracklets in sparse camera networks, or inform the robot of the long-term environment dynamics on the path to its goal [2, 3], for instance when following a group of people. Very long-term predictions are useful for global motion planning to produce socially-aware unobtrusive trajectories, and for coordinating connected multi-robot systems with sparse perception fields. Human motion is complex and may be influenced by several hard-to-model factors, including social rules and norms, personal preferences, and subtle cues in the environment that are not represented in geometric maps. Accordingly, accurate motion prediction is very challenging [1]. Prediction on the very long-term scale (i.e., over \(20\,\mathrm{s}\) into the future) is particularly hard as complex, large-scale environments influence human motion in a way that cannot be summarized and contained in the current state of the moving person or the observed interactions but rather have to be modelled explicitly [4]. In this paper, we examine and address the novel task of very long-term human motion prediction [5], aiming to predict human trajectories for up to \(50\,\mathrm{s}\) into the future. Prior works have addressed human motion prediction using physics-, planning- and pattern-based approaches [1]. The majority of existing approaches, however, focuses on relatively short prediction horizons (up to \(10\,\mathrm{s}\)) [6] and the popular ETH-UCY benchmark uses \(4.8\,\mathrm{s}\)[1, 7, 8, 9]. To predict very long-term human motion, we exploit _maps of dynamics_ (MoDs) that encode human dynamics as a feature of the environment. There are several MoD approaches for mapping velocities [10, 11, 12, 13, 14]. In this work, we use Circular Linear Flow Field map (CLiFF-map) [12], which captures multimodal statistical information about human flow patterns in a continuous probabilistic representation over velocities. 
The motion patterns represented in a CLiFF-map implicitly avoid collisions with static obstacles and follow the topological structure of the environment, e.g., capturing the dynamic flow through a hall into a corridor (see Fig. 1). In this paper we present a novel, MoD-informed prediction approach (CLiFF-LHMP)1 that predicts stochastic trajectories by sampling from a CLiFF-map to guide a velocity filtering model [6]. Examples of prediction results are shown in Fig. 1. Footnote 1: The approach is available at [https://github.com/test-bai-cpu/CLiFF-LHMP](https://github.com/test-bai-cpu/CLiFF-LHMP) In qualitative and quantitative experiments we demonstrate our CLiFF-LHMP approach is 45% more accurate than the baseline at \(50\,\mathrm{s}\), with average displacement error (ADE) Fig. 1: Long-term (\(50\,\mathrm{s}\)) motion prediction result obtained with CLiFF-LHMP for one person in the ATC dataset. **Red** line: ground truth trajectory. Green line: observed trajectory. **Blue** lines: predicted trajectories. The CLiFF-map is shown with colored arrows. below \(5\,\mathrm{m}\) up to \(50\,\mathrm{s}\). In contrast to prior art in long-term environment-aware motion prediction [4], our method does not make any assumptions on the optimality of human motion and instead generalizes the features of human-space interactions from the learned MoD. Furthermore, our method does not require a list of goals in the environment as input, in contrast to prior planning-based prediction methods. Finally, our method can flexibly estimate the variable time end-points of human motion, predicting both short- and long-term trajectories, in contrast to the prior art which always predicts up to a fixed prediction horizon. The paper is structured as follows: we review related work in Sec. II, describe the proposed approach in Sec. III, present our evaluation in Sec. IV, discuss the results in Sec. V and conclude in Sec. VI. ## II Related Work Human motion prediction has been studied extensively in recent years. With different prediction horizons, the human motion prediction problem can be divided into short-term (\(1\)-\(2\,\mathrm{s}\)), long-term (up to \(20\,\mathrm{s}\)) [1], and very long-term (which we define as over \(20\,\mathrm{s}\)). Several approaches address long-term motion prediction, e.g., full-body motion [5] or in the context of vehicle routing and GPS positioning [15, 16], but, to the best of our knowledge, very long-term prediction of dense navigation trajectories has not been addressed before. One approach to predict long-term human motion is to account for various semantic attributes of the static environment. For instance, prior knowledge of potential goals in the environment can be used in planning-based methods. Ziebart et al. [17] and Karasev et al. [18] propose planning MDP-based approaches for long-term goal-directed global motion prediction. Rudenko et al. [4] extends this line of work by accounting for local social interactions, which is shown to outperform prior art in the long-term map-aware perspective. Another popular approach to make long-term predictions is using clustering to represent observed long-term motion patterns, e.g., using expectation-maximization [19]. Chen et al. [20] use constrained gravitational clustering for dynamically grouping the observed trajectories, learning also how motion patterns change over time. Bera et al. [21] learn global and local motion patterns using Bayesian inference in real-time. 
One shortcoming of clustering-based methods is that they depend on complete trajectories as input. In many cases, e.g. in cluttered environments or from a first-person perspective [22], it is difficult to observe long trajectories, or cluster shorter tracklets and incomplete trajectories in a meaningful way. Clustering-based methods directly model the distribution over full trajectories and are non-sequential. By contrast, transition-based approaches [23, 24, 25, 26, 27] describe human motion with causally conditional models and generate sequential predictions from learned local motion patterns. Further, there are physics-based approaches that build a kinematic model without considering other forces that govern the motion. The constant velocity model (CVM) is a simple yet potent approach to predict human motion. Scholler et al. [28] have shown CVM to outperform several state-of-the-art neural predictors at the \(4.8\,\mathrm{s}\) prediction horizon. On the other hand, CVM is not reliable for long-term prediction as it ignores all environment information. Finally, many neural network approaches for motion prediction have been presented in recent years, based on LSTMs [29], GANs [30], CNNs [31], CVAEs [32] and transformers [33]. Most of these approaches focus on learning to predict stochastic interactions between diverse moving agents in the short-term perspective in scenarios where the effect of the environment topology and semantics is minimal. Our approach, on the other hand, targets specifically the long-term perspective, where the environment effects become critical for making accurate predictions. Our approach to motion prediction leverages maps of dynamics (MoDs), which encode motion as a feature of the environment by building spatio-temporal models of the patterns followed by dynamic objects (such as humans) in the environment [14, 12]. There are several approaches for building maps of dynamics from observed motion. Some MoDs represent human dynamics in occupancy grid maps [24]. Another type of MoDs clusters human trajectories as mentioned above [19]. Chen et al. [34] present an approach that uses a dictionary learning algorithm to develop a part-based trajectory representation. The above mentioned MoDs encode the direction but not the speed of motion. MoDs can also be based on mapping sparse velocity observations into flow models, which has the distinct advantage that the MoD can be built from incomplete or spatially sparse data. An example of this class of MoDs is the probabilistic Circular-Linear Flow Field map (CLiFF-map) [12] that we use in this paper. CLiFF-map uses a Gaussian mixture model (GMM) to describe multimodal flow patterns at each location. In this paper, we use sampled directions from the CLiFF-map to predict stochastic long-term human motion. A method similar to ours is presented in Barata et al. [35]. It constructs a vector field that represents the most common direction at each point and predicts human trajectories by inferring the most probable sequence through this vector field. By contrast, our approach uses a probabilistic vector field that represents speed and direction jointly in a multimodal distribution. Further, the evaluation in Barata et al. [35] assumes a fixed prediction horizon of \(4.8\,\mathrm{s}\), whereas we show our approach to estimate human motion more accurately than the state of the art for up to \(50\,\mathrm{s}\). ## III Method In this section, we first describe the CLiFF-map representation for site-specific motion patterns (Sec. 
III-A) and then present the CLiFF-LHMP approach for single-agent long-term motion prediction exploiting the information accumulated in a CLiFF-map (Sec. III-B). ### _Circular-Linear Flow Field Map (CLiFF-map)_ To predict human trajectories we exploit the information about local flow patterns represented in a CLiFF-map as a multimodal, continuous distribution over velocities. CLiFF-map [12] is a probabilistic framework for mapping velocity observations (independently of their underlying physical processes), i.e., essentially a generalization of a vector field into a Gaussian mixture field. Each location in the map is associated with a Gaussian mixture model (GMM). A CLiFF-map represents motion patterns based on local observations and estimates the likelihood of motion at a given query location. CLiFF-maps represent speed and direction jointly as velocity \(\mathbf{V}=[\theta,\rho]^{T}\) using direction \(\theta\) and speed \(\rho\), where \(\rho\in\mathbb{R}^{+}\), \(\theta\in[0,2\pi)\). As the direction \(\theta\) is a circular variable and the speed is linear, a mixture of _semi-wrapped_ normal distributions (SWNDs) is used in the CLiFF-map. At a given location, the semi-wrapped probability density function (PDF) over velocities can be visualized as a function on a cylinder. Direction values \(\theta\) are wrapped on the unit circle and the speed \(\rho\) runs along the length of the cylinder. An SWND \(\mathcal{N}_{\mathbf{\Sigma},\mathbf{\mu}}^{SW}\) is formally defined as \(\mathcal{N}_{\mathbf{\Sigma},\mathbf{\mu}}^{SW}(\mathbf{V})=\sum_{k\in\mathbb{Z}}\mathcal{N}_{\mathbf{\Sigma},\mathbf{\mu}}([\theta,\rho]^{T}+2\pi[k,0]^{T})\), where \(\mathbf{\Sigma},\mathbf{\mu}\) denote the covariance matrix and mean value of the directional velocity \((\theta,\rho)^{T}\), and \(k\) is a winding number. Although \(k\in\mathbb{Z}\), the PDF can be approximated adequately by taking \(k\in\{-1,0,1\}\) for practical purposes [36]. To preserve the multi-modal characteristic of the flow, a semi-wrapped Gaussian mixture model (SWGMM) is used, which is a PDF represented as a weighted sum of \(J\) SWNDs: \(p(\mathbf{V}|\mathbf{\xi})=\sum_{j=1}^{J}\pi_{j}\mathcal{N}_{\mathbf{\Sigma}_{j},\mathbf{\mu}_{j}}^{SW}(\mathbf{V})\), where \(\mathbf{\xi}=\{\xi_{j}=(\mathbf{\mu}_{j},\mathbf{\Sigma}_{j},\pi_{j})|j\in\mathbb{Z}^{+}\}\) denotes a finite set of components of the SWGMM, and \(\pi_{j}\) denotes the mixing factor satisfying \(0\leq\pi_{j}\leq 1\). ### _Human Motion Prediction Using CLiFF-map_ We frame the task of predicting a person's future trajectory as inferring a sequence of future states. The algorithm is presented in Alg. 1. Given an observation history of \(O_{p}\) past states of a person and a CLiFF-map \(\Xi\), the algorithm predicts \(T_{p}\) future states. The length of the observation history is \(O_{s}\in\mathbb{R}^{+}\) seconds, equivalent to \(O_{p}>0\) observation time steps. With the current time-step denoted as the integer \(t_{0}\geq 0\), the sequence of observed states is \(\mathcal{H}=\langle s_{t_{0}-1},...,s_{t_{0}-O_{p}}\rangle\), where \(s_{t}\) is the state of a person at time-step \(t\). A state is represented by 2D Cartesian coordinates \((x,y)\), speed \(\rho\) and direction \(\theta\): \(s=(x,y,\rho,\theta)\).
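Before turning to Alg. 1, the SWGMM density defined above is straightforward to evaluate numerically. The sketch below approximates the semi-wrapped normal with winding numbers \(k\in\{-1,0,1\}\) as stated in the text; the toy mixture components are invented for illustration and are not taken from a real CLiFF-map.

```python
import numpy as np
from scipy.stats import multivariate_normal

def swnd_pdf(theta, rho, mu, cov, windings=(-1, 0, 1)):
    """Semi-wrapped normal density over velocity V = [theta, rho]^T,
    wrapping the angular component with k in {-1, 0, 1}."""
    return sum(multivariate_normal.pdf([theta + 2 * np.pi * k, rho], mean=mu, cov=cov)
               for k in windings)

def swgmm_pdf(theta, rho, components):
    """CLiFF-map SWGMM density: weighted sum of SWNDs.
    components: list of (pi_j, mu_j, cov_j) tuples."""
    return sum(pi * swnd_pdf(theta, rho, mu, cov) for pi, mu, cov in components)

# Toy SWGMM with two flow modes (illustrative values only).
components = [
    (0.7, np.array([0.5 * np.pi, 1.2]), np.diag([0.1, 0.05])),
    (0.3, np.array([1.5 * np.pi, 0.9]), np.diag([0.2, 0.05])),
]
print(swgmm_pdf(0.5 * np.pi, 1.2, components))
```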
``` Input:\(\mathcal{H}\), \(x_{t_{0}}\), \(y_{t_{0}},\Xi\) Output:\(\mathcal{T}\) 1\(\mathcal{T}=\{\}\) 2\(\rho_{\mathrm{obs}},\theta_{\mathrm{obs}}\leftarrow\) getObservedVelocity(\(\mathcal{H}\)) 3\(s_{t_{0}}=(x_{t_{0}},y_{t_{0}},\rho_{\mathrm{obs}},\theta_{\mathrm{obs}})\) 4for\(t=t_{0}+1\), \(t_{0}+T_{p}\)do 5\(x_{t},y_{t}\leftarrow\) getNewPosition(\(s_{t-1}\)) 6\(\theta_{s}\leftarrow\) sampleDirectionFromCLiFFmap(\(x_{t},y_{t},\Xi\)) 7(\(\rho_{t}\), \(\theta_{t}\)) \(\leftarrow\) predictVelocity(\(\theta_{s}\), \(\rho_{t-1}\), \(\theta_{t-1}\)) 8\(s_{t}\leftarrow(x_{t},y_{t},\rho_{t},\theta_{t})\) 9\(\mathcal{T}\leftarrow\mathcal{T}\cup s_{t}\) 10 return\(\mathcal{T}\) ``` **Algorithm 1**CLiFF-LHMP From the observed sequence \(\mathcal{H}\), we derive the observed speed \(\rho_{\mathrm{obs}}\) and direction \(\theta_{\mathrm{obs}}\) at time-step \(t_{0}\) (line 2 of Alg. 1). Then the current state becomes \(s_{t_{0}}=(x_{t_{0}},y_{t_{0}},\rho_{\mathrm{obs}},\theta_{\mathrm{obs}})\) (line 3 of Alg. 1). The values of \(\rho_{\mathrm{obs}}\) and \(\theta_{\mathrm{obs}}\) are calculated as a weighted sum of the finite differences in the observed states, as in the recent ATLAS benchmark [6]. With the same parameters as in [6], the sequence of observed velocities is weighted with a zero-mean Gaussian kernel with \(\sigma=1.5\) to put more weight on more recent observations, such that \(\rho_{\mathrm{obs}}=\sum_{t=1}^{O_{p}}v_{t_{0}-t}g(t)\) and \(\theta_{\mathrm{obs}}=\sum_{t=1}^{O_{p}}\theta_{t_{0}-t}g(t)\), where \(g(t)=(\sigma\sqrt{2\pi}e^{\frac{1}{2}(\frac{t}{2})^{2}})^{-1}\). Given the current state \(s_{t_{0}}\), we estimate a sequence of future states. Similar to past states, future states are predicted within a time horizon \(T_{s}\in\mathbb{R}^{+}\)\(\mathrm{s}\). \(T_{s}\) is equivalent to \(T_{p}>0\) prediction time steps, assuming a constant time interval \(\Delta t\) between two predictions. Thus, the prediction horizon is \(T_{s}=T_{p}\Delta t\). The predicted sequence is then denoted as \(\mathcal{T}=\langle s_{t_{0}+1},s_{t_{0}+2},...,s_{t_{0}+T_{p}}\rangle\). To estimate \(\mathcal{T}\), for each prediction time step, we sample a direction from the CLiFF-map at the current position (\(x_{t}\), \(y_{t}\)) to bias the prediction with the learned motion patterns represented by the CLiFF-map. The main steps for each iteration are shown in lines 5-9 of Alg. 1. For each iteration, we first compute the predicted position \((x_{t},y_{t})\) at time step \(t\) from the state at the previous time step Fig. 2: Steps of sampling a direction \(\theta_{s}\) from the CLiFF-map. **(a)** CLIFF-map built from the ATC data. The location to sample from is marked with an orange arrow. **(b)** Selection of SWGMMs in the CLiFF-map: The red circle contains all SWGMMs within \(r_{s}\) distance to the sampling location. From these SWGMMs, the SWGMM with the highest motion ratio is selected (marked with a blue circle). **(c)** The SWGMM distribution in the selected location wrapped on a unit cylinder. The speed is represented by the position along the \(\rho\) axis and the direction is \(\theta\). The probability is represented by the distance from the surface of the cylinder. A velocity vector (marked with a red arrow) is sampled from this SWGMM. **(d)** The direction value \(\theta_{s}\) of the sampled velocity is shown in the sampled direction and marked with an orange circle. (line 5 of Alg. 
1): \[\begin{split} x_{t}&=x_{t-1}+\rho_{t-1}\cos\theta_{t-1} \Delta t,\\ y_{t}&=y_{t-1}+\rho_{t-1}\sin\theta_{t-1}\Delta t, \end{split} \tag{1}\] Afterwards, we estimate the new speed and direction using constant velocity prediction biased by the CLiFF-map. The bias impacts only the estimated direction of motion, speed is assumed to be unchanging. To estimate direction at time \(t\), we sample a direction from the CLiFF-map at location \((x_{t},y_{t})\) in the function sampleDirectionFromCLiFFmap() (line 6 of Alg. 1). Alg. 2 outlines its implementation. The inputs of Alg. 2 are: the sample location \((x,y)\) and the CLiFF-map \(\Xi\) of the environment. The sampling process is illustrated in Fig. 2. To sample a direction at location \((x,y)\), from \(\Xi\), we first get the SWGMMs \(\Xi_{\rm near}\) whose distances to \((x,y)\) are less than the sampling radius \(r_{s}\) (line 1 of Alg. 2). In a CLiFF-map, each SWGMM is associated with a motion ratio. To sample from the location with the highest intensity of human motions, in line 2, from \(\Xi_{\rm near}\), we select the SWGMM \(\xi\) with highest motion ratio. In line 3 of Alg. 2, from \(\xi\), an SWND is sampled from the selected SWGMM, based on the mixing factor \(\pi\). A velocity is drawn randomly from the sampled SWND. Finally, the direction of the sampled velocity is returned and used for motion prediction. With the direction sampled from the CLiFF-map, we predict the velocity (\(\rho_{t}\), \(\theta_{t}\)) in line 7 of Alg. 1 assuming that a person tends to continue walking with the same speed as in the last time step, \(\rho_{t}=\rho_{t-1}\), and bias the direction of motion with the sampled direction \(\theta_{s}\) as: \[\theta_{t}=\theta_{t-1}+(\theta_{s}-\theta_{t-1})\cdot K(\theta_{s}-\theta_{t- 1}), \tag{2}\] where \(K(\cdot)\) is a kernel function that defines the degree of impact of the CLiFF-map. We use a Gaussian kernel with a parameter \(\beta\) that represents the kernel width: \[K(x)=e^{-\beta\left\|x\right\|^{2}}. \tag{3}\] An example of velocity prediction results is shown in Fig. 3. With kernel \(K\), we scale the CLiFF-map term by the difference between the direction sampled from the CLiFF-map and the current direction according to the CVM. The sampled direction is trusted less if it deviates more from the current direction. A larger value of \(\beta\) makes the proposed method behave more like a CVM, and with a smaller value of \(\beta\), the prediction will follow the CLiFF-map more closely. In the end of each iteration, we add \(s_{t}\) to the predicted trajectory \(\mathcal{T}\) (line 9 of Alg. 1) and update \(t\) for the next iteration. After iterating for \(T_{p}\) times, the output is a sequence \(\mathcal{T}\) of future states that represents the predicted trajectory. ## IV Experiments This section describes the experimental setup for qualitative and quantitative evaluation of our CLiFF-LHMP approach. Accurate map-aware long-term motion predictions are typically addressed with Markov Decision Process (MDP) based methods [17, 18, 37, 38, 4]. Among them, as the baseline for CLiFF-LHMP, we chose the recent IS-MDP approach [4]. We also compare our method with the constant velocity predictor [28, 6]. We evaluate the predictive performance using the following two real-world datasets: 1. **THOR**[39]: This dataset captures human motion in a room with static obstacles. It includes two settings: with one obstacle (denoted as THOR1, see the top row in Fig. 
9) and with three obstacles (denoted as THOR3, see the bottom row in Fig. 9). The size of the room for data collection is 8.4\(\times\)18.8 \(\,\mathrm{m}\). 2. **ATC**[40]: This dataset contains trajectories recorded in a shopping mall in Japan. The dataset covers a large indoor environment with a total area of around \(900\,\mathrm{m}^{2}\). The map of the environment is shown in Fig. 1. THOR1 and THOR3 both include four rounds of collected data. We use the first round to build the CLiFF-map and use the remaining three rounds for evaluation. After filtering out short trajectories (shorter than the observation horizon \(O_{s}\)) for evaluation, there are in total 247 trajectories in the THOR1 dataset and 327 trajectories in the THOR3 dataset. This gives us a train-to-test ratio of about 1 to 3 in both THOR1 and THOR3. The ATC dataset consists of 92 days in total. For building the CLiFF-map, we used the data from the first day (Oct. 24th, 2012). From the remaining 91 days, again after filtering out trajectories shorter than the observation horizon \(O_{s}\), we use 1 803 303 trajectories that have continuous motion. We downsampled both datasets to \(2.5\,\mathrm{Hz}\). For observation, we take \(3.2\,\mathrm{s}\) (the first 8 positions) of the trajectory and use the remaining (up to \(50\,\mathrm{s}\) or 125 positions) as the prediction ground truth. In the parameter analysis, we also evaluate the effect of setting the observation horizon to different values. Given the area covered by the ATC dataset (\(\sim\)\(900\,\mathrm{m}^{2}\)) and the THOR dataset (\(\sim\)\(150\,\mathrm{m}^{2}\)), the size and number of obstacles in the THOR dataset, and the trajectory lengths available in the datasets, we selected the parameters shown in Table I for our quantitative and qualitative experiments. Because the size of obstacles in the THOR setting is less than \(1\,\mathrm{m}\), we set the grid resolution to \(0.5\,\mathrm{m}\) when building the CLiFF-map from the THOR dataset, in contrast to \(1\,\mathrm{m}\) in the ATC dataset. Also, we set the prediction time step \(\Delta t\) to \(0.4\,\mathrm{s}\) for the cluttered THOR dataset, in contrast to \(1\,\mathrm{s}\) for the ATC dataset. In the parameter analysis we evaluate the impact of selecting \(\Delta t\) on prediction accuracy. The sampling radius \(r_{s}\) and kernel parameter \(\beta\) are the main parameters in CLiFF-LHMP. The value of \(r_{s}\) is set to a multiple of the CLiFF-map grid resolution. For biasing the current direction with the sampled one, we use the default value of \(\beta=1\) for both datasets. Fig. 3: Example predictions that visualize the adaptive influence of the CLiFF-map and the constant velocity model on the prediction, based on the sampled direction. **Green** dots show the observed past states \(\mathcal{H}\), **red** dots show the ground truth future states and **blue** dots show the predicted states \(\mathcal{T}\). In each predicted state, the **orange** arrow shows the sampled direction from the CLiFF-map \(\theta_{s}\) and the **green** arrow shows the direction from the last time step \(\theta_{t-1}\). **Blue** arrows between predicted states show the direction of the predicted trajectory. In locations like (**a**) where the sampled CLiFF-map direction greatly opposes the CVM prediction, the CVM prediction is trusted more. In locations like (**b**) where the sampled CLiFF-map direction is close to the CVM prediction, the CVM prediction is biased more towards the CLiFF-map direction.
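For concreteness, one prediction step of Alg. 1 (Eqs. 1-3), including the kernel parameter \(\beta\) discussed above, can be sketched as follows. The angle-wrapping helper is an implementation assumption not spelled out in the text, and the function names and example values are illustrative only; the sampled direction \(\theta_{s}\) is assumed to come from the CLiFF-map sampling procedure of Alg. 2.

```python
import math

def kernel(x, beta=1.0):
    """Gaussian kernel K(x) = exp(-beta * x^2) of Eq. (3)."""
    return math.exp(-beta * x * x)

def wrap(angle):
    """Wrap an angular difference to (-pi, pi] (implementation detail, assumed here
    so that directions near the 0/2pi boundary blend sensibly)."""
    return (angle + math.pi) % (2 * math.pi) - math.pi

def predict_step(x, y, rho, theta, theta_sampled, dt=1.0, beta=1.0):
    """One CLiFF-LHMP step: constant-velocity rollout (Eq. 1) with the heading
    biased towards the CLiFF-map sample (Eq. 2); speed is kept constant."""
    x_new = x + rho * math.cos(theta) * dt
    y_new = y + rho * math.sin(theta) * dt
    diff = wrap(theta_sampled - theta)
    theta_new = theta + diff * kernel(diff, beta)
    return x_new, y_new, rho, theta_new

# A sample close to the current heading pulls the prediction towards the map flow;
# a strongly opposing sample is largely ignored, as in Fig. 3.
print(predict_step(0.0, 0.0, 1.2, 0.0, theta_sampled=0.3))
print(predict_step(0.0, 0.0, 1.2, 0.0, theta_sampled=0.9 * math.pi))
```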
The impact of both parameters is evaluated in the experiments. Using the ATC dataset, we specifically evaluate the influence of the three parameters (see Fig. 6): observation horizon \(O_{s}\in[1.2,3.2]\) s, sampling radius \(r_{s}\in[1,3]\)\(\,\mathrm{m}\), and kernel parameter \(\beta\in[0.5,10]\). We also evaluated the influence of the prediction time step \(\Delta t\in[0.4,1.0]\) s using the THOR dataset (see Fig. 7). For the evaluation of the predictive performance we used the following metrics: _Average_ and _Final Displacement Errors_ (ADE and FDE) and _Top-k ADE/FDE_. ADE describes the error between points on the predicted trajectories and the ground truth at the same time step. FDE describes the error at the last prediction time step. _Top-k ADE/FDE_ compute the displacements between the ground truth position and the closest of the \(k\) predicted trajectories. For each ground truth trajectory we predict \(k\) = 20 trajectories. We stop prediction according to Alg. 1 when no dynamics data (i.e. SWGMMs) is available within the radius \(r_{s}\) from the sampled location (line 6). If one predicted trajectory stops before \(T_{s}\), it will only be included in the ADE/FDE evaluation up to the last available predicted point. When predicting for each ground truth trajectory, the prediction horizon \(T_{s}\) is either equal to its length or \(50\,\mathrm{s}\) for longer trajectories. ## V Results In this section, we present the results obtained in ATC and THOR with our approach compared to two baselines. The performance evaluation is conducted using both quantitative and qualitative analysis, and we further investigate the approach's performance through a parameter analysis. ### _Quantitative Results_ Figs. 4 and 5 show the quantitative results obtained in the ATC and THOR datasets. We compare our CLIFF-LHMP approach with IS-MDP [4] and CVM. In the short-term perspective all approaches perform on par. The mean ADE is marginally lower for CVM compared to the other predictors below \(6\,\mathrm{s}\) in ATC, below \(10\,\mathrm{s}\) in THOR1, and below \(4\,\mathrm{s}\) in THOR3. In THOR3 there are more obstacles that people need to avoid, while THOR1 and ATC include more open spaces. In open spaces without obstacles, a constant velocity prediction is often a very good short-term predictor [6]. For our approach which accounts for possible deviations from straight trajectories the ADE for short-term predictions is slightly higher. For prediction horizons less than \(10\,\mathrm{s}\), IS-MDP performs better than CLIFF-LHMP. However, the IS-MDP method requires additional input (goal points and the obstacle map) and its performance strongly depends on both. In contrast, our approach makes predictions without explicit knowledge about goals and implicitly accounts for the obstacle layout, as well as the specific ways people navigate in the environment. In long-term predictions above \(10\,\mathrm{s}\), both CLIFF-LHMP and IS-MDP outperform the CVM method. Our approach is substantially better than IS-MDP when the prediction horizon is above \(20\,\mathrm{s}\) since it implicitly exploits location-specific motion patterns, thus overcoming a known limitation of MDP-based methods [4]. Table II summarises the performance results of our method against the baseline approaches at the maximum prediction horizon. Our CLIFF-LHMP approach accurately predicts human motion up to \(50\,\mathrm{s}\) with a mean ADE of \(5\,\mathrm{m}\). 
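For reference, the displacement metrics used in this evaluation can be computed as in the short sketch below; it assumes predictions and ground truth are aligned arrays of planar positions, and truncates early-stopped predictions as described above (the array layout is our own assumption, not from the paper).

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: arrays of shape (T, 2) sampled at the same time steps.
    If a prediction stopped early, only its available points (and the matching
    ground-truth prefix) are compared, as done in the evaluation above."""
    n = min(len(pred), len(gt))
    errors = np.linalg.norm(pred[:n] - gt[:n], axis=1)  # per-step displacement
    return errors.mean(), errors[-1]                    # ADE, FDE

def top_k_ade_fde(predictions, gt):
    """predictions: the k (= 20) sampled trajectories for one ground-truth track.
    Top-k ADE/FDE keep the error of the closest prediction."""
    scores = [ade_fde(p, gt) for p in predictions]
    return min(s[0] for s in scores), min(s[1] for s in scores)
```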
At \(50\,\mathrm{s}\) in the ATC dataset, our method achieves a 45% ADE and 55% FDE improvement in performance compared to IS-MDP. At \(12\,\mathrm{s}\) in THOR1 and THOR3, our method achieves an improvement of 6.3% and 13.3% ADE (25.7% and 27.8% FDE) over IS-MDP, respectively. Figs. 4 and 5 also show that the standard deviation of ADE and FDE is generally lower for CLiFF-LHMP predictions, compared to CVM and IS-MDP. This indicates that our approach makes more consistent predictions, both in the short- and long-term perspective.

### _Parameter Analysis_

Smaller values of \(\beta\) make the predictor trust the CLiFF-map more, which can lead to jumps between distinct motion patterns. Setting \(\beta\) to a high value such as 10 slightly improves the performance in short-term predictions; however, as for the CVM, the CLiFF-LHMP predictor with high values of \(\beta\) is prone to fail at delivering long-term predictions. The reason is that we stop predicting when the CLiFF-map is no longer available close to the predicted location. So, if more trust is put on the CVM component, many ground truth trajectories cannot be predicted successfully for long prediction times. When the prediction horizon is set to \(50\,\mathrm{s}\), 84% of ground truth trajectories can be predicted successfully with \(\beta=1\), while with \(\beta=10\) the ratio drops to 52.3%. Also, when the prediction is dominated by the CVM component, the Top-k ADE/FDE scores are worse due to the reduced diversity of the predictions. In the experiments with different values of the sampling radius \(r_{s}\) (see Fig. 6, right), we observed stable prediction performance. Therefore, it is reasonable to set \(r_{s}=1\,\mathrm{m}\) in order to reduce the computation cost. In our experiments with the prediction time step \(\Delta t\), we observe robust performance with a slight improvement when making higher-frequency predictions (\(\Delta t=0.4\,\mathrm{s}\) vs. \(1.0\,\mathrm{s}\), see Fig. 7). A smaller \(\Delta t\) is recommended in cluttered environments, such as in the THOR dataset. Making iterative predictions with a smaller time step naturally comes at the expense of a computational cost that increases linearly for CLiFF-LHMP. Selecting a larger prediction time step of \(\Delta t=1.0\,\mathrm{s}\) drops the performance in THOR by only approx. 5% at the maximum prediction horizon, compared to \(\Delta t=0.4\,\mathrm{s}\).

### _Qualitative Results_

Figures 8 and 9 show qualitative results with example predictions. Our approach correctly captures the motion patterns in each scenario, utilizing the environment information during the prediction. Figure 9 shows that the predicted trajectories avoid the obstacles, even though an obstacle map is not used for predictions. Furthermore, using maps of dynamics built from observations of human motion makes it possible to predict motion through regions which appear as obstacles in an occupancy map, for example across stairs and through narrow passages (see Fig. 8). Similarly, using the MoD input keeps predictions in more intensively used areas of the environment, avoiding semantically insignificant and empty regions, e.g., corners of the room (see Fig. 9).

## VI Conclusions

In this paper we present the idea of using _Maps of Dynamics_ (MoDs) for long-term human motion prediction. By using MoDs, motion prediction can utilize previously observed motion patterns that encode important information about how people typically move in a given environment.
We present the CLiFF-LHMP approach to predict long-term motion using a CLiFF-map - a probabilistic representation of a velocity field built from isolated and possibly sparse flow information (i.e., complete trajectories are not required as input). In our approach, we sample directional information from a CLiFF-map to bias a constant velocity prediction. We evaluate CLiFF-LHMP with two publicly available real-world datasets, comparing it to two baseline approaches. The results demonstrate that our approach can predict human motion in complex environments over very long time horizons. Our approach performs on par with the state of the art for shorter periods (\(10\,\mathrm{s}\)) and significantly outperforms it in terms of ADE and FDE for longer periods of up to \(50\,\mathrm{s}\). We also showed that our method makes more consistent predictions and is not strongly sensitive to the observation horizon. By exploiting the learned motion patterns encoded in the CLiFF MoD, our method can implicitly infer common goal points and correctly predict trajectories that follow the complex topology of the environment, e.g., navigating around corners or obstacles, or passing through narrow passages such as doors. Future work will include experimenting with other types of MoDs and motion prediction methods, sampling speed in addition to direction from the MoD, extending CLiFF-LHMP to multi-agent prediction, extending the evaluation to outdoor datasets, as well as estimating confidence values for the predicted trajectories.

Fig. 4: ADE/FDE (mean \(\pm\) one std. dev.) in the ATC dataset with prediction horizon 1-\(50\,\mathrm{s}\).

Fig. 5: ADE/FDE (mean \(\pm\) one std. dev.) in the THOR1 **(top)** and THOR3 **(bottom)** datasets with prediction horizon 0.4-\(12\,\mathrm{s}\).
2310.01574
Potential Ways to Detect Unfairness in HRI and to Re-establish Positive Group Dynamics
This paper focuses on the identification of different algorithm-based biases in robotic behaviour and their consequences in human-robot mixed groups. We propose to develop computational models to detect episodes of microaggression, discrimination, and social exclusion, informed by a) observing human coping behaviours that are used to regain social inclusion and b) using system-inherent information that reveals unequal treatment of human interactants. Based on this information, we can start to develop regulatory mechanisms to promote fairness and social inclusion in HRI.
Astrid Rosenthal-von der Pütten, Stefan Schiffer
2023-09-27T09:42:52Z
http://arxiv.org/abs/2310.01574v1
# Potential Ways to Detect Unfairness in HRI and to Re-establish Positive Group Dynamics

###### Abstract

This paper focuses on the identification of different algorithm-based biases in robotic behaviour and their consequences in human-robot mixed groups. We propose to develop computational models to detect episodes of microaggression, discrimination, and social exclusion, informed by a) observing human coping behaviours that are used to regain social inclusion and b) using system-inherent information that reveals unequal treatment of human interactants. Based on this information, we can start to develop regulatory mechanisms to promote fairness and social inclusion in HRI.

human-robot interaction, group dynamics, social rejection, bias, inclusion

## I Introduction

Social robots are envisioned to be part of our lives as service providers, team members, and companions. Depending on the robots' tasks and purpose, they will play a more or less active role in our social groups and potentially shape group dynamics - for better and for worse. Previous research demonstrated that robots can positively influence social dynamics in small groups. In free-play situations, a robot was able to mitigate conflict between children over toys by providing information on how to compromise [1]. A robotic microphone positively influenced group discussions by encouraging discussion partners who were quieter than others to participate more [2]. Similarly, a robot giving self-disclosure statements encouraged stressed students to speak up in a support group session and improved perceptions of trust among the members of the support group [3]. However, robots can also cause feelings of social exclusion by leaving humans out of interactions (e.g., not tossing a ball to the human [4]) or communication (e.g., speaking in a "robotic language" [5], or bluntly rejecting the human [6]), causing negative consequences such as experiencing negative emotions, the feeling of being ignored or being meaningless, and lowered self-esteem. While we assume that developer teams of social robots do not intend to create robots that socially exclude individuals, social exclusion can still arise in interactions in human-robot groups, because robots may have software components that are biased against certain groups of humans (e.g., women, PoC) or because a robot is unaware of the social situation it finds itself in and unknowingly behaves in a socially inadequate way. In this paper we want to briefly revisit i) the role groups play in our (human) lives and how group membership can lead to inter-group bias, ii) the psychological consequences of social rejection (caused by biased behaviour), iii) sources of algorithmic bias, and iv) how to use system information to detect bias and start repair mechanisms.

## II Related work on HRI groups and algorithmic bias

### _What groups mean to us_

Groups are highly important to individuals [7]. Since membership in groups is one defining part of an individual's self-concept, and an individual's self-esteem is consequently partly dependent upon group membership, strategies to protect the group and differentiate it from other groups are important for the individual.
Positive distinctiveness of the in-group from other groups can be achieved by simply evaluating groups differently in favour of the in-group - also referred to as inter-group bias which is "the systematic tendency to evaluate one's own membership group (the in-group) or its members more favourably than a non-membership group (the out-group) or its members" [8]. It manifests as favouring the in-group (in-group favouritism) or derogating the out-group (out-group derogation), or both. In-group favouritism entails the extension of trust, positive regard, cooperation, and empathy to in-group members, but not to members of the out-group and thus is an initial form of discrimination. Inter-group bias extends to robots. For instance, humans show in-group favouritism for an in-group robot in online studies [9, 10] and assigned "painful" noise blasts to out-group humans to spare in-group robots in scenarios were interactants were in different rooms [11]. Since humans show inter-group bias in human-robot mixed groups negative emotional and social consequences potentially arise for other humans when a robot is favoured instead of them. Moreover, the robots could also be the source of social rejection due to algorithmic biases which will be discussed further below. ### _What happens when we feel excluded from a group_ Inter-group bias can be perceived as a sign of social exclusion or social rejection. According to the Temporal-Need-Threat-Model by Williams [12], social exclusion causes a reflexive pain response accompanied with negative affect (e.g., sadness, anger) and triggers threats to four fundamental needs: belonging, self-esteem, control over one's social environment and meaningful existence. In a reflective stage, individuals' attention is directed to the exclusion episode and they reflect on its meaning and relevance. This may lead to coping responses such as compliance and conformity or attracting attention, provoking, and attempts of controlling others to fortify the threatened needs. Persistent exposure to ostracism over time consumes the resources necessary to motivate the individual to fortify threatened needs. Eventually, this leads to resignation, alienation, helplessness, and depression. Since humans are hypersensitive to ostracism and tend to over-detect ostracism [13], it is extremely likely that humans detect ostracism in interactions with robots as well and experience and engage in the described reflexive and reflective processes. Indeed, recent studies have explored this and found that participants felt excluded when robots talked in a "robot language" [5] or when a robot stated it did not want to interact with the human again [6]. Although the need for a paradigm shift from studying dyadic human-robot interactions in laboratory settings to studying group interactions in complex environments has been identified and advocated for [14] research in human-robot mixed groups is still scarce. Social psychological phenomena such as social exclusion, and ostracism as negative consequences of a robot's unequal adaptation to group members through machine learning is yet a new perspective in research on HRI groups that the community just recently has identified to be important. ### _Sources of algorithmic bias in HRI_ The general notion of unfair or fair AI has been discussed intensively in recent years. In our modern, digitalized world, we engage more and more in interactions with algorithms and artificially intelligent systems that learn and adapt based on these interactions. 
Our visits, views, clicks, and buying decisions provide training data for recommender systems on shopping websites (e.g., Amazon) or video streaming applications (e.g., Netflix). Recently, voice agents have entered our homes providing us with helpful information, and services while using these interactions as training data to learn and adapt to us and generalizing this knowledge to predict preferences and intentions of groups of users. Especially in the latter area of voice agents, similar biases may emerge when algorithms try to categorize users into groups and provide these groups with personalized interactions. Recent research demonstrated in many application fields (e.g., financial credit, job application management) that algorithms often discriminate certain groups of people, for instance, based on gender or skin tone and thereby exhibit unintended and unexpected biases usually originated in biased training data. While the lack of diversity in the training data sets that are being used in machine learning originates from different sources, it unequivocally causes a bias towards certain types of users at the cost of others. A new topic that has been recently identified [15] are potential negative consequences arising in HRI by robots that show unintended biases in favour of certain group members and thereby discriminating others. Under the term Fair AI, researchers call out the computer science community to "identify sources of bias, [to] de-bias training data and [to] develop artificial-intelligence algorithms that are robust to skews in the data" [16]. Since computer vision and machine-learning are core technologies for robotic systems, it has been proposed that a similar threat is posed to HRI [17]. Interestingly, concerns about the negative effects of biased robotics systems are often seen from a more global societal perspective. For instance, autonomous cars could put people of colour to a greater risk due to biased person recognition and medical or service robots might reinforce certain discriminatory practices due to biased algorithmic decision-making [15]. However, besides the issues already identified, new forms of biases are likely to emerge when the training data base for machine learning are interactions with multiple humans over a longer time as we have discussed in previous work [18]. Robots are expected to learn and adapt to their users, ideally while in operation during run-time. Hence, robots learning from humans means that robots learn from interactions and the more interactions the better the learning outcome. But humans might have more or less time or might be more or less motivated to provide these interactions that are needed for learning. Thus, training data sets differ in quantity and quality which has consequences for the learning outcome (e.g., knowing the user's preferences) and the robot's quality to adapt to different users. Let is consider the following family scenario, in which the user who spends more time at home potentially provides the largest training data base for the robot, is best known to the system, and his/her preferences can be easily determined and served. A user who spends less time at home might receive recommendations and interactions matching his/her preferences less often. Or let us consider a working environment, in which the robot's implemented goal is to maximize team performance. The robot will monitor the performance of every single team member and their contribution to team performance. 
Based on the maximization goal, the robot might decide to distribute more resources to those team members who are high performers in the task, thereby discriminating low performers. Very likely, low performers will experience negative emotions, feel threatened in their self-esteem and their need to belong to the group, and will try to regain social inclusion. Recent work tapped into this issue of unequal adaptation to users based on performance and algorithm goals in experimental studies. For instance, in a collaborative tower construction task, a robot distributed building blocks unequally between two participants which led to lower satisfaction of the human team members with the team relationship [19]. In a collaborative Tetris Game, fair distribution (in contrast to unfair distribution) of resources led participants to trust the system more and resulted in higher overall team performance [20]. However, emotional responses and consequences for the self-perception and self-esteem of the neglected participant were not assessed. These first results and the scenarios described above demonstrate that besides the now commonly known problems of biases in natural language processing or face recognition also interaction-based algorithmic learning can result in, for instance, perceived (inter-group) bias and social exclusion of individuals with severe negative outcomes for the emotional state of the individual and the social dynamics of the group. ## III How to overcome biased HRI and reach better inclusion ### _First Step - Recognizing the potential for biases in your own work_ Researchers in the field of HRI have become more aware of the potential that their developments and systems might be affected by biases. Earlier this year a group of HRI scholars discussed "how pursuing a very typical, data-driven approach to the development of a robot listener behavior (production of backchannels, which can serve to indicate attentiveness) resulted in models that acted differently with participants with different gender identities" [21]. In their paper the authors discuss design guidelines that may be applied to avoid embedding gender biases into robot social behavior such as carefully examining training data sets before using them for modelling. According to Ntoutsi et al. [22] this recommendation would fall under preprocessing methods focusing on the data to mitigate bias which focus on creating so-called balanced data sets. This can be done using different approaches such as equal sampling from different groups or altering the given data in its classification, i.e., adapting training sample weights [23]. ### _Second Step - Mitigating bias in machine learning before system deployment_ Besides the pre-processing methods to mitigate bias as mentioned before, Ntoutsi et al. [22] also consider so-called in-processing methods focusing on the ML algorithm, and post-processing methods focusing on the ML model. Both types of approaches concentrate on the machine learning process and/or the inspection and adaptation of the resulting model. For instance, in the latter case Ntoutsi et al. refer to previous work that post-hoc changed the confidence of CPAR classification rules [24] or the probabilities in Naive Bayes models [25]. 
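As a toy illustration of the pre-processing idea mentioned above (balancing the data by adapting training sample weights), the following sketch assigns inverse-frequency weights per group; it is a generic example of the principle, not the specific method of [23].

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Assign each training sample a weight inversely proportional to the
    size of its group, so that under-represented groups contribute as much
    to the loss as over-represented ones."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    n_samples = len(group_labels)
    return [n_samples / (n_groups * counts[g]) for g in group_labels]

# Example: a data set dominated by one speaker group.
labels = ["standard"] * 80 + ["dialect"] * 20
weights = inverse_frequency_weights(labels)
# Each "dialect" sample now carries 4x the weight of a "standard" sample.
```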
_Third Step - How to use system information to detect bias during interaction and start repair mechanisms_ All the specified approaches above have in common that developers or researchers are actively involved in curating either data or changing the algorithm's specifications which cannot be done during run-time. Moreover, if the system is further learning based on continuous interactions it can be that a "de-biased" algorithm becomes biased again, for instance, because human interactants behave in stereotypical ways. We have proposed that i) information on biased components and ii) certain system information that is produced during ongoing interactions with humans can be used to inform the system about potential biases emerging. For instance, it is commonly known that biased speech recognition performance is biased in favour for people speaking accent-free standard languages due to better training to that user type. This known pre-existing bias can be used during the development of interactions with humans. The system should also be enabled to draw conclusions of internal data to detect bias. For instance, recognition for human faces or behaviours as well as predictions about human behaviour usually are hypothesis-based with specifications about the likelihood and confidence that this hypothesis is true or false. Consistent lower likelihoods connected with one user could be used (together with other information) as an indicator for bias. As described above most systems are biased in their speech recognition performance in favour of people speaking accent-free High German due to better training to that user type in contrast to people speaking local dialects or foreign accents. A robot could use system information that is an indicator for this bias occurring, for instance, when a higher number of hypotheses exists (cf. n-best lists, [26]; [27]) and/or lower confidence (cf. [28]) in speech recognition or computer vision for a specific user (e.g., interactant with local dialect or foreign language accent). Based on this the robot would initiate a regulatory mechanism such as apologizing for misunderstandings and asking the user to speak more slowly. _Fourth Step - How to use user behavior to detect bias during interaction and start repair mechanisms_ As explained above there is empirical evidence that when humans detect signs of social rejection or social exclusion by a robot they will experience negative emotions and fundamental needs are threatened [29, 4, 5, 6]. This may lead to coping responses such as compliance and conformity, attracting attention, provoking, or attempts of controlling others to fortify the threatened needs. Current studies on social exclusion in HRI scenarios predominantly look into self-reported experiences. Future work should systematically investigate which coping behaviours excluded humans exert in trying to regain social inclusion. One approach is to use behaviour analysis to identify patterns of verbal and nonverbal behaviour, and interactional strategies a robot might detect as a sign that social exclusion occurred. Based on this classification a computational model could be implemented to detect episodes of social exclusion informed by observing human coping behaviours that are used to regain social inclusion as well as the aforementioned system information. _Fifth Step - Develop Socially Interactive Agents with Capacity to Re-establish positive Group Dynamics_ The work is not done when we managed to detect biases. 
We further need a good concept how to resolve social exclusion episodes for the human and re-establish positive group dynamics. This means that there is a need to i) develop conversational or interactional strategies to maintain positive social group dynamics that can be triggered when potential bias is detected, and ii) research which conversational and interactional strategies are effective and regarded as socially adequate in different situations and group constellations. ## IV Conclusion In this paper we outlined why social robots should take into account the social dynamics in a human-robot mixed group as well as the (negative) social consequences of its own behaviour in these groups. We discussed why and in which ways biases can arise in HRI and how we can either de-bias systems or enable the system to automatically detect bias and engage in repair mechanisms. We advocate for considering this perspective throughout the development process of a new system.
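To make the detection idea from the Third and Fourth Steps more concrete, the sketch below tracks per-user recognition confidence over a sliding window and flags users whose recognitions are consistently less confident than the group average, which could then trigger a repair behaviour such as an apology or a request to speak more slowly. The class, thresholds, and usage comments are our own illustrative assumptions, not an implementation from the literature.

```python
from collections import defaultdict, deque

class BiasIndicator:
    """Tracks per-user recognition confidence over a sliding window and flags
    users whose average confidence is consistently below the group average."""

    def __init__(self, window=50, margin=0.15, min_samples=10):
        self.margin = margin              # how far below the group mean counts as biased
        self.min_samples = min_samples    # require enough evidence before flagging
        self.scores = defaultdict(lambda: deque(maxlen=window))

    def update(self, user_id, confidence):
        """confidence: e.g. the top-hypothesis speech-recognition confidence in [0, 1]."""
        self.scores[user_id].append(confidence)

    def flagged_users(self):
        means = {u: sum(s) / len(s) for u, s in self.scores.items()
                 if len(s) >= self.min_samples}
        if len(means) < 2:
            return []
        group_mean = sum(means.values()) / len(means)
        return [u for u, m in means.items() if m < group_mean - self.margin]

# Hypothetical usage after each recognized utterance:
#   indicator.update(user, asr_result_confidence)
#   if user in indicator.flagged_users():
#       trigger a regulatory mechanism, e.g. apologize and ask to speak more slowly
```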
2309.15462
DTC: Deep Tracking Control
Legged locomotion is a complex control problem that requires both accuracy and robustness to cope with real-world challenges. Legged systems have traditionally been controlled using trajectory optimization with inverse dynamics. Such hierarchical model-based methods are appealing due to intuitive cost function tuning, accurate planning, generalization, and most importantly, the insightful understanding gained from more than one decade of extensive research. However, model mismatch and violation of assumptions are common sources of faulty operation. Simulation-based reinforcement learning, on the other hand, results in locomotion policies with unprecedented robustness and recovery skills. Yet, all learning algorithms struggle with sparse rewards emerging from environments where valid footholds are rare, such as gaps or stepping stones. In this work, we propose a hybrid control architecture that combines the advantages of both worlds to simultaneously achieve greater robustness, foot-placement accuracy, and terrain generalization. Our approach utilizes a model-based planner to roll out a reference motion during training. A deep neural network policy is trained in simulation, aiming to track the optimized footholds. We evaluate the accuracy of our locomotion pipeline on sparse terrains, where pure data-driven methods are prone to fail. Furthermore, we demonstrate superior robustness in the presence of slippery or deformable ground when compared to model-based counterparts. Finally, we show that our proposed tracking controller generalizes across different trajectory optimization methods not seen during training. In conclusion, our work unites the predictive capabilities and optimality guarantees of online planning with the inherent robustness attributed to offline learning.
Fabian Jenelten, Junzhe He, Farbod Farshidian, Marco Hutter
2023-09-27T07:57:37Z
http://arxiv.org/abs/2309.15462v2
# DTC: Deep Tracking Control - A Unifying Approach to Model-Based Planning and Reinforcement-Learning for Versatile and Robust Locomotion

###### Abstract

**Legged locomotion is a complex control problem that requires both accuracy and robustness to cope with real-world challenges. Legged systems have traditionally been controlled using trajectory optimization with inverse dynamics. Such hierarchical model-based methods are appealing due to intuitive cost function tuning, accurate planning, and most importantly, the insightful understanding gained from more than one decade of extensive research. However, model mismatch and violation of assumptions are common sources of faulty operation and may hinder successful sim-to-real transfer. Simulation-based reinforcement learning, on the other hand, results in locomotion policies with unprecedented robustness and recovery skills. Yet, all learning algorithms struggle with sparse rewards emerging from environments where valid footholds are rare, such as gaps or stepping stones. In this work, we propose a hybrid control architecture that combines the advantages of both worlds to simultaneously achieve greater robustness, foot-placement accuracy, and terrain generalization. Our approach utilizes a model-based planner to roll out a reference motion during training. A deep neural network policy is trained in simulation, aiming to track the optimized footholds. We evaluate the accuracy of our locomotion pipeline on sparse terrains, where pure data-driven methods are prone to fail. Furthermore, we demonstrate superior robustness in the presence of slippery or deformable ground when compared to model-based counterparts. Finally, we show that our proposed tracking controller generalizes across different trajectory optimization methods not seen during training. In conclusion, our work unites the predictive capabilities and optimality guarantees of online planning with the inherent robustness attributed to offline learning.**

## Introduction

Trajectory optimization (TO) is a commonly deployed instance of optimal control for designing motions of legged systems and has a long history of successful applications in rough environments since the early 2010s [1, 2]. These methods require a model of the robot's kinematics and dynamics during runtime, along with a parametrization of the terrain. Until recently, most approaches have used simple dynamics models such as single rigid body [3] or inverted pendulum dynamics [4, 5], or have ignored the dynamics altogether [6]. Research has shifted towards more complex formulations, including centroidal [7] or full-body dynamics [8]. The resulting trajectories are tracked by a whole-body control (WBC) module, which operates at the control frequency and utilizes full-body dynamics [9]. Despite the diversity and agility of the resulting motions, there remains a significant gap between simulation and reality due to unrealistic assumptions. Most problematic assumptions include perfect state estimation, occlusion-free vision, known contact states, zero foot-slip, and perfect realization of the planned motions.

Figure 1: **Examples for deployment.** The proposed control pipeline combines robustness properties inherent to learning-based approaches with accurate foothold planning attributed to model-based methods. This marriage allows legged robots to be deployed in environments where steppable contact surfaces are sparse (bottom left) and environmental uncertainties are high (top right).
Sophisticated hand-engineered state machines are required to detect and respond to various special cases not accounted for in the modeling process. Nevertheless, highly dynamic jumping maneuvers performed by Boston Dynamics' bipedal robot Atlas demonstrate the potential power of TO. Reinforcement learning (RL) has emerged as a powerful tool in recent years for synthesizing robust legged locomotion. Unlike model-based control, RL does not rely on explicit models. Instead, behaviors are learned, most often in simulation, through random interactions of agents with the environment. The result is a closed-loop control policy, typically represented by a deep neural network, that maps raw observations to actions. Handcrafted state-machines become obsolete because all relevant corner cases are eventually visited during training. End-to-end policies, trained from user commands to joint target positions, have been deployed successfully on quadrupedal robots such as ANYmal [10, 11]. More advanced teacher-student structures have significantly improved the robustness, enabling legged robots to overcome obstacles through touch [12] and perception [13]. While locomotion on gaps and stepping stones is theoretically possible, good exploration strategies are required to learn from the emerging sparse reward signals. So far, these terrains could only be handled by specialized policies, which intentionally overfit to one particular scenario [14] or a selection of similar terrain types [15, 16, 17, 18]. Despite promising results, distilling a unifying locomotion policy may be difficult and has only been shown with limited success [19]. Some of the shortcomings that appear in RL can be mitigated using optimization-based methods. While the problem of sparse gradients still exists, two important advantages can be exploited: First, cost-function and constraint gradients can be computed with a small number of samples. Second, poor local optima can be avoided by pre-computing footholds [5, 8], pre-segmenting the terrain into step-pable areas [7, 20], or by smoothing out the entire gradient landscape [21]. Another advantage of TO is the ability to plan actions ahead and predict future interactions with the environment. If model assumptions are generic enough, this allows for great generalization across diverse terrain geometries [7, 21]. The sparse gradient problem has been addressed extensively in the learning community. A notable line of research has focused on learning a specific task while imitating expert behavior. The expert provides a direct demonstration for solving the task [22, 23], or is used to impose a style while discovering the task [24, 25, 26]. These approaches require collecting expert data, commonly done offline, either through re-targeted motion capture data [24, 25, 26] or a TO technique [22, 23]. The reward function can now be formulated to be dense, meaning that agents can collect non-trivial rewards even if they do not initially solve the task. Nonetheless, the goal is not to preserve the expert's accuracy but rather to lower the sample and reward complexity by leveraging existing knowledge. To further decrease the gap between the expert and the policy performance, we speculate that the latter should have insight into the expert's intentions. This requires online generation of expert data, which can be conveniently achieved using any model-based controller. Unfortunately, rolling out trajectories is often orders of magnitude more expensive than a complete learning iteration. 
To circumvent this problem, one possible alternative is to approximate the expert with a generative model, e.g., by sampling footholds from a uniform distribution [15, 16], or from a neural network [17, 27, 28]. However, for the former group, it might be challenging to capture the distribution of an actual model-based controller, while the latter group still does not solve the exploration problem itself. In this work, we propose to guide exploration through the solution of TO. As such data will be available both on- and offline, we refer to it as "reference" and not expert motion. We utilize a hierarchical structure introduced in deep loco [28], where a high-level planner proposes footholds at a lower rate, and a low-level controller follows the footholds at a higher rate. Instead of using a neural network to generate the foothold plan, we leverage TO. Moreover, we do not only use the target footholds as an indicator for a rough high-level direction but as a demonstration of optimal foot placement. The idea of combining model-based and model-free control is not new in the literature. For instance, supervised [29] and unsupervised [30, 31] learning has been used to warm-start nonlinear solvers. RL has been used to imitate [22, 23] or correct [32] motions obtained by solving TO problems. Conversely, model-based methods have been used to check the feasibility of learned high-level commands [27] or to track learned acceleration profiles [33]. Compared to [32], we do not learn corrective joint torques around an existing WBC, but instead, learn the mapping from reference signals to joint positions in an end-to-end fashion. To the author's best knowledge, our approach constitutes the first proposition for a tracking controller fully learned in simulation. To generate the reference data, we rely on an efficient TO method called terrain-aware motion generation for legged systems (TAMOLS) [21]. It optimizes over footholds and base pose simultaneously, thereby enabling the robot to operate at its kinematic limits. We let the policy observe only a small subset of the solution, namely planar footholds, desired joint positions, and the contact schedule. We found that these observations are more robust under the common pitfalls of model-based control, while still providing enough information to solve the locomotion task. In addition, we limit computational costs arising from solving the optimization problems by utilizing a variable update rate. During deployment, the optimizer runs at the fastest possible rate to account for model uncertainties and disturbances. Our approach incorporates elements introduced in [14], such as time-based rewards and position-based goal tracking. However, we reward desired foothold positions at planned touch-down instead of rewarding a desired base pose at an arbitrarily chosen time. Finally, we use an asymmetric actor-critic structure similar to [22], where we provide privileged ground truth information to the value function and noisified measurements to the network policy. We trained more than \(4000\) robots in parallel for two weeks on challenging ground covering a surface area of more than \(76000\,\mathrm{m}^{2}\). Throughout the entire training process, we generated and learned from about \(23\) years of optimized trajectories. The combination of offline training and online re-planing results in an accurate and agile tracking controller with exceptional robustness properties. As showcased in Fig. 
1 and movie 1, with our hybrid control pipeline, ANYmal [34] can skillfully traverse parkours with high precision, and confidently overcome uncertain environments with high robustness. Remarkably, one policy can solve several categories of terrain types, such as gaps, stepping stones, stairs, boxes, and hills. Moreover, without the need for any post-training, the tracking policy can be deployed zero-shot with different TO methods at different update rates. The contributions of our work are therefore twofold: Firstly, we enable the deployment of model-based planners in rough and uncertain real-world environments, while, secondly, creating a single unifying locomotion policy that generalizes beyond the limitations imposed by state-of-the-art RL methods. ## Results In order to evaluate the effectiveness of our proposed pipeline, hereby referred to as Deep Tracking Control (DTC), we compared it with four different approaches: two model-based controllers, TAMOLS [21] and a nonlinear model predictive control (MPC) presented in [7], and two data-driven methods, as introduced in [13] and [11]. We refer to those as baseline-to-1 (TAMOLS), baseline-to-2 (MPC), baseline-rl-1 (teacher/student policy), and baseline-rl-2 (RL policy), respectively. These baselines mark the state-of-the-art in MPC and RL prior to this work and they have been tested and deployed under various conditions. If not noted differently, all experiments were conducted in the real world. ### Evaluation of Robustness We conducted three experiments to evaluate the robustness of our hybrid control pipeline. The intent is to demonstrate survival skills on slippery ground, and recovery reflexes when visual data is not consistent with proprioception or is absent altogether. We rebuild harsh environments that are likely to be encountered on sites of natural disasters, where debris might further break down when stepped onto, and construction sites, where oil patches create slippery surfaces. In the first experiment, we placed a rectangular cover plate with an area of \(0.78\times 1.19\,\mathrm{m}^{2}\) on top of a box with the same length and width, and height \(0.37\,\mathrm{m}\) (Fig. 2 A). The cover plate was shifted to the front, half of the box's length. ANYmal was then steered over the cover plate, which pitched down as soon as its center of mass passed beyond the edge of the box. Facing only forward and backward, the plate's movement was not detected through the depth cameras, and could only be perceived through proprioceptive sensors. Despite the error between map and odometry reaching up to \(0.4\,\mathrm{m}\), the robot managed to successfully balance itself. This experiment was repeated three times with consistent outcomes. In our second experiment (Fig. 2 B) we created an obstacle parkour with challenging physical properties. A large wooden box with a slopped front face was placed next to a wet and slippery whiteboard. We increased the difficulty by placing a soft foam box in front, and a rolling transport cart on top of the wooden box. The robot was commanded to walk over the objects with random reference velocities for approximately \(45\) seconds, after which the objects were re-distributed to their original locations to account for any potential displacement. This experiment was repeated five times. Despite not being trained on movable or deforming obstacles, the robot demonstrated its recovery skills in all five trials without any falls. 
The tracking policy was trained with perceptive feedback, meaning that the policy and the motion planner had partial or complete insight into the local geometrical landscape. Nevertheless, the locomotion policy was still capable of overcoming many obstacles completely blind. To simulate a scenario with damaged depth sensors, we let ANYmal blindly walk over a stair with two treads, each \(0.18\,\mathrm{m}\) high and \(0.29\,\mathrm{m}\) wide (Fig. 2 C). The experiment was repeated three times up and down, with an increasing heading velocity selected from \(\{\pm 0.5,\pm 0.75,\pm 1.0\}\,\mathrm{m/s}\). In some cases, a stair tread was higher than the swing motion of a foot. Thanks to a learned swing reflex, the stair set could be successfully cleared in all trials. We note that the same stair set was passed by a blindfolded version of baseline-rl-1 [13], which was trained in a complex teacher/student environment. In contrast, our method relies on an asymmetric actor/critic structure, achieving a similar level of robustness. Accompanying video clips can be found in the supplementary movie S1.

### Evaluation of Accuracy

We demonstrate the precision of foothold tracking by devising a complex motion that required the robot to perform a turn-in-place maneuver on a small surface of \(0.94\times 0.44\,\mathrm{m}^{2}\). The robot was commanded to walk up a slope onto a narrow table, then execute a complete \(360\,\mathrm{deg}\) turn, and finally descend onto a pallet. Some snapshots of the experiment are provided in Fig. 3 A, while the full video is contained in movie S2.

Figure 2: **Evaluation of robustness.** **(A)** ANYmal walks along a loose cover plate that eventually pitches forward (left to right, top to bottom). The third row shows ANYmal's perception of the surroundings during the transition and recovery phase. **(B)** The snapshots are taken at critical time instances when walking on slippery ground, just before complete recovery. The transport cart is visible in the second image. **(C)** ANYmal climbs upstairs with disabled perception (top to bottom). The collision of the right-front end-effector with the stair tread triggers a swing reflex, visualized in orange.

Figure 3: **Evaluation of tracking performance.** **(A)** ANYmal climbs up a narrow table, turns, and descends back down to a box. The second image in the second row shows the robot's perception of the environment. **(B)** Euclidean norm of the planar foothold error, averaged over \(20\,\mathrm{s}\) of operation using a constant heading velocity. The solid/dashed curves represent the average/maximum tracking errors. **(C)** Same representation as in (B), but the data was collected with baseline-to-2. **(D)** DTC deployed with baseline-to-2, enabling ANYmal to climb up a box of \(0.48\,\mathrm{m}\).

To evaluate the quality of the foothold tracking, we collected data while ANYmal walked on flat ground. Each experiment lasted for approximately \(20\,\mathrm{s}\) and was repeated with eight different heading velocities selected from \(\{\pm 1.0,\pm 0.8,\pm 0.6,\pm 0.4\}\,\mathrm{m/s}\). We measured the tracking error as the smallest horizontal distance between a foot and its associated foothold during a stance phase. As shown in Fig. 3 B, the footholds could be tracked with a very high precision of \(2.3\,\mathrm{cm}\) and a standard deviation of \(0.48\,\mathrm{cm}\) when averaged over the broad spectrum of heading velocity commands.
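For reference, the tracking-error metric described above (smallest horizontal distance between a foot and its planned foothold during a stance phase) can be computed as in this small sketch; the array layout and function names are our own assumptions, not from the paper.

```python
import numpy as np

def foothold_tracking_error(foot_xy, target_xy):
    """foot_xy: (T, 2) planar foot positions logged over one stance phase.
    target_xy: (2,) planar position of the planned foothold.
    Returns the smallest horizontal distance during the stance phase."""
    return np.linalg.norm(foot_xy - target_xy, axis=1).min()

def mean_tracking_error(stance_logs):
    """stance_logs: list of (foot_xy, target_xy) tuples, one per stance phase.
    Averages the per-stance errors, as done for the flat-ground runs."""
    errors = [foothold_tracking_error(f, t) for f, t in stance_logs]
    return float(np.mean(errors)), float(np.std(errors))
```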
### Deployment with MPC The maximum height that DTC in combination with TAMOLS can reliably overcome is about \(0.40\,\mathrm{m}\). The policy might hesitate to climb up taller objects due to the risk of potential knee joint collisions with the environment. This limitation is inherent to the chosen TO method, which only considers simplified kinematic constraints. We, therefore, deployed DTC with the planner of baseline-to-2, a method that takes into account the full kinematics of the system. To allow for zero-short generalization, we implemented the same trotting gait as experienced during training. With this enhanced setup, ANYMal could climb up a box of height of \(0.48\,\mathrm{m}\). This is \(50\,\mathrm{\char 37}\) higher than what baseline-rl-1 can climb up, and \(380\,\mathrm{\char 37}\) more than what was reported for baseline-rl-2. The box climbing experiment was successfully repeated five times. The results are shown in movie S2, and for one selected trial in Fig. 3 D. Furthermore, we measured the tracking error on flat ground. Despite the wider stance configuration of baseline-to-2, the error was found to be only \(0.03\,\mathrm{m}\) on average (Fig. 3 C). The above two results seem to be surprising at first glance but are easy to explain when considering the observation space and the training environment. While the base-pose trajectory is considerably more detailed for baseline-to-2 due to frequency-loop shaping and increased system complexity, the foothold patterns are nevertheless quite similar. Thus, good generalization is facilitated by the specific choice of observations, which hides the optimized base pose from the policy. Some terrains (type l as shown in Fig. 8 D) can be seen as a combination of gaps and boxes, where each box is surrounded by a gap. During training, TAMOLS placed the footholds sufficiently far away from the box to avoid stepping into the gap. This allowed the policy to learn climbing maneuvers without knee joint collisions. Baseline-to-2, being aware of the spatial coordinates of the knees, naturally produces a similar foothold pattern, even in the absence of the gap. Figure 4: **Benchmark against model-based control.****(A)** DTC successfully traverses an obstacle parkour (left to right) in simulation with a heading velocity of \(1\,\mathrm{m/s}\). Prior to our work, this parkour has been crossed by baseline-to-2 with a heading velocity of \(0.8\,\mathrm{m/s}\). **(B)** Baseline-to-1 falls after stepping into a gap hidden from the perception (left to right). **(C)** ANYmal successfully overcomes a trapped floor using our hybrid control architecture (left to right). lowing the robot to successfully navigate through the trap. The robustness roots in the ability to ignore both perception and reference motion while relying only on proprioception. Such behavior is learned in simulation by experiencing simulated map drift. The experiment was repeated five times with baseline-to-1, five times with baseline-to-2, and five times with our method, consistently leading to similar results. The video clips corresponding to the above experiments can be found in movie S3. The movie is further enriched with a comparison of baseline-to-2 against DTC on soft materials, which impose very similar challenges. ### Benchmark Against RL Control While RL policies are known for their robustness, they may struggle in environments with limited interaction points. We demonstrate typical failure cases in two experiments utilizing baseline-rl-1. In the first experiment (Fig. 
5 A), ANYmal was tasked to cross a small gap of \(0.1\,\mathrm{m}\) with a reference heading velocity of \(0.2\,\mathrm{m/s}\). The model-free controller did not avoid the gap, and thus could not reach the other side of the platform. In the second experiment, we connected two elevated boxes with a \(1.0\,\mathrm{m}\)-long beam of height \(0.2\,\mathrm{m}\) (Fig. 5 B). The robot was commanded to walk from the left to the right box but failed to make use of the beam. In comparison, our hybrid policy achieves a 100% success rate for the same gap size over ten repetitions. To further demonstrate the locomotion skills of DTC, we made the experiments more challenging. We replaced the small gap with four larger gaps, each \(0.6\,\mathrm{m}\) wide and evenly distributed along the path (Fig. 5 C). Similarly, we increased the length of the beam to a total of \(1.8\,\mathrm{m}\) (Fig. 5 D). Despite the increased difficulty, our approach maintained a 100% success rate across four repetitions of each experiment. Video clips of those experiments can be found in movie S4. By using a specialized policy, ANYmal already crossed a \(0.6\,\mathrm{m}\) wide gap within a pre-mapped environment [14]. Most notably, our locomotion controller, not being specialized nor fine-tuned for this terrain type, crossed a sequence of four gaps of the same width while relying on online-generated maps only.

Figure 5: **Benchmark against RL.** **(A)** Baseline-rl-1 attempts to cross a small gap. ANYmal initially manages to recover from mis-stepping with its front legs but subsequently gets stuck as its hind legs fall inside the gap. **(B)** Using baseline-rl-1, the robot stumbles along a narrow beam. **(C)** With DTC, the robot is able to pass four consecutive large gaps (left to right) without getting stuck or falling. **(D)** ANYmal is crossing a long beam using our proposed control framework.

The limitations of baseline-rl-1 were previously demonstrated [7] on the obstacle parkour of Fig. 4 A, showing its inability to cross the stepping stones. We showcase the generality of our proposed control framework by conducting three experiments on stepping stones in the real world, each with an increased level of difficulty. The first experiment (Fig. 6 A) required ANYmal to traverse a field of equally sized stepping stones, providing a contact surface of \(0.2\times 0.2\,\mathrm{m}^{2}\) each. The robot passed the \(2.0\,\mathrm{m}\) long field \(10\) times. Despite the varying heading velocity commands, the robot always accurately hit the correct stepping stones as indicated by the solution of the TO. For the second experiment (Fig. 6 B), we increased the height of two randomly selected stones. The parkour was successfully crossed four out of four times. In the final experiment (Fig. 6 C), we distributed three elevated platforms \(a\), \(b\), and \(c\), connected by loose wooden blocks of sizes \(0.31\times 0.2\times 0.2\,\mathrm{m}^{3}\) and \(0.51\times 0.2\times 0.2\,\mathrm{m}^{3}\). This environment poses significant challenges as the blocks may move and flip over when stepped on. Following the path \(a\to b\to a\to b\to c\to a\), the robot missed only one single stepping stone, which, however, did not lead to failure. Video clips of the stepping stones experiments are provided in movie S5.

Figure 6: **Evaluation of the locomotion performance on stepping stones.** **(A)** ANYmal reliably crosses a field of flat stepping stones (left to right). **(B)** The robot crosses stepping stones of varying heights (left to right). The two tall blocks are highlighted in blue. **(C)** ANYmal navigates through a field of loosely connected stepping stones, following the path \(a\to b\to a\to b\to c\to a\).

### Simulation-Based Ablation Study

During training, we compute a new solution to the TO problem after variable time intervals, but mainly after each foot touch-down. While such a throttled rate greatly reduces computational costs, it also leads to poor reactive behavior in the presence of quickly changing external disturbances, dynamic obstacles, or map occlusion. Moreover, the optimizer was updated using privileged observations, whereas, in reality, the optimizer is subject to elevation map drift, wrongly estimated friction coefficients, and unpredicted external forces. To compensate for such modeling errors, we deploy the optimizer in MPC fashion. In the following, we investigate the locomotion performance as a function of the optimizer update rate. Using the experimental setup outlined in supplementary section S4, we collected a total of six days of data in simulation. A robot was deemed "successful" if it could walk from the center to the border of its assigned terrain patch, "failed" if its torso made contact with the environment within its patch, and "stuck" otherwise. We report success and failure rates in Fig. 7 A. Accordingly, when increasing the update rate from \(1\,\mathrm{Hz}\) to \(50\,\mathrm{Hz}\), the failure rate dropped by 7.11% while the success rate increased by 4.25%.

Figure 7: **Simulation results and ablation studies.** **(A)** Success and failure rates of DTC, recorded for different update rates of the optimizer. The upper limit of \(50\,\mathrm{Hz}\) is imposed by the policy frequency. **(B)** Comparison against baseline policies. Left: Evaluation on all \(120\) terrains. Right: Evaluation on terrains where valid footholds are dense (white background) and sparse (gray background). **(C)** Impact of elevation map drift on the locomotion performance, quantified by tracking error (left), success rate on rough (middle), and on flat ground (right). **(D)** Average terrain level (left) and average foothold reward (right) scored during training.

In the second set of experiments, we compared our approach to baseline-rl-2 as well as the same policy trained within our training environment. We refer to the emerging policy as baseline-rl-3. More details regarding the experimental setup can be found in supplementary section S5. As depicted in Fig. 7 B (left), our approach exhibits a substantially higher success rate than baseline-rl-2. By learning on the same terrains, baseline-rl-3 can catch up but still does not match our performance. The difference mainly comes from the fact that the retrained baseline still fails to solve sparse-structured terrains. To highlight this observation, we evaluated the performance on four terrain types with sparse stepping locations ("stepping stones", "beams", "gaps", and "pallets"), and on four types with dense stepping locations ("stairs", "pit", "rough slope", and "rings"). On all considered terrain types, our approach clearly outperforms baseline-rl-2 by a huge margin (Fig. 7 B, right), thereby demonstrating that learned locomotion generally does not extrapolate well to unseen scenarios. We perform equally well as baseline-rl-3 on dense terrains, but score significantly higher on sparse-structured terrains.
(2) A simple multilayer perceptron (MLP) trained with an asymmetric actor/critics setup achieves similar robust behaviors as much more complex teacher/student trainings [12, 13]. (3) Our locomotion policy can handle a lot of noise and drift in the visual data without relying on complicated gaited networks, which might be difficult to tune and train [13]. Contrary to our expectations, the proposed training environment was found to be not more sample efficient than similar unifying RL approaches [11, 13]. The large number of epochs required for convergence suggests that foothold accuracy is something intrinsically complicated to learn. We see several promising avenues for future research. (1) Many successful data-driven controllers have the ability to alter the stride duration of the trotting gait. We expect a further increase in survival rate and technical skills if the network policy could suggest an arbitrary contact schedule to the motion optimizer. Moreover, a truly hybrid method, in which the policy can directly modify the cost function of the planner, may be able to generate more diversified motions. (2) Our results indicate that IK is difficult to learn. To increase the sample efficiency and improve generalization across different platforms, a more sophisticated network structure could exploit prior knowledge of analytical IK. (3) Another potential research direction may focus on leveraging the benefits of sampling trajectories from an offline buffer. This could significantly reduce the training time and allow for the substitution of TAMOLS with a more accurate TO method, or even expert data gathered from real animals. ## Materials and Methods To motivate the specific architectural design, we first identify the strengths and weaknesses of the two most commonly used control paradigms in legged locomotion. TO amounts to open-loop control, which produces suboptimal solutions in the presence of stochasticity, modeling errors, and small prediction windows. Unfortunately, these methods introduce many assumptions, mostly to reduce computation time or achieve favorable numerical properties. For instance, the feet are almost always pre-selected interaction points to prevent complex collision constraints, contact and actuator dynamics are usually omitted or smoothed out to circumvent stiff optimization problems, and the contact schedule is often pre-specified to avoid the combinatorial problem imposed by the switched system dynamics. Despite a large set of strong assumptions, real-time capable planners are not always truly real-time. The reference trajectories are updated around \(5\,\mathrm{Hz}\)[31] to \(100\,\mathrm{Hz}\)[7] and realized between \(400\,\mathrm{Hz}\) to 1000 Hz. In other words, these methods do not plan fast enough to catch up with the errors they are doing. While structural [2] or environmental [7, 20] decomposition may further contribute to the overall suboptimality, they were found useful for extracting good local solutions on sparse terrains. Because the concept of planning is not restricted to the tuning domain, model-based approaches tend to generalize well across different terrain geometries [7, 21]. Moreover, since numerical solvers perform very cheap and sparse operations on the elevation map, the map resolution can be arbitrarily small, facilitating accurate foothold planning. RL leads to policies that represent global closed-loop control strategies. 
Deep neural networks are large capacity models, and as such, can represent locomotion policies without introducing any assumption about the terrain or the system. They exhibit good interpolation in-between visited states but do not extrapolate well to unseen environments. Despite their large size, the inference time is usually relatively small. The integration of an actuator model has been demonstrated to improve sim-to-real-transfer [10], while the stochasticity in the system dynamics and training environment can effectively be utilized to synthesize robust behaviors [12, 13]. Contrary to model-based controllers, the elevation map is typically chosen to be small and sparse [11, 13] to avoid immense memory consumption during training. In summary, TO might be better suited if good generalization and high accuracy are required, whereas RL is the preferred method if robustness is of concern or onboard computational power is limited. As locomotion combines challenges from both of these fields, we formulate the goal of this work as follows: RL shall be used to train a low-level tracking controller that provides significantly more robustness than classical inverse dynamics, while the accuracy and planning capabilities of model-based TO shall be leveraged on a low-level to synthesize a unifying locomotion strategy that supports diverse and generalizing motions. ## Reference Motions Designing a TO problem for control always involves a compromise, that trades off physical accuracy and generalization against good numerical conditioning, low computation time, convexity, smoothness, availability of derivatives, and the necessity of a high-quality initial guess. In our work, we generate the trajectories using TAMOLS [21]. Unlike other similar methods, it does not require terrain segmentation nor pre-computation of footholds, and its solutions are robust under varying initial guesses. The system dynamics and kinematics are simplified, allowing for fast updates. During deployment, we also compare against baseline-to-2, which builds up on more complex kinodynamics. Due to the increased computation time and in particular the computationally demanding map-processing pipeline, this method is not well-suited to be used directly within the learning process.1 Footnote 1: The training time is expected to be about eight times larger. We added three crucial features to TAMOLS: First, we enable parallelization on CPU, which allows multiple optimization problems to be solved simultaneously. Second, we created a python interface using pybind11[35], enabling it to run in a python-based environment. Finally, we assume that the measured contact state always matches the desired contact state. This renders the TO independent of contact estimation, which typically is the most fragile module in a model-based controller. The optimizer requires a discretized \(2.5\)d representation of its environment, a so-called elevation map, as input. We extract the map directly from the simulator by sampling the height across a fixed grid. For both training and deployment, we use a fixed trotting gait with a stride duration of \(0.93\,\mathrm{s}\) and swing phase of \(0.465\,\mathrm{s}\), and set the resolution of the grid map to \(0.04\times 0.04\,\mathrm{m}^{2}\). ## Overview of the Training Environment The locomotion policy \(\pi(\mathbf{a}\mid\mathbf{o})\) is a stochastic distribution of actions \(\mathbf{a}\in\mathcal{A}\) that are conditioned on observations \(\mathbf{o}\in\mathcal{O}\), parametrized by an MLP. 
The action space comprises target joint positions that are tracked using a PD controller, following the approach in [10] and related works [12, 13, 14]. Given the state \(\mathbf{s}\in\mathcal{S}\), we extract the solution at the next time step \(\mathbf{x}^{\prime}(\mathbf{s})\in\mathcal{X}\subseteq\mathcal{S}\) from the optimizer, Figure 8: **Method.****(A)** The optimized solution provides footholds \(\mathbf{p}_{i}^{*}\), desired base pose \(\mathbf{b}^{*}\), twist \(\dot{\mathbf{b}}^{*}\), and acceleration \(\ddot{\mathbf{b}}^{*}\) (extracted one policy step \(\Delta t\) ahead), as well as desired joint positions \(\mathbf{q}^{*}\). Additionally, a height scan \(h\) is sampled between the foot position \(\mathbf{p}_{i}\) and the corresponding foothold \(\mathbf{p}_{i}^{*}\). **(B)** Training environment: The optimizer runs in parallel with the simulation. At each leg touch-down, a new solution \(\mathbf{x}^{\prime}\) is generated. The policy \(\pi\) drives the system response \(\mathbf{s}^{\prime}\) toward the optimized solution \(\mathbf{x}^{\prime}(\mathbf{s})\), which is encouraged using the reward function \(r\). Actor observations are perturbed with the noise vector \(\mathbf{n}\), while critics and the TO receive ground truth data. **(C)** Deployment: Given the optimized footholds, the network computes target joint positions that are tracked using a PD control law. The state estimator (state) returns the estimated robot state, which is fed back into the policy and the optimizer. **(D)** The list of terrain types includes a) stairs, b) combinations of slopes and gaps, c) pyramids, d) slopped rough terrain, e) stepping stones, f) objects with randomized poses, g) boxes with tilted surfaces, h) rings, i) pits, j) beams, k) hovering objects with randomized poses, and l) pallets. which includes four footholds \(\mathbf{p}_{i=0,\ldots,3}^{*}\), joint positions \(\mathbf{q}^{*}\) at touch-down time, and the base trajectory evaluated at the next time step. The base trajectory consists of of base pose \(\mathbf{b}^{*}(\Delta t)\), twist \(\dot{\mathbf{b}}^{*}(\Delta t)\), and linear and angular acceleration \(\ddot{\mathbf{b}}^{*}(\Delta t)\). More details can be found in Fig. 8 A. We then sample an action from the policy. It is used to forward simulate the system dynamics, yielding a new state \(\mathbf{s}^{\prime}\in\mathcal{S}\), as illustrated in Fig.8 B. To define a scalar reward \(r(\mathbf{s},\mathbf{s}^{\prime},\mathbf{x}^{\prime},\mathbf{a})\), we use a monotonically decreasing function of the error between the optimized and measured states, i.e., \(r\propto\mathbf{x}^{\prime}(\mathbf{s})\ominus\mathbf{x}(\mathbf{s}^{\prime})\). The minus operator \(\ominus\) is defined on the set \(\mathcal{X}\), the vector \(\mathbf{x}^{\prime}(\mathbf{s})\) is the optimized state, and \(\mathbf{x}(\mathbf{s}^{\prime})\) is the state of the simulator after extracting it on the corresponding subset. The policy network can also be understood as a learned model reference adaptive controller with the optimizer being the reference model. In this work, we use an asymmetrical actor/critic method for training. The value function approximation \(V(\mathbf{o},\tilde{\mathbf{o}})\) uses privileged \(\tilde{\mathbf{o}}\in\tilde{\mathcal{O}}\) as well as policy observations \(\mathbf{o}\). ## Observation Space The value function is trained on policy observations and privileged observations, while the policy network is trained on the former only [22]. 
All observations are given in the robot-centric base frame. The definition of the observation vector is given below, while noise distributions and dimensionalities of the observation vectors can be found in supplementary sections S2 and S3, respectively. ### Policy Observations The policy observations comprise proprioceptive measurements such as base twist, gravity vector, joint positions, and joint velocities. The history only includes previous actions [11]. Additional observations are extracted from the model-based planner, including planar coordinates of foothold positions (\(xy\) coordinates), desired joint positions at touch-down time, desired contact state, and time left in the current phase. The latter two are per-leg quantities that fully describe the gait pattern. Footholds only contain planner coordinates since the height can be extracted from the height scan. The height scan, which is an additional part of the observation space, enables the network to anticipate a collision-free swing leg trajectory. In contrast to similar works, we do not construct a sparse elevation map around the base [11, 27] or the feet [13]. Instead, we sample along a line connecting the current foot position with the desired foothold (Fig. 8 A). This approach has several advantages: (1) The samples can be denser by only scanning terrain patches that are most relevant for the swing leg, (2) it prevents the network from extracting other information from the map, which is typically exposed to most uncertainty (e.g., occlusion, reflection, odometry drift, discretization error, etc.), and (3) it allows us to conveniently model elevation map drift as a per-foot quantity, i.e., each leg can have its own drift value. We use analytical IK to compute the desired joint positions. As the motion optimizer may not provide a swing trajectory, as is the case for TAMOLS, we completely skip the swing phase. This means that the IK is computed with the desired base pose and the measured foot position for a stance leg, and the target foothold for a swing leg. It is worth noting that we do not provide the base pose reference as observation. As shown in the results chapter, this was found to reduce sensitivity to mapping errors and renders the policy independent of the utilized planner. Finally, to allow the network to infer the desired walking direction, we add the reference twist (before optimization) to the observation space. ### Privileged Observations The privileged observations contain the optimized base pose, base twist, and base linear and angular acceleration, extracted on time step ahead. In addition, the critics can observe signals confined to the simula tor, such as the external base wrench, external foot forces, the measured contact forces, friction coefficients, and elevation map drift. ### Reward Functions The total reward is computed as a weighted combination of several individual components, which can be categorized as follows: (1) "tracking" of reference motions, (2) encouraging of "consistent" behavior, and (3) other "regularization" terms necessary for successfully sim-to-real transfer. The reward functions are explained below whereas weights and parameters are reported in Table S3. 
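Before turning to the individual reward terms, the line-based height scan introduced in the Policy Observations subsection can be made concrete with the short C++ sketch below. It is a minimal illustration under stated assumptions: the `ElevationMap` interface, the function name, and the number of samples are hypothetical and not taken from the authors' implementation.

```cpp
#include <array>
#include <vector>

// Hypothetical elevation-map accessor returning the terrain height at an (x, y) position.
struct ElevationMap {
  double heightAt(double x, double y) const;
};

// Sample heights along the straight line connecting the current foot position with the
// desired foothold (both given in the horizontal plane); requires num_samples >= 2.
std::vector<double> sampleFootScan(const ElevationMap& map,
                                   const std::array<double, 2>& foot_xy,
                                   const std::array<double, 2>& foothold_xy,
                                   int num_samples = 10) {
  std::vector<double> scan;
  scan.reserve(num_samples);
  for (int k = 0; k < num_samples; ++k) {
    const double s = static_cast<double>(k) / (num_samples - 1);  // interpolation factor 0..1
    const double x = (1.0 - s) * foot_xy[0] + s * foothold_xy[0];
    const double y = (1.0 - s) * foot_xy[1] + s * foothold_xy[1];
    scan.push_back(map.heightAt(x, y));
  }
  return scan;  // one such scan per leg is appended to the observation vector
}
```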
#### Base Pose Tracking To achieve tracking of the reference base pose trajectory, we use \[r_{Bn}=e^{-\sigma_{Bn}\cdot\|\mathbf{b}^{*}(t+\Delta t)^{(n)}\ominus\mathbf{b}(t)^{(n)} \|^{2}}, \tag{1}\] where \(n=\{0,1,2\}\) is the derivative order, \(\mathbf{b}(t)\) is the measured base pose, \(\mathbf{b}^{*}(t+\Delta t)\) is the desired base pose sampled from the reference trajectory one policy step \(\Delta t\) ahead, and \(\ominus\) denotes the quaternion difference for base orientation, and the vector difference otherwise. We refer to the above reward function as a "soft" tracking task because large values can be scored even if the tracking error does not perfectly vanish. To further analyze the reward function, we decompose the base trajectory into three segments. The "head" starts at time zero, the "tail" stops at the prediction horizon, and the "middle" connects these two segments with each other. A logarithmic reward function would prioritize the tracking of the trajectory head, while a linear penalty would focus on making progress along the whole trajectory at once. Contrary, the exponential shape of the reward function splits the tracking task into several steps. During the initial epochs, the tracking error of the trajectory middle and tail will likely be relatively large, and thus, do not contribute significantly to the reward gradient. As a result, the network will minimize the tracking error of the trajectory head. Once its impact on the gradient diminishes, the errors corresponding to the trajectory middle will dominate the gradient landscape. In the final training stages, tracking is mostly improved around the trajectory tail. #### Football Tracking We choose a logarithmic function \[r_{pi}=-\ln(||\mathbf{p}_{i}^{*}-\mathbf{p}_{i}||^{2}+\epsilon), \tag{2}\] to learn foothold tracking, where \(\mathbf{p}_{i}\) is the current foot position of leg \(i\in\{0,\dots,3\}\), \(\mathbf{p}_{i}^{*}\) is the corresponding desired foothold, and \(0<\epsilon\ll 1\) is small number ensuring the function is well defined. The above reward function may be termed "hard" tracking task, as the maximum value can only be scored if the error reaches zero. As the tracking improves, the gradients will become larger, resulting in even tighter tracking toward the later training stages. A dense reward structure typically encourages a stance foot to be dragged along the ground to further minimize the tracking error. To prevent such drag motions to emerge, the above reward is given for each foot at most once during one complete gait cycle: more specifically, if and only if the leg is intended to be in contact and the norm of the contact force indicates a contact, i.e., if \(||\mathbf{f}_{i}||>1\), then the reward is given to the agent. #### Consistency In RL for legged locomotion, hesitating to move over challenging terrains is a commonly observed phenomenon that prevents informative samples from being gathered and thus impedes the agent's performance. This behavior can be explained by insufficient exploration: The majority of agents fail to solve a task while a small number of agents achieve higher average rewards by refusing to act. To overcome this local optimum, we propose to encourage consistency by rewarding actions that are intended by previous actions. In our case, we measure consistency as the similarity between two consecutive motion optimizations. If the solutions are similar, the agent is considered to be "consistent". 
We measure similarity as the Euclidean distance between two adjacent solutions and write \[r_{c}=\\ \sum_{\delta tj+t_{0}\in(T_{a}\cap T_{b})}-\delta t||\mathbf{b}_{a}^{*} (\delta tj+t_{0,a})\ominus\mathbf{b}_{b}^{*}(\delta tj+t_{0,b})||\\ -w_{p}||\mathbf{p}_{a}^{*}-\mathbf{p}_{b}^{*}||. \tag{3}\] Here, \(\mathbf{p}_{t}^{*}\) with \(t=\{a,b\}\) is a vector of stacked footholds, \(w_{p}>0\) is a relative weight, \(\delta t=0.01\,\mathrm{s}\) is the discretization time of the base trajectory, and \(t_{0}\) is the time elapsed since the optimization was started. The index \(a\) refers to the most recent solution, while \(b\) refers to the previous solution. It is important to note that the two solution vectors \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\), from which we extract the base and footholds, are only defined on their respective time intervals given by the optimization horizon \(\tau_{h}\), i.e, \(t_{a}\in T_{a}=[0,\tau_{h,a}]\) and \(t_{b}\in T_{b}=[0,\tau_{h,b}]\). #### Regularization To ensure that the robot walks smoothly, we employ two different penalty terms enforcing complementary constraints. The first term, \(r_{r1}=-\sum_{i}|\mathbf{v}_{i}^{T}\mathbf{f}_{i}|\), discourages foot-scuffing and end-effector collisions by penalizing power measured at the feet. The second term, \(r_{r2}=-\sum_{i}(\dot{\mathbf{q}}_{i}^{T}\mathbf{\tau}_{i})^{2}\), penalizes joint power to prevent arbitrary motions, especially during the swing phase. Other regularization terms are stated in the supplementary section S3. ### Training Environment To train the locomotion policy, we employ a custom version of Proximal Policy Optimization (PPO) [36] and a training environment that is mostly identical to that one introduced in [11]. It is explained in more detail in supplementary section S2. Simulation and back-propagation are performed on GPU, while the optimization problems are solved on CPU. #### Termination We use a simple termination condition where an episode is terminated if the base of the robot makes contact with the terrain. #### Domain Randomization We inject noise into all observations except for those designated as privileged. At each policy step, a noise vector \(\mathbf{n}\) is sampled from a uniform distribution and added to the observation vector, with the only exceptions of the desired joint positions and the height scan. For the elevation map, we add noise before extracting the height scan. The noise is sampled from an approximate Laplace distribution where large values are less common than small ones. We perturb the height scan with a constant offset, which is sampled from another approximate Laplace distribution for each foot separately. Both perturbations discourage the network to rely extensively on perceptive feedback and help to generalize to various perceptive uncertainties caused by odometry drift, occlusion, and soft ground. All robots are artificially pushed by adding a twist offset to the measured twist at regular time instances. Friction coefficients are randomized per leg once at initialization time. To render the motion robust against disturbances, we perturb the base with an external wrench and the feet with external forces. The latter slightly stiffens up the swing motion but improves tracking performance in the presence of unmodeled joint frictions and link inertia. The reference twist is resampled in constant time intervals and then hold constant. 
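The height-scan perturbation described in this paragraph can be sketched as follows. The text only specifies an "approximate" Laplace distribution, so the exact Laplace sampler and the scale parameters below are assumptions; for brevity, the noise is applied directly to the extracted scan rather than to the full elevation map.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Zero-mean Laplace sample via inverse-CDF sampling; stands in for the "approximate
// Laplace" distribution mentioned in the text (scale b is an assumed parameter).
double sampleLaplace(std::mt19937& rng, double b) {
  std::uniform_real_distribution<double> uni(-0.499, 0.499);
  const double u = uni(rng);
  return -b * (u < 0.0 ? -1.0 : 1.0) * std::log(1.0 - 2.0 * std::abs(u));
}

// Perturb one leg's height scan: a constant drift offset shared by the whole foot plus
// independent per-sample noise (scales in meters are illustrative only).
void perturbFootScan(std::vector<double>& scan, std::mt19937& rng,
                     double noise_scale = 0.01, double drift_scale = 0.05) {
  const double drift = sampleLaplace(rng, drift_scale);
  for (double& h : scan) {
    h += drift + sampleLaplace(rng, noise_scale);
  }
}
```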
The solutions for the TO problems are obtained using ground truth data, which include the true friction coefficients, the true external base wrench, and noise-free height map. In the presence of simulated noise, drift, and external disturbances, the policy network is therefore trained to reconstruct a base trajectory that the optimizer would produce given the ground truth data. However, there is a risk that the network learns to remove the drift from the height scan by analyzing the desired joint positions. During hardware deployment, such a reconstruction will fail because the optimizer is subject to the same height drift. To mitigate this issue, we introduce noise to the desired joint position observations, sampled from a uniform distribution with boundaries proportional to the drift value. ### Terrain Curriculum We use a terrain curriculum as introduced in [11]. Before the training process, terrain patches of varying types and difficulties are generated. As an agent acquires more skills and can navigate the current terrain, its level is upgraded, i.e., it will be re-spawned on the same terrain type, but with a harder difficulty. We have observed that the variety of terrains encountered during training heavily influences the sim-to-real transfer. We thus have included a total of \(12\) different terrain types with configurable parameters (Fig.8 D), leading to a total of \(120\) distinguishable terrain patches. The terrain types classify different locomotion behaviors, s.a. climbing ("stairs", "pits", "boxes", "pyramids"), reflexing ("rough", "rings", "flying objects"), and walking with large steps ("gaps", "pallets", "stepping stones", "beams", "objects with randomized poses"). Our terrain curriculum consists of \(10\) levels, where one of the configurable parameters is modulated to increase or decrease its difficulty. This results in a total of \(1200\) terrain patches, each with a size of \(8\times 8\,\mathrm{m}^{2}\), summing up to a total area of \(76800\,\mathrm{m}^{2}\), which is approximately the size of \(14\) football fields or \(10\) soccer fields. ### Training Solving the TO problem at the policy frequency during training was found to provoke poor local optima. In such a case, the optimizer adapts the solution after each policy step. If the agent is not able to follow the reference trajectory, the optimizer will adapt to the new state s.t. the tracking problem becomes feasible again. This means that the agent can exhibit "lazy" behavior and still collect some rewards. We prevent such a local optimum by updating the optimizer only at a leg touch-down (i.e., after \(0.465\) seconds). This also greatly reduces learning time because computational costs are reduced by a factor of \(23\) compared to recomputing the trajectories at the policy frequency. After a robot fell (on average, once every \(18\) seconds), was pushed (after \(10\) seconds) or its twist commands changed (three times per episode), the optimized trajectories are not valid anymore. To guarantee that the locomotion policy generalizes across different update rates, we additionally recompute the solution in all those scenarios. We trained the policy with a massive parallelization of \(64^{2}=4096\) robots, for a total of \(90000\) epochs. Each epoch consisted of \(45\) learning iterations where each iteration covered a duration of \(0.02\) seconds. Considering the variable update rate explained previously, this resulted in a total of \(8295\) days (or \(23\) years) of optimized trajectories. 
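The throttled replanning scheme described above can be summarized in a few lines of C++-style pseudocode. The structure and names are illustrative assumptions rather than the authors' code; they only capture the triggering conditions named in the text (leg touch-down, a fall or respawn, an artificial push, or a change of the commanded twist).

```cpp
// Conditions under which a new TO solution is requested during training.
struct ReplanTriggers {
  bool leg_touched_down = false;  // swing-to-stance switch of any leg
  bool robot_reset      = false;  // fall / respawn
  bool robot_pushed     = false;  // artificial push applied
  bool command_changed  = false;  // reference twist resampled
};

bool shouldReplan(const ReplanTriggers& t) {
  return t.leg_touched_down || t.robot_reset || t.robot_pushed || t.command_changed;
}

// Sketch of one training step at the policy rate:
//   if (shouldReplan(triggers)) solution = optimizer.solve(privilegedObservations);
//   action = policy.sample(observations);   // executed every step regardless
```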
The policy can be deployed after about one day of training (\(6000\) epochs), reaches \(90\,\mathrm{\char 37}\) of its peak performance after three days (\(20000\) epochs), and is fully converged after two weeks (\(90000\) epochs). In comparison, the baseline-rl-1 policy was trained for \(4000\) epochs with \(1000\) parallelized robots over \(5\) consecutive days. Each epoch lasted for \(5\) seconds, resulting in a throughput of \(46\) simulated seconds per second. Our policy was trained for \(14\) days, with each epoch lasting for \(0.9\) seconds, leading to a throughput of \(27\) simulated seconds per second. Thus, despite generating \(1.6\) years of desired motions per day, our approach has only a \(1.7\) times lower throughput than the baseline. ### Deployment We deploy the policy at a frequency of \(50\,\mathrm{Hz}\) zero-shot without any fine-tuning. The motion optimizer runs at the largest possible rate in a separate thread. For TAMOLS with a trotting gait, this is around \(400\,\mathrm{Hz}\) and for baseline-to-2 around \(100\,\mathrm{Hz}\) (which both are faster than the policy frequency). At each step, the policy quires the most recent solution from the thread pool and extracts it \(\Delta t=0.02\,\mathrm{s}\) ahead of the most recent time index. For our experiments, we used three different types of ANYmal robots [34], two version C and one version D, for which we trained different policies. ANYmal C is by default equipped with four Intel RealSense D435 depth cameras whereas ANYmal D has eight depth cameras of the same type. For the second Version C, the depth cameras were replaced with two identical Robosense Beparl dome LiDAR sensors. Motion optimization and the forward propagation of the network policy are done on a single Intel core-i7 8850H machine. Elevation mapping [37] runs on a dedicated onboard Nvidia Jetson. ## Concluding Statement In this work, we emphasized that TO and RL share complementary properties and that no single best method exists to address the open challenges in legged locomotion. The proposed control architecture leverages this observation by combining the planning capabilities of the former and the robustness properties of the latter. It does, by no means, constitute a universal recipe to integrate the two approaches in an optimal way for a generic problem. Moreover, one could even extend the discussion with self- and unsupervised learning, indirect optimal control, dynamic programming, and stochastic optimal control. Nevertheless, our results may motivate future research to incorporate the aspect of planning into the concept RL. ## Supplementary materials Sections S1 to S6 Tables S1 to S3
2309.05671
tSPM+; a high-performance algorithm for mining transitive sequential patterns from clinical data
The increasing availability of large clinical datasets collected from patients can enable new avenues for the computational characterization of complex diseases using different analytic algorithms. One of the promising new methods for extracting knowledge from large clinical datasets involves temporal pattern mining integrated with machine learning workflows. However, mining these temporal patterns is a computationally intensive task and has considerable memory demands. Current algorithms, such as the temporal sequence pattern mining (tSPM) algorithm, already provide promising outcomes, but still leave room for optimization. In this paper, we present the tSPM+ algorithm, a high-performance implementation of the tSPM algorithm, which adds a new dimension by including the duration of the temporal patterns. We show that the tSPM+ algorithm provides a speedup of up to a factor of 980 and an up to 48-fold reduction in memory consumption. Moreover, we present a Docker container with an R package, and we provide vignettes for easy integration into existing machine learning workflows. We also use the mined temporal sequences to identify Post COVID-19 patients and their symptoms according to the WHO definition.
Jonas Hügel, Ulrich Sax, Shawn N. Murphy, Hossein Estiri
2023-09-08T17:47:31Z
http://arxiv.org/abs/2309.05671v1
tSPM+; a high-performance algorithm for mining transitive sequential patterns from clinical data ## Abstract The increasing availability of large clinical datasets collected from patients can enable new avenues for the computational characterization of complex diseases using different analytic algorithms. One of the promising new methods for extracting knowledge from large clinical datasets involves temporal pattern mining integrated with machine learning workflows. However, mining these temporal patterns is a computationally intensive task and has considerable memory demands. Current algorithms, such as the temporal sequence pattern mining (tSPM) algorithm, already provide promising outcomes, but still leave room for optimization. In this paper, we present the tSPM+ algorithm, a high-performance implementation of the tSPM algorithm, which adds a new dimension by including the duration of the temporal patterns. We show that the tSPM+ algorithm provides a speedup of up to a factor of 980 and an up to 48-fold reduction in memory consumption. Moreover, we present a Docker container with an R package, and we provide vignettes for easy integration into existing machine learning workflows. We also use the mined temporal sequences to identify Post COVID-19 patients and their symptoms according to the WHO definition. ## Introduction While the primary functionality of electronic health records (EHRs) is to capture patient data for billing and communication purposes, as a research data source, EHRs can provide insights into patient journeys and a better understanding of complex diseases [1]. Leveraging this information has become feasible through the rapid growth in available computational power and the development of new analysis methods. Harnessing big data analytics enables new approaches to disease prevention, control, and population health management [2, 3], the diagnosis of (rare) diseases [4, 5, 6], treatment options [7, 8, 9, 10, 11], and drug development [8, 12]. A few challenges, such as harmonization and interoperability [13], noisiness [14, 15], the availability of computational power, models, and data [16, 15], and privacy and security [16, 17], need to be addressed when working with big data in healthcare. Nevertheless, the large amount of healthcare data presents a valuable resource that, once properly utilized, has the potential to transform patient healthcare, research, and population health [18, 19]. While we have not yet fully tapped into the immense potential of big healthcare data, there are already successful approaches in place, such as machine learning, association rule mining, and temporal pattern mining, that are making a significant impact. This paper presents multiple significant contributions. We introduce an optimized and enhanced implementation of the transitive sequential pattern mining (tSPM) algorithm [20, 21], referred to as tSPM+, for mining transitive sequential patterns from time-stamped clinical data. Estiri et al. [20, 21] introduced an innovative approach for mining transitive (temporal) sequence patterns (tSPM) from electronic health records, which proves beneficial for enhancing signal detection in various machine learning models [20, 21, 22]. In 2021, the tSPM algorithm was recognized as a significant contribution to the field of clinical informatics [23]. Our implementation is based on a C++ library wrapped within an R package, delivering notable improvements in both speed and memory consumption compared to the previous implementation.
Specifically, tSPM+ exhibits a speedup up to factor \(\sim\)920 and \(\sim\)48-fold reduction in memory consumption. Additionally, the R-package provides a functionality to split the dbmart in chunks with an adaptive size to fit the available memory limitations. The substantial acceleration of the algorithm unlocks new potential use cases, particularly in leveraging temporal sequences and their durations to simplify complex tasks such as identifying patients with rare or complex diseases, including conditions like Post COVID-19, commonly known as long covid [24]. To demonstrate the application of tSPM+ in such scenarios, we provide a detailed vignette illustrating the implementation of one of these tasks. Specifically, we showcase how to identify patients with Post COVID-19 and associated symptoms within a synthetic database. Furthermore, we highlight the seamless integration of the tSPM+ algorithm into existing machine learning workflows. By outlining the steps required to incorporate tSPM+ effectively, we offer researchers a straightforward approach to harness the algorithm's capabilities within their established frameworks. To facilitate easy access and reproducibility of our work, we provide a Docker container encompassing an RStudio instance pre-installed with tSPM+, synthetic data, and the accompanying vignettes. This container grants researchers and readers an accessible entry point and ensures easy reproducibility of our findings. ## Background ### Association rule mining The field of data mining has witnessed significant advancements in extracting knowledge and patterns from extensive databases [25, 26, 27, 28, 29, 30]. One specific area within data mining is association rule mining (ARM), which aims to extract rules that capture associations, correlations, or frequent patterns among entries in a given database [29, 31]. Since its introduction by Agrawal et al. [29] in 1993, initially for analyzing market basket data, ARM has evolved into an active and extensive research area in data science, encompassing diverse data sources [25, 26, 27, 28, 30, 31, 32, 33, 34, 35]. Recently, Shahin et al. [25] conducted a systematic review and identified three commonly employed ARM algorithms: Apriori [29], FP-Growth [36] and Eclat [37]. Over the years, these algorithms have undergone numerous enhancements and adaptations [38, 39, 40, 41, 42, 43]. Although general association rule mining typically overlooks temporal relationships among individual data entries [44], EHR data inherently possesses temporal dependencies. Consequently, temporal pattern mining techniques are employed to account for such relationships. Sequential pattern mining (SPM) represents a subtype of temporal pattern mining that incorporates the order of entries in the database, including their temporal aspects, while extracting frequent patterns [45]. Within the healthcare domain, SPM serves as a prevalent technique for decision support systems and as input for machine learning algorithms. Leveraging sequential patterns, instead of considering individual entries, facilitates enhanced signal detection for certain machine learning algorithms, making it a widely adopted approach in healthcare [20, 21, 46, 47, 48, 49, 9]. In some cases, SPM algorithms account for the duration of the sequences. Notably, temporal pattern mining encompasses more than just sequential pattern mining and encompasses extensive subfields such as time series data analysis [50]. 
While ARM and SPM algorithms offer distinct perspectives on data analysis, they both suffer from shared drawbacks [51]. Their application to larger databases demands substantial computational resources due to their inherent complexity [51]. Moreover, the reliability and accuracy of their outcomes rely heavily on the quality of the input data, making the presence of noise and incomplete data, which are prevalent in medical datasets, particularly influential. Furthermore, the well-established challenge of safeguarding data privacy in the medical domain must be carefully considered when employing ARM and SPM algorithms for medical data analysis. However, overcoming these obstacles can yield valuable insights and enable the exploration of complex research inquiries, ultimately contributing to the enhancement of patient care and well-being [21, 22, 36, 40, 46, 48, 50, 51]. ### Transitive sequential pattern mining (tSPM) algorithm Implemented in the R programming language, the tSPM algorithm operates on patient data structured as a simple table, encompassing the patient number, date, and clinical representations from the database, each denoting the clinical feature space X, hence referred to as 'phenX' in abbreviation. This table adheres to the MLHO [52] format and is referred to as a dbmart. The tSPM algorithm [20, 21] compasses three key steps. First, it extracts all phenX entries for each patient, sorting them based on their dates to establish a temporal order. Second, tSPM iterates through the sorted phenX entries and generates sequences that initiate with the current phenX and conclude with another phenX having a later date. This process mines ((n-1)(n))/2 sequences per patient, where n represents the number of entries for the patient in the dbmart. Given an average of ~400 entries per patient and a cohort of 5000 patients, the tSPM algorithm generates a staggering 399,000,000 sequences. Consequently, the inclusion of a third optional step becomes highly recommended, involving sparsity screening to mitigate the sequence count. Estiri et al. utilize the Minimize Sparsity Maximize Relevance (MSMR) algorithm [20], which employs a straightforward sparsity screening and employs joint mutual information to discard sparse sequences prevalent in small patient subsets. Fig 1. shows the pseudocode for the tSPM algorithm. Subsequently, Estiri et al. employ the extracted sequences as input for various machine learning tasks [20, 21, 22], consistently outperforming alternative approaches. While the combination of tSPM and machine learning tasks yields superior signal detection compared to the conventional approach of using phenX as direct input for machine learning [22], the tSPM algorithm leaves potential for improvement concerning memory consumption and runtime. Furthermore, it is important to note that the tSPM algorithm does not provide information regarding the duration of a sequence, specifically the time difference between the dates of the two phenX entries. In the following sections we present tSPM+, an optimized implementation of the tSPM algorithm as a C++ library available as an R package. This yields substantial speed and memory improvements compared to the original version. and allows for more complex uses-cases. These are described in two vignettes, where we highlight a seamless integration Figure 1: The pseudocode of the basic tSPM algorithm. in a machine learning workflow as well as a scenario to leverage the mined sequences for Post COVID-19 detection. 
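Since the pseudocode of Fig. 1 is not reproduced here, the enumeration step of the basic tSPM algorithm can be sketched as follows. The sketch is written in C++ for consistency with the later sections (the original tSPM is implemented in R), uses illustrative type and function names, and glosses over details such as the handling of identical dates and the optional sparsity screening.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// One row of a dbmart in MLHO format: patient number, date, and phenX.
struct Entry { int patient_id; int date; std::string phenx; };

// For every patient, pair each entry with every later entry of the same patient,
// yielding n(n-1)/2 transitive sequences for a patient with n entries.
std::vector<std::pair<std::string, std::string>> mineTransitiveSequences(std::vector<Entry> dbmart) {
  std::sort(dbmart.begin(), dbmart.end(), [](const Entry& a, const Entry& b) {
    return a.patient_id != b.patient_id ? a.patient_id < b.patient_id : a.date < b.date;
  });
  std::vector<std::pair<std::string, std::string>> sequences;
  for (std::size_t i = 0; i < dbmart.size(); ++i) {
    for (std::size_t j = i + 1;
         j < dbmart.size() && dbmart[j].patient_id == dbmart[i].patient_id; ++j) {
      sequences.emplace_back(dbmart[i].phenx, dbmart[j].phenx);  // start phenX -> end phenX
    }
  }
  return sequences;
}
```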
For accessibility and reproducibility, we provide a Docker container with tSPM+, synthetic data and the aforementioned vignettes, ensuring easy access and replication. ## Methods While the original implementation of the tSPM algorithm achieved good results, we recognized the need for a more performant implementation. Optimizing its performance enables us to sequence more patient data allowing for more complex analyses revealing more useful and precise information for downstream phenotype modeling. Additionally, integrating the duration of the sequences adds new dimensions to our analyses and enables even more complex use cases, such as the implementation of the long covid definition. ### transitive Sequential Pattern Mining plus (tSPM+) algorithm The tSPM+ algorithm follows the same fundamental principles as the tSPM algorithm. It constructs sequences by combining each entry of a patient with all subsequent entries, as outlined in the tSPM section. Notably, the algorithm also captures the duration of these sequences, the time intervals between entry dates, expanding the potential of the generated sequences. Consequently, the data must adhere to the MLHO format to support these functionalities. To optimize memory efficiency, the tSPM algorithm either discards the description column in the preprocessing step or necessitates its removal in before. To facilitate an efficient implementation, we have developed the tSPM+ algorithm as a high-performance C++ library. This implementation can be directly integrated into low-level C++ programs or encapsulated within higher-level languages such as R or Python. The C++ library encompasses not only the tSPM+ algorithm itself but additional auxiliary functions that have demonstrated utility when working with sequences. By implementing the tSPM+ algorithm as a C++ library, we capitalize on the advantages of leveraging native data formats and performing faster and more efficient operations compared to higher-level languages. Consequently, we made the decision to store data as a numeric representation, albeit with the trade-off of requiring lookup tables for later translation to their original forms. During the creation of these lookup tables, we assign a running number, starting from 0, to each unique phenX and patient ID. This number is stored as a 32-bit unsigned integer, enabling us to use the patient ID as an index in arrays. Crucially, this numeric representation facilitates the storage of phenX pairs as a distinct hash function that is easily reversible. To construct a sequence, we append leading zeros to the end phenX, resulting in a 7-digit number. We then append this number to the first phenX, creating a unique numeric sequence for each phenX pair. This representation can be effortlessly reverted back to its original form and is interpretable by humans, provided the number of digits for the last phenX is known. Furthermore, it allows us to store the sequence as a 64-bit integer (long long). For a more detailed explanation of the sequence creation process, refer to Figure 2. The duration of a sequence can be stored in multiple ways. We decided to store the duration of a sequence in days as default, but the unit can be changed via a parameter. Using days allows us to incorporate the duration into the number that represents the sequence. Therefore, we utilize cheap bitshift operations to shift the duration on the last bits of the sequence. 
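The numeric pair encoding just described can be illustrated with a short C++ sketch. The padding constant and function names below are illustrative; the exact layout used by the tSPM+ sources, including how the duration is optionally bit-shifted into the key, may differ.

```cpp
#include <cstdint>

// Encode a phenX pair as one 64-bit key: the end phenX is zero-padded to seven digits
// and appended to the start phenX, e.g. (42, 1234) -> 420001234.
std::uint64_t encodeSequence(std::uint32_t start_phenx, std::uint32_t end_phenx) {
  return static_cast<std::uint64_t>(start_phenx) * 10'000'000ULL + end_phenx;
}

// The encoding is reversible as long as the number of digits reserved for the end phenX
// is known.
void decodeSequence(std::uint64_t seq, std::uint32_t& start_phenx, std::uint32_t& end_phenx) {
  start_phenx = static_cast<std::uint32_t>(seq / 10'000'000ULL);
  end_phenx   = static_cast<std::uint32_t>(seq % 10'000'000ULL);
}
```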
Nevertheless, we decided to store the duration in an extra variable to ease the program flow, but leverage this feature in some helper functions, e.g. when calculating duration sparsity. Since the duration is stored in days using unsigned 32 bit integers, it reduces the memory footprint further. While the numeric representation significantly contributes to the substantial memory reduction, its benefits extend to the use of numerical operators, which allows for fast comparison of the individual values. Nevertheless, the most acceleration arises from the parallelization with OpenMP [53]. The parallelization of the tSPM+ algorithm is straightforward by simultaneously creating the sequences for multiple patients in different threads. This requires sorting the dbmart after patient id as first and date as the second criterion to ensure that each patient is one chunk of entries. For an efficient parallel sorting we leverage the in-place super scalar sample sort (ips4o) algorithm from Axtman et al. [54]. Additionally, the entries in each chunk are chronologically arranged, enabling the creation of all sequences for a phenX by iterating over all subsequent phenX in the same chunk. Consequently, to harness parallelization, we distribute the patient chunks over multiple threads storing the created sequences in thread-specific vectors. This strategic design mitigates resource-intensive cache invalidations, thus optimizing performance. Merging these vectors results in a huge vector of sparse sequences. Sparse sequences occur only for a small number of patients. By removing them, keeping only significant sequences, we preempt overfitting in subsequent machine learning applications. The Figure 2: The workflow to mine the transitive sequences. At first, the data is extracted from the database and transformed into the MLHO format. After transforming it to numeric, the dbMart gets sorted and the sequences are created for each patient. Each phenX for a patient is coded in a different color. We highlighted the parts (substrings and duration) of the created sequence in the color of the corresponding phenX to visualize how the phenX can be easily extracted from the sequence. simplest way to identify sparse sequences is to count the occurrences of a sequence and remove it when the count is less than the threshold. To optimize performance in the parallel processing, we again leverage the ips4o algorithm from Axtman et al. [54] to sort the sequences by their id. Afterwards, a sophisticated approach is applied to methodically mark sparse sequences before removing them. We first determine the start positions within the vector for each sequence, allowing us to divide it in equal chunks for concurrent processing on multiple parallel threads. In each thread, we iteratively calculate the number of each sequence, by subtracting the start position of the next sequence from the current. If this number is less than the sparsity threshold, we label this sequence for removal by assigning the maximal possible value to the patient number. Once all sequences are labeled, we sort them by their patient id. Subsequently, we determine the first occurrence of the maximal integer value as patient id and erase all values after this entry. This strategy optimized the number of memory allocations by minimizing its frequency to one. Additionally, the sequence chunks are large enough to mitigate cache invalidations, when altering patients numbers. 
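A simplified, single-threaded C++ sketch of the mark-and-truncate sparsity screening described above is given below. The actual library parallelizes the marking with OpenMP and sorts with ips4o, so the data layout and names here are illustrative assumptions rather than the tSPM+ implementation.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

struct MinedSequence {
  std::uint32_t patient_id;
  std::uint64_t sequence_id;  // encoded phenX pair
  std::uint32_t duration;     // in days
};

// Remove sequences occurring fewer than min_count times, via marking and a single shrink.
void sparsityScreen(std::vector<MinedSequence>& seqs, std::size_t min_count) {
  constexpr std::uint32_t kRemoved = std::numeric_limits<std::uint32_t>::max();
  // 1) Sort by sequence id so that identical sequences form contiguous runs.
  std::sort(seqs.begin(), seqs.end(), [](const MinedSequence& a, const MinedSequence& b) {
    return a.sequence_id < b.sequence_id;
  });
  // 2) Mark sparse runs by overwriting the patient id with the maximal integer value.
  for (std::size_t start = 0; start < seqs.size();) {
    std::size_t end = start;
    while (end < seqs.size() && seqs[end].sequence_id == seqs[start].sequence_id) ++end;
    if (end - start < min_count) {
      for (std::size_t k = start; k < end; ++k) seqs[k].patient_id = kRemoved;
    }
    start = end;
  }
  // 3) Sort by patient id, locate the first marked entry, and truncate in one operation.
  std::sort(seqs.begin(), seqs.end(), [](const MinedSequence& a, const MinedSequence& b) {
    return a.patient_id < b.patient_id;
  });
  const auto first_removed = std::find_if(seqs.begin(), seqs.end(),
      [&](const MinedSequence& s) { return s.patient_id == kRemoved; });
  seqs.erase(first_removed, seqs.end());
}
```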
Finished through shrinking the vector to its new size, we retain only non-sparse sequences, effectively refining the sequences. ### R package: In order to enhance accessibility to the underlying low-level C++ library, we developed a user friendly R-package. It encapsulates the performant C++ functions making them easily available and usable in the R environment. Rcpp [55] and RcppParallel [56] are widely adopted R packages to interfacing C++ functionalities are often harnessed to speed up and parallelize functionalities in R packages. Consequently, we chose them to facilitate a seamless integration of the tSPM+ C++ library. Given that tSPM+ exclusively applicable to numeric data, the R package incorporates a utility function to convert alpha-numeric dbmarts to their numeric counterparts and the corresponding look-up tables. Furthermore, the R-package provides a utility function to enable the adaptive partitioning of the dbmart based on the available memory and the number of created sequences. Applying this approach segregates the data into manageable chunks, which can be sequenced separately. Thereby it enables the sequencing of phenotypes on ressource-restrained platforms, like laptops. This functionality is particularly relevant since the maximum number of entries in an R vector is limited to 2\({}^{**}\)31-1 entries [57]. This threshold can be swiftly reached when sequencing substantial patient cohorts with multiple tens of thousands patients. To enhance useability the R-package is accompanied by two instructive vignettes. Each vignette encompasses illustrated code and comprehensive explanations on significant use cases for transitive temporal sequences. These use cases are demonstrable either with provided synthetic example data or data from the linked dependency packages in the vignettes. The first vignette guides the user through integrating the sequencing process into the MLHO machine learning workflow [52, 58]. In contrast, the second vignettes showcased the synergistic utilization of sequences and utility functions to address current challenges, for example, implement complex disease definitions, such as the WHO definition of Post COVID-19. ## Benchmarks We performed multiple benchmarks to measure the performance of tSPM+. One to compare tSPM+ with tSPM and another one to analyze the possible performance. Since not only the data set characteristics, such as max or average number of phenX per patient and number of patients, might influence the performance of the algorithms, but also the scheduling of the operating system and background processes, we performed 10 iterations of all benchmarks and reported the average, as well as the min/max values for memory consumption and speed. All benchmarks were performed on a machine with Ubuntu 22.04.2 LT, 2 Intel(r) Xeon(r) Gold 5220R CPUs @ 2.2GHz, each with 24 Cores and 48 Threads, and 256 GB of available memory. We used R 4.1.2 and compiled the source code using gcc version 11.3.0. We used the time [59] program to measure runtime and maximal memory consumption for each iteration of the tSPM(+) calls. The benchmark is orchestrated through a bash script, which executes the different R scripts iteratively, for a total of 10 cycles. These scripts encompassed: 1. tSPM without sparsity screening 2. tSPM with sparsity screening 3. tSPM+ in-memory with sparsity screening 4. tSPM+ file-based with sparsity screening 5. tSPM+ in-memory without sparsity screening 6. tSPM+ file-based without sparsity screening. 
Within each R script the data was loaded and the corresponding algorithm invocated. The measurement protocol included total runtime and memory consumption as mentioned before, and additionally, the runtime measurement for data loading, sequencing and sparsity screening, if applicable, within the R scripts. On the one hand, we include the transformation into a numeric representation into the benchmark, because it is a preprocessing step that distinguishes the tSPM and tSPM+ algorithms. On the other hand, we excluded the transformation into the MLHO format from the measurements due to being required by both algorithms. The bash and R scripts are embedded in the available docker container, as well as in the corresponding GitHub Repo ([https://github.com/JonashHuegel/tSPMPlus_benchmarks](https://github.com/JonashHuegel/tSPMPlus_benchmarks)). Furthermore, this repository stores a detailed list of the used R packages, their dependencies including the corresponding version numbers Despite the potentially reduced runtime and memory demands of the C++ implementation, we benchmark the R version of tSPM+ algorithm to enhance the comparability with the original tSPM implementation. ### Comparison Benchmark We analyze the performance of the original tSPM with the tSPM+ algorithm on real world data that were already used together with the old tSPM algorithm in an older AD study [22] to evaluate the performance in a real world setting. We used the patient data from 4985 patients with an average of 471 entries per patient from the Mass General Brigham Biobank. The Mass General Brigham Institutional Review Board (protocol# 2017P000282) allows the use of the biobank data as per the Biobank Consent signed by all participants in the MGB Biobank. Following the protocol of the previous study [22], we only kept the first occurrence of a phenX per patient, e.g. when a discarded phenX occurs in the next sequence for a patient, we do not store that sequence. We did this to account for the number of created sequences and the required computational resources of the original tSPM algorithm. Deviating from the previous study, we employed only the sparsity screening from the MSMR function [20] with the tSPM algorithm, but excluded the Joint Mutual Information to select the most relevant features. The tSPM+ library provides a native sparsity function, hence we applied it in the benchmark. ### Performance Benchmark The second benchmark measures the achievable performance and is performed on the 100k Covid-19 synthetic data set from Synthea(tm)[60], [61]. After extracting data for ~125 000 synthetic patients and reducing it to 35 000 patients with an average of 318 entries, we stored it in the MLHO format. The reduction of the dataset deemed necessary as the C++ tSPM+ algorithm mined an excessive number of sequences causing failure during the transformation into an R dataframe. This arose from R limiting the number of elements per vector, capped at 2*{31}-1 elements [57]. While employing adaptive partitioning is a viable approach, we consciously opted against it. Implementing it would introduce extra iterations of the sequencing process without substantial benefits and increasing the runtime linear. ## Results ### Implementations of tSPM+ #### The C++ library The C++ library is implemented in C++17 and is published on GitHub ([https://github.com/JonasHuegel/tspm_cop_backend](https://github.com/JonasHuegel/tspm_cop_backend)) under the MIT license. 
While this implementation is not a direct usable command line tool, it is accompanied by an runnable example file to demonstrate how to include the library in other programs. Moreover, the library encompasses a native function for the sparsity screening and a broad array of additional utility functions allowing fast operations on the sequences. These functions facilitate tasks such as extracting functions with given start phenX, end phenX or specified minimum durations. Another function combines these functions and allows to extract all sequences that end with phenX, which is an end phenX of all sequences with a given start phenX. The tSPM+ implementation offers two distinct operational modes. The first mode is file based, creating a file storing all generated sequences for each patient. The second mode operates completely in memory, providing the sequences as one comprehensive vector. ### The R package The R-package is published on GitHub ([https://github.com/JonasHuegel/tSPMPlus_R](https://github.com/JonasHuegel/tSPMPlus_R)) under the MIT license and encompasses the C++ library as a git submodule. The R package is accompanied by two vignettes and a synthetic dbmart providing examples on how to leverage the outstanding opportunities of the tSPM+ algorithm. Integrating tSPM+ in the MLHO Machine Learning workflow Integrating the mined sequences into existing machine learning workflows is a necessity to leverage the full potential of the sequences. Consequently, the first vignette encompasses instructions to integrate tSPM+ into the MLHO Machine Learning framework. It builds onto the original MLHO vignette [58] and demonstrates how to leverage the sequences for the classification tasks instead of raw EHR entries. In the first step, we load the example data from the MLHO package, converting it to numeric and handing it over to the tSPM+ function call to extract the sequences and perform the sparsity screening. The created, non-sparse sequences are handed over to the MSMR algorithm extracting the 200 most significant sequences. Following the original vignette, we are using MLHO to train the classifier on the remaining relevant sequences. Finally, the vignette demonstrates how the sequences reported as significant for the classification task can be translated back to their descriptions to become fully human readable again. #### Leveraging tSPM+ to identify Post COVID-19 patients The second vignette encompasses a more complex use case of temporal sequences. We highlight in this vignette how the transitive sequences and their duration can be leveraged to identify which patient has which Post COVID-19 symptom according to the WHO definition. To be considered a Post COVID-19 symptom, a symptom must occur after a covid infection and is at least ongoing for two months, if it can not be excluded by another rationale from the patient. Usually the symptoms appear 3 months after the infection or later, but this is not a mandatory criteria for Post COVID-19 [24]. We utilize a modified version of the synthetic Synthea COVID-19 data set [60], which is included into the R package, as example data. At first, we demonstrate how to transform this alphanumeric dbmart to numeric. Afterwards, we leverage a util function of the tSPM+ library to extract all sequences that end with a phenX that is for at least one patient the end phenX of a sequence starting with covid. From this set we exclude all sequences that did not start with covid. 
Then we exclude, on a patient level, candidate sequences that either occur only once or for which the maximal difference in duration between sequences with the same end phenX is less than 2. All remaining sequences are candidates which now need to be excluded by other sequences from a patient. Therefore, we sequence all sequences that end with a candidate phenX and compute pairwise correlations between the sequence and the (end phenX, duration bucket) tuple. If a patient had a sequence with a high correlation, even if it does not imply causation, together with the corresponding candidate phenX, we removed the candidate phenX for this patient. After we remove each candidate phenX for which the patient has at least one other sequence that ends with this candidate phenX and has a high, significant correlation, the remaining candidates are Post COVID-19 symptoms for the corresponding patient. Finally, the vignette demonstrates how to convert the numeric sequences to human readable descriptions. ## Benchmark ### Comparison Benchmark The tSPM+ algorithm massively outperforms the old tSPM implementation in computation time as well as in memory consumption in the comparison benchmark. The file-based tSPM+ implementation achieved a speed-up by a factor of ~920, from ~12 900 seconds (a little more than three and a half hours) to ~14 seconds, and a memory reduction from ~62.62 GB to ~1.3 GB, while the in-memory tSPM+ implementation needs ~60 seconds and 43.34 GB of memory, an improvement by a factor of ~210 in speed and ~1.4 in memory usage, respectively. We have to note that for the in-memory approach half of the memory was allocated during the transformation from the C++ data structure into an R data frame and could be avoided when using the library in a C++ program. The large difference between the file-based and in-memory implementations of tSPM+ is completely equalized when we consider the sparsity screening process. Both implementations require around 25 GB of memory and run in around 1 minute, ~56 and ~64 seconds respectively. Therefore, they clearly outperform the old tSPM implementation, which has a runtime of ~19 020 seconds and a memory consumption of ~205 GB, providing a speed-up by a factor of ~297 and an eightfold improvement regarding memory consumption. ### Performance Benchmark When running the performance benchmark with 100k patients with an average of 318 entries, every run failed at the end with an error when converting the used C++ data structure into an R dataframe. This happens because R has a limit of \(2^{31}-1\) entries per vector and we sequenced 7 195 858 303 (close to \(2^{33}\)) sparse sequences. Therefore, we reran the benchmark with only 35k patients and report the corresponding runtimes and memory consumption. As in the comparison benchmark, the file-based tSPM+ algorithm without sparsity screening is the fastest, with an average runtime of ~37 seconds and a memory consumption of ~2 GB, outperforming the in-memory approach, which required 109 GB of memory and had a runtime of ~214 seconds. Again, this massive lead is lost when the sparsity screening is applied.
The file-based tSPM+ algorithm as well as the in-memory version with sparsity screening require an average of ~108 GB of memory. The speed advantage melts down to a difference of ~8 seconds, with a runtime of ~288 seconds for the in-memory approach and ~280 seconds for the file-based approach. Table 2 shows the min, max and average runtime and memory consumption of the performance benchmark. We report the more detailed runtimes in the appendix. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{3}{|c|}{**Implementation**} & \multicolumn{3}{c|}{**Memory consumption (in GB)**} & \multicolumn{3}{c|}{**Runtime (hh:mm:ss)**} \\ \hline **Algo** & **Sparsity Screening** & **In-Memory / File-based** & **Min** & **Max** & **Average** & **Min** & **Max** & **Average** \\ \hline tSPM & without & In-Memory & 62,27 & 62,82 & 62,62 & 3:30:08 & 3:37:26 & 3:34:09 \\ \hline tSPM & included & In-Memory & 201,09 & 207,60 & 205,23 & 5:10:42 & 5:24:08 & 5:17:27 \\ \hline tSPM+ & without & In-Memory & 43,34 & 43,34 & 43,34 & 00:00:58 & 00:01:11 & 00:01:01 \\ \hline tSPM+ & included & In-Memory & 25,89 & 25,89 & 25,89 & 00:01:01 & 00:01:07 & 00:01:04 \\ \hline tSPM+ & included & File-based & 22,26 & 28,10 & 24,34 & 00:00:52 & 00:00:59 & 00:00:56 \\ \hline tSPM+ & without & File-based & 1,33 & 1,33 & 1,33 & 00:00:13 & 00:00:14 & 00:00:14 \\ \hline \end{tabular} \end{table} Table 1: Average, min and max values of the memory consumption and runtime for all implementations during the comparison benchmark. We provide a more detailed enumeration for each run in the appendix. ### Performance on End User devices Additionally, we ran the tSPM+ algorithms on some end user devices (laptops or workstations). Even on devices with only 4 to 8 cores and less than 16 GB of memory, we were able to run the tSPM+ algorithm to sequence more than 1000 patients with ~400 entries per patient in less than 5 minutes. ### Reproducibility and availability of the source code and examples By integrating the source code from the above-mentioned GitHub repositories into a docker container, we provide low-level access to the tSPM+ algorithms as well as ensure the reproducibility of our benchmarks. The docker container is based on rocker:rstudio and provides an RStudio instance where tSPM+, tSPM, MLHO and all dependencies are already pre-installed and ready to use. Furthermore, the docker container encompasses both vignettes and their required data. Therefore, it provides two examples demonstrating how to use tSPM+ on synthetic data, and additionally a straightforward approach for users to deploy tSPM+ and MLHO on their own data. The buildfile and the container are available in the following GitHub repository: [https://github.com/JonashIuegel/tSPMPlusDocker](https://github.com/JonashIuegel/tSPMPlusDocker). Additionally, we froze the versions of the code and the docker container and provide them online at [62]. ## Discussion In summary, the tSPM+ algorithm significantly outperforms the original tSPM. A fraction of the speedup is achieved by replacing slow string operations and comparisons with faster numeric ones.
Consequently, we require 128 bit or 16 byte to store a sequence (8 byte for the sequence, and 4 byte each for the duration and the patient id). This is significantly smaller than when we use strings (characters) to store all this information. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{3}{|c|}{**Implementation**} & \multicolumn{3}{c|}{**Memory consumption (in GB)**} & \multicolumn{3}{c|}{**Runtime (hh:mm:ss)**} \\ \hline **Algo** & **Sparsity Screening** & **In-Memory / File-based** & **Min** & **Max** & **Average** & **Min** & **Max** & **Average** \\ \hline tSPM+ & without & In-Memory & 109,63 & 109,63 & 109,63 & 00:03:10 & 00:04:53 & 00:03:34 \\ \hline tSPM+ & included & In-Memory & 106,61 & 108,16 & 108,01 & 00:04:07 & 00:05:12 & 00:04:48 \\ \hline tSPM+ & included & File-based & 108,17 & 108,20 & 108,18 & 00:03:56 & 00:04:59 & 00:04:40 \\ \hline tSPM+ & without & File-based & 2,01 & 2,19 & 2,12 & 00:00:31 & 00:31:00 & 00:03:40 \\ \hline \end{tabular} \end{table} Table 2: Average, min and max values of the memory consumption and runtime for all implementations during the performance benchmark. We provide a more detailed enumeration for each run in the appendix. To allow an efficient parallelization, we added additional sorting steps, which can also be performed efficiently in parallel [54]. After the sorting, we can access and modify the data in a linear way, avoiding costly cache invalidations and other (scheduling) operations, e.g. memory allocations and copying. This approach is commonly used by other high performance implementations [63, 64, 65]. A good example of this procedure is the sparsity screening, where we first sorted the mined sequences by their sequence ID and then only needed to iterate over the sequences and count for how many patients they occur. According to the developers of the IPS\({}^{4}\)o algorithm, it is currently not possible to compile their algorithm on Windows [66, 54]. Nevertheless, linking it against the RcppParallel library [56], which encompasses the Intel oneAPI Threading Building Blocks library [67], ensures the compilation. However, the tSPM+ algorithm has some limitations. The largest one is that it currently only works with discrete data. Non-discrete data, such as weight, can be used if it is discretized by creating a new phenX for different value ranges. Moreover, since it only works on numeric data, it requires that the original information is stored in look-up tables, which either require memory or have to be written to files. In addition, tSPM+ requires the transformation to numeric data as a preprocessing step, and the transformation back to human readable sequences after the sequences have been mined and processed in the use cases. While the integration into R provides several advantages, it adds additional overhead, especially when transforming the data from the C++ data structure into an R dataframe, which limits the maximum number of sequences that can be mined per run to \(2^{31}-1\). The tSPM+ implementation empowers researchers to perform high-throughput sequencing of phenotypes without requiring large-scale servers. By demonstrating that the tSPM+ algorithm performs well on end user devices, we enable data scientists and other researchers to develop and test AI/ML pipelines with integrated sequencing on devices with less compute power.
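To make the 16-byte sequence representation described above concrete, the following minimal Python sketch illustrates the layout; it is an illustration only, not the C++ library's actual data structure, and the field names are hypothetical. It packs 8 bytes for the sequence identifier and 4 bytes each for the duration and the patient id, and verifies that one record occupies exactly 16 bytes (128 bit).

```python
import numpy as np

# Hypothetical field names; the widths follow the description above:
# 8-byte sequence id, 4-byte duration, 4-byte patient id = 16 bytes per sequence.
sequence_dtype = np.dtype([
    ("sequence_id", np.uint64),
    ("duration",    np.uint32),
    ("patient_id",  np.uint32),
])

assert sequence_dtype.itemsize == 16   # 128 bit per mined sequence

# Raw storage needed for the ~7.2 billion sparse sequences mined in the failed
# 100k-patient run (arithmetic only, ignoring any further overhead):
n_sequences = 7_195_858_303
print(f"{n_sequences * sequence_dtype.itemsize / 1e9:.0f} GB")   # ~115 GB
```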
Another advantage of the low resource consumption is that it is possible to sequence large numbers of patients and provide the mined sequences for use in AI models to examine complex diseases. For example, limited by computational efficiency, Estiri et al. [22] were only able to sequence the first occurrence of each phenotype in their Alzheimer's classification task; tSPM+ would now allow sequencing all phenotypes instead of only the first occurrence of a phenX. Furthermore, tSPM+ provides the duration of these sequences, adding a new dimension to the analyses. In their recent review, Xie et al. [68] identify the integration of the temporal dimension, especially of entries that might occur multiple times per patient, as a current challenge when using EHR data in deep learning. Using sequences mined with tSPM+ might provide an efficient approach to address this challenge. Moreover, as we have shown with our Post COVID-19 vignette, we empower researchers to leverage transitive sequences to implement complex disease definitions without writing complex SQL queries to extract this information from the databases. However, simplifying complex database queries by utilizing temporal sequences is not a novel approach: already in 2008, Wang et al. [69] worked with temporal sequences trying to avoid complex database queries. Chaichana et al. [70] analyzed how Post COVID-19 was defined in all the Post COVID-19 studies until the beginning of 2023. According to them, there is an urgent need for an easy implementation of a uniform Post COVID-19 definition, since most of the studies were using diverging Post COVID-19 definitions. This might be due to the complexity of defining Post COVID-19 by exclusion and the challenge of implementing this definition in algorithms. We showed in the vignette that there might be a simple way to fulfill this need. This approach still requires clinical validation, which is why we are currently working on a larger multi-site study to evaluate it. This approach might also be applicable to other large COVID-19 data sets, such as the German NAPKON study [71, 72]. McDermott et al. [16] emphasize the need for reproducible models and implementations for Machine Learning approaches in healthcare. By providing not only example data, but also a docker container and two vignettes, we contribute to this need and make our work easily reproducible for others. Moreover, McDermott et al. [16] stress the danger of applying AI approaches only on in-house data sets or the "same" public data sets when considering generalization. By providing the vignette on how to integrate tSPM+ with MLHO [52], we enable an easier transfer of the tSPM+ sequencing approach and the MLHO AI models to different data sets. The transfer requires the conversion of the data into the MLHO format. However, by providing the R package with the synthetic data from Synthea [60, 61], we removed the barrier of a non-shareable data set, allowing others to reproduce most of our results. ## Conclusion In this work, we presented an efficient, extended high-throughput implementation of the original tSPM algorithm. We provide an R package and a docker container as low-level access to this algorithm, and a high-performance C++ library which can be included in different languages. The massive performance boost of tSPM+ allows for new use cases, like the aforementioned implementation of the Post COVID-19 definition.
This library enables more researchers to analyze their patient data to solve complex research questions. By providing two vignettes and a docker container with relevant use cases and sample data, we reduce the entry barrier for other scientists, especially clinicians, who might not be as proficient in programming as data and computer scientists and just desire an easy-to-use tool to analyze their EHR data using AI. Further enhancements to the algorithms, such as the integration of non-discrete data, would enable additional dimensions of information and are worth further investigation. Additionally, the Post COVID-19 use case requires thorough validation, e.g., by a complete study on its own, and would grant urgently required insights into this complex disease. Finally, tSPM+ adds a new dimension with the sequence durations and is not limited to using only the first occurrence of a clinical record as a phenX for the sequencing. Therefore, it might be worth repeating previous analyses, e.g., regarding Alzheimer's Disease, from older publications to extract more knowledge and obtain more detailed information about the diseases. The application of tSPM+ is not limited to Alzheimer's Disease and COVID-19, but is also applicable to data from other disease trajectories with a temporal component, e.g., cancer and cardiovascular diseases. ## Acknowledgment J. Hugel's work was partially funded by a fellowship within the IFI programme of the German Academic Exchange Service (DAAD) and by the Federal Ministry of Education and Research (BMBF). This work is partially funded by the National Institute on Aging (RF1AG074372) and the National Institute of Allergy and Infectious Diseases (R01AI165535), the VolkswagenStiftung (ZN3424) and the German Research Foundation (426671079).
2309.15476
Dynamic Multi-Scale Context Aggregation for Conversational Aspect-Based Sentiment Quadruple Analysis
Conversational aspect-based sentiment quadruple analysis (DiaASQ) aims to extract the quadruple of target-aspect-opinion-sentiment within a dialogue. In DiaASQ, a quadruple's elements often cross multiple utterances. This situation complicates the extraction process, emphasizing the need for an adequate understanding of conversational context and interactions. However, existing work independently encodes each utterance, thereby struggling to capture long-range conversational context and overlooking the deep inter-utterance dependencies. In this work, we propose a novel Dynamic Multi-scale Context Aggregation network (DMCA) to address the challenges. Specifically, we first utilize dialogue structure to generate multi-scale utterance windows for capturing rich contextual information. After that, we design a Dynamic Hierarchical Aggregation module (DHA) to integrate progressive cues between them. In addition, we form a multi-stage loss strategy to improve model performance and generalization ability. Extensive experimental results show that the DMCA model outperforms baselines significantly and achieves state-of-the-art performance.
Yuqing Li, Wenyuan Zhang, Binbin Li, Siyu Jia, Zisen Qi, Xingbang Tan
2023-09-27T08:17:28Z
http://arxiv.org/abs/2309.15476v1
Dynamic Multi-Scale Context Aggregation for Conversational Aspect-Based Sentiment Quadruple Analysis ###### Abstract Conversational aspect-based sentiment quadruple analysis (DiaASQ) aims to extract the quadruple of target-aspect-opinion-sentiment within a dialogue. In DiaASQ, a quadruple's elements often cross multiple utterances. This situation complicates the extraction process, emphasizing the need for an adequate understanding of conversational context and interactions. However, existing work independently encodes each utterance, thereby struggling to capture long-range conversational context and overlooking the deep inter-utterance dependencies. In this work, we propose a novel Dynamic Multi-scale Context Aggregation network (DMCA) to address the challenges. Specifically, we first utilize dialogue structure to generate multi-scale utterance windows for capturing rich contextual information. After that, we design a Dynamic Hierarchical Aggregation module (DHA) to integrate progressive cues between them. In addition, we form a multi-stage loss strategy to improve model performance and generalization ability. Extensive experimental results show that the DMCA model outperforms baselines significantly and achieves state-of-the-art performance1. Footnote 1: The code is available at [https://github.com/qdCassie-Li/DMCA](https://github.com/qdCassie-Li/DMCA) Yuqing Li\({}^{1,2}\) Wenyuan Zhang\({}^{1,2}\) Binbin Li \({}^{1}\) Siyu Jia\({}^{1}\) Zisen Qi\({}^{1}\) Xingbang Tan\({}^{1}\)\({}^{1}\) Institute of Information Engineering, Chinese Academy of Sciences \({}^{2}\) School of Cyber Security, University of Chinese Academy of Sciences Conversational sentiment quadruple extraction, sentiment analysis, dialogue systems ## 1 Introduction In recent years, sentiment analysis of reviews has gained increasing attention. Broad applications include stance detection [1][2], document-level [3][4] and aspect-based [5][6][7] sentiment analysis. Recent research [8] has broadened the scope of sentiment analysis to incorporate dialogue-level reviews, called the conversational aspect-based sentiment quadruple analysis (DiaASQ), which reflects more realistic dialogue-driven user review scenarios. DiaASQ aims to predict the quads \(\{(\mathbf{t},\mathbf{a},\mathbf{o},\mathbf{s})\}\) from a dialogue. As shown in Fig. 1, multiple speakers express their reviews around several targets (iPhone 7 and Xiaomi 5). They emphasize different aspects (power consumption and system), while expressing their respective opinions (high and smooth). The sentiment is determined based on the opinion of the target. In contrast to sentiment tuples extraction focuses on independent sentence [9][10], DiaASQ expands extraction perspective to the dialogue. Uniquely, a quadruple might span across several utterances, so a comprehensive understanding of the dialogue and the context of utterances is crucial. Despite previous research [8] efforts to mitigate this limitation through attention mechanisms and positional encoding techniques, it still faces challenges in capturing the semantic interactions and rich contextual information in multi-turn dialogues. Relevant works [11][12] have proposed methods for PLMs to adapt to longer inputs, but they mainly focus on attention mechanisms [13] or network architectures, rather than capturing critical information from dialogues. 
Fixed-size sliding window methods are commonly used for processing long dialogues [14][15], but they overlook the benefits of multi-scale windows which can capture richer context. In this paper, we propose a novel **D**ynamic **M**ulti-scale **C**ontext **A**gregation network (DMCA) for DiaASQ, as shown in Fig. 2. **Firstly**, we employ a flexible sliding window scheme to create variable-sized utterance windows. This Figure 1: Conversational aspect-based sentiment quadruple analysis task with its corresponding outputs. Utterances are represented as nodes in the tree, with the color indicating the speaker, and the structure presents reply relationships. approach facilitates the comprehensive capturing of dialogue context, ranging from a single utterance to broader spans. **Secondly**, we introduce a **D**ynamic **H**ierarchical **A**gregation (DHA) module. The goal of DHA is to enhance dialogue quadruple prediction by aggregating the output logits from multi-scale windows, eliminating the necessity for intricate network designs. Specifically, DHA hierarchically uses logits from smaller windows as a basis to aggregate and update the logits of larger windows that encompass these smaller windows. This process continues until aggregated logits are obtained at the dialogue level. **Furthermore**, we introduce multi-stage losses to jointly optimize different levels of aggregation, including window-level, thread-level, and dialogue-level. We conduct extensive experiments on two public benchmark datasets, and the results prove that DMCA significantly outperforms comparative methods. The main contributions are summarized as follows: 1) We introduce the DMCA network to improve the extraction of dialogue quadruples by utilizing multi-scale context. 2) Without relying on complex network architectures, we design the Dynamic Hierarchical Aggregation module (DHA) along with multi-stage losses to optimize the decision-making process. 3) Extensive experiments show that the DMCA significantly outperforms state-of-the-art methods. ## 2 Methodology ### Problem Definition and Preliminaries A dialogue is denoted as \(\{(u_{i},s_{i},r_{i})\}|_{i=1}^{|D|}\), where utterance \(u_{i}\) is uttered by the speaker \(s_{i}\) and is in response to \(u_{r_{i}}\). \(|D|\) denotes the total number of utterances. Based on the aforementioned input, the goal of the task is to predict all the sentiment quadruples \(Q=\{(\textbf{t},\textbf{a},\textbf{o},\textbf{s})\}\), where each quadruple contains: target(\(t\)), aspect(\(a\)), opinion(\(o\)), and sentiment polarity(\(s\)). Here, sentiment polarity \(\in\{pos,neg,other\}\). _Tagging schema._ To transform the extraction of dialogue quadruples into a unified grid tagging task, we follow the tagging strategy of previous work [8]. Specifically, the dialogue quadruple extraction task is broken down into three joint tasks: detection of _entity boundaries (ent)_, _entity relation pairs (pair)_, and _sentiment polarity (pol)_. In the _entity boundaries_ detection phase, 'tgt', 'asp', and 'opi' are used to respectively represent the head and tail relations of the target, aspect, and opinion items between any word pairs within the window. In the _entity relation pair_ detection phase, the labels 'h2h' and 't2t' are used to align the head and tail markers between two types of entities. For instance, 'iPhone' (target-head) and 'power' (aspect-head) are connected by 'h2h', while '7' (target-tail) and 'consumption' (aspect-tail) are connected by 't2t'. 
Sentiment labels \(\{pos,neg,other\}\) are obtained in _sentiment polarity_ detection. By combining the results derived from these three tasks, we can efficiently extract the complete dialogue quadruples. ### DMCA Model #### 2.2.1 Multi-scale context windows generation A set of utterances within a dialogue with a complete reply relationship is defined as a thread [8]. Studies [16][17] have delved into the independence between dialogue threads. To effectively capture the rich context, we use a sliding window method to construct multi-scale windows for each thread. Firstly, we analyze the dialogue structure using the reply records \(\{r_{i}\}_{i=1}^{|D|}\), treating each dialogue branch as an independent thread. This gives rise to a collection of threads \(T=\{T_{t}\}_{t=1}^{|T|}\), where \(|T|\) represents the number of threads and each thread \(T_{t}=\{u_{1},u_{j},\cdots,u_{j+\ell_{t}-1}\}\) consists of \(\ell_{t}\) utterances. For each thread, we use a flexible sliding window schema to generate continuous subsets from the thread. We denote these subsets as windows, represented by \(W^{t}=\{W^{t}_{w}\}_{w=1}^{|W^{t}|}\). The size of these windows varies from 1 to \(\ell_{t}\). Therefore, for each thread, the total number of windows \(|W^{t}|\) is determined by the formula \(|W^{t}|=\frac{1}{2}(\ell_{t}^{2}+\ell_{t})\). We have verified that all generated windows meet the input requirements of the PLM. Consider the illustration in Fig. 1, where \(T_{1}=\{u_{1},u_{2},u_{3}\}\). It can produce 6 distinct windows: \(\{u_{1}\}\), \(\{u_{2}\}\), \(\{u_{3}\}\), \(\{u_{1},u_{2}\}\), \(\{u_{2},u_{3}\}\) and \(\{u_{1},u_{2},u_{3}\}\). Secondly, we encode windows to obtain representations: \[\textbf{H}^{t}_{w}=[\textbf{h}_{[CLS]},\textbf{h}_{1},\cdots\textbf{h}_{N_{w}},\textbf{h}_{[SEP]}]=\text{Encoder}\left(W^{t}_{w}\right), \tag{1}\] \[W^{t}_{w}=\{[CLS];u_{1};u_{j};\cdots u_{j+k-1};[SEP]\}. \tag{2}\] We use RoBERTa [18] as Encoder. \(\textbf{H}^{t}_{w}\in\mathbb{R}^{N_{w}\times D_{h}}\) denotes the representation of \(W^{t}_{w}\). \(N_{w}\) is the number of tokens in the window and \(D_{h}\) is hidden size. Subsequently, we obtain the output logits of the word pair matrix, denoted as \(\mathcal{S}_{w}=\{s_{ij}\,|\,i,j\in[1,N_{w}]\}\). Additionally, we introduce a window-level cross-entropy loss \(\mathcal{L}_{w}\) to super-wise predictions at a more granular level for each window: \[\widetilde{\textbf{h}}_{i}=\widetilde{\textbf{W}}\textbf{h}_{i}+ \widetilde{\textbf{b}}, \tag{3}\] \[s_{ij}=\left(\widetilde{\textbf{h}}_{i}\right)^{T}\widetilde{ \textbf{h}}_{j},\] (4) \[p_{ij}=\text{Softmax}(s_{ij}),\] (5) \[\mathcal{L}_{w}=-\sum_{w=1}^{|W|}\sum_{i=1}^{N_{w}}\sum_{j=1}^{N_ {w}}y_{ij}\log(p_{ij}), \tag{6}\] where \(s_{ij}\in\mathbb{R}^{K}\), \(K\) represents the predefined number of categories in the decoding table and \(y_{ij}\) represents the truth label. \(\widetilde{\textbf{W}}\) and \(\widetilde{\textbf{b}}\) are trainable parameters. #### 2.2.2 Dynamic Hierarchical Aggregation module Windows of different scales capture distinct information: smaller windows focus on local details, while larger ones emphasize contextual understanding. We introduce the Dynamic Hierarchical Aggregation (DHA) module to aggregate predicted logits from these windows, avoiding the need for designing complex network architectures. This aggregation process is categorized into thread-level and dialogue-level. 
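Before turning to the aggregation itself, the multi-scale window generation of Section 2.2.1 can be illustrated with the short Python sketch below (an illustrative reimplementation, not the authors' released code). For a thread of \(\ell_{t}\) utterances it enumerates all contiguous windows of sizes 1 to \(\ell_{t}\), i.e. \(\frac{1}{2}(\ell_{t}^{2}+\ell_{t})\) windows, reproducing the six windows of the example thread \(T_{1}=\{u_{1},u_{2},u_{3}\}\).

```python
def multi_scale_windows(thread):
    """Return every contiguous window of the thread, from size 1 up to len(thread)."""
    n = len(thread)
    windows = []
    for size in range(1, n + 1):              # window sizes 1 .. ell_t
        for start in range(n - size + 1):     # sliding start position
            windows.append(thread[start:start + size])
    return windows

thread = ["u1", "u2", "u3"]                   # example thread T_1 from Fig. 1
windows = multi_scale_windows(thread)
print(len(windows))                           # 6 == (3**2 + 3) // 2
print(windows)   # [['u1'], ['u2'], ['u3'], ['u1', 'u2'], ['u2', 'u3'], ['u1', 'u2', 'u3']]
```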
_Thread-level Aggregation._ The predicted logits of all windows within the \(t\)-th thread are denoted as \(\mathcal{S}=\{\mathcal{S}_{i}\mid u_{i}\in T_{t}\}\). Adding a superscript \(l\) indicates the number of utterances comprising the window. DHA utilizes the \(\mathcal{S}_{i}^{l}\) from the small window \(W_{i}^{t}\) to aggregate and augment the \(\mathcal{S}_{j}^{l+1}\) of larger overlapping window \(W_{j}^{t}\), while ensuring that \(W_{i}^{t}\subseteq W_{j}^{t}\). Specifically, we extract logits corresponding to \(W_{i}^{t}\) from \(\mathcal{S}_{j}^{l+1}\) to form \(\hat{\mathcal{S}}_{i}^{l}\). To enhance the predictions in the larger window, we select logits among \(\mathcal{R}_{i}^{l}\), \(\hat{\mathcal{S}}_{i}^{l}\), and \(\mathcal{R}_{i}^{l}+\hat{\mathcal{S}}_{i}^{l}\) based on the principle of minimizing cross-entropy. These selected logits are then aggregated using a weighted summation approach. This process updates \(\mathcal{S}_{j}^{l+1}\) to \(\mathcal{R}_{j}^{l+1}\). The definition of this dynamic aggregation process is as follows: \[\mathcal{R}_{j}^{l+1}=\mathcal{S}_{j}^{l+1}\oplus\alpha\cdot \mathcal{I}_{i}^{l}, \tag{7}\] \[\mathcal{F}_{i}^{l}=\operatorname*{arg\,min}_{x\in\mathcal{X}_{i }^{l}}CrossEntropy(x,y),\] (8) \[\mathcal{X}_{i}^{l}=\{\mathcal{R}_{i}^{l},\hat{\mathcal{S}}_{i}^ {l},\mathcal{R}_{i}^{l}+\hat{\mathcal{S}}_{i}^{l}\}, \tag{9}\] where \(\oplus\) denotes the broadcast addition. \(\alpha\) is a predefined parameter. Padding(\(\cdot\)) implies zero-padding. \(y\) denotes corresponding truth labels. The initial value for \(\mathcal{R}_{i}^{l}\) is set as \(\mathcal{S}_{i}^{l}\). Through the dynamic hierarchical process, we obtain the aggregated thread-level logits as: \(\mathcal{T}_{t}\mathcal{R}=\mathcal{R}_{|W^{t}|}^{\ell_{t}}\). The thread-level loss \(\mathcal{L}_{t}\) is calculated in a manner analogous to Eq. 6. Notably, DHA is only used during the training phase since it requires label information (Eq. 8). For validation and test, we adopt Static Hierarchical Aggregation (SHA). The SHA approach hierarchically aggregates the logits of overlapping windows through a direct sum operation. SHA is defined as: \[\mathcal{R}_{j}^{l+1}=\mathcal{S}_{j}^{l+1}\oplus\mathcal{R}_{i}^{l} \tag{10}\] _Dialogue-level Aggregation._ After the aggregation process at the thread level, we obtain refined logits for each thread. Since these threads overlap only at the root utterance \(u_{1}\), we utilize the SHA method to derive final dialogue-level logits \(\mathcal{DR}\in\mathbb{R}^{N\times N\times K}\) and subsequently obtain \(\mathcal{L}_{d}\). \[Padding(\mathcal{T}_{|T|}\mathcal{R})\mathcal{DR}=\mathcal{T}_{1}\mathcal{R} \oplus\cdots\oplus\mathcal{T}_{|T|}\mathcal{R} \tag{11}\] \[\mathcal{L}_{d}=-\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}y_{ij}\log(p_{ij}) \tag{12}\] where N denotes the number of tokens in the dialogue. #### 2.2.3 Training During the training process, we jointly incorporate three distinct stages of loss: \(\mathcal{L}_{w}\), \(\mathcal{L}_{t}\), and \(\mathcal{L}_{d}\). These losses are employed to minimize errors at different aggregation stages. For each task \(\psi\), the loss can be calculated as follows: \[\mathcal{L}^{\psi}=\mathcal{L}_{d}^{\psi}+\eta\mathcal{L}_{t}^{\psi}+\zeta \mathcal{L}_{w}^{\psi} \tag{13}\] where \(\psi\in\{ent,pair,pol\}\), \(\eta\) and \(\zeta\) are predefined weights. 
The final objective function is determined by the sum of the loss for the three tasks: \[\mathcal{L}=\mathcal{L}^{ent}+\mathcal{L}^{pair}+\mathcal{L}^{pol} \tag{14}\] ## 3 Experiments ### Tasks and Datasets We conduct experiments on two datasets: the Chinese dataset **ZH**[10] and the English dataset **EN**[10]. Both datasets contain speaker and reply-record information for each conversation utterance. Each dataset consists of 1000 dialogues related to electronic product reviews, with an average of 7 utterances and 5 speakers per dialogue. Specifically, the Chinese dataset contains 5,742 quadruples, while the English dataset contains 5,514 quadruples. About 22% of the quadruples in both datasets are cross-utterance. Figure 2: The overall framework of our DMCA model. The model consists of two key components: 1) a flexible sliding window scheme that captures conversational context at multiple scales and granularities, and 2) a Dynamic Hierarchical Aggregation (DHA) module along with a multi-stage loss strategy that hierarchically aggregates the logits of multi-scale windows. Note: The third dimension of the logits has been omitted from the matrix for clearer visualization. ### Comparison Methods _Baseline._ Following the comparison in [8], we consider several powerful performance models closely tied to the task as baselines. These models include ExtractClassify [20], SpERT [21], Span-ASTE [9], ParaPhrase [10], and DiaASQ [8]. _Implementation Details._ To encode **ZH** and **EN** datasets, we take the Chinese-Roberta-wwm-base [22] and RoBERTLarge [18], respectively. Each training process contains 25 epochs. The parameters for both the DHA module (\(\alpha\)) and the loss (\(\eta\) and \(\zeta\)) are initialized to 1 by default. For the tasks \(\psi\in\{ent,pair,pol\}\), the values of \(K\) is \(\{6,4,3\}\). We use micro F1 and identification F1 [19] as the evaluation metrics. ### Results and Analysis _Overall Results._ The overall results are shown in Table 1. Our model outperforms the previous best baseline in almost all tasks and datasets. Notably, on ZH dataset, our model surpasses the previous state-of-the-art by an impressive 7.7%. _Cross-Utterance Results._ To further demonstrate the effectiveness of DMCA model in addressing cross-utterance quadruple extraction, we conduct a detailed analysis and comparison of the cross-utterance results, as shown in Fig. 3. Our approach outperforms previous model in all cross-utterance counts, especially achieving high performance when cross \(\geq 3\). This indicates that DMCA model is more effective in handling extraction problems in multi-turn dialogues. ### Ablation We conduct experiments to assess the impact of the DHA module and the three distinct stage loss functions. As shown in Table 2, the DHA method, which considers the credibility of predictions from multi-scale windows, achieves the highest performance. Without the dynamic weighted aggregation, the performance of the SHA method diminishes. When we remove the aggregation module, the results significantly decline on both datasets, highlighting the success of our DHA. Moreover, as depicted in Table 3, removing any stage of the loss function results in a decrease in performance, particularly for the problem of cross-utterance extraction. This further demonstrates the effectiveness of the multi-stage losses. ## 4 Conclusion In this paper, we propose a novel DMCA network for conversational aspect-based sentiment quadruple analysis. 
To address the challenges of encoding long dialogues and extracting cross-utterance quadruples, we construct multi-scale utterance windows to capture rich dialogue context. We also design a DHA module and multi-stage loss strategy to enhance the decision-making logits from these multi-scale windows. Experimental results on two datasets demonstrate the superiority of our DMCA over the state-of-the-art methods. \begin{table} \begin{tabular}{l|c c|c c c|c c|c c||c c|c c|c c} \hline \multirow{2}{*}{Model} & \multicolumn{6}{c||}{ZH-dataset} & \multicolumn{6}{c|}{EN-dataset} \\ \cline{2-13} & \multicolumn{3}{c|}{Entity detection} & \multicolumn{3}{c|}{Pair detection} & \multicolumn{3}{c||}{Quads extraction} & \multicolumn{3}{c|}{Entity detection} & \multicolumn{3}{c|}{Pair detection} & \multicolumn{3}{c|}{Quads extraction} \\ & \multicolumn{1}{c}{\(T\)} & \multicolumn{1}{c}{\(A\)} & \multicolumn{1}{c}{\(O\)} & \multicolumn{1}{c}{\(T\).A} & \multicolumn{1}{c}{\(T\).O} & \multicolumn{1}{c||}{\(A\)-\(O\)} & micro-F1 & iden-F1 & \multicolumn{1}{c||}{\(T\)} & \multicolumn{1}{c}{\(A\)} & \multicolumn{1}{c}{\(O\)} & \multicolumn{1}{c}{\(T\).A} & \multicolumn{1}{c}{\(T\).O} & \multicolumn{1}{c}{\(A\)-\(O\)} & micro-F1 & iden-F1 \\ \hline Extract-Classify & 91.11 & 75.24 & 50.06 & 32.47 & 26.78 & 18.90 & 8.81 & 9.25 & 88.31 & 71.71 & 47.90 & 34.31 & 20.94 & 19.21 & 11.59 & 12.80 \\ SpERT & 90.69 & 76.81 & 54.06 & 38.05 & 31.28 & 21.89 & 13.00 & 14.19 & 87.82 & 74.65 & 54.17 & 28.33 & 21.39 & 23.64 & 13.07 & 13.38 \\ ParaPhrase & / & / & 37.81 & 34.32 & 27.76 & 23.27 & 27.98 & / & / & / & 37.22 & 32.19 & 30.78 & 24.54 & 26.76 \\ Span-ASTE & / & / & 44.13 & 34.46 & 32.21 & 27.42 & 30.85 & / & / & / & 42.19 & 30.44 & 45.90 & 26.99 & 28.34 \\ DiaASQ & 90.23 & 76.94 & 59.35 & 48.61 & 43.31 & 45.44 & 34.94 & 37.51 & **88.62** & **74.71** & 60.22 & 47.91 & 45.58 & 42.27 & 33.31 & 36.80 \\ \hline **Ours(DMCA)** & **92.03** & **77.07** & **60.27** & **56.88** & **51.70** & **52.80** & **42.68** & **45.36** & 88.11 & 73.95 & **63.47** & **53.08** & **50.99** & **52.40** & **37.96** & **41.00** \\ \hline \end{tabular} \end{table} Table 1: We report the micro-F1 scores for all tasks and the additional identification F1 (iden-F1) [19] scores for quads extraction. Here, T-A-O stands for Target-Asepct-Opinion, respectively. \begin{table} \begin{tabular}{l|c c|c c} \hline \multirow{2}{*}{**Methods**} & \multicolumn{2}{c|}{**ZH**} & \multicolumn{2}{c}{**EN**} \\ \cline{2-5} & micro-F1 & iden-F1 & micro-F1 & iden-F1 \\ \hline ** DHA** & **42.68** & **45.36** & **37.96** & **41.00** \\ \hline SHA(Eq. 10) & 42.31 & 44.92 & 37.73 & 39.91 \\ Concat & 41.24 & 43.50 & 34.75 & 37.31 \\ \hline \end{tabular} \end{table} Table 2: Results against different aggregation methods. ‘Concat’ denotes the direct concatenation of logits from the largest window across all threads. Figure 3: Results of cross-utterance quadruples. ‘cross-0’ indicates elements of the quadruple contained in one utterance. 
\begin{table} \begin{tabular}{l c c} \hline **Methods** & **Intra** & **Inter** & **Overall** \\ \hline DMCA & **46.23** & **32.73** & **42.68** \\ - w/o \(\mathcal{L}_{w}\) & 46.05(\(\downarrow\)0.18) & 31.78(\(\downarrow\)0.95) & 42.43(\(\downarrow\)0.25) \\ - w/o \(\mathcal{L}_{t}\) & 45.10(\(\downarrow\)1.13) & 27.74(\(\downarrow\)4.99) & 40.57(\(\downarrow\)2.11) \\ - w/o \(\mathcal{L}_{d}\) & 45.17(\(\downarrow\)1.06) & 30.94(\(\downarrow\)1.79) & 41.51(\(\downarrow\)1.17) \\ \hline \end{tabular} \end{table} Table 3: Ablation results of DMCA. We report the micro-F1 score for the ZH dataset. ‘Inter’ denotes the score of cross-utterance quadruple extraction.
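As a complement to the description of the DHA module in Section 2.2.2, the following simplified Python sketch illustrates one thread-level aggregation step (Eqs. 7-9). It is our own illustrative reimplementation under simplifying assumptions: the logits of the smaller window are assumed to be already aligned with a slice of the larger window, and the function and variable names are ours, not the authors'.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy over a word-pair grid; logits: [n, n, K], labels: [n, n] ints."""
    shifted = logits - logits.max(axis=-1, keepdims=True)             # numerically stable softmax
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -np.take_along_axis(log_probs, labels[..., None], axis=-1).mean()

def dha_step(S_large, R_small, S_hat_small, labels_small, region, alpha=1.0):
    """One thread-level DHA update: pick the most reliable logits for the overlapping
    region among {R, S_hat, R + S_hat} by minimum cross-entropy (Eqs. 8-9) and add
    them, scaled by alpha, into the larger window's logits (Eq. 7).
    `region` is the pair of index slices of S_large covered by the smaller window."""
    candidates = [R_small, S_hat_small, R_small + S_hat_small]
    losses = [cross_entropy(c, labels_small) for c in candidates]
    best = candidates[int(np.argmin(losses))]
    R_large = S_large.copy()
    R_large[region] += alpha * best                                    # weighted aggregation
    return R_large

# usage sketch: R_large = dha_step(S_large, R_small, S_hat_small, y_small,
#                                  region=(slice(i0, i1), slice(i0, i1)))
```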
2309.11610
Hand Gesture Recognition with Two Stage Approach Using Transfer Learning and Deep Ensemble Learning
Human-Computer Interaction (HCI) has been the subject of research for many years, and recent studies have focused on improving its performance through various techniques. In the past decade, deep learning studies have shown high performance in various research areas, leading researchers to explore their application to HCI. Convolutional neural networks can be used to recognize hand gestures from images using deep architectures. In this study, we evaluated pre-trained high-performance deep architectures on the HG14 dataset, which consists of 14 different hand gesture classes. Among 22 different models, versions of the VGGNet and MobileNet models attained the highest accuracy rates. Specifically, the VGG16 and VGG19 models achieved accuracy rates of 94.64% and 94.36%, respectively, while the MobileNet and MobileNetV2 models achieved accuracy rates of 96.79% and 94.43%, respectively. We performed hand gesture recognition on the dataset using an ensemble learning technique, which combined the four most successful models. By utilizing these models as base learners and applying the Dirichlet ensemble technique, we achieved an accuracy rate of 98.88%. These results demonstrate the effectiveness of the deep ensemble learning technique for HCI and its potential applications in areas such as augmented reality, virtual reality, and game technologies.
Serkan Savaş, Atilla Ergüzen
2023-09-20T19:53:05Z
http://arxiv.org/abs/2309.11610v1
# Hand Gesture Recognition with Two Stage Approach Using Transfer Learning and Deep Ensemble Learning ###### Abstract Human-Computer Interaction (HCI) has been the subject of research for many years, and recent studies have focused on improving its performance through various techniques. In the past decade, deep learning studies have shown high performance in various research areas, leading researchers to explore their application to HCI. Convolutional neural networks can be used to recognize hand gestures from images using deep architectures. In this study, we evaluated pre-trained high-performance deep architectures on the HG14 dataset, which consists of 14 different hand gesture classes. Among 22 different models, versions of the VGGNet and MobileNet models attained the highest accuracy rates. Specifically, the VGG16 and VGG19 models achieved accuracy rates of 94.64% and 94.36%, respectively, while the MobileNet and MobileNetV2 models achieved accuracy rates of 96.79% and 94.43%, respectively. We performed hand gesture recognition on the dataset using an ensemble learning technique, which combined the four most successful models. By utilizing these models as base learners and applying the Dirichlet ensemble technique, we achieved an accuracy rate of 98.88%. These results demonstrate the effectiveness of the deep ensemble learning technique for HCI and its potential applications in areas such as augmented reality, virtual reality, and game technologies. Hand gesture recognition, ensemble learning, deep learning, transfer learning, human computer interaction ## I Introduction Recent research has led to the development of interfaces and applications to provide more effective communication between users and computers. These interfaces and applications, referred to as Human-Computer Interaction (HCI), incorporate both human and computer factors, drawing from various fields such as information technologies, software, design, human psychology, and human behavior. Designers work on new technology and interface development while researchers investigate new techniques for interaction, usability, and efficiency of the technologies used. With the advancements in technology, new interaction methods and technologies have emerged in the field of HCI. From simple office programs, dialog boxes, and error messages in the 1980s, HCI studies have expanded with the development of the internet, mobile and portable devices, touch screens, image, motion, and sensation sensors. Today, the most widely studied areas in HCI are mobile devices, touch screens, voice command processing, human motion calculation, image processing, sensors, and interactive systems developed using wearable technologies [1]. Recently, machine learning (ML) studies for computer vision have focused on human gesture recognition and hand gestures (HG). The purpose of these studies is to provide control systems to enhance HCI [2]. To achieve this purpose, identifying hand movements is important for controlling hardware or software [3]. Especially in the last two decades, the application areas using hand recognition systems have increased and become widespread. These systems, which are used in different applications such as augmented reality (AR), virtual reality (VR), extended reality (XR), computer games, internet of things, sign language recognition, robotic systems, etc., [3, 4] have even become the technological themes of science fiction and futuristic movies also. 
Interfaces developed in the field of HCI are widely used in industries such as military, tourism, education, communication, health, robotics, entertainment, and others. Interactive and user-controlled educational materials are designed using new technologies in education. In the health sector, systems have been developed that allow users to monitor daily pulse, blood pressure, heart rate, sugar, etc., and systems that enable operations to be performed using remote and robotic systems. In the entertainment industry, digital games and virtual environments that recognize user movements are designed. With advancements in the industrial field, all processes can be monitored and controlled in digital environments. In the military field, simulations are used for armed training, defence, and attack systems. In the tourism industry, museum tours are conducted in virtual environments. In the field of communication, sign language recognition and language translation systems bridge the gap between people. In robotic areas, many systems are controlled by users with motion and voice control. Interfaces developed in the field of HCI are increasingly being used effectively in all areas of our lives [1]. A HG identification system can be created using sensors to recognize hand movements, or markers can be used in this system. This system is called sensor-based, and specialized hardware as gloves are often used, which can be a disadvantage due to the expensive set-up costs. Another methodology for creating HG identification systems is using machine vision to detect hand movement. In these vision-based systems, different information like edges, colour, and hand shapes, etc., is extracted from images or videos using algorithms [6]. Due to recent advances in ML and deep learning (DL) studies, vision-based systems are being widely used by researchers. In this study, a two-stage approach is proposed to achieve more accurate HCI rates. In the first stage, fine-tuning was performed to train deep architectures on the dataset determined by the transfer learning method. High-performance pre-trained models were included in the study, and their performances were compared. The most successful models were determined, and in the second stage, they were brought together with the ensemble learning method, and the results were evaluated. The structure of the study is as follows: In the second section, related works are explained, and in the third section, the materials and methodology used in the study are explained. In the fourth section, the results obtained from the tests are explained. Finally, in the fifth and last section, the study is concluded with discussion. ## II Related Works In recent years, there has been an increase in studies on hand gestures (HG) in response to the growing popularity of applications such as three-dimensional (3D), augmented reality (AR), virtual reality (VR), and extended reality (XR) in technology. In particular, the Meta universe, formed by the merger of Facebook and its sub-brands under the name Meta, has accelerated human-computer interaction (HCI) studies in this area. Several studies have been conducted on AR applications using HG. Chun and Hollerer [7] developed a marker-based AR application that enabled users to interact with objects on their mobile phone screens. Seo and Lee [8] improved the feel and interaction in AR-based environments by using a depth camera. 
Hurst and van Wezel [9] used colored markers on fingertips for finger tracking and gesture-based interaction in AR applications on mobile phones, allowing for translation, scaling, and rotation operations on objects. Akman et al. [10] developed a HG recognition system with multiple hand detection and tracking methods using video glasses with two cameras. Similarly, Ng et al. [11] used a stereo camera to obtain hand depth information and played with virtual objects using the extended hand. Other studies on AR applications using HG include Asad and Slabaugh's [12] study on hand recognition and displaying a virtual object on the recognized hand using a depth camera, and AlAgha and Rasheed's [13] examination of three different techniques that interact with virtual 3D content displayed in AR environments using the NyARToolkit library and Kinect sensor. Adeen et al. [14] presented an animated and intuitive solution to interact with the image projected on flat surfaces, using hand gestures, finger tracking, and hand tracking. Bikos et al. [15] developed an AR chess game that used the thumb and index finger to interact with the virtually developed content. Chang et al. [16] conducted a study on surface drawing and aerial drawing methods, which allow motion input directly on the surfaces of real-world objects and the user's fingertip, respectively, to project onto the real-world model when released, using the HoloLens supported AR platform. Moreover, virtual environments created using AR technology provide HCI tools, such as applications on tablets or mobile phones, for users/employees to interact with machines, control operating systems, and follow maintenance and assembly processes [17]. Guler and Yucedag [18] developed an industrial maintenance and repair application with AR for computer numerical control (CNC) lathe looms. Their developed model was used in the education system, and an increase in student motivation was observed. Another study of these researchers was on the skeletal system with AR using animated 3D models, menus, voice, and text [19]. Tsai et al. [20] developed a multi-template AR system consisting of three units, namely, a multi-template AR, an online 3D model assembly, and an HG interaction, for an interactive assembly teaching aid. Fucentese and Koch [21] developed an AR-based surgical guidance system to measure the effect of prosthesis alignment and positioning on soft tissue balance during surgery. Furthermore, Guler [22] examined the use of AR training applications for aircraft turbo engine training in the aviation sector. While these developments regarding AR are being experienced, DL studies have started to be carried out in this field, recently. Different models were used for different purposes in these studies. Nunez Fernandez & Kwolek [23] used CNN algorithm for recognition of hands from images in their study. CNN algorithm is also used for skin detection and hand area position detection [24], 3D hand recognition using hand-skeleton data [25], hand position prediction and hand sign recognition [26], and directly HG recognition [27]. In addition, 2-level CNN is also used for static HG recognition [27]. The CNN algorithm is also used as 3D-CNN and Recurrent 3D-CNN to recognize the HG of vehicle drivers and for detection and classification of dynamic HG [29, 30]. Recurrent 3D-CNN is also used for interaction with wearable devices such as helmets and glasses in another study [31]. 
In addition to these studies, Deep CNN is used for HG recognition from image [32] or from Doppler signals [33] using the Deep CNN algorithm have also been made. Besides, motion recognition on multiple data including image and depth data and skeleton properties study was carried out with the use of deep dynamic in NN format [34]. CNN algorithm is also used as Region-Based for two types of HG recognition in open and closed positions [35] and Faster R-CNN for object detector intelligent HG recognition for collaborative robots [36]. In some other studies, it was aimed to increase the performance of the algorithm by using hybrid methods. CNN + LSTM is used for mixed HG recognition, consisting of gestures created with leap motion controller [37] and long-term recurrent convolution network is used for a vision-based HG recognition system for smart vehicles [38]. While deep belief network and CNN combined in a study for sign language recognition using Kinect sensor [39], capsule network + CNN is used for hand gesture from a dataset consisting 14 different classes in another study [40]. Besides, Koller et al., [41] used Expectation maximization (EM) & CNN for multimodal sign language recognition and Cote-Allard et al., [42] used Continuous wavelet transform and CNN for Electromyography (EMG)-based motion recognition. ## III Material and Methodology The study used the CNN algorithm, a deep neural network algorithm, for image processing on the HG14 dataset 1 published on the Kaggle platform. The researchers included 22 pre-trained high performance architectures designed using this algorithm and fine-tuned them by adapting the classification layers of the models to the problem in the study. After the training, validation, and testing stages, the weights of the models were recorded. The researchers applied the deep ensemble learning technique to use the most successful models together and compared the results. Footnote 1: [https://www.kaggle.com/datasets/gulerosman/hg14-handgesture14-dataset](https://www.kaggle.com/datasets/gulerosman/hg14-handgesture14-dataset) HG14 dataset contains 14 different hand gestures with RGB channel images with resolution of 256x256 for hand interaction and application control in AR applications. There are 1000 images from each class and 14000 images in total from 17 different people's hands using different backgrounds. The dataset was created from first-person view and does not include RGB-D and depth data. In addition, it is created directly with a usual camera not with special camera or infrared or sensors [42]. Fig. 1 presents the sample images of each class in the dataset. The dataset used in the study is divided into three subsets to be used in the training, validation, and testing stages. In the first stage, 10% random images from 14000 images were selected from each class and a total of 1400 images were reserved for testing. Then, 20% of the remaining 12600 images (2520 images) were randomly divided for validation. The remaining 10080 images were used for the train process. The main purpose of DL algorithms, which have developed rapidly in the last 10 years and started to be used in almost all fields in a multidisciplinary way, is to produce generalizable solutions. The most important advantage of deep learning compared to machine learning is that a model can be adapted to different problems instead of problem-specific solutions. 
In addition, models that have achieved high performance in different competitions, especially the ImageNet competition in recent years, are offered to other studies through the Keras library2. Thus, researchers can use these models in their own studies by applying techniques called transfer learning and fine-tuning. Footnote 2: [https://keras.io/api/applications/](https://keras.io/api/applications/) Based on this, 22 models that were successful in the ImageNet competition and are frequently used in the literature were used with the transfer learning method in this study. In this method, the weights of the models trained on ImageNet are downloaded and the models are then trained on the HG14 dataset. After the feature extraction layers, since the HG14 dataset contains 14 classes, the number of output neurons is reduced to 14 by applying the fine-tuning method in the classification layer. The other fine-tuning settings are as follows. In the study, the images were resized to 128x128 resolution. The batch size was set to 20 and the number of epochs to 50. A DropOut layer with a rate of 0.5 was used in the classification layer, and the number of neurons was then reduced to 512 and 14, respectively. ReLU and Softmax were used as the activation functions of these layers, respectively. In the study, the weights obtained after the training and validation stages were saved and the test process was carried out with these weights. By saving the test results, the confusion matrix was created and the results of all operations were graphed. The results of the models were compared, and an ensemble learning model was established by combining the most successful models. The Dirichlet Ensemble Learning methodology was applied in the establishment of this model. Ensemble learning is the process of merging various learning algorithms to gain their collective performance, or to enhance the performance of current models by mixing many models to produce one trustworthy model [43]. DL models alone performed well in the majority of applications, but there is always room to employ a collection of DL models to accomplish the same goal as an ensemble technique. The randomized weighted ensemble used in this study is an ensemble technique that weights the prediction of each ensemble member and combines them to calculate a joint prediction (as shown in Equation 1). The weight optimization is performed with a randomized search based on the Dirichlet distribution on a validation dataset [44]. \[w_{1}\cdot\mathcal{Y}_{1}+w_{2}\cdot\mathcal{Y}_{2}+\cdots+w_{n}\cdot\mathcal{Y}_{n}=\hat{\mathcal{Y}} \tag{1}\] where \(w_{i}\) is the weight of each member, \(\mathcal{Y}_{i}\) is the output of each member, and \(\hat{\mathcal{Y}}\) is the weighted average ensemble output. ## IV Experimental Results The training, validation, and testing results of the models used in the first phase of the study are presented in Table I. Among the models in the table, two model groups are superior to the others: the MobileNet and VGGNet models achieved more successful results than the other pre-trained models. The Loss value in the table is a metric that complements the accuracy rate in evaluating the performance of the models. This metric measures the inconsistency between predicted and actual values. Thus, it is an important indicator for CNN models; it is a non-negative value, and the robustness of the model increases as the value of the loss function decreases [45]. Fig.
1: The class examples of the dataset As the most successful model among these two models, MobileNet models have been the most successful group with both test accuracy and loss results, validation accuracy, and test accuracy and loss rates. One of the important findings here is that the validation accuracy rates of almost all of the models are lower than the train and test rates. In addition, validation loss rates were also at high levels. The graph of the train and validation accuracy rates of the four models that achieved the highest accuracy rate in the study is shown in Fig. 2. \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ **Model** & **Time/step** & **Loss** & **Accuracy** & **Loss** & **Accuracy** & **Loss** & **Accuracy** & **Loss** & **Accuracy** \\ \hline VGG16 & 36s 71ms & 0.2121 & **0.9841** & 5.9066 & 0.7948 & 1.3210 & **0.9464** \\ \hline VGG19 & 36s 71ms & 0.1848 & **0.9848** & 5.4644 & 0.7956 & **0.9298** & **0.9436** \\ \hline Xception & 34s 67ms & **0.0973** & 0.9739 & 3.7009 & 0.6369 & 1.1383 & 0.8464 \\ \hline ResNet50 & 34s 67ms & 0.1049 & **0.9847** & 3.8690 & 0.7647 & 1.0533 & 0.9214 \\ \hline ResNet50V2 & 31s 62ms & 0.1000 & **0.9863** & 4.2230 & 0.7841 & 1.3900 & 0.9007 \\ \hline ResNet101 & 36s 72ms & 0.1189 & **0.9823** & 4.1808 & 0.7762 & **0.9118** & 0.9271 \\ \hline ResNet101V2 & 33s 66ms & **0.0870** & **0.9864** & 4.1413 & 0.7825 & 1.2859 & 0.9021 \\ \hline ResNet152 & 39s 77ms & 0.1038 & **0.9823** & 3.9453 & 0.7448 & 1.1360 & 0.9071 \\ \hline ResNet152V2 & 36s 71ms & **0.0897** & **0.9855** & 3.9288 & 0.7476 & 1.5495 & 0.8886 \\ \hline InceptionV3 & 38s 75ms & 0.3013 & 0.8947 & 2.1544 & 0.6012 & 1.0144 & 0.7664 \\ \hline InceptionResNetV2 & 43s 86ms & 0.2191 & 0.9271 & 1.8657 & 0.6813 & **0.6885** & 0.8407 \\ \hline MobileNet & 34s 67ms & **0.0622** & **0.9931** & 3.0226 & **0.8675** & **0.5633** & **0.9679** \\ \hline MobileNetV2 & 35s 69ms & **0.0506** & **0.9935** & 3.3827 & **0.8341** & **0.7605** & **0.9443** \\ \hline DenseNet121 & 38s 75ms & 0.1316 & 0.9661 & 2.0020 & 0.7369 & **0.7487** & 0.8850 \\ \hline DenseNet169 & 39s 78ms & 0.1202 & 0.9719 & 2.1225 & 0.7492 & **0.5843** & 0.9086 \\ \hline DenseNet201 & 43s 86ms & 0.1118 & 0.9757 & 2.5271 & 0.7718 & **0.5394** & 0.9321 \\ \hline EfficientNetB0 & 56s 111ms & **0.0849** & **0.9840** & 6.4851 & **0.8599** & 1.8422 & 0.9336 \\ \hline EfficientNetB1 & 62s 122ms & 0.1265 & **0.9890** & 3.9277 & **0.8774** & 1.5096 & 0.9371 \\ \hline EfficientNetB2 & 58s 115ms & 0.1426 & **0.9858** & 4.7746 & **0.8333** & 1.3951 & 0.9279 \\ \hline ConvNeXtTiny & 43s 84ms & **0.0709** & **0.9893** & 3.2226 & **0.8028** & **0.8192** & 0.9300 \\ \hline ConvNeXtSmall & 61s 120ms & **0.0640** & **0.9897** & 3.9029 & 0.7643 & **0.9123** & 0.9214 \\ \hline ConvNeXtBase & 70s 138ms & **0.0750** & **0.9876** & 3.2020 & 0.7881 & **0.8759** & 0.9229 \\ \hline \end{tabular} E-ISBN: 978-605-72180-3-2 Confusion matrix performances of the models were also obtained in order to display the estimation results for each class in the HG14 dataset of the four models. Obtained results are shown in Figure 3. After the four most successful models were determined, these models were combined with the Dirichlet ensemble weighted average method and tested on the HG14 train and test dataset. The tests for robustness of the model were repeated 10 times and the average was determined. 
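The randomized weighted-average combination described above can be sketched in a few lines of Python (an illustration with hypothetical variable names, not the exact implementation used in the study): candidate weight vectors are drawn from a Dirichlet distribution, each candidate is scored on the validation set, and the best-scoring weights are kept to average the base learners' predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_weighted_ensemble(val_probs, val_labels, n_trials=1000):
    """val_probs: list of [n_samples, n_classes] softmax outputs, one per base learner.
    Randomized search over Dirichlet-distributed weight vectors; returns the weights
    with the highest validation accuracy."""
    members = np.stack(val_probs)                        # [n_members, n_samples, n_classes]
    best_w, best_acc = None, -1.0
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(len(val_probs)))       # random weights summing to 1
        combined = np.tensordot(w, members, axes=1)      # weighted average prediction
        acc = (combined.argmax(axis=1) == val_labels).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# usage (hypothetical names): weights, _ = dirichlet_weighted_ensemble(
#     [p_vgg16, p_vgg19, p_mobilenet, p_mobilenetv2], y_val)
# test_pred = np.tensordot(weights, np.stack([t_vgg16, t_vgg19, t_mobilenet, t_mobilenetv2]), axes=1)
```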
Dirichlet Ensemble Weighted Average results are given in Table II. ## V Discussion and Conclusion The study has demonstrated the superiority of the proposed approach for hand gesture identification compared to state-of-the-art techniques. The importance of HG studies has been emphasized due to the increasing prevalence of technologies such as 3D, AR, VR, and XR. The control of hardware and software is a crucial element in HCI, with hand movements playing a significant role in control systems. The two-stage approach of the study involved transfer learning and fine-tuning of high-performance pre-trained deep architectures on the HG14 dataset, which contains 14 different hand sign classes. The dataset was divided into three groups: training, validation, and test data. In the first stage, two model groups, MobileNet and VGGNet, were found to outperform the other pre-trained models. In the second stage, the four resulting models were combined using the Dirichlet ensemble method and used for classification with the weighted average method. The evaluation was performed on the test data, and the tests were repeated 10 times for reliability. The proposed method achieved better results than both state-of-the-art studies and the individual transfer learning models. Future studies could test this approach on different HG datasets and assess the performance of other models and deep ensemble learning strategies. The weights of the successful models can also be saved and deployed in camera systems, game consoles, and other applications.
2309.07072
The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning
In this work, we assess the theoretical limitations of determining guaranteed stability and accuracy of neural networks in classification tasks. We consider classical distribution-agnostic framework and algorithms minimising empirical risks and potentially subjected to some weights regularisation. We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks in the above settings is extremely challenging, if at all possible, even when such ideal solutions exist within the given class of neural architectures.
Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou
2023-09-13T16:33:27Z
http://arxiv.org/abs/2309.07072v1
# The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning ###### Abstract In this work, we assess the theoretical limitations of determining guaranteed stability and accuracy of neural networks in classification tasks. We consider classical distribution-agnostic framework and algorithms minimising empirical risks and potentially subjected to some weights regularisation. We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks in the above settings is extremely challenging, if at all possible, even when such ideal solutions exist within the given class of neural architectures. Keywords:AI stability AI verifiability AI robustness deep learning. ## Notation \(\mathbb{R}\) denotes the field of real numbers, \(\mathbb{R}_{\geq 0}=\{x\in\mathbb{R}|\ x\geq 0\}\), and \(\mathbb{R}^{n}\) denotes the \(n\)-dimensional real vector space, \(\mathbb{N}\) denotes the set of natural numbers; \((x,y)=\sum_{k}x_{k}y_{k}\) is the inner product of \(x\) and \(y\), and \(\|x\|=\sqrt{(x,x)}\) is the standard Euclidean norm in \(\mathbb{R}^{n}\); \(\mathbb{B}_{n}\) denotes the unit ball in \(\mathbb{R}^{n}\) centered at the origin \(\mathbb{B}_{n}=\{x\in\mathbb{R}^{n}\ |\ \|x\|\leq 1\}\), \(\mathbb{B}_{n}(r,y)\) is the ball in \(\mathbb{R}^{n}\) centred at \(y\) with radius \(r\geq 0\): \(\mathbb{B}_{n}(r,y)=\{x\in\mathbb{R}^{n}\ |\ \|x-y\|\leq r\}\); \(\mathrm{Cb}(\ell,y)\) is the cube in \(\mathbb{R}^{n}\) centered at \(y\) with side-length \(\ell\geq 0\): \(\mathrm{Cb}(\ell,y)=\left\{x\in\mathbb{R}^{n}\ |\ \|x-y\|_{\infty}\leq\frac{\ell}{2}\right\}\); \(\mathbb{S}_{n-1}(r,y)\) is the sphere in \(\mathbb{R}^{n}\) centred at \(y\) with radius \(r\): \(\mathbb{S}_{n-1}(r,y)=\{x\in\mathbb{R}^{n}\ |\ \|x-y\|=r\}\); \(\mathrm{sign}(\cdot):\mathbb{R}\to\mathbb{R}_{\geq 0}\) denotes the function such that \(\mathrm{sign}(s)=1\) for all \(s\in\mathbb{R}_{\geq 0}\) and \(\mathrm{sign}(s)=0\) otherwise; \(\mathcal{K}_{\theta}\) is the class of real-valued functions defined on \(\mathbb{R}\) which are continuous, strictly monotone on \([\theta,\infty)\), and constant on \((-\infty,\theta)\); \(\mathbf{1}_{n}\) denotes the vector \((1,\ldots,1)\in\mathbb{R}^{n}\). ## 1 Introduction Data-driven AI systems and neural networks in particular have shown tremendous successes across a wide range of applications, including automotive, healthcare, gaming, marketing, and more recently natural language processing. Fuelled by high and growing rates of adoption of the new technology across sectors, robustness and stability are vital characterisations of AI performance. The importance of AI stability and robustness is exemplified by the discovery of adversarial perturbations [12] - imperceptible changes of input data leading to misclassifications. These perturbations can be universal [8] (i.e. triggering misclassifications for many inputs), limited to a single attribute [11], or masquerading as legitimate inputs [2]. Sometimes, such AI instabilities can be typical [14], [10]. Moreover, instabilities can also be induced by perturbations of the AI structure [13]. The issue of AI robustness is non-trivial and cannot be considered in isolation from other measures of AI performance: a model returning the same output regardless of the inputs is perfectly robust yet useless. A theoretical framework to approach the problem has recently been proposed in [1]. 
It has been shown in [1] that (i) there is an uncountably large family of distributions such that for an appropriately large data sample drawn from a distribution from this family there is a feed-forward neural network showing excellent performance on this sample, although (ii) this same network becomes inevitably unstable on some subset of the training and validation sets. Moreover, (iii) for the same distribution and the same data, there is a stable network possibly having a different architecture. Here we show that the stability-accuracy issues have other unexplored dimensions and could be significantly more pronounced than previously thought. Our main result, Theorem 1, shows that there exist large families of well-behaved data distributions for which even networks achieving zero training and validation error may be highly unstable with respect to almost any small perturbation on nearly half of the training or validation data. Yet, for the same data samples and distributions, there exist stable networks _with the same architecture as the unstable network_ which also minimise the loss function. Strikingly, there exist infinitely many pairs of networks, in which one network is stable and accurate and the other is also accurate but unfortunately unstable, whose weights and biases could be made arbitrarily close to each other. What is even more interesting is that all of this happens and persists when the values of the weights and biases are made small. This result reveals a fundamental issue at the heart of current data-driven approaches to learning driven by minimising empirical risk functions, even in the presence of weight regularisation, in distribution-agnostic settings. The issue is that such learning algorithms could be structurally incapable of distinguishing between stable and unstable solutions. The rest of the paper is organised as follows. In Section 2 we introduce notation and the problem setting. In Section 3 we state our main results along with discussion, interpretation, and comparison to the literature. Section 4 concludes the paper. ## 2 Preliminaries, assumptions, and problem settings Following [1], by \(\mathcal{NN}_{\mathbf{N},L}\) we denote the class of neural networks with \(L\) layers and dimension \(\mathbf{N}=\{N_{L},N_{L-1},N_{L-2},\ldots,N_{1},N_{0}=n\}\), where \(n\) is the input dimension, and \(N_{L}=1\) is the dimension of the network's output. A neural network with dimension \((\mathbf{N},L)\) is a map \[\phi=G^{L}\sigma G^{L-1}\sigma\cdots\sigma G^{1},\] where \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is a coordinate-wise activation function, and \(G^{l}:\mathbb{R}^{N_{l-1}}\rightarrow\mathbb{R}^{N_{l}}\) is an affine map defined by \(G^{l}x=W^{l}x+b^{l}\), where \(W^{l}\in\mathbb{R}^{N_{l}\times N_{l-1}}\), \(b^{l}\in\mathbb{R}^{N_{l}}\) are the corresponding matrices of weights and biases. By \(\Theta(\phi)\) we denote the vector of all weights and biases of the network \(\phi\). In general, the activation functions \(\sigma\) do not have to be the same for all components and all layers, although here we will assume (unless stated otherwise) that this is indeed the case. In what follows we will consider feed-forward networks with activation functions in their hidden layers computing mappings from the following broad class: \[\sigma=g_{\theta},\ g_{\theta}\in\mathcal{K}_{\theta},\ \theta\in\mathbb{R}. 
\tag{1}\] Popular functions such as ReLU are contained in this class (that is the class of functions which are continuous, strictly monotone on \([\theta,\infty)\) and constant on \((-\infty,\theta)\)). The condition of strict monotonicity of \(g_{\theta}\) over \([\theta,\infty)\) can be reduced to strict monotonicity over some \([\theta,\theta_{1}]\), \(\theta_{1}>\theta\), with \(g_{\theta}\) being merely monotone on \([\theta_{1},\infty)\). This extension won't have any effect on the validity of the theoretical statements below, but will enable the inclusion of leaky ReLU activations (since then activation functions satisfying (1) can be constructed as a difference of a leaky ReLU function and its shifted/translated copy, and the results below therefore still follow) as well as "sigmoid"-like piecewise linear functions. We will suppose that all data are drawn from some unknown probability distribution belonging to a family \(\mathcal{F}\), and each element \(\mathcal{D}\in\mathcal{F}\) of this family is supported on \([-1,1]^{n}\times\{0,1\}\). For any given \(\mathcal{D}\in\mathcal{F}\), we will assume that the training and testing algorithms have access to samples \((x^{j},\ell^{j})\), \(j=1,\ldots,s+r\), \(s,r\in\mathbb{N}\), independently drawn from \(\mathcal{D}\), and which can be partitioned into training \[\mathcal{T}=\{(x^{1},\ell^{1}),\ldots,(x^{r},\ell^{r})\}\] and validation/testing \[\mathcal{V}=\{(x^{r+1},\ell^{r+1}),\ldots,(x^{r+s},\ell^{r+s})\}\] (multi)-sets. Let \(M=r+s=|\mathcal{T}\cup\mathcal{V}|\) be the size of the joint training and validation (multi)-set. Further, we impose a condition that the data distribution is sufficiently regular and does not possess hidden instabilities and undesirable accumulation points which could otherwise trivialise our statements and results. In particular, for \(\delta\in(0,2\sqrt{n}]\) we will only consider those distributions \(\mathcal{D}_{\delta}\in\mathcal{F}\) which satisfy: If \((x,\ell_{x}),(y,\ell_{y})\sim\mathcal{D}_{\delta}\) with \(\ell_{x}\neq\ell_{y}\), then, with probability \(1\), \(\|x-y\|\geq\delta\). (2) Finally, we introduce the family of loss functions \[\mathcal{CF}_{\mathrm{loc}}=\{\mathcal{R}:\ \mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}\cup\{\infty\}\ |\ \mathcal{R}(v,w)=0\ \ \Longleftrightarrow\ \ v=w\} \tag{3}\] which will be used to define the corresponding empirical loss functions for the model outputs \(h:\mathbb{R}^{n}\rightarrow\{0,1\}\) on samples \(\mathcal{S}\sim\mathcal{D}_{\delta}\) drawn from \(\mathcal{D}_{\delta}\): \[\mathcal{L}(\mathcal{S},h)=\sum_{(x^{i},\ell^{i})\in\mathcal{S}}\mathcal{R}(h(x^{i}),\ell^{i}). \tag{4}\] The subscript "loc" in (3) emphasises that the loss functions \(\mathcal{R}\) are evaluated on single data points and in this sense are "local". It provides an explicit connection with the classical literature involving empirical risk minimisation, allowing us to exploit the conventional interpretation of the generalisation error as a deviation of the empirical risk from the expected value of the loss over the distribution generating the data. ## 3 Main results Having introduced all relevant notation, we are now ready to state the main result of the contribution. 
Theorem 3.1 (Inevitability, typicality and undetectability of instability): _Consider the class of networks with architecture_ \[\mathbf{N}=(N_{L}=1,N_{L-1},\ldots,N_{1},N_{0}=n),\ \ L\geq 2,\ n\geq 2,\] _where \(N_{1}\geq 2n\) and \(N_{2},\ldots,N_{L-1}\geq 1\), and activation functions \(g_{\theta}\) in layers \(1,\ldots,L-1\) satisfying conditions (1), and the \(\mathrm{sign}(\cdot)\) activation function in layer \(L\)._ _Let \(\varepsilon\in(0,\sqrt{n}-1)\) and fix \(0<\delta\leq\varepsilon/\sqrt{n}\). Then, there is an uncountably large family of distributions \(\mathcal{D}_{\delta}\in\mathcal{F}\) satisfying (2) such that for any \(\mathcal{D}_{\delta}\in\mathcal{F}\), any training and validation data \(\mathcal{T}\), \(\mathcal{V}\) drawn independently from \(\mathcal{D}_{\delta}\), and every \(\mathcal{R}\in\mathcal{CF}_{\mathrm{loc}}\), with probability 1:_ 1. _There exists a network which correctly classifies the training data_ \(\mathcal{T}\) _and generalises to the test data_ \(\mathcal{V}\)_, satisfying_ \[f\in\operatorname*{arg\,min}_{\varphi\in\mathcal{N}\mathcal{N}_{\mathbf{N},L}} \mathcal{L}(\mathcal{T}\cup\mathcal{V},\varphi)\] _with_ \(\mathcal{L}(\mathcal{T}\cup\mathcal{V},f)=0\)_._ 2. _Yet, for any_ \(q\in(0,1/2)\)_, with probability greater than or equal to_ \[1-\exp(-2q^{2}M)\] _there exists a multi-set_ \(\mathcal{U}\subset\mathcal{T}\cup\mathcal{V}\) _of cardinality at least_ \(\lfloor(1/2-q)M\rfloor\) _on which_ \(f\) _is unstable in the sense that for any_ \((x,\ell)\in\mathcal{U}\) _and any_ \(\alpha\in(0,\varepsilon/2)\)_, there exists a perturbation_ \(\zeta\in\mathbb{R}^{n}\) _with_ \(\|\zeta\|\leq\alpha/\sqrt{n}\) _and_ \[|f(x)-f(x+\zeta)|=1.\] (5) _Moreover, such destabilising perturbations are_ typical _in the sense that if vectors_ \(\zeta\) _are sampled from the equidistribution in_ \(\mathbb{B}_{n}(\alpha/\sqrt{n},0)\)_, then for_ \((x,\ell)\in\mathcal{U}\)_, the probability that (_5_) is satisfied is at least_ \[1-\frac{1}{2^{n}}.\] _Furthermore, there exist_ universal _destabilising perturbations, in the sense that a single perturbation_ \(\zeta\) _drawn from the equidistribution in_ \(\mathbb{B}_{n}(\alpha/\sqrt{n},0)\) _destabilises_ \(m\leq|\mathcal{U}|\) _points from the set_ \(\mathcal{U}\) _with probability at least_ \[1-\frac{m}{2^{n}}.\] 2. _At the same time, for the same distribution_ \(\mathcal{D}_{\delta}\) _there is a robust network with the same architecture as_ \(f\)_, satisfying_ \[\tilde{f}\in\operatorname*{arg\,min}_{\varphi\in\mathcal{N}\mathcal{N}_{ \text{\tiny{\bf{N}},L}}}\mathcal{L}(\mathcal{T}\cup\mathcal{V},\varphi)\] _with_ \(\mathcal{L}(\mathcal{T}\cup\mathcal{V},\tilde{f})=0,\) _which is robust in the sense that for all_ \((x,\ell)\in\mathcal{T}\cup\mathcal{V}\)__ \[\tilde{f}(x)=\tilde{f}(x+\zeta)\] _for any_ \(\zeta\in\mathbb{R}^{n}\) _with_ \(\|\zeta\|\leq\alpha/\sqrt{n}\)_, even when_ \(|\mathcal{T}\cup\mathcal{V}|=\infty\)_. Moreover, there exist pairs of unstable and robust networks,_ \(f_{\lambda},\tilde{f}_{\lambda}\) _and_ \(f_{\Lambda},\tilde{f}_{\Lambda}\)_, satisfying the statements above such that the maximum absolute difference between their weights and biases is either arbitrarily small or arbitrarily large. That is, for any_ \(\lambda>0,\Lambda>0\)_:_ \[\|\Theta(f_{\lambda})-\Theta(\tilde{f}_{\lambda})\|_{\infty}<\lambda,\ \| \Theta(f_{\Lambda})-\Theta(\tilde{f}_{\Lambda})\|_{\infty}>\Lambda.\] 3. _However, for the above robust solution_ \(\tilde{f}\)_,_ 1. 
_there exists an uncountably large family of distributions_ \(\tilde{D}_{\delta}\in\mathcal{F}\) _on which_ \(\tilde{f}\) _correctly classifies both the training and test data, yet fails in the same way as stated in (_1_)._ 2. _there exists an uncountably large family of distributions_ \(\hat{D}_{\delta}\in\mathcal{F}\) _such that the map_ \(\tilde{f}\) _is robust on_ \(\mathcal{T}\cup\mathcal{V}\) _(with respect to perturbations_ \(\zeta\) _with_ \(\|\zeta\|\leq\alpha/\sqrt{n}\)_,_ \(\alpha\in(0,\varepsilon/2)\)_) with probability_ \[\left(1-\frac{1}{2^{n+1}}\right)^{Mk}\] _but is unstable to arbitrarily small perturbations on future samples with probability_ \(k/2^{n+1}\)_._ The proof of the theorem is provided in the Appendix. ### Interpretation of results According to statement (i) of Theorem 3.1, not only are instabilities to be expected, but they can also be remarkably widespread: for sufficiently large data sets they may occur, with high probability, for nearly half of all data. Statement (ii) of Theorem 3.1 confirms that a stable solution exists _within precisely the same class of network architectures_, although it is difficult to compute it by using only the loss functional \(\mathcal{L}\) as a measure of quality. This shows that the architecture isn't necessarily the source of the instability. Moreover, a robust solution may be found in an arbitrarily small neighborhood of the specific non-robust one in the space of network weights and biases. As the construction in the proof shows, using networks with small Lipschitz constants can, counter-intuitively, make the problem worse. The robust solution, in turn, can also be unstable, as follows from statement (iii), part (a). This is reminiscent of a "no free lunch" principle for robust and accurate learning, although with a subtle distinction. In fact, as part (b) of the statement asserts, there are solutions which may appear to be certifiably robust (and one can indeed certify the model on the training and validation sets), although there is no guarantee whatsoever that the certificate remains valid for future samples. To minimise the risks, one needs to certify the model on data sets which are exponentially large in \(n\). This is particularly relevant for safety-critical settings, where the risk of failure must be calculated and bounded in advance. Finally, we note that the instabilities considered in Theorem 3.1 become particularly pronounced for networks with sufficiently high input dimension \(n\) (see statement (iii) of the theorem). Moreover, statement (ii) shows that the fraction of perturbations around unstable points \(x\) in the sample which alter the network's response approaches \(1\) as \(n\) grows. These high-dimensional effects may still be observed in networks with arbitrarily low input dimensions if such networks realise appropriate auxiliary space-filling mappings in relevant layers. The technical point that the statement of Theorem 3.1 holds with probability one is due to the fact that the proof constructs data distributions which assign probability zero to certain sets, so there may exist training samples with probability zero for which the construction does not apply. 
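As a toy numerical illustration of the last two points, the sketch below anticipates the function \(f_{\text{reg}}\) of equation (6) in the next subsection, instantiated with \(g_{\theta}=\mathrm{ReLU}\) (\(\theta=0\)): a map whose Lipschitz constant is proportional to an arbitrarily small factor \(\beta\), yet whose thresholded output flips under an arbitrarily small perturbation of a point lying on the decision boundary. The dimension, the value of \(\beta\), and the test point are illustrative choices, not taken from the paper.

```python
# Illustrative sketch (not from the paper): f_reg of eq. (6) with
# g_theta = ReLU, composed with the paper's forced classification
# sign(s) = 1 for s >= 0 and 0 otherwise.  The Lipschitz constant of
# f_reg is at most beta*sqrt(n), yet the class label is unstable for
# points sitting on the decision boundary.
import numpy as np

n = 10                     # input dimension (illustrative)
beta = 1e-6                # arbitrarily small factor, as in eq. (6)
relu = lambda s: np.maximum(s, 0.0)

def f_reg(x):
    # sum_i [relu(0) - relu(beta*(x_i - 1/sqrt(n)))]
    #   + [relu(0) - relu(beta*(-x_i - 1/sqrt(n)))],  with relu(0) = 0
    return np.sum(-relu(beta * (x - 1/np.sqrt(n))) - relu(beta * (-x - 1/np.sqrt(n))))

classify = lambda x: 1 if f_reg(x) >= 0 else 0   # the paper's sign(.) convention

# A point exactly on the decision boundary (first coordinate 1/sqrt(n)),
# and a tiny perturbation of it.
x = np.zeros(n); x[0] = 1/np.sqrt(n)
zeta = np.zeros(n); zeta[0] = 1e-9

print("Lipschitz bound beta*sqrt(n):", beta * np.sqrt(n))                 # tiny
print("|f_reg(x) - f_reg(x + zeta)|:", abs(f_reg(x) - f_reg(x + zeta)))   # tiny
print("labels before / after       :", classify(x), classify(x + zeta))  # 1 vs 0
```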
### Discussion #### 3.2.1 Instabilities and regularisation The construction we used in the proof of Theorem 3.1 reveals that the instability discussed in statements (i) and (ii) of the theorem is inherent to the very definition of the binary classification problem and may not be addressed by regularisation approaches constraining norms of network's parameters and Lipschitz constants of non-threshold layers. Indeed, consider just the first two layers of the network \(f\) constructed in the proof of the theorem, remove the sign\((\cdot)\) activation function, and introduce an arbitrarily small positive factor \(\beta\) (cf. (13)): \[\begin{split} f_{\text{reg}}(x)=&\sum_{i=1}^{n}g_{ \theta}(\theta)-g_{\theta}(\beta((x,e_{i})-1/\sqrt{n})+\theta)\\ &+\sum_{i=1}^{n}g_{\theta}(\theta)-g_{\theta}(\beta(-(x,e_{i})- 1/\sqrt{n})+\theta).\end{split} \tag{6}\] If the functions \(g_{\theta}\) are Lipschitz then the Lipschitz constant of the function \(f_{\text{reg}}\) can be made arbitrarily small by setting \(\beta\) to some sufficiently small value. At the same time, the values of \(\text{sign}f_{\text{reg}}(x)\) and \(f(x)\) coincide. This implies that regardless of how well-behaved the function \(f_{\text{reg}}\) in (6) is, forced classification achieved either by the application of the sign function or, alternatively, through thresholding or softmax, brings instabilities. In this respect, network regularisation by pruning, restricting norms of the network's weights, and forcing the network's Lipschitz constant to stay small do not always warrant robustness. Similarly, requesting that there is some non-zero margin separating the classes does not address or alleviate the problem either. The instability occurs due to the fact that the algorithm is required to produce a decision boundary, but is unaware that the data is placed directly on this boundary. #### 3.2.2 Adversarial training A potential way to overcome the instabilities formalised in statement (i) of Theorem 3.1 is to invoke a type of training capable of assessing that instabilities (5) do not occur. Adversarial training and data augmentation, whereby each data sample produces a set of points corresponding to perturbed data is an example of an approach which can potentially address the problem. The approach is not without its own challenges as one needs to ensure that all points in the sets \(\mathbb{B}_{n}(\alpha/n,x)\), \(\alpha\in(0,\varepsilon/2)\) are checked. The latter task can be computationally and numerically overwhelming for large \(n\). #### 3.2.3 Dark data The final and perhaps the most interesting point in relation to the problem of verifiability is statement (iii), which can be related to challenge of the "dark data" - the data which exists but to which we don't have access [9] or, more generally, the missing data and the data which we don't have [6]. As the theorem states, high-dimensional distributions could be a very real source of such dark data, potentially leading to instabilities or non-verifiability. ## 4 Conclusion Deep learning networks and models have convincingly shown ample capabilities in many practical tasks. When properly engineered, these models stunningly outperform shallower architectures (see e.g. [7], [15] for examples and precise statements). Moreover, recent breakthroughs such as the emergence of Chat-GPT show exceptional power these models may bring. These models operate in high-dimensional spaces and process and execute decisions on genuinely high-dimensional data. 
At the same time, and despite these remarkable achievements, the application of these highly expressive and capable models requires special care and understanding of their fundamental limitations. Our work, by building on [1], reveals a new set of limitations which are particularly inherent to high-dimensional data. These limitations constitute the presence of nested uncountably large families of exceptions on which even moderately-sized networks may and likely will fail. The results also show that it may be computationally hard to verify both robustness and accuracy of models within classical distribution-agnostic learning frameworks based solely on the notions of risk and empirical risk minimisation. All these call for the need to rethink standard distribution-agnostic learning frameworks and introduce more appropriate models of reality into the mathematical setting of statistical learning. The results, by showing fundamental difficulties with guaranteeing simultaneous stability, accuracy, and verifiability, highlight the importance of mathematical theory and methods for the continuous correction of AI models [4], [5], [3]. At present, the results do not include networks with classical sigmoidal activation functions. Detailed analysis of these types of networks will be the topic of our future work. #### 4.0.1 Acknowledgements This work is supported by the UKRI, EPSRC [UKRI Turing AI Fellowship ARaISE EP/V025295/2 and UKRI Trustworthy Autonomous Systems Node in Verifiability EP/V026801/2 to I.Y.T., EP/V025295/2 to O.S., A.N.G., and Q.Z., EP/V046527/1 and EP/P020720/1 to D.J.H, EP/V046527/1 to A.B.].
2305.19827
Bose Gas Modeling of the Schwarzschild Black Hole Thermodynamics
Black holes violate the third law of thermodynamics, and this gives rise to difficulties with the microscopic description of the entropy of black holes. Recently, it has been shown that the microscopic description of the Schwarzschild black hole thermodynamics in $D = 4$ spacetime dimensions is provided by the analytical continuation of the entropy of Bose gas with non-relativistic one particle energy to d =-4 negative spatial dimension. In this paper, we show that the D=5 and D=6 Schwarzschild black holes thermodynamics can be modeled by the d-dimensional Bose gas, d=1,2,3..., with the one particle energy $\varepsilon(k)=k^\alpha$ under conditions $\alpha=-d/3$ and $\alpha=-d/4$, respectively. In these cases the free energy of the Bose gas has divergences and we introduce a cut-off and perform the minimal renormalizations. We also perform renormalizations using analytical regularization and prove that the minimal cut-off renormalization gives the same answer as the analytical regularization by the Riemann zeta-function.
I. Ya. Aref'eva, I. V. Volovich
2023-05-31T13:08:34Z
http://arxiv.org/abs/2305.19827v1
# Bose Gas Modeling of the Schwarzschild Black Hole Thermodynamics ###### Abstract Black holes violate the third law of thermodynamics, and this gives rise to difficulties with the microscopic description of the entropy of black holes. Recently, it has been shown that the microscopic description of the Schwarzschild black hole thermodynamics in \(D=4\) spacetime dimensions is provided by the analytical continuation of the entropy of Bose gas with non-relativistic one particle energy to \(d=-4\) negative spatial dimension. In this paper, we show that the \(D=5\) and \(D=6\) Schwarzschild black holes thermodynamics can be modeled by the d-dimensional Bose gas, \(d=1,2,3...\), with the one particle energy \(\varepsilon(k)=k^{\alpha}\) under conditions \(\alpha=-d/3\) and \(\alpha=-d/4\), respectively. In these cases the free energy of the Bose gas has divergences and we introduce a cut-off and perform the minimal renormalizations. We also perform renormalizations using analytical regularization and prove that the minimal cut-off renormalization gives the same answer as the analytical regularization by the Riemann zeta-function. ## 1 Introduction The problem with the microscopic origin of the Bekenstein-Hawking entropy [1; 2] for the Schwarzschild black holes is that black holes do not satisfy the third law of thermodynamics in its standard formulation. Therefore, such exotic thermodynamics behaviour of black hole cannot be obtained by using ordinary quantum statistical mechanics models which obey the third law, see discussion and refs in [3]. In [3] we have shown that the entropy of the \(D=4\) Schwarzschild black hole \[S_{BH}=\frac{\beta^{2}}{16\pi},\qquad\beta=\frac{1}{T}\,, \tag{1}\] where \(T\) is the temperature, corresponds to the Bose gas in \(d=-\,4\)_negative_ spatial dimensions. This conclusion is obtained by using properties of the Riemann zeta function. The entropy of the Bose gas in \(d\)-dimensional space is proportional to \[S_{BG}\sim\left(\frac{d}{2}+1\right)\zeta\left(\frac{d}{2}+1\right)\beta^{- \frac{d}{2}}\,, \tag{2}\] where \(\zeta\) is the Riemann zeta function. The expression (2) admits the analytical continuation for complex \(d\), in particular for \(d=-4\) we have \[S_{BG}\sim\beta^{2}, \tag{3}\] therefore, we get the entropy of the \(D=4\) Schwarzschild black hole. Note that the proportionality factor is a positive number and there is no divergences in this calculation. In this paper we show that some higher-dimensional black holes can be described using the Bose gas in positive dimensions. However, in these cases there are divergences that should be renormalized. We consider the \(d\) dimensional Bose gas with the kinetic term \(k^{\alpha}\), in this case the free energy \(F_{BG}\) is proportional to \[F_{BG}\sim I(-\frac{d}{\alpha})\,\beta^{-1-d/\alpha}, \tag{4}\] where \[I(s)=\int_{0}^{\infty}\ln\left(1-e^{-x}\right)\,\frac{dx}{x^{1+s}}. \tag{5}\] Of particular interest to us is the case with \(d/\alpha=2-D\), since in this case we get \[F_{BG}\sim I(D-2)\,\beta^{D-3}, \tag{6}\] that coincides with the Schwarzschild black hole dependence of the free energy on the inverse temperature \(\beta\), \(F_{BH}\sim\beta^{D-3}\). However, the integral \(I(s)\) diverges for \(s\geq 0\), and the formula (4) has no immediate meaning. To cure the formula (4) we introduce regularization in (5) and then perform renormalizations. We consider two possible regularizations of the integral in (5): cut-off regularization and analytical regularization [4]. 
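For negative \(s\) (in particular \(s=-2\)) the integral in (5) converges and, as recalled in the next section, equals \(-\Gamma(-s)\,\zeta(1-s)\). Before turning to the regularizations, this convergent case is easy to confirm numerically; the short check below uses the mpmath library (an illustrative choice of tooling, not the authors') at \(s=-2\), where the value should be \(-\Gamma(2)\zeta(3)=-\zeta(3)\).

```python
# Numerical sanity check of the convergent case of I(s) (illustrative only):
# I(-2) = int_0^inf ln(1 - e^{-x}) * x dx  should equal  -Gamma(2)*zeta(3) = -zeta(3).
from mpmath import mp, quad, log, exp, gamma, zeta, inf

mp.dps = 25
s = -2
I = quad(lambda x: log(1 - exp(-x)) * x**(-1 - s), [0, inf])
print(I)                          # -1.2020569...
print(-gamma(-s) * zeta(1 - s))   # -zeta(3) = -1.2020569...
```

With this convergent case in hand, we now turn to the two regularizations of the divergent case \(s\geq 0\).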
In both cases we performed minimal subtractions and define \(I_{ren}\) and \(\mathcal{I}_{ren}\) in the first and second cases, respectively. We prove that both regularizations give the same answer, that explicitly means the validity of the identity (10) presented in Sect.5. In particular, \(D=5\) and \(D=6\) black hole spacetime dimensions correspond to the Bose gas model with \(d/\alpha=-3\) and \(d/\alpha=-4\), respectively. The paper is organized as follows. In Sect.2 the Bose gas model with non-standard kinetic term is presented and two possible schemes of free energy renormalizations are mentioned. In Sect.3 the cut-off regularization is introduced and the minimal renormalization is performed. In Sect.4 the analytical regularization is introduced and the its minimal renormalization is presented. Sect.5 the equivalence of the cut-off minimal renormalization and minimal analytical renormalization is proved. In Sect.6 few explicit examples are presented and we conclude in Sect.7 with the discussion of obtained results. Setup We consider the Bose gas with kinetic term \(\lambda(\vec{k},\vec{k})^{\alpha/2}\). In d-dimensional case the free energy is [5; 6] \[F_{BG}=\frac{\Omega_{d-1}}{\beta}\left(\frac{L}{2\pi}\right)^{d}\int_{0}^{\infty }\ln\left(1-e^{-\beta\,\lambda\,k^{\alpha}}\right)\,k^{d-1}dk, \tag{1}\] where \(\Omega_{d-1}=2\pi^{d/2}/\Gamma(d/2)\) and \(\beta,\lambda,\alpha,L\) are positive constants, \(d=1,2,3,...\). By changing the variable \[k=\left(\frac{x}{\beta\lambda}\right)^{1/\alpha}, \tag{2}\] we get \[F_{BG}=\frac{\Omega_{d-1}}{\alpha\beta}\left(\frac{L}{2\pi}\right)^{d}\left( \frac{1}{\beta\lambda}\right)^{d/\alpha}I(-\frac{d}{\alpha}), \tag{3}\] where \[I(s)=\,\int_{0}^{\infty}\ln\left(1-e^{-x}\right)\,\frac{dx}{x^{1+s}}. \tag{4}\] For \(d=1,2,3,...\) and \(\alpha>0\) the integral in (4) converges and \[I(s)=-\Gamma(-s)\zeta(1-s),\quad\mathfrak{R}s<-1. \tag{5}\] However, as has been mentioned in Introduction, the integral in (4) diverges for \(s\geq 0\). To give a meaning for this formula for \(s\geq 0\) we introduce regularizations. We consider two regularizations: cut-off regularization and analytical regularization. We performed minimal subtractions and define \(I_{ren}\) and \(\mathcal{I}_{ren}\) in the first and second cases, respectively. Below we schematically describe both of them. * Cut-off regularizations. In this case we start from \[I(s,a)\equiv\,\int_{a}^{\infty}\ln\left(1-e^{-x}\right)\frac{dx}{x^{1+s}},\,a >0.\] (6) We find a singular part of the asymptotics of the integral \(I(s,a)\) as \(a\to 0\) in the form \[S(s,a)=\sum_{i\geq 0}A_{i}\frac{\log a}{a^{i}}+\sum_{i\geq 1}C_{i}\frac{1}{a^{i}}.\] (7) Then we subtract this singular part \(S(s,a)\) \[I_{ren}(s,a)=I(s,a)-S(s,a),\] (8) and finally remove the regularisation \[I_{ren}(s)=\lim_{a\to 0}I_{ren}(s,a).\] (9) * Analytical regularization. In this case we start from the following representation \[I(s)=\int_{0}^{\infty}\ln\left(1-e^{-x}\right)\frac{dx}{x^{1+s}}=-\,\Gamma(-s) \,\zeta(-s+1),\quad\mathfrak{R}s<0\] (10) However, the right-hand side of (10) is well defined for all \(s\neq 0\) and \(s\neq n\), here \(n\in\mathbb{Z}_{+}\) and we denote it by \(\mathcal{I}(s)\), \[\mathcal{I}(s)=-\,\Gamma(-s)\,\zeta(-s+1).\] (11) The function \(\mathcal{I}(s)\) given by (11) is a meromorphic function for \(s\in\mathbb{C}\). It has poles at \(s=n>0\) and a double pole at \(n=0\). 
We define \(\mathcal{I}_{ren}(n)\) as \[\mathcal{I}_{ren}(n) \equiv \lim_{\epsilon\to 0}\left[-\Gamma(-n+\epsilon)\zeta(1-n+ \epsilon)-\text{Pole Part}\left[(-\Gamma(-n+\epsilon)\zeta(1-n+\epsilon) ]\right]\right]\] (12) \[\text{at point}\quad n=1,2,3,...\] and \[\mathcal{I}_{ren}(0) \equiv \lim_{\epsilon\to 0}\left[-\Gamma(\epsilon)\zeta(1+ \epsilon)-\text{Double Pole Part}\left[(-\Gamma(\epsilon)\zeta(1+\epsilon) \right]\right]\] (13) \[\mathcal{I}_{ren}(s) \equiv \mathcal{I},\quad s>0,s\neq\mathbb{Z}_{+}\,.\] (14) * In what follows we prove that \[\mathcal{I}_{ren}(n) = I_{ren}(n),\] (15) \[\mathcal{I}(s) = I_{ren}(s),\quad s\neq n\] (16) The detail definitions of \(I_{ren}(n)\) and \(\mathcal{I}_{ren}(n)\) will be given in Sect.3 and Sect.4, respectively. In Sect. 5 we show the equivalence of these two forms of renormalizations, i.e. validity of (15) and (16). Cut-off renormalization In this section we present the explicit form of the renormalized version of (6) after the minimal renormalization. We distinguish two cases: integer and non-integer \(s\geq 0\). * For \(s=n\), \(n=0,1,2,...\), the following proposition holds. **Proposition 1.**_The renormalized version of (6) after minimal renormalizations defined by (9) is given by_ \[I_{ren}(n) = \int_{0}^{1}\frac{1}{x^{n+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x} \Big{)}-\sum_{k=1}^{n}c_{k}x^{k}\Big{]}\,dx \tag{10}\] \[- \frac{1}{n^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-n}+\int_{1}^{ \infty}\frac{1}{x^{n+1}}\ln\Big{(}1-e^{-x}\Big{)}dx,\quad n>0;\] \[I_{ren}(0) = \int_{0}^{1}\frac{1}{x}\ln\Big{(}\frac{1-e^{-x}}{x}\Big{)}\,dx+ \int_{1}^{\infty}\frac{1}{x}\ln\Big{(}1-e^{-x}\Big{)}dx. \tag{11}\] * For \(s\neq 0\), \(s\neq n\in\mathbb{Z}_{+}\), in the following proposition holds. **Proposition 1\({}^{\prime}\).**_The renormalized version of (6) after minimal renormalizations is_ \[I_{ren}(s) = \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x} \Big{)}-\sum_{k=1}^{n(s)}c_{k}x^{k}\Big{]}dx \tag{12}\] \[- \frac{1}{s^{2}}+\sum_{k=1}^{n(s)}\frac{c_{k}}{k-s}+\int_{1}^{ \infty}\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}dx\,,\] \[n(s) = \text{Entier}[s],\,\text{i.e the integer part of }\,s. \tag{13}\] **Remark.** The formula (11) can be considered as a generalization of the Chebyshev formula for the zeta-function, see [7; 8]. To prove these propositions we present \(I(s,a)\) given by (6) as \[I(s,a)=I(s,a,1)+I(s,1,\infty), \tag{14}\] where \[I(s,a,1) = \int_{a}^{1}\ln\Big{(}1-e^{-x}\Big{)}\,\frac{dx}{x^{1+s}},\qquad a<1 \tag{15}\] \[I(s,1,\infty) = \int_{1}^{\infty}\ln\Big{(}1-e^{-x}\Big{)}\,\frac{dx}{x^{1+s}}. \tag{16}\] We expand the integrand in (15) in the power series near the \(x=0\). We have \[\ln\Big{(}1-e^{-x}\Big{)}=\log(x)+\sum_{k=1}^{\infty}c_{k}x^{k}, \tag{17}\] \(c_{k}\) are related with the Bernoulli numbers \(B_{k}\), see Appendix A, \[c_{k}=\frac{1}{k\,k!}\,B_{k} \tag{3.9}\] and we have \[\frac{1}{x^{1+s}}\ln\left(1-e^{-x}\right)=\frac{1}{x^{1+s}}\log(x)+\sum_{k=1}^{ n(s)}c_{k}x^{k-1-s}+\sum_{k=n(s)+1}^{\infty}c_{k}x^{k-1-s} \tag{3.10}\] We take \(n(s)=E[s]\), where \(E[s]\) is the integer part of \(s\). Therefore, in the first sum in the RHS of (3.10) all terms have power less then \(-1\) and after integrating the equality (3.10) in interval \((a,1)\) give raise to singular terms for \(a\to 0\). Let us find these singular terms explicitly first for \(s=n\). * \(s=n\). 
We have \[I(n,a,1) = \int_{a}^{1}\Bigl{[}\ln\left(\frac{1-e^{-x}}{x}\right)-\sum_{k=1} ^{n}c_{k}x^{k}\Bigr{]}\frac{dx}{x^{1+n}}+\int_{a}^{1}\frac{\log x}{x^{1+n}}dx +\sum_{k=1}^{n}c_{k}\int_{a}^{1}\frac{dx}{x^{1+n-k}}\] (3.11) \[= \int_{a}^{1}\frac{1}{x^{1+n}}\Biggl{[}\ln\left(\frac{1-e^{-x}}{x }\right)-\sum_{k=1}^{n}c_{k}x^{k}\Biggr{]}\,dx\] \[+ \frac{1}{n^{2}a^{n}}+\frac{\log a}{na^{n}}-c_{n}\log a-\frac{1}{n ^{2}}+\sum_{k=1}^{n-1}c_{k}\Biggl{[}\frac{1}{k-n}-\frac{a^{k-n}}{k-n}\Biggr{]}\] (3.12) This identity gives the representation \[I(n,a,1)=S(n,a)+F(n)+\mathcal{O}(a),\] (3.13) where \(S(a,n)\) includes all singular terms at \(a\to 0\) \[S(n,a)=\frac{\log a}{na^{n}}+\frac{1}{n^{2}a^{n}}-c_{n}\log a-\sum_{k=1}^{n-1 }c_{k}\frac{a^{k-n}}{k-n},\] (3.14) \(F(n)\) is the finite part that contains the limit at \(a\to 0\) of the convergent integral in the line (3.11) and two terms from the line (3.12) \[-\frac{1}{n^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-n} \tag{3.15}\] The representation (3.13) gives the statement of Proposition 1. * For arbitrary \(s>0\) and \(s=n+\delta\), \(0<\delta<1\) we have \[I(s,a,1) = \int_{a}^{1}\Big{[}\ln\left(\frac{1-e^{-x}}{x}\right)-\sum_{k=1}^{n (s)}c_{k}x^{k}\Big{]}\frac{dx}{x^{1+s}}+\int_{a}^{1}\frac{\log x}{x^{1+s}}dx+ \sum_{k=1}^{n(s)}c_{k}\int_{a}^{1}\frac{dx}{x^{1+s-k}} \tag{3.16}\] \[!!!= \int_{a}^{1}\Big{[}\ln\left(\frac{1-e^{-x}}{x}\right)-\sum_{k=1}^{ n(s)}c_{k}x^{k}\Big{]}\frac{dx}{x^{1+s}}\] (3.17) \[+ \frac{1}{s^{2}a^{s}}+\frac{\log(a)}{sa^{s}}-\frac{1}{s^{2}}+\sum _{k=1}^{n(s)}c_{k}\Bigg{[}\frac{1}{k-s}-\frac{a^{k-s}}{k-s}\Bigg{]},\] \(n(s)\) is the integer part of \(s\). This identity gives representation \[I(s,a,1)=S(s,a)+F(s)+\mathcal{O}(a), \tag{3.18}\] where \(S(s,a)\) includes all singular terms at \(a\to 0\) \[S(s,a)=\frac{1}{(s)^{2}a^{s}}+\frac{\log(a)}{sa^{s}}+\sum_{k=1}^{n(s)}c_{k} \left[-\frac{a^{k-n-\delta}}{k-n-\delta}\right]. \tag{3.19}\] Few terms give contributions to \(F(s)\). The integral in the line (3.16) converges at \(a\to 0\) and contributes to the finite part \(F(s)\). Two terms \[-\frac{1}{s^{2}}+\sum_{k=1}^{n(s)}\frac{1}{k-s}c_{k} \tag{3.20}\] also contribute to the final part \(F(s)\) and we get \[F(s)=\int_{0}^{1}\Big{[}\ln\left(\frac{1-e^{-x}}{x}\right)-\sum_{k=1}^{n(s)}c _{k}x^{k}\Big{]}\frac{dx}{x^{1+s}}-\frac{1}{s^{2}}+\sum_{k=1}^{n(s)}\frac{1}{ k-s}c_{k} \tag{3.21}\] Subtracting \(S(s,a)\) and removing regularization we get the proof of the Proposition \(1^{\prime}\). ## 4 Analytical renormalization In this section we present the explicit form of the renormalized version of (2.6) after the analytical renormalization. As in Sect.3 we distinguish two cases: integer and non-integer \(s\geq 0\). * For \(s=n\), \(n=0,1,2,...\) the following proposition holds. **Proposition 2.**_The renormalized version of (2.10) after analytical renormalizations defined by (2.12) is given by_ \[\mathcal{I}_{ren}(n)=-\left\{\begin{array}{cc}\frac{(-1)^{n}}{n!}\left[ \zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k}\right)\zeta(1-n) \right],&n=1,2,3...\\ \\ \frac{1}{12}\left(12\gamma_{1}+6\gamma^{2}-\pi^{2}\right),&n=0\end{array}\right. \tag{4.1}\] To prove this Proposition we follow the definitions (12) and take \(s=n-\epsilon\), \(n\neq 0\) and (11) for \(n=0\). 
We have \[\Gamma(-s)\,\zeta(-s+1)=\Gamma(\epsilon-n)\zeta(1-n+\epsilon)\] \[= \frac{(-1)^{n}}{n!}\,\zeta(1-n)\,\frac{1}{\epsilon}+\frac{(-1)^{n }}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k}\right) \zeta(1-n)\right]+\mathcal{O}(\epsilon) \tag{22}\] and for \(n=0\) we have \[-\Gamma(-\epsilon)\zeta(1-\epsilon)=\frac{1}{12}\left(12\gamma_{1}+6\gamma^{2 }-\pi^{2}\right)-\frac{1}{\epsilon^{2}}+\mathcal{O}(\epsilon), \tag{23}\] where \(\gamma\) is the Euler-Mascheroni constant, \(\gamma=0.577\) and \(\gamma_{1}\) is the Stieltjes constant, \(\gamma_{1}=-0.0728\). Subtracting the pole in (22) and double pole in (23) we get the first line and the second line in (21), respectively. **Proposition \(2^{\prime}\).**_The analytical regularization for \(s\neq\mathbb{Z}\) gives directly the finite answer \(\mathcal{I}(s)\)_. The proof follows immediately from the form of \(\mathcal{I}(s)\) given by (11). **Remark.** Note that we have considered here the integral (4) as a whole. However, it should be noted that this integral (4) is equal to the product of the gamma function and the zeta function and in fact the divergences occur only in the gamma function. In this case, it is possible to carry the gamma function renormalization and obtain similar results. In this case instead of the expression (25) we get \[\mathcal{I}_{ren,\Gamma}(n)=\frac{(-1)^{n}}{n!}\left(-\gamma+\sum_{k=1}^{n} \frac{1}{k}\right)\zeta(1-n) \tag{24}\] By using (13) we get \[\mathcal{I}_{ren,\Gamma}(n)=\frac{B_{n}}{n!\,n}\left(\gamma-\sum_{k=1}^{n} \frac{1}{k}\right) \tag{25}\] ## 5 Equivalence of cut-off and analytical renormalizations In this Section we prove that the renormalized free energies defined by the cut-of renormalization (9) and the analytical regularization (12)-(14) coincide. We distinguish three cases: \(s=n\neq 0\), \(s=0\) and \(s\neq 0,n\in\mathbb{Z}_{+}\). **Proposition 3**. _The minimal renormalized free energy (9) for \(s=n\neq 0\) and the analytic renormalized free energy (12) coincide_ \[I_{ren}(n)=\mathcal{I}_{ren}(n). \tag{26}\] _Explicitly (5.1) means the validity of the following identity_ \[\int_{1}^{\infty}\frac{1}{x^{n+1}}\ln\Big{(}1-e^{-x}\Big{)}dx+\int_ {0}^{1}\frac{1}{x^{n+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x}\Big{)}-\sum_{k=1}^{ n}c_{k}x^{k}\Big{]}dx\] \[\qquad-\frac{1}{n^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-n}\] \[=\,\frac{(-1)^{n}}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+ \sum_{k=1}^{n}\frac{1}{k}\right)\zeta(1-n)\right],\quad n=1,2,..., \tag{5.2}\] _for_ \[c_{k}=-\frac{(-1)^{k}}{k!}\,\zeta(1-k)=\frac{B_{k}}{k\,k!},\qquad k=1,2,3,...\,. \tag{5.3}\] **Proof.** Let us consider the function \(\psi(n,s)\) of \(s\)-variable depending on the integer parameter \(n\), \(n>0\), defined for \(\mathfrak{R}s<n+1\) as \[\psi(n,s) = -\frac{1}{s^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-s}+\int_{1}^{ \infty}\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}dx \tag{5.4}\] \[+ \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x} \Big{)}-\sum_{k=1}^{n}c_{k}x^{k}\Big{]}dx.\] According Proposition 1, \[\psi(n,n)=I_{ren}(n). \tag{5.5}\] For \(s<0\) the integral \[\int_{0}^{1}\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}\,dx \tag{5.6}\] converges and after rearrangement of the terms in the RHS of (5.4) we can rewrite \(\psi(n,s)\) as \[\psi(n,s)=H(n,s)-\,\Gamma(-s)\,\zeta(-s+1)-T(n,s),\quad s<0, \tag{5.7}\] where \[H(n,s) = -\frac{1}{s^{2}}+\sum_{k=1}^{n-1}\frac{c_{k}}{k-s},\] \[T(n,s) = \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln x+\sum_{k=1}^{n}c_{k}x^{ k}\Big{]}dx. 
\tag{5.8}\] Evaluating \(T(n,s)\) for \(\mathfrak{R}s<0\) we get \[T(n,s)=-\frac{1}{s^{2}}+\sum_{k}^{n}\frac{c_{k}}{k-s} \tag{5.9}\] and the RHS of (5.4) becomes equal to \[-\Gamma(-s)\,\zeta(-s+1)-\frac{c_{n}}{n-s}, \tag{5.10}\] that is meromorphic function of variable \(s\) on whole \(\mathbb{C}\). This function from one side due to equation (5.5) for \(s=n\) coincides with \(I_{ren}\) and from other side can be evaluated in the following way. First note that the pole in (5.10) is exactly the pole that has to be subtracted in the analytical renormalization defined in (2.12). For this purpose we take \(s=n-\epsilon\) and consider \(\Gamma(-s)\,\zeta(-s+1)\) for small \(\epsilon\). Due to (B.8), see Appendix B we have \[\Gamma(-s)\,\zeta(-s+1)=\Gamma(\epsilon-n)\zeta(1-n+\epsilon)\] \[=\frac{(-1)^{n}}{n!}\,\zeta(1-n)\,\frac{1}{\epsilon}+\frac{(-1)^ {n}}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k} \right)\zeta(1-n)\right]+\mathcal{O}(\epsilon) \tag{5.11}\] Therefore, to check that the pole in (5.10) is exactly the pole that we have in (5.11), we have to check that \[c_{n}=-\frac{(-1)^{n}}{n!}\,\zeta(1-n) \tag{5.12}\] The proof of equation (5.12) follows from representation of \(\zeta(-n)\) in term of the Bernoulli numbers, see (B.1) in Appendix B, we have \[\zeta(1-k)=\frac{(-1)^{1-k}\,B_{k}}{k}. \tag{5.13}\] Due to (5.13) the RHS of (5.12) is \[-\frac{(-1)^{n}}{n!}\,\zeta(1-n)=-\frac{(-1)^{n}}{n!}\,\frac{(-1)^{1-n}\,B_{n }}{n}=\frac{1}{n\,n!}\,B_{n} \tag{5.14}\] and the obtained expression coincides with definition (3.9) of \(c_{k}\). Also from (5.11) we get \[\mathcal{I}_{ren}(n)=\frac{(-1)^{n}}{n!}\left[\zeta^{\prime}(1-n)+\left(- \gamma+\sum_{k=1}^{n}\frac{1}{k}\right)\,\zeta(1-n)\right] \tag{5.15}\] **Proposition \(3^{\prime}\)**. _The minimal renormalized free energy (3.1) and the analytic renormalized free energy (2.14) coincide,_ \[I_{ren}(s)=\mathcal{I}(s),\quad\text{for}\quad s>0\quad\text{and}\quad s\neq n \in\mathbb{Z}_{*}. \tag{5.16}\] _Explicitly (5.16) means the validity of the following identity_ \[\int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x} \Big{)}-\sum_{k=1}^{n(s)}c_{k}x^{k}\Big{]}dx+\int_{1}^{\infty}\frac{1}{x^{s+1}} \ln\Big{(}1-e^{-x}\Big{)}dx\] \[\qquad-\frac{1}{s^{2}}+\sum_{k=1}^{n(s)}\frac{c_{k}}{k-s}\] \[=-\,\Gamma(-s)\,\zeta(-s+1),\quad n(s)=E(s)-\text{the integer part of number}\,s. \tag{5.17}\] To prove the identity (5.17) we consider the function \(\psi(n,s)\), \(s<n+1\), \[\psi(n,s) = -\frac{1}{s^{2}}+\sum_{k=1}^{n}\frac{c_{k}}{k-s}+\int_{1}^{\infty }\frac{1}{x^{s+1}}\ln\Big{(}1-e^{-x}\Big{)}dx \tag{5.18}\] \[+ \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln\Big{(}\frac{1-e^{-x}}{x} \Big{)}-\sum_{k=1}^{n}c_{k}x^{k}\Big{]}dx.\] From the Proposition 1\({}^{\prime}\) we see that \[\psi(n(s),s)=I_{ren}(s). \tag{5.19}\] From other site, for \(\psi(n,s)\) at \(\mathfrak{R}s<0\) we can write the representation \[\psi(n,s)=-\,\Gamma(-s)\,\zeta(-s+1),\quad\mathfrak{R}s<0. \tag{5.20}\] Indeed, for \(s<0\) we rearrange the terms in (5.18) and get \[\psi(n,s)=H(s,n)-\,\Gamma(-s)\,\zeta(-s+1)-T(n,s), \tag{5.21}\] where \[H(n,s) = -\frac{1}{s^{2}}+\sum_{k=1}^{n}\frac{c_{k}}{k-s},\] \[T(n,s) = \int_{0}^{1}\frac{1}{x^{s+1}}\Big{[}\ln x+\sum_{k=1}^{n}c_{k}x^{k }\Big{]}dx. \tag{5.22}\] Evaluating \(T(n,s)\) for \(\mathfrak{R}s<0\) we get \[T(n,s)=-\frac{1}{s^{2}}+\sum_{k}^{n}\frac{c_{k}}{k-s} \tag{5.23}\] and \(T(n,s)\) compensates \(H(n,s)\) and we get (5.20). From the uniqueness of analytical continuation we get (5.17). 
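The identity (5.2) can also be verified numerically. The sketch below is an illustrative check (not part of the paper, and using the mpmath library) of Proposition 3 for \(n=3\): it evaluates the cut-off expression of Proposition 1 and compares it with the analytically renormalized value \(\mathcal{I}_{ren}(3)=\frac{1}{6}\zeta^{\prime}(-2)\). To avoid numerical cancellation near \(x=0\), the first integral is split at an arbitrary small point \(a\), and on \((0,a)\) the bracket is replaced by its series \(\sum_{k>n}c_{k}x^{k}\).

```python
# Illustrative numerical check of Proposition 3 for n = 3 (not from the paper):
# the cut-off value I_ren(3) of Proposition 1 should equal zeta'(-2)/6.
from mpmath import mp, quad, log, exp, bernoulli, factorial, zeta, inf, mpf

mp.dps = 30
n = 3
c = lambda k: bernoulli(k) / (k * factorial(k))        # c_k = B_k / (k * k!), eq. (3.9)

def bracket(x):
    # ln((1 - e^{-x})/x) - sum_{k=1}^{n} c_k x^k, the bracket of Proposition 1
    return log((1 - exp(-x)) / x) - sum(c(k) * x**k for k in range(1, n + 1))

a = mpf('1e-3')   # arbitrary split point
# On (0, a) the bracket equals sum_{k>n} c_k x^k, so that part of the integral
# is sum_{k>n} c_k a^{k-n} / (k - n); a handful of terms is enough here.
head = sum(c(k) * a**(k - n) / (k - n) for k in range(n + 1, n + 8))
tail = quad(lambda x: bracket(x) / x**(n + 1), [a, 1])
const = -mpf(1) / n**2 + sum(c(k) / (k - n) for k in range(1, n))
last = quad(lambda x: log(1 - exp(-x)) / x**(n + 1), [1, inf])

I_cutoff = head + tail + const + last
I_analytic = zeta(-2, derivative=1) / 6                # zeta'(-2)/6
print(I_cutoff)    # approximately -0.0050747...
print(I_analytic)  # approximately -0.0050747...
```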
Examples In this Section we consider few examples of specific values of \(d,\,D,\,\alpha\) which provide the Bose gas interpretations of the Schwarzschild black hole thermodynamics. For any \(D=4,5,6,..\) and \(d=1,2,3,...\), we set \(\alpha=d/(2-D)\). Using (6) we get \[F_{BG,ren}=\frac{\Omega_{d-1}}{\alpha}\left(\frac{L}{2\pi}\right)^{d}\lambda^{D -2}\,I_{ren}(D-2)\,\beta^{D-3}. \tag{10}\] Considering the equation (108)-(110) we obtain the following expressions for the Bose gas free energy * \(-\frac{d}{\alpha}=2\). In this case \(D=4\) and according (10) \[\mathcal{I}_{ren}(2)=\frac{1}{48}(24\log(A)+1-2\gamma)=0.121,\] (11) therefore, \[F_{BG,ren}=-\frac{2\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{2 }\,I_{ren}(2)\beta=-0.242\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^ {d}\,\lambda^{2}\,\beta.\] (12) This case is not suitable for us since it gives negative entropy. * \(-\frac{d}{\alpha}=3\). In this case \(D=5\) and according (10) \[\mathcal{I}_{ren}(3)=\frac{1}{6}\zeta^{\prime}(-2)=-0.00507,\] (13) therefore we have \[F_{BG,ren}=-\frac{3\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda^{ 3}\,I_{ren}(3)\,\beta^{2}=0.0152\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi} \right)^{d}\,\lambda^{3}\,\beta^{2}.\] (14) * \(-\frac{d}{\alpha}=4\). In this case \(D=6\) and according (110) \[\mathcal{I}_{ren}(4)=\frac{-1440\zeta^{\prime}(-3)-25+12\gamma}{34560}=-0.000747,\] (15) therefore we have \[F_{BG,ren}=-\frac{4\,\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\,\lambda ^{4}\,I_{ren}(4)\,\beta^{3}=0.00299\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi }\right)^{d}\,\lambda^{4}\,\beta^{3}.\] (16) From the consideration above, we see that among the listed cases, only in the cases \(D=5,6\) we obtain a positive value of the corresponding entropy. * \(D=5\). In this case according (14) we have \[S_{BG,ren}=0.0304\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\, \lambda^{3}\,\beta^{3}.\] (17) Here \(d=1,2,3,...\) and the corresponding \(\alpha\) takes values \(-1/3,-2/3,-1,....\). * \(D=6\). In this case according (6.7) we have \[S_{BG,ren}=0.00896\,\frac{\Omega_{d-1}}{d}\left(\frac{L}{2\pi}\right)^{d}\lambda^ {4}\,\beta^{4}.\] (6.9) Here \(d=1,2,3,...\) and the corresponding \(\alpha\) takes values \(-1/4,-1/2,-3/4,...\). Let us note, that in the case of renormalization (4.5) we get \[F_{BG,ren,G}(D)=\frac{\Omega_{d-1}}{\alpha}\left(\frac{L}{2\pi}\right)^{d}\, \lambda^{D-2}\,I_{ren,G}(D-2)\,\beta^{D-3}. \tag{6.10}\] Since \(\alpha<0\) the sign of (6.10) is opposite to the sign of \(I_{ren,G}(D-2)\) and according to (4.5) the sign of \(F_{BG,ren,G}\) is defined by the Bernoulli number, i.e. \(F_{BG,ren,G}(D)<0\) for \(D=4k\) and \(F_{BG,ren,G}(D)>0\) for \(D=4k+2\), \(k=1,2,3\). For odd dimensions \(F_{BG,ren,G}(D)=0\). ## 7 Conclusion In this paper the Schwarzschild black hole thermodynamics is modeled by the Bose gas statistical system. It is shown that the Schwarzschild black hole in \(D=5\) and \(D=6\) space-time dimensions correspond to the Bose gas with one-particle energy \(\varepsilon(k)=\lambda\,k^{\alpha}\) in \(d\) dimensional space with \(d/\alpha=-3\) and \(d/\alpha=-4\), respectively. Divergences in these Bose gas models are discussed. It is shown that the cut-off minimal subtraction renormalization scheme is equivalent to the analytical renormalization.This method does not work for the case of \(D=4\) Schwarzschild black hole, which corresponds to the Bose gas in \(d=-4\) negative dimension as it has been shown in the previous paper [3]. 
The microscopic statistical mechanics description of the Schwarzschild black hole thermodynamics suggested in this and the previous paper uses negative dimensions or renormalizations of the Bose gas models. It would be interesting to obtain a similar microscopic description of more general black holes including the Reissner-Nordstrom, Kerr and other black holes. These models also violate the third law of thermodynamics, so it is natural to expect that the corresponding statistical mechanics models will also have unusual properties. ## Acknowledgments We would like to thank D. Ageev, V. Berezin, V. Frolov, M. Khramtsov, K. Rannu, P. Slepov, A. Teretenkov, A. Trushechkin and V. Zagrebnov for fruitful discussions. This work is supported by the Russian Science Foundation (project 19-11-00320, V.A. Steklov Mathematical Institute). ## Appendix A Bernoulli numbers and \(c_{k}\) Differentiating (3.8) one has \[\frac{1}{e^{x}-1}=\frac{1}{x}+\sum_{k=1}^{\infty}kc_{k}\,x^{k-1}.\] (A.1) Comparing (A.1) with the generating function for the Bernoulli numbers \[\frac{x}{e^{x}-1}=\sum_{k=0}^{\infty}B_{k}\frac{x^{k}}{k!},\] (A.2) we see that \[kc_{k}=\frac{B_{k}}{k!},\] (A.3) which gives (3.9). ## Appendix B Values of \(\zeta\) and \(\Gamma\) functions Here we present some known facts about gamma and zeta functions [9]. One has \[\zeta(-n)=\frac{(-1)^{n}B_{n+1}}{n+1},\quad n=1,2,3,...\] (B.1) where \(B_{n}\) are the Bernoulli numbers defined by the generating function (A.2). \[\zeta(-1)=-\frac{1}{12};\qquad\zeta^{\prime}(-1)=\frac{1}{12}-\ln A.\] (B.2) For \(n\in\mathbb{N}\): \[\zeta^{\prime}(-2n) = (-1)^{n}\frac{(2n)!}{2(2\pi)^{2n}}\,\zeta(2n+1)\] (B.3) \[-\zeta^{\prime}(1-2n) = \left.\left(2(2\pi)^{-s}\Gamma(s)\,\zeta(s)\right)^{\prime}\right|_{s=2n}\,\cos\left(\pi\,n\right)\] (B.4) \[\zeta^{\prime}(1-2n) = (-1)^{n+1}\frac{2\,\Gamma(2n)}{(2\pi)^{2n}}\Big{[}\left(-\log(2\pi)+\psi(2n)\right)\zeta(2n)+\zeta^{\prime}(2n)\Big{]},\] here \(\psi\) is the digamma function \[\psi(s)=\frac{\Gamma^{\prime}(s)}{\Gamma(s)}.\] For the \(\Gamma\)-function we have \[\frac{\Gamma(z-n)}{\Gamma(1+z)}=\frac{(-1)^{n}}{n!}\left(\frac{1}{z}+\sum_{r=0}^{\infty}A_{r}z^{r}\right), \tag{100}\] \[A_{r}=\sum_{k=1}^{n}\binom{n}{k}\frac{(-1)^{k-1}}{k^{r+1}},\qquad A_{0}=\sum_{k=1}^{n}\frac{1}{k}. 
\tag{101}\] Therefore, we have \[\Gamma(\epsilon-n) = \Gamma(1+\epsilon)\,\frac{(-1)^{n}}{n!}\left(\frac{1}{\epsilon}+ A_{0}+\mathcal{O}(\epsilon)\right) \tag{102}\] \[= \frac{(-1)^{n}}{n!}\left(\frac{1}{\epsilon}-\gamma+\sum_{k=1}^{n} \frac{1}{k}\right)+\mathcal{O}(\epsilon)\] and \[\Gamma(\epsilon-n)\zeta(1-n+\epsilon) \tag{103}\] \[= \frac{(-1)^{n}}{n!}\,\zeta(1-n)\,\frac{1}{\epsilon}+\frac{(-1)^{n }}{n!}\left[\zeta^{\prime}(1-n)+\left(-\gamma+\sum_{k=1}^{n}\frac{1}{k} \right)\,\zeta(1-n)\right]+\mathcal{O}(\epsilon)\] **Particular cases of (103)** \[n=0: -\Gamma(\epsilon)\zeta(1+\epsilon)=-\frac{1}{\epsilon^{2}}+\frac{ 1}{12}\left(12\gamma_{1}+6\gamma^{2}-\pi^{2}\right)+\mathcal{O}(\epsilon) \tag{104}\] \[n=1: -\Gamma(\epsilon-1)\zeta(\epsilon)=-\frac{1}{2\epsilon}+\frac{1}{ 2}(-1+\gamma-\log(2\pi))+\mathcal{O}(\epsilon)\] (105) \[n=2: -\Gamma(\epsilon-2)\zeta(\epsilon-1)=\frac{1}{24\epsilon}+\frac{ 1}{48}(24\log(A)+1-2\gamma)+\mathcal{O}(\epsilon)\] (106) \[n=3: -\Gamma(\epsilon-3)\zeta(\epsilon-2)=\frac{1}{6}\zeta^{\prime}(- 2)+\mathcal{O}(\epsilon)\] (107) \[n=4: -\Gamma(\epsilon-4)\zeta(\epsilon-3)=-\frac{1}{2880\epsilon}+ \frac{-1440\zeta^{\prime}(-3)-25+12\gamma}{34560}+\mathcal{O}(\epsilon)\] (108) \[n=5: -\Gamma(\epsilon-5)\zeta(\epsilon-4)=\frac{1}{120}\zeta^{\prime}( -4)+\mathcal{O}(\epsilon). \tag{109}\] Here \(\gamma\) is the Euler-Mascheroni constant, \[\gamma=0.577..., \tag{110}\] \(\gamma_{1}\) is the Stieltjes constant, \[\gamma_{1}=-0.0728... \tag{111}\] and \(A\) is the Glaisher constant \[A=1.28.... \tag{101}\] For \({\cal I}_{ren}\) we have \[n=0: {\cal I}_{ren}(0)=\frac{1}{12}\left(12\gamma_{1}+6\gamma^{2}-\pi^{2} \right)=-0.728694 \tag{102}\] \[n=1: {\cal I}_{ren}(1)=\frac{1}{2}(-1+\gamma-\log(2\pi))=-1.13033\] (103) \[n=2: {\cal I}_{ren}(2)=\frac{1}{48}(24\log(A)+1-2\gamma)=0.12116\] (104) \[n=3: {\cal I}_{ren}(3)=\frac{1}{6}\zeta^{\prime}(-2)=-0.00507474\] (105) \[n=4: {\cal I}_{ren}(4)=\frac{-1440\zeta^{\prime}(-3)-25+12\gamma}{3456 0}=-0.000747065\] (106) \[n=5: {\cal I}_{ren}(5)=\frac{1}{120}\zeta^{\prime}(-4)=0.0000665318 \tag{107}\]
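The decimal values listed above for \({\cal I}_{ren}(n)\), \(n=0,\ldots,5\), are straightforward to reproduce; the snippet below is an illustrative check using the mpmath library (its built-in constants euler, stieltjes and glaisher play the roles of \(\gamma\), \(\gamma_{1}\) and \(A\)).

```python
# Illustrative reproduction (not part of the paper) of the listed values of I_ren(n).
from mpmath import mp, euler, stieltjes, glaisher, pi, log, zeta

mp.dps = 15
g, g1, lnA = euler, stieltjes(1), log(glaisher)
zp = lambda s: zeta(s, derivative=1)              # zeta'(s)

I_ren = {
    0: (12*g1 + 6*g**2 - pi**2) / 12,             # -0.728694
    1: (-1 + g - log(2*pi)) / 2,                  # -1.13033
    2: (24*lnA + 1 - 2*g) / 48,                   #  0.12116
    3: zp(-2) / 6,                                # -0.00507474
    4: (-1440*zp(-3) - 25 + 12*g) / 34560,        # -0.000747065
    5: zp(-4) / 120,                              #  0.0000665318
}
for k, v in I_ren.items():
    print(k, v)
```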
2309.16619
Cubical Approximation for Directed Topology II
The paper establishes an equivalence between localizations of (diagrams of) cubical sets and (diagrams of) directed topological spaces by those maps defining (natural) cubical homotopy equivalences after application of the directed singular functor and a directed analogue of fibrant replacement. This equivalence both lifts and extends an equivalence between classical homotopy categories of cubical sets and topological spaces. Some simple applications include combinatorial descriptions and subsequent calculations of directed homotopy monoids and directed singular 1-cohomology monoids. Another application is a characterization of isomorphisms between small categories up to zig-zags of natural transformations as directed homotopy equivalences between directed classifying spaces. Cubical sets throughout the paper are taken to mean presheaves over the minimal symmetric monoidal variant of the cube category. Along the way, the paper characterizes morphisms in this variant as the interval-preserving lattice homomorphisms between finite Boolean lattice and describes some of the test model structure on presheaves over this variant.
Sanjeevi Krishnan
2023-09-28T17:19:04Z
http://arxiv.org/abs/2309.16619v1
# Cubical approximation for directed topology II ###### Abstract. The paper establishes an equivalence between localizations of (diagrams of) cubical sets and (diagrams of) directed topological spaces by those maps defining (natural) cubical homotopy equivalences after application of the directed singular functor and a directed analogue of fibrant replacement. This equivalence both lifts and extends an equivalence between classical homotopy categories of cubical sets and topological spaces. Some simple applications include combinatorial descriptions and subsequent calculations of directed homotopy monoids and directed singular 1-cohomology monoids. Another application is a characterization of isomorphisms between small categories up to zig-zags of natural transformations as directed homotopy equivalences between directed classifying spaces. Cubical sets throughout the paper are taken to mean presheaves over the minimal symmetric monoidal variant of the cube category. Along the way, the paper characterizes morphisms in this variant as the interval-preserving lattice homomorphisms between finite Boolean lattice and describes some of the test model structure on presheaves over this variant. ###### Contents * 1 Introduction * 2 Conventions * 3 Directed Spaces * 3.1 Continuous * 3.2 Cubical * 3.3 Comparisons * 3.4 Cubcats * 4 Homotopy * 4.1 Abstract * 4.2 Continuous * 4.3 Cubical * 4.4 Algebraic * 4.5 Comparisons * 5 Conclusion ## 1. Introduction State spaces, which include classifying spaces of monoids and more general homotopy colimits of dynamical systems [39, 69] as well as spacetimes, often admit extra directed structure encoding causal relationships between states. Examples of such structure include time-orientations and more general cosheaves of preorders. The qualitative behavior of a complex system often corresponds to features of a directed topological state space \(X\) invariant under continuous deformations on \(X\) that respect the given directionality. To that end, a basic goal is a formula for the set \([X,Y]\) of directed maps \(X\to Y\) between directed topological spaces up to a directed homotopy relation, directly in terms of some combinatorical description of \(X\) and \(Y\). Such a formula yields methods of calculation for representable directed cohomology theories (eg. [46, SS7]), with applications to the formal validation of critical software (eg. [20]) and homological algebra for monoids [13, 59, 37]. A general enough formula in effect gives an intrinsically combinatorial directed homotopy theory, with applications to the semantics of programming languages (eg. [29, 50, 64], see also SS5.) Combinatorial descriptions of directed topological spaces often take the form of _cubical sets_. In the case \(X\) and \(Y\) are _directed realizations_ of respective cubical sets \(A\) and \(B\), a desired formula for \([X,Y]\) exists in the literature under each of the following conditions: 1. \(A\) is \(1\)-dimensional [19, Theorem 4.1] 2. \(A\) is finite and \(B\) satisfies a simplicial-like condition [44, Corollary 8.2] 3. \(B\) is fibrant in the test model structure There are concrete reasons for wanting a formula for \([X,Y]\) under more general conditions than (1)-(3). Condition (1) rules out state spaces \(X\) representing the concurrent execution of more than \(1\) process. 
Condition (2) rules out non-compact directed topological spaces \(X\), like the state spaces of infinitely running but non-looping computations, the representing directed topological spaces of directed cohomology theories (eg. [46, SS7]), or functorial approximations \(\left|\mathsf{sing}\ M\right|\) of spacetimes \(M\) as directed realizations of cubical sets. Condition (3) constrains \(Y\) so much so that \([X,Y]\) is just the set of classical homotopy classes of maps \(X\to Y\) of underlying topological spaces and therefore ignores information about the directionality of \(X\). In short, the three conditions collectively do not cover all possibilities for \(A\) and \(B\) needed to give a completely, intrinsically combinatorial directed homotopy theory on cubical sets. In the search for a general formula, the main challenge is that directed topological spaces almost never decompose into homotopy colimits, with respect to directed homotopy, of simpler directed topological spaces [46, paragraph after Theorem 4.1]. This indecomposability is inextricably tied to the general difficulty of analyzing the global, qualitative behavior of a complex process, such as the execution of an asynchronous microprocessor, the concurrent operation of sequential threads in a computer, or a natural process described by a dynamical system. Small changes in the behavior of a single agent can have dramatic affects on the global behavior of a multi-agent system. Said differently, a seemingly minor local deformation in a directed state space \(X\) can sometimes drastically affect the global deformation type of \(X\). Classical approximations, such as cellular and simplicial approximation (cf. [15, Theorem 12.1]), can be constructed one cell at a time because CW complexes are homotopy colimits, with respect to classical homotopy, of their cells. Directed analogues require much greater delicacy. Intuitively, a general formula should just be that \([X,Y]\) is the set \[[X,Y]=\pi_{0}C^{A}\] of connected components of a mapping cubical set \(C^{A}\) for an extension \(C\) of \(B\) to a cubical set admitting higher algebraic structure generalizing fibrancy (eg. [1, 9, 17, 32]). The desired extension will not generally define a higher category in the standard sense (eg. [9, 17, 32, 40, 68, 67]); the directed singular cubical set of a physically realistic spacetime [55], lacking non-constant non-spacelike curves, cannot admit invertible directed singular cubes witnessing higher associativity and unitality. This paper introduces a _cubcat_ as a cubical set admitting the requisite structure, a cubical set admitting extra operations parametrized by directed maps between topological cubes and compatible composition operations [Definition 3.32]. The main point is that cubcats are directed analogues of fibrant cubical sets. Cubical sets can be replaced by fibrant cubical sets without changing classical homotopy types of topological realizations. Cubical sets can be replaced by cubcats without changing directed homotopy types of directed realizations [Proposition 3.35 and Corollary 4.23]. Fibrant cubical sets model small \(\infty\)-groupoids. Cubcats, which include cubical nerves of small categories [Proposition 3.36], singular cubical sets of directed topological spaces [Proposition 3.35], and, at least up to cubical homotopy equivalence, fibrant cubical sets [Proposition 4.13], model variants of small \((\infty,\infty)\)-categories (cf. 
[9, 17, 32, 40, 68, 67]) interpretable as higher order abstract rewriting systems; associativity and unitality hold not necessarily up to higher isomorphism (reversible higher order rewrites) but instead up to zig-zags of higher morphisms (the congruence defined by higher order rewrite rules). Equivalent classical homotopy categories \(h(\hat{\square})\) and \(h(\mathbf{Top})\) of cubical sets and topological spaces can be constructed by inverting those morphisms defining cubical homotopy equivalences after respective applications of fibrant replacement and the singular functor. Equivalent directed refinements \(d(\hat{\square})\) and \(d(\mathbf{DiTop})\) can be analogously constructed, with cubcat replacement playing the role of fibrant replacement [Corollary 4.27 for \(\mathscr{G}=\star\)]. This latter equivalence of directed homotopy categories in fact extends to an equivariant equivalence between diagrams [Corollary 4.27], whose classical counterpart does not follow from existing, non-algebraic (cf. [63]) Quillen equivalences. The proofs require new techniques. The first is the use of _algebraic_ lifting properties [7, 34, 63], not only against diagrams of morphisms [Lemmas 4.5, 4.6, and 4.7] as was implicitly done in a predecessor to this paper [44] but also against _double diagrams_ of morphisms [Lemma 3.3]. Algebraic lifting properties underlie recent refinements, not used in this paper, of weak factorization systems, the small object argument, and model categories [7, 34, 63]. The second is the use [Lemmas 3.24 and 4.8] of pro-objects to encode some of the data of weak factorization systems; lifts of pro-diagrams indexed by finite acyclic categories [54, SS3] mimic Reedy-cofibrant replacement. These techniques apply in principle to other homotopy theories, including those (eg. [10, 47, 61]) in which homotopy colimit decompositions are also rare. Directed cubical approximation yields immmediate consequences. One consequence is the desired formula [Corollary 4.30], tractable in practice for each tractable cubcat model \(C\) of the codomain \(Y=\mid B\mid\). Special cases include combinatorial descriptions and subsequent calculations of directed homotopy monoids [Corollary 4.28, Example 4.29] and singular directed \(1\)-cohomology monoids [Corollary 4.32]; these latter monoids in particular define functorial, computable, causal and conformal global spacetime invariants [Examples 4.33 and 4.34] (cf. [4, 42]). A localization \(d(\mathbf{Cat})\) of small categories by equivalences up to zig-zags of natural transformations, intermediate in generality between Thomason weak equivalences and categorical equivalences, has been previously studied in the literature [56]. Another consequence is that \(d(\mathbf{Cat})\)-isomorphisms are exactly the directed homotopy equivalences between directed classifying spaces [Corollary 4.26]. The following observation summarizes how directed homotopy both extends and refines classical homotopy theory, as encoded by \(h(\hat{\square}),h(\mathbf{Top})\) as well as classical homotopy categories \(h(\mathbf{Cat}),h(\mathbf{Gpd}),h(\infty\mathbf{Gpd})\) of small categories, small groupoids, and fibrant cubical sets. 
**Theorem**.: _There exists a commutative diagram_ _in which the vertical arrows are induced from forgetful functors, the leftmost horizontal arrows in each row are induced from the cubical nerve, the rightmost horizontal arrows in the top and bottom rows are induced from realization functors, and the diagonal arrows pointing towards the left are induced from inclusions. Functors denoted as \(\hookrightarrow\) are fully faithful. Functors denoted as \(\twoheadrightarrow\) are essentially surjective. Functors are categorical equivalences if and only if they are labelled with \(\simeq\)._ Along the way, the category \(\square\) of cubes is enlarged from the usual minimal variant to the minimal symmetric monoidal variant. The change in setting makes it possible to explicitly characterize the \(\square\)-morphisms as the interval-preserving lattice homomorphisms between finite Boolean lattices [Theorem 3.10]. An application is an order-theoretic construction of cubical edgewise subdivision analogous to the usual order-theoretic construction of simplicial barycentric subdivision [Propositions 3.13 and 3.17]. Several of the main results likely bootstrap to larger variants of \(\square\) that, for example, include coconnections of one or both kinds. To this end, various observations are recorded for a general class of such variants [Propositions C.1 and C.4]. **Organization**.: After fixing some conventions in SS2, point-set theories of directed topological spaces, cubical sets, and cubcats are recalled, introduced, and compared in SS3. Homotopy theories, classical, directed, and categorical, are then compared in SS4. The main results Figure 1. **Equivalence as different categorical structures**. The directed graphs above freely generate equivalent groupoids but freely generate mutually inequivalent categories, some of which are nonetheless directed homotopy equivalent to one another. After passage to free categories, the left two directed graphs are directed homotopy equivalent to one another, the right two directed graphs are directed homotopy equivalent to one another, but the left two and the right two are not directed homotopy equivalent to one another. Intuitively, classical equivalences ignore the structure of time in state spaces while categorical equivalences are sensitive to arbitrary subdivisions of time. Directed homotopy sidesteps some of the combinatorial explosion that bedevils geometric models of state spaces sensitive to arbitrary subdivisions in time. Section §4.4 formalizes the different notions of equivalence between small categories. are contextualized within the broader literature in SS5. Some relevant facts about lattices, pro-objects, and test categories are recalled and further developed in SSA, SSB, and SSC. ## 2. Conventions This section first fixes some conventions. Let \(k,m,n,p,q\) denote natural numbers. Let \(\mathbb{I}\) denote the unit interval. Let \(\mathfrak{im}\,f\) denote the image of a function \(f\). Let \(\hookrightarrow\) denote an inclusion of some sort, such as an inclusion of a subset into a set, a subspace into a space, or a subcategory into a category. #### 2.0.1. Categories Let \(\mathscr{X},\mathscr{Y}\) denote arbitrary categories. Let \(\bigcirc,\mathcal{X},\mathcal{Y},\mathscr{G}\) denote small categories. Let \(\star\) denote a terminal object in a given category. For a given monoidal category, let \(\otimes\) denote its tensor product. 
For each object \(o\) in a given closed monoidal category, \(o^{(-)}\) will denote the right adjoint to the endofunctor \(o\otimes-\). Notate special categories as follows. \begin{tabular}{l l l} **Set** & sets (and functions) \\ **Top** & (weak Hausdorff k-)spaces (and continuous functions) \\ **Cat** & small categories (and functors) \\ **Pos** & compact pospaces with connected intervals & SS3.1.1 \\ **DiTop** & (weak Hausdorff k-)streams & SS3.1.2 \\ **Dis** & finite distributive lattices & SS3.2.2 \\ \(\infty\)**Gpd** & fibrant cubical sets in the test model structure & SS4.3.1 \\ \(\square_{1}\) & domain of abstract interval objects & SS3.2.1 \\ \(\square\) & cube category & SS3.2.1 \\ \end{tabular} Write \(\hat{\bigcirc}\) for the category of **Set**-valued presheaves on \(\bigcirc\), the functor category \[\hat{\bigcirc}=\textbf{Set}^{\bigcirc^{\text{op}}}.\] Write \(\bigcirc[-]\) for the Yoneda embedding \(\bigcirc\hookrightarrow\hat{\bigcirc}\). Let \(F/G\) denote the comma category for diagrams \(F,G\) in the same category. For a diagram \(F\) in \(\hat{\bigcirc}\), let \(\bigcirc/F=\bigcirc[-]/F\). Let \(1_{o}\) denote the identity morphism for an object \(o\) in a given category. Write \(\mathfrak{adj}(\zeta)\) for the adjoint to a morphism \(\zeta\) across an adjunction that is clear from context. A functor \(F:\mathscr{X}\to\mathscr{Y}\) is _topological_ if, for each diagram \(D:\mathcal{X}\to\mathscr{X}\), every cone \(x\to FD\) in \(\mathscr{Y}\) admits an initial lift to a cone in \(\mathscr{X}\) along \(F\); topological functors create limits and colimits [6]. A _pointed endofunctor_ is an endofunctor \(E\) on a category \(\mathscr{X}\) equipped with a distinguished natural transformation \(1_{\mathscr{X}}\to E\), denoted by \(\eta\). Dually, a _copointed endofunctor_ is an endofunctor \(E\) on a category \(\mathscr{X}\) equipped with a distinguished natural transformation \(E\to 1_{\mathscr{X}}\), denoted by \(\epsilon\). A category \(\mathscr{X}\) is _cofiltered_ if every finite diagram in \(\mathscr{X}\) has a cone. A _cofiltered limit_ is the limit of a diagram shaped like a cofiltered small category. #### 2.0.2. Diagrams We will sometimes regard diagrams in a category \(\mathscr{X}\) as equivariant versions of \(\mathscr{X}\)-objects. When we do, we adopt the following terminology. We take \(\mathscr{G}\)_-streams_, \(\mathscr{G}\)_-cubical sets_, and \(\mathscr{G}\)_-categories_ to mean \(\mathscr{G}\)-shaped diagrams in the respective categories **DiTop**, \(\hat{\square}\), and **Cat**. We take \(\mathscr{G}\)_-stream maps_, \(\mathscr{G}\)_-cubical functions_, and \(\mathscr{G}\)_-functors_ to mean natural transformations between \(\mathscr{G}\)-streams, \(\mathscr{G}\)-cubical sets, and \(\mathscr{G}\)-categories. #### 2.0.3. Pro-objects Informally, pro-objects are formal cofiltered limits. There exists a _category of pro-objects_\(\textbf{pro-}\mathscr{X}\) in \(\mathscr{X}\), a category having all cofiltered limits together with a full and faithful inclusion \(\mathscr{X}\hookrightarrow\textbf{pro-}\mathscr{X}\), characterized up to categorical equivalence by the property that for each functor \(G\) in the solid diagram there exists a dotted functor, unique up to natural isomorphism, preserving cofiltered limits and making the entire diagram commute. The reader is referred elsewhere [38] for explicit constructions, unnecessary in this paper, of **pro-\(\mathscr{X}\)**. 
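For instance, any tower \(\cdots\to X_{2}\to X_{1}\to X_{0}\) of \(\mathscr{X}\)-objects is a cofiltered diagram and hence determines a pro-object, its formal limit in \(\mathbf{pro}\text{-}\mathscr{X}\); the tower (1) of codegeneracies in §3.2.1, with formal limit \([1]^{\infty}\), is an example used later in the paper.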
For each functor \(F:\mathscr{X}\to\mathscr{Y}\), we also write \(F\) for the extension \(\textbf{pro-}\mathscr{X}\to\textbf{pro-}\mathscr{Y}\), unique up to natural isomorphism, making the diagram above commute when \(\mathscr{M}=\textbf{pro-}\mathscr{Y}\) and \(G=(\mathscr{Y}\hookrightarrow\textbf{pro-}\mathscr{Y})F\). We say that a natural transformation \(\eta:D_{1}\to D_{2}\) between diagrams \(D_{1}\) and \(D_{2}\) in \(\mathscr{X}\) indexed by the same small cofiltered category _represents_ a **pro-\(\mathscr{X}\)**-morphism \(\lim\,D_{1}\to\lim\,D_{2}\) if the latter morphism is induced by \(\eta\). #### 2.0.4. Supports We employ some common notation for _supports_ and _carriers_, like the support of a point in a topological realization or the carrier of a cube in a cubical subdivision. Consider a functor \(F:\mathscr{X}\to\mathscr{Y}\) and \(\mathscr{X}\)-object \(o\) admitting a complete lattice of subobjects. Let \(\text{supp}_{F}(x,o)\) denote the minimal subobject of \(o\) to which \(\zeta\) corestricts, for each \((x/F)\)-object \(\zeta:x\to Fo\). For instance, \(\text{supp}_{|-|}(x,B)\) is the usual support of a point \(x\) in the topological realization \(|B|\) of a simplicial set \(B\), the minimal subpresheaf \(A\subset B\) with \(x\in|A|\). #### 2.0.5. Relations A binary relation \(R_{X}\) on a set \(X\) is the data of the set \(X\) and its _graph_, a subset of \(X^{2}\) denoted as \(graph(R_{X})\). For each binary relation \(R_{X}\) on a set \(X\), write \(x\,R_{X}\,y\) if \((x,y)\in graph(R_{X})\). A binary relation \(R_{X}\) on a set \(X\) is _reflexive_ if \(x\,R_{X}\,x\) for all \(x\in X\), _transitive_ if \(x\,R_{X}\,z\) whenever \(x\,R_{X}\,y\) and \(y\,R_{X}\,z\), _antisymmetric_ if \(x=y\) whenever \(x\,R_{X}\,y\) and \(y\,R_{X}\,x\), and _total_ if for each pair \(x,y\in X\), \(x\,R_{X}\,y\) or \(y\,R_{X}\,x\). A _preorder_ on a set \(P\) is a binary, reflexive, transitive relation on \(P\). A _partial order_ on a set \(P\) is an antisymmetric preorder on \(P\). The _lexicographic order_ on all finite sequences in \(\mathbb{N}\) is the total order \(\leqslant_{\text{lex}}\) on such sequences defined by \((s_{1},\dots,s_{m})\leqslant_{\text{lex}}(t_{1},t_{2},\dots,t_{n})\) if \(m\leqslant n\) and \(s_{i}=t_{i}\) for all \(1\leqslant i\leqslant m\) or there exists \(1\leqslant j\leqslant\min(m,n)\) such that \(s_{j}<t_{j}\) and \(s_{i}=t_{i}\) for all \(1\leqslant i<j\). #### 2.0.6. Preordered Sets A _preordered set_ is a set \(P\) equipped with a preorder, which we denote as \(\leqslant_{P}\), on it. Preordered sets \(P\) will be regarded as small categories with object set given by the underlying set of \(P\) and with one morphism \(x\to y\) precisely when \(x\leqslant_{P}y\). A _poset_ is a set equipped with a partial order on it or equivalently a skeletal preordered set. The _minimum_ and _maximum_ of a poset \(P\) are the unique initial and final objects of \(P\), respectively denoted by \(\min\,P\) and \(\max\,P\) whenever such unique objects exist. The minimum and maximum of a poset having initial and final objects are the _extrema_ of \(P\). A _subposet_\(P\) of a poset \(Q\) is a poset \(P\) that is full as a subcategory of \(Q\). A subposet \(P\) of a poset \(Q\) is... 1. _...order-convex in \(Q\)_ if \(y\in P\) whenever \(x\leqslant_{Q}y\leqslant_{Q}z\) and \(x,z\in P\) 2....an _interval in \(Q\)_ if it is order-convex and has both a minimum and maximum 3....a _chain in \(Q\)_ if \(\leqslant_{P}\) is total. 
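For a small illustration of these notions, consider the product poset \(\{0,1\}^{2}\), ordered coordinatewise (written \([1]^{2}\) in §2.0.7 below): the subposet \(\{(0,0),(0,1)\}\) is both a chain and an interval; the whole poset is an interval but not a chain; and the subposet \(\{(0,0),(1,1)\}\) is a chain but not order-convex, hence not an interval, since it omits the intermediate elements \((0,1)\) and \((1,0)\).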
In a poset \(P\), an element \(z\) is an _immediate successor_ to an element \(x\) if \(x\leqslant_{P}z\) and \(x=y\) or \(y=z\) whenever \(x\leqslant_{P}y\leqslant_{P}z\). In a poset, categorical products are called _infima_ and categorical coproducts are called _suprema_. In a poset \(P\), write \(x\vee_{P}y\) for the unique join of \(x,y\) if it exists and \(x\wedge_{P}y\) for the unique meet of \(x,y\) if it exists. A _monotone function_ is a functor between preordered sets, a function \(\phi:P\to Q\) between preordered sets with \(\phi(x)\leqslant_{Q}\phi(y)\) whenever \(x\leqslant_{P}y\). A monotone function \(\phi:P\to Q\) of posets is _order-convex_ if images of order-convex subposets in \(P\) under \(\phi\) are order-convex subposets in \(Q\). #### 2.0.7. Lattices A _lattice_ is always taken in the order-theoretic sense to mean a poset having all binary infima and binary suprema. A lattice is _complete_ if it is complete as a small category, or equivalently if it has all infima, or equivalently if it has all suprema. A lattice is _distributive_ if \(x\wedge_{L}(y\vee_{L}z)=(x\wedge_{L}y)\vee_{L}(x\wedge_{L}z)\) for all \(x,y,z\in L\) or equivalently if \(x\vee_{L}(y\wedge_{L}z)=(x\vee_{L}y)\wedge_{L}(x\vee_{L}z)\) for all \(x,y,z\in L\). A _sublattice_ of a lattice \(L\) is a subposet \(K\) of \(L\) such that \(\wedge_{K},\vee_{K}:K^{2}\to K\) are respective restrictions of \(\wedge_{L},\vee_{L}:L^{2}\to L\). Write \(\omega\) for the set of natural numbers equipped with its standard total order. Let \([n]\) denote the subposet \(\{0,1,\dots,n\}\) of \(\omega\). Define functions \[\delta_{\pm} :[0] \to[1] \delta_{\pm}(0)=\nicefrac{{1}}{{2}}\pm\nicefrac{{1}}{{2}}\] \[\sigma :[1] \to[0]\] Henceforth write \([k]^{n}\) for the \(n\)-fold \(\mathbf{Cat}\)-product of \([k]\). A poset is _Boolean_ if it is \(\mathbf{Cat}\)-isomorphic to a power set, regarded as a poset under inclusion. A monotone function \(\phi:L\to M\) of finite lattices _preserves (Boolean) intervals_ if images of (Boolean) intervals in \(L\) under \(\phi\) are (Boolean) intervals in \(M\). **Example 2.1**.: The finite Boolean lattices are, up to \(\mathbf{Cat}\)-isomorphism, \[[0],[1],[1]^{2},[1]^{3},\dots\] Every interval in a Boolean lattice is Boolean. A _lattice homomorphism_ is a function \(\phi:L\to M\) between lattices preserving binary suprema and binary infima. #### 2.0.8. Constructions For reference, we list certain constructions defined throughout. \begin{tabular}{l l l} \(\mathfrak{so}_{k+1}\) & subdivisions & §3.2.2, §3.2.3 \\ \(\mathfrak{ev}_{k+1}\) & right adjoint to \(\mathfrak{so}_{k+1}\) & §3.2.3 \\ \(|-|\) & topological realizations & §3.3 \\ \(|-|\) & directed realizations & §3.3 \\ \(\mathfrak{sing}\) & directed cubical singular functor & §3.3 \\ \(\mathfrak{net}\) & cubical nerves & §3.2.3 \\ \(\mathrm{T}_{1}\) & fundamental category & §3.2.3 \\ \(\Pi_{1}\) & fundamental groupoid & §3.2.3 \\ \(\pi_{0}\) & path-components & §4.2.1 \\ \(\Omega^{n}\) & \(n\)-fold directed loop space & §3.2.3 \\ \(\tau_{n}\) & \(n\)th directed homotopy monoid & §3.2.3 \\ \(\mathfrak{d}\) & cannonical interval object in \(\dot{\square}\) & §4.1 \\ \(\mathfrak{h}\) & interval object in \(\mathbf{DiTop}\) that defines h-homotopy & §4.2.2 \\ \([-,-]_{\mathrm{i}}\) & homotopy classes with respect to interval object \(\mathrm{i}\) & §4.1 \\ \(H^{1}\) & cubical 1-cohomology & §4.3.1, §4.3.2 \\ \end{tabular} ## 3. Directed Spaces Directed spaces can be modelled topologically and combinatorially. 
This section recalls topological models, presheaf models, and comparison functors between them. _Streams_ provide topological models of directed spaces. _Cubical sets_, presheaves over a particular variant of the cube category, provide combinatorial models of directed spaces. _Cubcats_ are a mixture of topological and combinatorial formalisms. Streams can be constructed from cubical sets as _directed realizations_. Novel material includes a double algebraic lifting property of compact topological distributive lattices [Lemma 3.3], a characterization of morphisms in the cube category [Theorem 3.10], a subsequent order-theoretic construction of cubical subdivision [SS3.2.2 and Proposition 3.17], a lifting lemma for directed singular cubes [Lemma 3.31], and the entire theory of cubcats [SS3.4]. ### Continuous Directed spaces are modelled topologically in this paper as _streams_. An alternative topological model for directed spaces, common in the literature and essentially interchangeable with streams as foundations for directed homotopy, are _d-spaces_[30]. An advantage of a stream-theoretic foundation for directed topology is that it naturally subsumes some of the theory of pospaces, whose point-set theory is well-developed in the literature [48]. #### 3.1.1. Pospaces A _pospace_ is a poset \(P\) topologized so that \(graph\,(\leqslant_{P})\) is closed in the standard product topology on \(P^{2}\). A _subpospace_ of a pospace \(Q\) is a pospace \(P\) that is at once a subposet and subspace of \(Q\). A _topological lattice_ is a lattice \(L\) topologized so that \(\vee_{L},\wedge_{L}\) are jointly continuous functions \(L^{2}\to L\). The underlying topological spaces of pospaces are necessarily Hausdorff. A _subtopological sublattice_ of a topological lattice \(L\) is a topological lattice that is at once a sublattice and subspace of \(L\). Conversely, topological lattices with Hausdorff underlying topological spaces are pospaces. The following observation is a straightforward combination of observations made elsewhere [[57], [5, Exercise SIV.8 4(b)]]. **Lemma 3.1**.: _Each compact Hausdorff topological lattice is complete as a lattice._ There should exist a continuous evolution between states \(x\leqslant_{P}y\) in a pospace \(P\) of states. We therefore define a category of compact pospaces satisfying such a continuity constraint as follows. A _monotone map_ of pospaces is a function between pospaces that is at once monotone as a function between underlying posets and continuous as a function between underlying topological spaces. Let \(\mathbf{Pos}\) be the concrete category whose objects are those compact pospaces \(P\) such that \(x=z\) if \(x\leqslant_{P}z\) and there does not exist \(y\neq x,z\) in \(P\) such that \(x\leqslant_{P}y\leqslant_{P}z\) and whose morphisms are all monotone maps between them. **Example 3.2**.: Fix \(n\). The \(\mathbf{Pos}\)-object \(\vec{\mathbb{I}}^{n}=\vec{\mathbb{I}}^{\times_{\mathbf{Pos}}n}\), the topological hypercube \(\mathbb{I}^{n}\) with \[(x_{1},x_{2},\ldots,x_{n})\leqslant_{\mathbb{I}^{n}}(y_{1},y_{2},\ldots,y_{n} )\iff y_{1}-x_{1},y_{2}-x_{2},\ldots,y_{n}-x_{n}\geqslant 0,\] is a topological lattice whose underlying space is compact Hausdorff and connected. Every topological lattice whose underlying topological space is compact Hausdorff and connected is a \(\mathbf{Pos}\)-object [24, Proposition VI-5.15]. 
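To illustrate what the condition on \(\mathbf{Pos}\)-objects beyond compactness excludes, consider the two-element pospace \(\{0\leqslant 1\}\) with the discrete topology: it is a compact pospace, but it is not a \(\mathbf{Pos}\)-object because \(0\leqslant 1\) and no third point lies between \(0\) and \(1\). By contrast, every interval in the \(\mathbf{Pos}\)-object \(\vec{\mathbb{I}}\) of Example 3.2 is connected, in line with the description of \(\mathbf{Pos}\) in §2.0.1 as the category of compact pospaces with connected intervals.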
Terminal maps \(L\to\star\) from compact topological (distributive) lattices \(L\) admit the following right lifting property against a (_double_) _diagram_ of certain inclusions of pospaces [7]. **Lemma 3.3**.: _Consider the solid arrows in the left triangle in the diagram_ _where the left vertical inclusion is the inclusion of a closed order-convex subtopological sublattice into a compact Hausdorff topological lattice. There exists a choice of dotted monotone map \(r_{L,M}\) making the left triangle commute such that the middle diagram commutes for all order-convex subtopological sublattices \(L^{\prime}\) and \(L^{\prime\prime}\) of respective compact Hausdorff topological lattices \(M^{\prime}\) and \(M^{\prime\prime}\) and all horizontal arrows making the outer rectangle commute such that the top horizontal arrow is an extrema-preserving monotone map and the bottom horizontal arrow is a continuous lattice homomorphism. If \(L_{3}\) is a distributive compact Hausdorff topological lattice with order-convex subpospaces \(L_{1}\subset L_{2}\) that are also topological lattices, then the right diagram commutes._ Proof.: For a closed, order-convex subtopological sublattice \(L\) of a compact Hausdorff topological lattice \(M\), \(L\) admits both a minimum min \(L\) and maximum max \(L\) [Lemma 3.1] and \(x\in M\) lies in \(L\) if and only if \(\min\,L\leqslant_{M}x\leqslant_{M}\max\,L\) by \(L\) order-convex in \(M\). It is therefore possible to define a monotone map making the left triangle commute by \[r_{L,M}(x)=(\min\,L)\vee_{M}(x\wedge_{M}(\max\,L)).\] The middle diagram commutes when the bottom horizontal arrow commutes with binary suprema and binary infima and the top horizontal arrow preserves extrema. Consider a distributive compact Hausdorff topological lattice \(L_{3}\) with order-convex subpospaces \(L_{1}\subset L_{2}\) that are also topological lattices. For brevity, write \(\bot_{i}\) for min \(L_{i}\) and \(\top_{i}\) for max \(L_{i}\). For each \(x\in L_{3}\), \[r_{L_{1},L_{2}}(r_{L_{2},L_{3}}(x)) =r_{L_{1},L_{2}}(\bot_{2}\vee_{L_{3}}(x\wedge_{L_{3}}\top_{2}))\] \[=\bot_{1}\vee_{L_{2}}((\bot_{2}\vee_{L_{3}}(x\wedge_{L_{3}}\top_{2}))\wedge_{L_{2}}\top_{1})\] \[=(\bot_{1}\vee_{L_{3}}(x\wedge_{L_{3}}\top_{2}))\wedge_{L_{2}}\top_{1}\] \[=(\bot_{1}\vee_{L_{3}}x)\wedge_{L_{3}}\top_{2}\wedge_{L_{2}}\top_{1}\] \[=(\bot_{1}\vee_{L_{2}}x)\wedge_{L_{3}}\top_{1}\] \[=\bot_{1}\vee_{L_{2}}(x\wedge_{L_{3}}\top_{1})\] \[=r_{L_{1},L_{3}}(x)\] from repeated applications of distributivity, idempotency of lattice operations, and \(\bot_{1}\vee_{L_{2}}\bot_{2}=\bot_{1}\) by \(L_{1}\subset L_{2}\), \(\bot_{1}\vee_{L_{3}}\top_{2}=\top_{2}\) by \(L_{1}\subset L_{2}\), and \(\top_{2}\wedge_{L_{3}}\top_{1}=\top_{1}\) by \(L_{1}\subset L_{2}\). Thus the right diagram commutes. #### 3.1.2. Streams A _circulation_ on a topological space \(X\) is a function \[\leqslant:U\mapsto\leqslant_{U}\] assigning to each open subset \(U\subset X\) a preorder \(\leqslant_{U}\) on \(U\) such that \(\leqslant\) sends the union of a collection \(\mathcal{O}\) of open subsets of \(X\) to the preorder with smallest graph containing \(graph\,(\leqslant_{U})\) for each \(U\in\mathcal{O}\) [43]. A _stream_ is a space equipped with a circulation on it [43]. Intuitively, \(x\leqslant_{U}y\) in a state stream whenever a system restricted to the subset \(U\) of states can evolve from \(x\) to \(y\).
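Returning briefly to Lemma 3.3, the following Python sketch (ours, purely illustrative; the particular lattices \(L_{1}\subset L_{2}\subset L_{3}\) are chosen arbitrarily) encodes a small distributive lattice with componentwise meets and joins, records each order-convex sublattice by its extrema, which is all the retraction formula uses, and brute-forces the composite identity \(r_{L_{1},L_{2}}\circ r_{L_{2},L_{3}}=r_{L_{1},L_{3}}\) established in the proof.

```python
from itertools import product

# A small distributive lattice: pairs (i, j) with 0 <= i, j <= 2,
# ordered coordinatewise, so meets/joins are componentwise min/max.
L3 = list(product(range(3), range(3)))

def meet(x, y): return (min(x[0], y[0]), min(x[1], y[1]))
def join(x, y): return (max(x[0], y[0]), max(x[1], y[1]))

def r(L, x):
    """Retraction of Lemma 3.3: r_{L,M}(x) = (min L) v (x ^ (max L)).
    Here L is recorded by its extrema (bot, top), all the formula depends on."""
    bot, top = L
    return join(bot, meet(x, top))

# Order-convex sublattices L1 of L2 of L3, recorded by their extrema.
L2 = ((0, 0), (1, 2))
L1 = ((1, 1), (1, 2))

# The composite identity from the proof: r_{L1,L2} o r_{L2,L3} = r_{L1,L3}.
assert all(r(L1, r(L2, x)) == r(L1, x) for x in L3)
```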
**Example 3.4**.: Every topological space admits an _initial circulation_ \(\leqslant\) defined by \[x\leqslant_{U}y\iff x=y\in U\] A continuous function \(f:X\to Y\) of streams is a _stream map_ if \(f(x)\leqslant_{U}f(y)\) whenever \(x\leqslant_{f^{-1}U}y\) for each open subset \(U\) of \(Y\) [43]. A _k-space_ is a colimit of compact Hausdorff spaces in the category of topological spaces and continuous functions. Similarly, a _k-stream_ is a colimit of compact Hausdorff streams in the category of streams and stream maps [43]. The underlying space of a k-stream is a k-space [43, Proposition 5.8]. A topological space \(X\) is _weak Hausdorff_ if images of compact Hausdorff spaces in \(X\) are Hausdorff. **Theorem 5.4**, [43].: _Locally compact Hausdorff streams are weak Hausdorff k-streams._ Let \(\mathbf{Top}\) denote the complete, cocomplete, and Cartesian closed [52] category of weak Hausdorff k-spaces and continuous functions between them. Let \(\mathbf{DiTop}\) denote the category of weak Hausdorff k-streams and stream maps. Redefine topological space and stream, like elsewhere (eg. [43, 51]), to mean objects in the respective categories \(\mathbf{Top}\) and \(\mathbf{DiTop}\). The _forgetful functor_ \(\mathbf{DiTop}\to\mathbf{Top}\) lifts topological constructions in the following sense. **Proposition 5.8**, [43].: _The forgetful functor \(\mathbf{DiTop}\to\mathbf{Top}\) is topological._ In other words, each class of continuous functions \(f_{i}:X\to Y_{i}\) from a topological space \(X\) to streams \(Y_{i}\) induces a terminal circulation on \(X\) making the \(f_{i}\)'s stream maps \(X\to Y_{i}\). Equivalently and dually, each class of continuous functions from streams to a fixed topological space induces a suitably initial circulation on that topological space. In particular, the forgetful functor \(\mathbf{DiTop}\to\mathbf{Top}\) creates limits and colimits. A _stream embedding_ is a stream map \(e:Y\to Z\) such that a stream map \(f:X\to Z\) corestricts to a stream map \(X\to Y\) whenever \(\mathfrak{im}\,f\subset\,\mathfrak{im}\,e\). A _substream_ of a stream \(Y\) is a stream \(X\) such that inclusion defines a stream embedding \(X\to Y\).

Figure 2. **Conal manifolds**. _Conal manifolds_, smooth manifolds whose tangent spaces are all equipped with convex cones, naturally encode state spaces of processes under some causal constraints. The convex cones define partial orders on an open basis of charts that uniquely extend to circulations on the entire manifold. The time-oriented Klein bottle \(K\) (left) and time-oriented torus \(T\) (right) depicted above are examples of conal manifolds that arise as directed realizations of cubical sets. Over cancellative commutative monoid coefficients \(\tau\), their directed 1-cohomologies are \(H^{1}(K;\tau)=\tau\times_{2\tau}\tau\) and \(H^{1}(T;\tau)=\tau^{2}\) by a simple application of cubical approximation [Examples 4.34 and 4.33].

**Example 3.5**.: An open substream is an open subspace with a restricted circulation. **Theorem 5.12**, [43].: _The category \(\mathbf{DiTop}\) is Cartesian closed._ The categories \(\mathbf{DiTop},\mathbf{Top}\) will sometimes be regarded as Cartesian monoidal. Explicit constructions of circulations are often cumbersome. Instead, circulations can be implicitly constructed from certain global partial orders in the sense of the following result, a special case of a more general observation [43, Lemmas 4.2, 4.4 and Example 4.5].
The following theorem allows us to henceforth regard \(\mathbf{Pos}\)-objects as streams and monotone maps between them as stream maps. **Theorem 4.7**, [43].: _There exists a fully faithful and concrete embedding_ \[\mathbf{Pos}\hookrightarrow\mathbf{DiTop},\] _sending each \(\mathbf{Pos}\)-object \(P\) to a unique stream having the same underlying topological space as \(P\) and whose circulation sends the entire space to the given partial order on \(P\)._ ### Cubical Directed cubes can be modelled as finite Boolean lattices, more general complexes of such cubes can be modelled as posets, and even more general formal colimits of such cubes can be modelled as cubical sets. The paper expands the typical setting (eg. [44]). #### 3.2.1. Cubes There are several variants of the cube category (eg. [8, 33]). While the predecessor [44] to this paper adopts the minimal variant, this paper adopts the minimal symmetric monoidal variant. For a monotone function \(\phi:[1]^{n_{1}}\to[1]^{n_{2}}\) and \(1\leqslant i\leqslant n\), let \(\phi_{i;n}\) denote the Cartesian monoidal product \[\phi_{i;n}=[1]^{i-1}\otimes\phi\otimes[1]^{n-i}:[1]^{n+n_{1}-1}\to[1]^{n+n_{2} -1}.\] _Codegeneracies_ are monotone functions of the form \(\sigma_{i;n}:[1]^{n+1}\to[1]^{n}\). _Cofaces_ are monotone functions of the form \(\delta_{\pm i;n}=(\delta_{\pm})_{i;n}:[1]^{n}\to[1]^{n+1}\). **Example 3.6**.: The codegeneracy \(\sigma_{i;n}\) is exactly the projection \[\sigma_{i;n}:[1]^{n}\to[1]^{n-1}\] onto all but the \(i\)th factor. Let \(\square_{1}\) denote the subcategory of \(\mathbf{Cat}\) generated by \(\delta_{\pm},\sigma\). The submonoidal category of \(\mathbf{Cat}\) generated by \(\square_{1}\) is the usual definition of the cube category in the literature, the subcategory of \(\mathbf{Cat}\) generated by all cofaces and codegeneracies. Instead let \(\square\) denote the _symmetric_ monoidal subcategory of the Cartesian monoidal category \(\mathbf{Cat}\) generated by \(\square_{1}\), a category whose objects are still the lattices \([0],[1],[1]^{2},[1]^{3},\ldots\) but whose morphisms are generated by the cofaces, codegeneracies, and coordinate permutations. We write \([1]^{\infty}\) for the (**pro**-\(\square\))-object defined as the limit \[[1]^{\infty}=\lim\left(\cdots\xrightarrow{\sigma_{i;4}}[1]^{3}\xrightarrow{ \sigma_{3;3}}[1]^{2}\xrightarrow{\sigma_{2;2}}[1]^{1}\to[0]\right). \tag{1}\] The following observation allows us to extend certain results on the minimal variant of the cube category to the new variant \(\square\). **Lemma 6.2**, [44].: _For each \(n\) and interval \(I\) in \([1]^{n}\), there exist unique \(m_{I}\) and composite_ \[[1]^{m_{I}}\to[1]^{n}\] _of cofaces that has image \(I\)._ We will repeatedly use the convenient fact that \(\square\) is the free strict symmetric monoidal category generated by the category \(\square_{1}\) pointed at \([0]\), in that every solid horizontal functor to a symmetric monoidal category \(\mathscr{M}\) sending \([0]\) to the unit uniquely extends to a strict monoidal functor making the following commute by observations made elsewhere [33]. There are some advantages to adding coordinate permutations to \(\square\). One is that the class of all directed realizations of cubical sets (see SS3.3) includes, for example, all closed conal manifolds whose cone bundles are fibrewise generating and free [45, Theorem 1.1]. A bigger one is an explicit characterization of \(\square\)-morphisms [Theorem 3.10] to which the rest of this section is devoted. 
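To make the generators just described, and the characterization towards which this section is working [Theorem 3.10], concrete, the following Python sketch (ours, purely illustrative; all helper names are invented) realizes cofaces, codegeneracies, and principal coordinate transpositions as functions on the vertices of \([1]^{n}\) and brute-forces the interval-preserving lattice homomorphism condition. The diagonal \([1]\to[1]^{2}\) is included as a non-example: it is a lattice homomorphism that fails the interval condition, consistent with the absence of diagonals among the generators of \(\square\).

```python
from itertools import product

def vertices(n):
    return list(product((0, 1), repeat=n))

def meet(x, y): return tuple(map(min, x, y))
def join(x, y): return tuple(map(max, x, y))
def leq(x, y):  return all(a <= b for a, b in zip(x, y))

# Generators of the cube category, acting on vertices (1-based index i).
def coface(eps, i, x):    # insert the constant eps as a new ith coordinate
    return x[:i-1] + (eps,) + x[i-1:]

def codegeneracy(i, x):   # project away the ith coordinate
    return x[:i-1] + x[i:]

def transpose(i, x):      # swap the ith and (i+1)th coordinates
    y = list(x); y[i-1], y[i] = y[i], y[i-1]; return tuple(y)

def interval(a, b, n):    # the interval [a, b] inside [1]^n
    return [x for x in vertices(n) if leq(a, x) and leq(x, b)]

def is_interval_preserving_hom(phi, m):
    """Brute-force the Theorem 3.10 characterization: phi preserves binary
    meets and joins and maps each interval [a, b] of [1]^m onto the full
    interval [phi(a), phi(b)] of its codomain."""
    V, n = vertices(m), len(phi(vertices(m)[0]))
    if not all(phi(meet(x, y)) == meet(phi(x), phi(y)) and
               phi(join(x, y)) == join(phi(x), phi(y)) for x in V for y in V):
        return False
    return all(set(map(phi, interval(a, b, m))) == set(interval(phi(a), phi(b), n))
               for a in V for b in V if leq(a, b))

# Spot checks: generators satisfy a cubical identity and the characterization;
# the diagonal [1] -> [1]^2 preserves meets and joins but not intervals.
x = (0, 1, 1)
assert codegeneracy(2, coface(1, 2, x)) == x
assert transpose(1, transpose(1, x)) == x
assert is_interval_preserving_hom(lambda v: coface(0, 1, v), 2)
assert is_interval_preserving_hom(lambda v: codegeneracy(1, v), 2)
assert is_interval_preserving_hom(lambda v: transpose(1, v), 2)
assert not is_interval_preserving_hom(lambda v: v + v, 1)
```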
**Example 3.7**.: In \(\square\) the \(\dots\) 1. \(\dots\) isomorphisms are the coordinate permutations 2. \(\dots\) monos are the cofaces up to coordinate permutation 3. \(\dots\) epis are the codegeneracies, projections onto some of the coordinates, up to coordinate permutation Let \(\tau\) denote the coordinate transposition \([1]^{2}\to[1]^{2}\). _Principal coordinate transpositions_ are \(\square\)-morphisms of the form \(\tau_{i;n}:[1]^{n+2}\to[1]^{n+2}\). **Lemma 3.8**.: _The following are equivalent for a monotone function of the form_ \[\phi:[1]^{m}\to[1]^{n}.\] 1. \(\phi\) _is bijective_ 2. \(\phi\) _is an interval-preserving lattice isomorphism_ 3. \(\phi\) _is a lattice isomorphism_ 4. \(\phi\) _is a coordinate permutation_ 5. \(\phi\) _is composite of principal coordinate transpositions_ 6. \(\phi\) _is a_ \(\square\)_-isomorphism_ The proof uses the fact that the symmetric group on \(\{1,2,\dots,n\}\) is generated by all principal transpositions, transpositions of the form \((i\,i+1)\) for \(1\leqslant i<n\)[14, SS6.2]. Proof.: Let \(\mathbf{0}\) denote the minimum \((0,\dots,0)\) of an element in \(\square\). Let \(\mathbf{e}_{i}\) denote the element in \([1]^{n}\) whose coordinates are all \(0\) except for the ith coordinate. It suffices to take \(m=n\) because all of the statements imply that \(\phi\) is a bijection between finite sets and hence \(\phi\) has domain and codomain both with the same cardinality. Suppose (1). Then \(\phi\) preserves extrema because it is a monotone surjection. Let \(I\) be an interval in \([1]^{n}\), necessarily isomorphic to a lattice of the form \([1]^{k}\). Then \(I\) contains exactly \(k\) distinct maximal chains of length \(k\). The function \(\phi\) preserves chains of length \(k\) because it is a monotone injection between posets. Hence \(\phi(I)\) contains \(k\) distinct maximal chains each of length \(k\) by \(\phi\) injective and monotone. Hence \(\phi(I)\) must be an interval in \([1]^{n}\). Thus \(\phi\) maps intervals onto intervals. Finite non-empty suprema of the \(\mathbf{e}_{i}\)s are the maxima of intervals in \([1]^{n}\) containing \(\mathbf{0}\). And \(\phi\) maps intervals in \([1]^{n}\) containing \(\mathbf{0}\) onto intervals in \([1]^{n}\) containing \(\mathbf{0}\). It therefore follows that \(\phi\) preserves finite non-empty suprema of the \(\mathbf{e}_{i}\)s because monotone surjections preserve maxima. Hence \(\phi\) preserves all finite non-empty suprema. Similarly \(\phi\) preserves all finite non-empty infima by duality. It therefore follows that \(\phi\) is a bijective lattice homomorphism and hence a lattice isomorphism. Hence (2). And (2) implies (3). Suppose (3). The function \(\phi\), a monoid automorphism with respect to \(\vee_{[1]^{m}}\), permutes the unique minimal set of monoid generators \(\mathbf{e}_{1},\mathbf{e}_{2},\ldots,\mathbf{e}_{n}\). Thus there exists a permutation \(\sigma\) of \(\{1,2,\ldots,n\}\) such that \(\phi(\mathbf{e}_{i})=\mathbf{e}_{\sigma(i)}\) for each \(i\). Hence \(\phi(x_{1},\ldots,x_{n})=\phi(\vee_{x_{i}=1}\mathbf{e}_{i})=\vee_{x_{i}=1} \phi(\mathbf{e}_{i})=\vee_{x_{i}=1}\mathbf{e}_{\sigma(i)}=(x_{\sigma(1)},\ldots,x_{\sigma(n)})\). Hence (4). If (4), then \(\phi\) is a composite of transpositions of successive coordinates, principal coordinate transpositions [14, SS6.2]. Then (5). If (5), then \(\phi\) is a composite of \(\square\)-isomorphisms and hence a \(\square\)-isomorphism. Hence (6). 
If (6), then (1) because the forgetful functor \(\square\to\mathbf{Set}\), like all functors, preserves isomorphisms. **Lemma 3.9**.: _The following are equivalent for a function of the form_ \[\phi:[1]^{m}\to[1]^{n}.\] 1. \(\phi\) _is a surjective interval-preserving lattice homomorphism_ 2. \(\phi\) _is a surjective lattice homomorphism_ 3. \(\phi\) _is a composite of codegeneracies and principal coordinate transpositions_ Proof.: For clarity, let \(\wedge=\wedge_{L}\) and \(\vee=\vee_{L}\) when the lattice \(L\) is clear from context. Let \(\mathbf{e}_{i}^{\perp}\) denotes the element in \([1]^{m}\) whose only coordinate having value \(0\) is its \(i\)th coordinate. (1) implies (2). Suppose (2). Then \(m\geqslant n\) by surjectivity. We show (3) by induction on \(m-n\) In the base case \(m=n\), \(\phi\) is a bijection because it is a surjection between sets of the same cardinality and hence is a composite of principal coordinate transpositions [Lemma 3.8]. Consider \(m-n>0\). Inductively suppose (3) for the case \(m-n<d\) and now consider the case \(m-n=d>0\). Then \(\phi\) is not injective by \(m>n\). Thus there exist distinct \(x,y\in[1]^{m}\) such that \(\phi(x)=\phi(y)\). There exists \(j\) such that \(x_{j}\neq y_{j}\) by \(x\neq y\). Take \(0=x_{j}<y_{j}=1\) and \(x_{i}=y_{i}=1\) for \(i\neq j\) without loss of generality by reordering \(x\) and \(y\) if necessary, replacing \(x\) with \(x\vee\mathbf{e}_{j}^{\perp}\) and \(y\) with \(y\vee\mathbf{e}_{j}^{\perp}\), and noting that \(\phi(x\vee\mathbf{e}_{j}^{\perp})=\phi(x)\vee\phi(\mathbf{e}_{j}^{\perp})= \phi(y)\vee\phi(\mathbf{e}_{j}^{\perp})=\phi(y\vee\mathbf{e}_{j}^{\perp})= \phi(y\vee\mathbf{e}_{j}^{\perp})\), It suffices to show the existence of a dotted function making commute. For then the dotted function is a surjective lattice homomorphism by \(\phi\) a surjective lattice homomorphism and \(\sigma_{j}\) a projection. To that end, suppose distinct \(x^{\prime},y^{\prime}\in[1]^{m}\) satisfy \(\sigma_{j}(x^{\prime})=\sigma_{j}(y^{\prime})\). It suffices to show \(\phi(x^{\prime})=\phi(y^{\prime})\). Take \(0=x^{\prime}_{j}<y^{\prime}_{j}=1\) without loss of generality. Then \(\phi(x^{\prime})=\phi(y^{\prime}\wedge x)=\phi(y^{\prime})\wedge\phi(x)=\phi(y ^{\prime})\wedge\phi(y)=\phi(y^{\prime}\wedge y)=\phi(y^{\prime})\). Hence (3). (3) implies (1) because identities, \(\sigma,\tau\) are all surjective interval-preserving lattice homomorphisms and the tensor on \(\square\) is closed under surjective interval-preserving lattice homomorphisms. **Theorem 3.10**.: _The following are equivalent for a function \(\phi\) of the form_ \[\phi:[1]^{m}\to[1]^{n}.\] 1. \(\phi\) _is an interval-preserving lattice homomorphism_ 2. \(\phi\) _is a_ \(\square\)_-morphism_ Proof.: Suppose (1). The function \(\phi\) factors into a composite of its corestriction onto its image \(I\), regarded as a subposet of \([1]^{n}\), followed by an inclusion \(I\hookrightarrow[1]^{n}\). Both functions \([1]^{m}\to I\) and \(I\hookrightarrow[1]^{n}\) are interval-preserving lattice homomorphisms because \(\phi\) is an interval-preserving lattice homomorphism. Moreover \(I\hookrightarrow[1]^{n}\) is isomorphic to a \(\square\)-morphism [Lemma 6.2, [44]]. Hence to show (2), it suffices to take \(\phi\) surjective. In that case \(\phi\) factors as a composite of tensor products of identities with \(\sigma,\tau\) [Lemma 3.9]. Hence (2). Suppose (2). 
Then \(\phi\) is an interval-preserving lattice homomorphism because \(\sigma,\delta_{\pm},\tau\) are interval-preserving lattice homomorphisms and \(\otimes\) preserves interval-preserving lattice homomorphisms. Hence (1). #### 3.2.2. Cube configurations Just as posets encode simplicial complexes whose simplices correspond to finite chains, posets can encode cubical complexes whose cubes correspond to finite Boolean intervals. Let \(\mathbf{Dis}\) be the category whose objects are the finite distributive lattices and lattice homomorphisms between such lattices preserving Boolean intervals. **Example 3.11**.: The category \(\mathbf{Dis}\) contains \(\square\) as a full subcategory [Theorem 3.10]. Technical observations about \(\mathbf{Dis}\) [Lemma 3.12 and Proposition 3.13], which require specialized observations about finite distributive lattices, are proven in SSA. **Lemma 3.12**.: _The following are equivalent for a function_ \[\phi:L\to M\] _between finite distributive lattices._ 1. \(\phi\) _is a_ \(\mathbf{Dis}\)_-morphism_ 2. _each restriction of_ \(\phi\) _to a Boolean interval in_ \(L\) _coerstricts to a surjective lattice homomorphism onto a Boolean interval in_ \(M\)_._ For each \(k\), we can make the natural identifications \[\left([1]^{n}\right)^{[k]}=[k+1]^{n}\] under unique isomorphisms for the case \(n=1\), that send each monotone function \(\phi\) in the middle poset to the element \(\sum_{i}\phi(i)\) in the right side, and hence under natural Cartesian monoidal \(\mathbf{Cat}\)-isomorphisms for the general case. Thus the construction \((-)^{[k]}\) intuitively subdivides an \(n\)-cube, as encoded by the Boolean lattice \([1]^{n}\), into \(kn\) subcubes. The following proposition naturally extends this subdivision construction to an endofunctor \(\mathfrak{so}_{k+1}\) on \(\mathbf{Dis}\). **Proposition 3.13**.: _Consider the commutative outer square_ _in which the bottom horizontal functor is the left Kan extension of the composite of the top horizontal and right vertical arrows along the left vertical inclusion. There exists a unique dotted monoidal functor making the entire diagram commute. For each monotone injection \(\phi:[n]\to[m]\), there exists a unique monoidal natural transformation \(\mathfrak{so}_{m+1}\to\mathfrak{so}_{n+1}\) whose \(I\)-component is defined by \(I^{\phi}\) for each \(\square\)-object \(I\)._ #### 3.2.3. Cubical sets Take _cubical sets_ and _cubical functions_ to mean the respective objects and morphisms of \(\hat{\square}\). Regard \(\hat{\square}\) as closed symmetric monoidal with tensor \(\otimes\) characterized by \(\square[-]:\square\hookrightarrow\hat{\square}\) monoidal. The \(\square\)-morphisms from tensor products defined by **Cat**-projections induce inclusions of the following form, natural in cubical sets \(A\) and \(B\): \[A\otimes B\hookrightarrow A\times B\] Write \((-)_{n}\) for the functor \(\hat{\square}\to\textbf{Set}\) naturally defined on objects by \[C_{n}=C([1]^{n}).\] For each atomic cubical set \(A\), let \(\partial A\) denote the maximal subpresheaf of \(A\) having dimension \(\dim\,A-1\). For integers \(1\leqslant i\leqslant n\), let \(\sqcup^{\pm i}[1]^{n}\) denote the maximal subpresheaf of \(\square[1]^{n}\) for which \(\delta_{\pm i;n}\notin(\sqcup^{\pm i}[1]^{n})_{n-1}\). **Example 3.14**.: For each \(1\leqslant i\leqslant n\), we have the inclusions of cubical sets \[\sqcup^{\pm i}[1]^{n}\subset\partial\square[1]^{n}\subset\square[1]^{n}. 
\tag{2}\] For each \(n>0\), \(\partial\square[1]^{n}\) intuitively models the boundary of an \(n\)-cube, an \(n\)-cube missing its interior and for integers \(1\leqslant i\leqslant n\), \(\sqcup^{\pm i}[1]^{n}\) intuitively models an \(n\)-cube missing its interior and its \(\pm i\)th face, and \(\square[1]^{n}\) models an \(n\)-cube. **Example 3.15**.: A (**pro**-\(\hat{\square}\))-morphism of the form \[C\to\square[1]^{\infty},\] from a cubical set \(C\) to the image of \([1]^{\infty}\) (1) under the extension of the Yoneda embedding to a functor \(\square[-]:\textbf{pro}\text{-}\square\to\textbf{pro}\text{-}\hat{\square}\), informally, is the data of a cubical function from \(C\) to a representable of arbitrarily large dimension up to surjective morphisms between such representables. Define a monoidal adjunction \(\mathrm{T}_{1}\dashv\mathfrak{ner}\) of the form \[\mathrm{T}_{1}:\hat{\square}\leftrightarrow\textbf{Cat}:\mathfrak{ner}\,,\] where the cocontinuous monoidal functor \(\mathrm{T}_{1}\) is characterized by the commutative diagram because \(\square\) is the free symmetric monoidal category on \(\square_{1}\) having unit \([0]\)[33]. Call \(\mathfrak{ner}\,\mathcal{X}\) the _cubical nerve_ of a small category \(\mathcal{X}\). For each finite poset \(P\), let \(\square[P]\) denote the subpresheaf of \(\mathfrak{ner}\,P\) whose \(n\)-cubes are all monotone functions \([1]^{n}\to P\) preserving binary infima and binary suprema and mapping (Boolean) intervals onto Boolean intervals. For each monotone function \(\phi:P\to Q\) of posets mapping Boolean intervals onto Boolean intervals, \(\mathfrak{ner}\,\phi\) restricts and corestricts to a cubical function \(\square[\phi]:\square[P]\to\square[Q]\). In particular, \(\square[-]\) will not only denote the Yoneda embedding \(\square\to\hat{\square}\), but also its extension \[\square[-]:\textbf{Dis}\to\hat{\square}.\] The _vertices_ of \(C\) are the elements of \(C_{0}\). Let \(\mathrm{Star}_{C}(v)\) denote the _closed star_ of a vertex \(v\in C_{0}\) in \(C\), the subpresheaf of \(C\) consisting of all images \(A\subset C\) of representables in \(C\) with \(v\in A_{0}\). Call \(C\,\dots\) 1. \(\dots\)_atomic_ if \(C\) is the image of a representable. _ 2. _...finite_ if \(C\) has finitely many atomic subpresheaves. The _dimension_ of the initial cubical set \(\varnothing\) is \(-1\) and the dimension of a non-initial cubical set \(C\neq\varnothing\) is the infimum over all \(n=0,1,\ldots\) such that \(C\) is the colimit of representables of the form \(\square[1]^{n}\). **Lemma 3.16**.: _The functor \(\square[-]:\mathbf{Dis}\to\hat{\square}\) is fully faithful and cocontinuous._ Proof.: For \(\bigcirc\) a skeleton of \(\mathbf{Dis}\) containing \(\square\), commutes up to natural isomorphism. In this diagram, the Yoneda embedding \(\bigcirc[-]\) is fully faithful and cocontinuous. The other diagonal arrow, a functor between presheaf categories induced by a functor between sites and therefore cocontinuous, is fully faithful [Lemma 3.12]. Therefore \(\square[-]:\mathbf{Dis}\to\hat{\square}\) is naturally isomorphic to a composite of a categorical equivalence \(\mathbf{Dis}\simeq\bigcirc\) followed by fully faithful and cocontinuous functors. We extend \(\mathfrak{so}_{k+1}\) to an endofunctor on \(\hat{\square}\) as follows. 
**Proposition 3.17**.: _There exists a unique dotted monoidal left adjoint making_ _commute up to natural isomorphism._ Proof.: The left Kan extension of the composite of the top horizontal with right vertical functors along the left vertical functor makes the entire diagram commute up to natural isomorphism by the left vertical functor cocontinuous [Lemma 3.16]. This left Kan extension is monoidal by the top horizontal functor monoidal and \(\otimes\) cocontinuous. Intuitively, \(\mathfrak{so}_{k+1}C\) is the cubical set obtained by taking \((k+1)\)-fold edgewise subdivisions of the cubes in \(C\). **Example 3.18**.: There exists a natural isomorphism \[\mathfrak{so}_{1}\cong 1_{\hat{\square}}:\hat{\square}\cong\hat{\square}.\] Write \(\mathfrak{ev}_{k+1}\) for the right adjoint to \(\mathfrak{so}_{k+1}\) in the adjunction \[\mathfrak{so}_{k+1}:\hat{\square}\leftrightarrowleftrightarrow\hat{\square}: \mathfrak{ev}_{k+1}.\] Regard \(\mathfrak{so}_{3}\) as copointed by the unique monoidal natural transformation \(\epsilon\) such that \[\epsilon_{\square[1]^{n}}=\square\left[(-)^{0\to 1:[0]\to[2]}\right]:\square[3]^{n}\to\square[1]^{n}.\] Define a cubical analogue \(\Omega^{n}(C,v)\) of an \(n\)-fold loop space by the following Cartesian square natural in a cubical set \(C\) equipped with vertex \(v\in C_{0}\), where \(\langle v\rangle\) denotes the minimal subpresheaf of \(C\) containing \(v\) as its unique vertex. **Example 3.19**.: For each monoid \(M\), \(\Omega^{1}(\operatorname{\mathfrak{n}er}M,\star)\) is the discrete cubical set \(M\). A crucial technical tool in classical proofs of simplicial approximation is the factorizability of double barycentric subdivisions through polyhedral complexes [15, SS12]. There exists a directed, cubical analogue. The following three lemmas adapt observations made in a predecessor to this paper [44, Lemmas 6.11, 6.12, 6.13] from the traditional setting of cubical sets to the cubical sets considered in this paper and from sixteen-fold subdivision \(\mathfrak{so}_{16}=\mathfrak{so}_{2}^{4}\) to nine-fold subdivision \(\mathfrak{so}_{9}=\mathfrak{so}_{3}^{2}\); justifications are given after all three lemmas are stated. Recall that under our conventions, \(\operatorname{\mathfrak{s}upp}_{\mathfrak{so}_{3}}(v,C)\) denotes the minimal subpresheaf \(B\) of \(C\) for which \(\mathfrak{so}_{3}B\) has vertex \(v\). **Lemma 3.20**.: _For all cubical sets \(C\) and \(v\in\mathfrak{so}_{3}C_{0}\),_ \[\epsilon_{C}(\operatorname{Star}_{\mathfrak{so}_{3}C})(v)\subset\operatorname {\mathfrak{s}upp}_{\mathfrak{so}_{3}}(v,C).\] **Lemma 3.21**.: _Fix cubical set \(C\) and atomic subpresheaf \(A\subset\mathfrak{so}_{3}C\). There exist:_ 1. _unique minimal subpresheaf_ \(C_{A}\subset C\) _with_ \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\)__ 2. _retraction_ \(\pi_{(C,A)}:A\to A\cap\mathfrak{so}_{3}C_{A}\)_, unique up to isomorphism_ _Moreover, \(A\cap\mathfrak{so}_{3}C_{A}\) is representable and \(\epsilon_{C}(A\hookrightarrow\mathfrak{so}_{3}C)=\epsilon_{C}(A\cap\mathfrak{so }_{3}C_{A}\hookrightarrow\mathfrak{so}_{3}C)\pi_{(C,A)}\)._ **Lemma 3.22**.: _Consider the left of the solid commutative diagrams_ _where \(A^{\prime},A^{\prime\prime}\) are non-empty subpresheaves of atomic cubical sets. Suppose \(B^{\prime},B^{\prime\prime}\) are minimal respective subpresheaves of \(C^{\prime},C^{\prime\prime}\) such that \(A^{\prime}\cap\mathfrak{so}_{3}B^{\prime}\neq\varnothing\) and \(A^{\prime\prime}\cap\mathfrak{so}_{3}B^{\prime\prime}\neq\varnothing\). 
Let \(\pi^{\prime},\pi^{\prime\prime}\) be retractions of inclusions in the right diagram. There exists a unique dotted cubical function making the right square commute._ The claim that \(A\cap\mathfrak{so}_{3}C_{A}\) is representable in Lemma 3.21 follows from the fact that \(C_{A}\) and hence also \(A\cap\mathfrak{so}_{3}C_{A}\) are atomic and \(A\cap\mathfrak{so}_{3}\partial C_{A}=\varnothing\) by minimality. To show the other claims, it suffices to take the case where \(C\) is representable by naturality and hence the even more special case where \(C=\square[1]\) because all the functors and natural transformations in sight are monoidal. In that case, these other claims follow from inspection. **Lemma 3.23**.: _Consider the top left vertical inclusion of cubical sets in_ _There exist dotted cubical functions, natural in objects \(A\hookrightarrow\mathfrak{so}_{3}C\) in the full subcategory of \((\bar{\square}/\mathfrak{so}_{3})\) consisting of inclusions of non-empty subpresheaves \(A\) of atomic subpresheaves of \(\mathfrak{so}_{3}C\), making the diagram commute. The right vertical arrows can be chosen to have as their image the minimal subpresheaf \(C_{A}\subset C\) with \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\)._ The proof mimics a proof of an analogous result in a predecessor to this paper [44, Lemma 8.16]. That result is stated at the level of streams instead of cubical sets and for \(\mathfrak{so}_{4}=\mathfrak{so}_{2}^{2}\) instead of \(\mathfrak{so}_{3}\). We therefore include the following proof for completeness. Proof.: Call the objects in the full subcategory of \((\hat{\square}/\mathfrak{so}_{3})\) consisting of inclusions \(A\hookrightarrow\mathfrak{so}_{3}C\) of non-empty subpresheaves \(A\) of atomic subpresheaves of \(\mathfrak{so}_{3}C\)_subatomic inclusions_. Let \(\epsilon_{(C,A)}=\epsilon_{C}(A\hookrightarrow\mathfrak{so}_{3}C)\) There exists a unique minimal atomic subpresheaf \(C_{A}\subset C\) with \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\) [Lemma 3.21]. The inclusion \(B_{(C,A)}\hookrightarrow A\) of \(B_{(C,A)}=A\cap\mathfrak{so}_{3}C_{A}\) admits a retraction \(\pi_{(C,A)}\) making the following diagram commute [Lemmas 3.21 and 3.22]: (3) The cubical set \(B_{(C,A)}\) isomorphic to a representable [Lemma 3.21]. It therefore suffices to show that the above diagram is natural in subatomic inclusions \(A\hookrightarrow\mathfrak{so}_{3}C\). To that end, consider the solid commutative outer rectangle in the diagram in which the top vertical arrows are subatomic inclusions. There exists a unique dotted cubical function making the upper trapezoid commute [Lemma 3.22]. The triangles commute by (3) commutative. The lower trapezoid commutes because the outer rectangle commutes and the cubical functions of the form \(\pi_{(C,A)}\) are epi. Thus the entire diagram commutes. The desired naturality of (3) follows. The following lemma defines pro-diagrams that encode something like weak factorization systems. **Lemma 3.24**.: _There exists a functor \(F_{C}\), natural in cubical sets \(C\), of the form_ \[F_{C}:(\mathcal{A}(\mathfrak{so}_{3}C))^{\mathrm{op}}\to\mathbf{pro}\text{-} \left(\square/C\right),\] _where \(\mathcal{A}(\mathfrak{so}_{3}C)\) is the poset of non-empty subpresheaves of atomic subpresheaves of \(\mathfrak{so}_{3}C\) ordered by inclusion, satisfying the following. 
For each \(\mathcal{A}(\mathfrak{so}_{3}C)^{\mathrm{op}}\)-object \(A\), \(F_{C}A:\square[1]^{\infty}\to C\) and \(F_{C}A\) has as its image the minimal subpresheaf \(C_{A}\) of \(C\) with \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\). For each \(\mathcal{A}(\mathfrak{so}_{3})\)-morphism \(A^{\prime}\hookrightarrow A^{\prime\prime}\), \(F_{C}(A^{\prime}\hookrightarrow A^{\prime\prime})\) is represented by a monic natural transformation between cofiltered diagrams in \(\square/C\)._ The proof relies on the fact that parallel epis in \(\square\) are always isomorphic to one another in the arrow category \(\square^{[1]}\). For this reason the proof does not adapt to larger variants of \(\square\) that include, for example, coconnections of one or both kinds. Proof.: Let \(A\) denote an atomic subpresheaf of \(\mathfrak{so}_{3}C\). There exists a unique minimal atomic subpresheaf \(C_{A}\subset C\) with \(A\cap\mathfrak{so}_{3}C_{A}\neq\varnothing\) for each \(A\) [Lemma 3.21]. Let \(n_{A}=\dim\,C_{A}\). Let \(\pi_{A}\) denote a choice, unique up to \((\square/C_{A})\)-isomorphism, of epi \(\square[1]^{n_{A}}\to C_{A}\) for each \(A\). Let \(F_{C}\) denote the limit in \(\mathbf{pro}\text{-}(\square/C)\) of the cofiltered diagram whose morphisms are the outer triangles in commutative triangles of the form Consider an \(\mathcal{A}(\mathfrak{so}_{3}C)\)-morphism \(A^{\prime}\hookrightarrow A^{\prime\prime}\). Then \(C_{A^{\prime\prime}}\subset C_{A^{\prime}}\) by minimality. The cubical set \(A^{\prime\prime}\) is atomic and hence \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) is an atomic subpresheaf of \(\mathfrak{so}_{3}C_{A^{\prime}}\). The top dimensional cube in the atomic cubical set \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) is not a cube in \(\mathfrak{so}_{3}\partial C_{A^{\prime}}\) because \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) contains an atomic subpresheaf \(A^{\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) which does not intersect \(\mathfrak{so}_{3}\partial C_{A^{\prime}}\) by minimality of \(C_{A^{\prime}}\). Therefore \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) has a unique atomic preimage under \(\mathfrak{so}_{3}\pi_{A^{\prime}}\). Therefore there exists a unique minimal and hence atomic subpresheaf \(P_{A^{\prime},A^{\prime\prime}}\subset\square[1]^{n_{A^{\prime}}}\) with \(\mathfrak{so}_{3}P_{A^{\prime}A^{\prime\prime}}\) intersecting the preimage of \(A^{\prime\prime}\cap\mathfrak{so}_{3}C_{A^{\prime}}\) under \(\mathfrak{so}_{3}\pi_{A^{\prime}}\) [Lemma 3.21]. The cubical set \(P_{A^{\prime},A^{\prime\prime}}\), an atomic subpresheaf of \(\square[1]^{n_{A^{\prime}}}\), is isomorphic to a representable. The restriction of \(\pi_{A^{\prime}}\) to \(P_{A^{\prime},A^{\prime\prime}}\) corestricts to a cubical function \(\pi_{A^{\prime},A^{\prime\prime}}\) making the diagram commute by minimality of \(P_{A^{\prime},A^{\prime\prime}}\). Thus it is possible to define \(F_{C}(A^{\prime}\hookrightarrow A^{\prime\prime})\) as the \(\mathbf{pro}\text{-}(\square/C)\)-morphism \(F_{C}A^{\prime\prime}\to F_{C}A^{\prime}\) induced by the vertical arrows in the commutative diagram above. For each \(\mathcal{A}(\mathfrak{so}_{3}C)\)-object \(A\), \(F_{C}(1_{A})=1_{F_{C}A}\) because \(P_{A,A}=P_{A}\). It therefore suffices to show \(F_{C}\) preserves composition. For then \(F_{C}\), which preserves identities, would define the desired functor. 
To that end, consider a composable sequence of \(\mathcal{A}(\mathfrak{so}_{3}C)\)-morphisms \[A^{\prime}\hookrightarrow A^{\prime\prime}\hookrightarrow A^{\prime\prime\prime}.\] Observe \(P_{A^{\prime},A^{\prime\prime\prime}}\subset P_{A^{\prime},A^{\prime\prime}}\) by minimality. Consider the solid arrows in There exists a unique rightmost dotted horizontal epi in the top row whose composite with \(\pi_{A^{\prime\prime\prime}}\) is \(\pi_{A^{\prime\prime},A^{\prime\prime\prime}}\) by \(\pi_{A^{\prime\prime\prime}}\) terminal in \(\square/C\) among all epis from representables having image \(C_{A^{\prime\prime\prime}}\). There exists a unique rightmost dotted horizontal epi in the middle row whose composite with \(\pi_{A^{\prime\prime}}\) is \(\pi_{A^{\prime},A^{\prime\prime}}\) by \(\pi_{A^{\prime\prime}}\) terminal in \(\square/C\) among all epis from representables having image \(C_{A^{\prime\prime}}\). There exists a unique leftmost dotted horizontal epi in the top row whose composite with \(\pi_{A^{\prime\prime},A^{\prime\prime\prime}}\), the composite of the other arrows in the top row, is \(\pi_{A^{\prime},A^{\prime\prime\prime}}\) by minimality in our choice of \(P_{A^{\prime},A^{\prime\prime\prime}}\). The cofiltered limits of the top, middle, and bottom rows define the respective objects \(F_{C}A^{\prime\prime\prime},F_{C}A^{\prime\prime},F_{C}A^{\prime}\) because epis in \(\square\) are determined up to isomorphism by their domain and codomain [Lemma 3.9]. Vertical inclusions define natural transformations of these aforementioned diagrams. The top vertical arrows induce \(F_{C}(A^{\prime\prime}\hookrightarrow A^{\prime\prime\prime})\) by I commutative. The bottom vertical arrows induce \(F_{C}(A^{\prime}\hookrightarrow A^{\prime\prime})\) by II commutative. The composite of the vertical arrows induces \(F_{C}(A^{\prime}\hookrightarrow A^{\prime\prime\prime})\) by I+II+III commutative. \(\square\)

### Comparisons

Define dotted functors in the commutative diagram in which the right vertical arrow is the forgetful functor, so that \(\left\lvert\square[\delta_{\pm}]\right\rvert\) is the stream map \(\star\to\vec{\mathbb{I}}\) having image \(\nicefrac{1}{2}\pm\nicefrac{1}{2}\). **Example 3.25**.: We can make the identifications \[\left\lvert\square[1]^{n}\right\rvert=\mathbb{I}^{n}\ \ \ \ \left\lvert\square[1]^{n}\right\rvert=\vec{\mathbb{I}}^{n}\] along the continuous function that naturally sends each vertex \((x_{1},\dots,x_{n})\in[1]^{n}\subset\left\lvert\square[1]^{n}\right\rvert\) to \((x_{1},\dots,x_{n})\in\mathbb{I}^{n}\). Directed realization preserves embeddings by a straightforward adaptation of a proof under the usual definition of cubical sets [44, Theorem 6.19]. **Proposition 3.26**.: _For each monic cubical function \(\iota\), \(\left\lvert\iota\right\rvert\) is a stream embedding._ **Example 3.27**.: There exists a stream embedding of the form \[\left\lvert A\otimes B\hookrightarrow A\times B\right\rvert\colon(\left\lvert A\right\rvert\times\left\lvert B\right\rvert)\hookrightarrow\left\lvert(A\times B)\right\rvert,\] natural in cubical sets \(A\) and \(B\). For each cubical set \(C\), write \(\varphi_{C;k+1}\) for the component \[\varphi_{C;k+1}:\left\lvert\mathfrak{so}_{k+1}C\right\rvert\cong\left\lvert C\right\rvert\] of the natural isomorphism defined by the following proposition.
**Proposition 3.28**.: _The following diagram_ _commutes up to a natural isomorphism whose \(\square[1]^{n}\)-component \(\left\lvert\mathfrak{so}_{k+1}\square[1]^{n}\right\rvert\cong\left\lvert\square[1]^{n}\right\rvert\) is linear on each cell and sends each geometric vertex \(v\in[k+1]^{n}\) in \(\left\lvert\square[k+1]^{n}\right\rvert\) to \(\nicefrac{v}{k+1}\in\mathbb{I}^{n}\)._ Let \(\mathsf{sing}\) denote the right adjoint to \(\left\lvert-\right\rvert\colon\hat{\Box}\to\mathbf{DiTop}\) naturally defined by \[(\mathsf{sing}\,X)_{n}=\mathbf{DiTop}(\left\lvert\Box[1]^{n}\right\rvert,X).\] The following lemma is the main method of obtaining information about edge orientations on a cubical set from the circulation on a directed realization. Recall that under our definition of supports, \(\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(x,L)\) is the minimal Boolean interval \(I\) in a finite distributive lattice \(L\) such that \(x\in\left\lvert\Box[I]\right\rvert\). **Lemma 3.29**.: _Fix a \(\mathbf{Dis}\)-object \(L\). Consider \(x\leqslant_{\left\lvert\Box[L]\right\rvert}y\). Then_ \[\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(x,L)\leqslant_{L}\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(y,L).\] Proof.: In the case \(L=[1]^{n}\), \[\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(x,[1]^{n})=(\lfloor x_{1}\rfloor,\dots,\lfloor x_{n}\rfloor)\leqslant_{[1]^{n}}(\lfloor y_{1}\rfloor,\dots,\lfloor y_{n}\rfloor)=\min\mathsf{supp}_{\left\lvert\Box[-]\right\rvert}(y,L).\] The general case follows from transitivity of preorders. **Remark 3.30**.: The following classes coincide [2, Theorem 2.5]:

1. \(\mathrm{CAT}(0)\) cubical complexes
2. cubical complexes in which the cubes represent the Boolean intervals in a poset of _consistent ideals_ in a _poset-with-inconsistent-pairs_

_Posets-with-inconsistent-pairs_ refer to posets with certain extra structure in which the _consistent ideals_ are the lower sets compatible with that extra structure. Stone duality generalizes to a duality between _distributive semilattices_ and structures which, in the finite case, coincide with posets-with-inconsistent-pairs [25, Propositions 5.7, 5.8]. Thus the finite \(\mathrm{CAT}(0)\) cubical complexes are precisely the cubical complexes of the form \(\left\lvert\Box[L]\right\rvert\) for \(L\) a finite distributive semilattice. The \(\mathrm{CAT}(0)\) condition has been recently studied in directed homotopy theory as a sufficient criterion for fundamental categories to faithfully embed into fundamental groupoids [28]. We end the section with an analogue of Reedy cofibrant replacement for the support of a directed singular cube and a special _right_ lifting property for this replacement against directed cubes. **Lemma 3.31**.: _Let \(f\) denote a \((\left\lvert\Box[-]\right\rvert/\left\lvert\mathfrak{so}_{3}-\right\rvert)\)-object as in the diagram_ _Let \(\mathscr{R}\) be the full subcategory of \(\hat{\Box}\) consisting of those cubical sets whose atomic subpresheaves are all isomorphic to representables.
There exist a dotted \((\mathbf{pro}\text{-}(\mathscr{R}/C_{f}))\)-object \(\Lambda_{f}^{*}\) and a \((\mathbf{pro}\text{-}\mathbf{DiTop})\)-morphism \(f^{*}\), both natural in \(f\), making the diagram commute._ The proof relies on the fact that the natural quotient functor \[\mathbf{pro}\text{-}\left(\mathscr{X}^{\mathscr{G}}\right)\to(\mathbf{pro}\text{-}\mathscr{X})^{\mathscr{G}} \tag{4}\] is a categorical equivalence for categories \(\mathscr{G},\mathscr{X}\) with \(\mathscr{G}\) finite and having only identities for endomorphisms [54, §3]. In our case, \(\mathscr{G}\) is a poset of Boolean intervals in a lattice of the form \(\mathfrak{so}_{k+1}[1]^{n_{f}}\). The acyclicity required of \(\mathscr{G}\) generalizes the inductive structure of Reedy categories. Factorizations in \(\mathbf{pro}\text{-}\mathscr{X}\) resemble weak factorization systems in model structures. Certain choices \(o\) of diagrams \(\mathscr{G}\to\mathscr{X}\) whose formal limit coincides with an object in the codomain of (4) resemble inductive constructions like Reedy-cofibrant replacement. When \(\mathscr{X}=\square/C_{f}\), the colimits of the choices \(o\) correspond to analogues \(C^{*}_{f;o}\to C\) of Reedy-cofibrant replacement. When \(\mathscr{X}\) is a more complicated category \(\mathscr{F}_{f}\) of local lifts of \(f\) up to natural directed homotopy, the choices \(o\) give the replacement \(C^{*}_{f;o}\to C\) as well as the lift of \(|\epsilon_{C_{f}}|\;f\) at once. Proof.: For brevity, write \(\varphi_{f;k}\) for the homeomorphism \[\varphi_{f;k}=\varphi_{\square[1]^{n_{f}};2^{k}}:|\mathfrak{so}_{2}^{k}\square[1]^{n_{f}}|\cong|\square[1]^{n_{f}}|.\] For each \(i=0,1,2\), write \(f_{i}\) for the stream map \[f_{i}= |\epsilon^{i}_{C_{f}}|\;f:|\square[1]^{n_{f}}|\to |\mathfrak{so}_{3}^{2-i}C_{f}|\;.\] Let \(\mathcal{A}(C)\) denote the category whose objects are the non-empty subpresheaves of atomic subpresheaves of a cubical set \(C\) and whose morphisms are all inclusions between them. Let \(\mathcal{L}_{f}\) denote the poset, ordered under inclusion, of all order-convex subtopological sublattices of \(|\square[1]^{n_{f}}|\) that \(f\) maps into open stars of vertices. Let \(\mathcal{L}_{f;k,j}\) denote the subposet of \(\mathcal{L}_{f}\) consisting of all images of closed cells under \(\varphi_{f;k+i}\) for all \(0\leqslant i\leqslant j\). Let \(L\) denote an \(\mathcal{L}_{f}\)-object. Let \(\mathscr{R}\) be the full subcategory of \(\hat{\square}\) whose objects are those cubical sets whose atomic subpresheaves are isomorphic to representables. Let \(\mathscr{D}\) be the category of compact Hausdorff topological distributive lattices and continuous lattice homomorphisms between them. For each injective \(\square\)-morphism \(\delta\), write \(\delta^{\dagger}\) for the unique retraction in \(\square\) to \(\delta\). Let an _injection_ of the form \([m]\to[m+n]\) simply refer to an injection of underlying sets. _terminal local lifts_: There exists a unique minimal non-empty and hence atomic subpresheaf \(C_{A}\subset C\) such that \(\mathfrak{so}_{3}C_{A}\cap A\neq\varnothing\) [Lemma 3.21] for each cubical set \(C\) and \(\mathcal{A}(\mathfrak{so}_{3}C)\)-object \(A\). There exists a choice of cubical function \(\theta_{A}:\square[1]^{\dim\;C_{A}}\to C\), unique up to \((\square/C)\)-isomorphism by minimality of \(\dim\;C_{A}\).
There exists a choice of cubical function \(\psi_{A}:A\to\square[1]^{\dim\;A}\), natural in cubical sets \(C\) and \(\mathcal{A}(\mathfrak{so}_{3}C)\)-objects \(A\), lifting \(\epsilon_{C}(A\hookrightarrow\mathfrak{so}_{3}C)\) against \(\theta_{A}\) [Lemma 3.23]. Let \(V_{f}L\) be the set of all vertices in \(\operatorname{\mathbf{supp}}_{|-|}(f,\mathfrak{so}_{9}C_{f})\), finite by \(|\square[1]^{n_{f}}|\) compact, whose open stars contain \(f_{0}(L)\). The vertices in \(V_{f}L\), whose open stars have non-empty intersection, therefore are the vertices of a unique closed cell \(E_{f}L\) in \(|\mathfrak{so}_{9}C_{f}|\). Then \(A_{f}L=\operatorname{\mathbf{supp}}_{\mathfrak{so}_{3}}(E_{f}L,\mathfrak{so}_{9}C_{f})\) is an \(\mathcal{A}(\mathfrak{so}_{3}C_{f})\)-object [Lemma 3.20]. Thus \(A_{f}\) defines a monotone function \(\mathcal{L}_{f}\to\mathcal{A}(\mathfrak{so}_{3}C_{f})\) natural in \(f\). Let \(n_{f;L}=\dim\;C_{A_{f}L}\). Let \(\theta_{f;L}=\theta_{A_{f}L}\). The restriction of \(f_{1}\) to \(L\) has image in \(|A_{f}L|\) and therefore corestricts to \(|A_{f}L|\) [Proposition 3.26]. Let \(f_{L}^{*}:L\to|\square[1]^{n_{f;L}}|\) be given by a choice of horizontal arrows in the right of the diagrams (5). _local pro-lifts \(\Gamma_{f}L\)_: Let \(\pi_{s;\phi}\) denote the commutative triangle witnessing the identity \[s_{\phi(0)}\times\dots\times s_{\phi(m)}=\big((x_{0},\dots,x_{m+n})\mapsto(x_{\phi(0)},x_{\phi(1)},\dots,x_{\phi(m)})\big)\big(s_{0}\times s_{1}\times\dots\times s_{m+n}\big)\] for each injection \(\phi:[m]\to[m+n]\). The diagrams, in which the vertical arrows on the right are defined by the components of a limiting cone, commute by our choice of \(P_{f}\). Define \(f_{L_{1},L_{2}}^{*}\) by the commutative diagram Fix an \(\mathcal{I}_{L_{1}}\)-morphism \(s:L_{1}\to\vec{\mathbb{I}}^{i_{s}}\). To show that the diagram commutes, it suffices to show that the outer rectangle commutes because all of the inner triangles commute.
It therefore suffices to show that both possible composites \(L_{1}\to\vec{\mathbb{I}}^{n_{f;L_{1}}+i_{s}}\) of maximal composable sequences of arrows in the diagram coincide. The image of \(f_{L_{1}}^{*}\) lies in the image of \(|\square[\delta_{f;L_{1},L_{2}}]|\) by naturality of the construction. Both such composites thus coincide on the first \(n_{f;L_{2}}\) coordinates. Both such composites thus also coincide on the next \(n_{f;L_{1},L_{2}}\) coordinates because the composite \(|\square[\delta_{f;L_{1},L_{2}}^{\dagger}]||\square[\delta_{f;L_{1},L_{2}}]|\) is the identity on the image of \(|\square[\delta_{f;L_{1},L_{2}}]|\) and \(sr_{L_{1},L_{2}}(L_{1}\hookrightarrow L_{2})=s\). Finally, both such composites coincide on the last \(i_{s}\) coordinates because \(sr_{L_{1},L_{2}}(L_{1}\hookrightarrow L_{2})=s\). For each projection \(p:\vec{\mathbb{I}}^{i_{s}}\to\vec{\mathbb{I}}\), \(p(sr_{L_{1},L_{2}})\) is uniquely determined by \(ps\). For each \(\mathcal{I}_{L_{1}}\)-morphism of the form \(\pi_{s;\phi}:s^{\prime}\to s^{\prime\prime}\), \(f_{L_{1},L_{2}}^{*}\times\pi_{s;\phi}\) defines an \(\mathcal{I}_{L_{2}}\)-morphism. It therefore follows that there exists a unique dotted (**pro**-\(\mathscr{F}_{f}\))-morphism making the right of the diagrams in which the vertical arrows are the components of limiting cones, commute for each choice of bottom horizontal \(\mathscr{F}_{f}\)-morphism given by left commutative diagrams in which the unlabelled arrows are composites of projections, onto the first \(n_{f;L_{2}}\) and \(n_{f;L_{1}}\) coordinates, with stream maps \(|\theta_{f;L_{2}}|\) and \(|\theta_{f;L_{1}}|\) [Lemma B.1]. \(\Gamma_{f}\) _defines a functor_: In the case \(L=L_{1}=L_{2}\), \(\delta_{f;L_{1},L_{2}}=1_{[0]}\), hence the left commutative square above is an identity arrow in \(\mathscr{F}_{f}\). We therefore conclude \(\Gamma_{f}(L\hookrightarrow L)=1_{\Gamma_{f}L}\) [Lemma B.1]. For inclusions \(L_{1}\hookrightarrow L_{2}\hookrightarrow L_{3}\) in \(\mathcal{L}_{f}\), \[\big(f^{*}_{L_{3}}\times f^{*}_{L_{2},L_{3}}\times f^{*}_{L_{1},L_{3}}\times sr_{L_{1},L_{2}}r_{L_{2},L_{3}}\big)=\big(f^{*}_{L_{3}}\times f^{*}_{L_{1},L_{3}}\times sr_{L_{1},L_{3}}\big)\] by \(r_{L_{1},L_{2}}r_{L_{2},L_{3}}=r_{L_{1},L_{3}}\), \(\delta_{f;L_{1},L_{3}}=(\delta_{f;L_{2},L_{3}}\otimes[1]^{n_{f;L_{1},L_{2}}})(\delta_{f;L_{1},L_{2}})\) and hence also \(\delta^{\dagger}_{f;L_{1},L_{3}}=(\delta^{\dagger}_{f;L_{1},L_{2}})(\delta_{f;L_{2},L_{3}}\otimes[1]^{n_{f;L_{1},L_{2}}})^{\dagger}\). We therefore conclude \(\Gamma_{f}(L_{1}\hookrightarrow L_{3})=\Gamma_{f}(L_{2}\hookrightarrow L_{3})\Gamma_{f}(L_{1}\hookrightarrow L_{2})\) [Lemma B.1]. _global lifts_: The composite functor \[\Gamma_{f;k,j}=\Gamma_{f}(\mathcal{L}_{f;k,j}\hookrightarrow\mathcal{L}_{f}):\mathcal{L}^{\mathrm{op}}_{f;k,j}\to\mathbf{pro}\text{-}\mathscr{F}_{f}\] lifts under the natural quotient functor \[\mathbf{pro}\text{-}\left(\mathscr{F}_{f}^{\mathcal{L}^{\mathrm{op}}_{f;k,j}}\right)\to(\mathbf{pro}\text{-}\mathscr{F}_{f})^{\mathcal{L}^{\mathrm{op}}_{f;k,j}}\,,\] by \(\mathcal{L}^{\mathrm{op}}_{f;k,j}\) a finite poset [54, §3], to a cofiltered limit of diagrams of the form \[\Gamma_{f;(k,j,c)}:\mathcal{L}_{f;k,j}\to\mathscr{F}_{f},\] naturally indexed by objects \(c\) in some small cofiltered category.
Define \(\square\)-object \(I_{f;(k,j,c)}L\), stream map \(f^{*}_{L;(k,j,c)}\) and cubical function \(\theta_{f;(k,j,c)}\), natural in \(\mathcal{L}_{f;k,j}\)-objects \(L\), so that \(\Gamma_{f;(k,j,c)}L\) is the left commutative triangle in (5) with \(g=f_{L;(k,j,c)}\), \([1]^{n}=I_{f;(k,j,c)}L\) and \(\theta=\theta_{f;(k,j,c)}\). Define cubical function \(\Lambda^{*}_{f;(k,j,c)}\) and cubical set \(C^{*}_{f;(k,j,c)}\), both natural in \(c\) and \(f\), by \[\Lambda^{*}_{f;(k,j,c)}=\big(\mathrm{colim}\,I_{f;(k,j,c)}:(\bullet\leftarrow\bullet\rightarrow\bullet\leftarrow\bullet\cdots\rightarrow\bullet)^{n_{f}}\rightarrow(\square/C_{f})\big):C^{*}_{f;(k,j,c)}\to C_{f},\] a finite iterated pushout of inclusions of \((\mathscr{R}/C_{f})\)-objects and hence a \((\mathscr{R}/C_{f})\)-object. There exists a unique top horizontal dotted stream map, natural in \(f\), making the top trapezoid and hence entire diagram commute for each \(\mathcal{L}_{f;k,j}\)-object \(X\) by \(|\square[1]^{n_{f}}|\) a \(\mathbf{DiTop}\)-colimit of \(\mathcal{L}_{f;k,j}\)-objects. Inclusions \(\mathcal{L}_{f;k_{2},j_{1}}\hookrightarrow\mathcal{L}_{f;k_{1},j_{2}}\) for all \(k_{1}\leqslant k_{2}\) and \(j_{1}\leqslant j_{2}\) imply that \(C^{*}_{f;(k,j,c)}\) is natural not only in \(f\) and \(c\) but also in \(\omega^{\mathrm{op}}\)-objects \(k\gg 0\) and \(\omega\)-objects \(j\). Taking cofiltered limits indexed over all objects \(o=(k,j,c)\) gives the desired constructions.

### Cubcats

Commutative squares relating \(\square[1]^{n_{\theta}}\) and \(\mathfrak{ex}_{2}(\mathsf{sing}\,|\square[1]^{n_{\theta}}|)\) to \(\operatorname{colim}_{\square[1]^{n}\to C}\mathsf{sing}\,|\square[1]^{n}|\) and \(\mathfrak{ex}_{2}\big(\operatorname{colim}_{\square[1]^{n}\to C}\mathsf{sing}\,|\square[1]^{n}|\big)\), natural in \((\square/\operatorname{colim}_{\square[1]^{n}\to C}\mathsf{sing}\,|\square[1]^{n}|)\)-objects \(\theta\), enter into the definition of a _cubcat_. Cubcats compare with structures over the minimal variant of \(\square\) admitting both the structure of a strict \(\infty\)-fold category and compatible connections [1, Theorem 8.8]. On one hand, the compositions that a cubcat must admit are not required to satisfy the associativity and unitality axioms of compositions in strict \(\infty\)-fold categories. On the other hand, a cubcat admits the symmetries implicit in our working definition of cubical sets and must admit many more compatible unary operations on cubes, parametrized by \(\blacksquare\)-morphisms, than just the connections.
**Proposition 3.35**.: _For each \(\mathscr{G}\)-stream \(X\), \(\mathsf{sing}\,X\) is a \(\mathscr{G}\)-cubcat._ The proof is formal. Proof.: Let \(\eta^{\prime}\) and \(\epsilon^{\prime}\) denote the unit and counit of the adjunction \[|-|^{\mathscr{G}}\dashv\mathsf{sing}^{\mathscr{G}}.\] Let \(\epsilon^{\prime\prime}\) denote the counit of the adjunction \[\mathsf{so}_{2}\dashv\mathfrak{ex}_{2}.\] Let \(S=\mathsf{sing}\,\). Let \(S^{(2)}X(g)\) be the cubical set \[S^{(2)}X(g)=\operatorname{colim}_{\square[1]^{n}\to SX(g)}S\,|\square[1]^{n}|,\] natural in \(\mathscr{G}\)-streams \(X\) and \(\mathscr{G}\)-objects \(g\). Define \(\nu_{X}\) and \(\mu_{X}\) by commutative diagrams in which the unlabelled arrows are canonically defined. The commutativity of the diagram implies that \(SX\) is a \(\mathscr{G}\)-cubcat. **Proposition 3.36**.: _For each \(\mathscr{G}\)-category \(\mathcal{X}\), \(\mathsf{ncr}\,\mathcal{X}\) is a \(\mathscr{G}\)-cubcat._ The proof approximates directed topological cubes by \(\square\)-objects. Proof.: Let \(S=\mathsf{sing}\,\) and \(N=\mathsf{ncr}\,\). In the left of the diagrams there exists a dotted cubical function \(\zeta_{n}:S\,|\square[1]^{n}|\to N[1]^{n}\), natural in \(\square\)-objects \([1]^{n}\) and unique by \([1]^{n}\) a poset, sending each object \(x\in\mathbb{I}^{n}\) to \(\min\,\mathsf{supp}_{|-|}(x,\square[1]^{n})\) [Lemma 3.29] and thereby making the left and hence also right squares commute. Define \(\nu_{\mathcal{X}}\) and \(\mu_{\mathcal{X}}\) by commutative diagrams the latter of which is natural in \(\square\)-objects \([1]^{n}\). The top left horizontal cubical functions, natural in such \((\square/\mathcal{X})\)-objects \(\phi\), induce the dotted vertical cubical function making the entire diagram commute. The commutativity of the diagram in which the left vertical arrow is induced by the unit of \(|-|\dashv\mathsf{sing}\) implies that \(N\mathcal{X}\) is a \(\mathscr{G}\)-cubcat. Cubcats are algebras over the underlying pointed endofunctor of \(\mathsf{sing}\,|-|\) up to \(\mathfrak{so}_{3}\). **Lemma 3.37**.: _Fix a \(\mathscr{G}\)-cubcat \(C\). Then there exists a dotted \(\mathscr{G}\)-cubical function making_ _commute._ Proof.: Let \(S=\mathsf{sing}\,\). Let \(g\) denote a \(\mathscr{G}\)-object.
Let \(C^{\sharp}(g)\) be the cubical set \[C^{\sharp}(g)=\operatorname{colim}_{\square[1]^{n}\to C(g)}\mathsf{sing}\,|\square[1]^{n}|.\]
It therefore follows that there exists a dotted \(\mathscr{G}\)-cubical function making the rightmost triangle commute in the diagram There exists a dotted \(\mathscr{G}\)-cubical function making the parallelogram commute [Lemma 3.21].

## 4. Homotopy

This section formalizes and compares different homotopy theories. §4.1 fixes some definitions of homotopy in terms of an abstract _interval object_. §4.2, §4.3, and §4.4 explore specific instances of abstract homotopy, whether classical, directed, or categorical and whether continuous, cubical, or algebraic. §4.5 compares the different homotopy theories. In particular, §4.5.2 gives the main results. Observations about the classical homotopy theory of cubical sets are essentially formal but included for completeness, given that our operating definition of cubical sets is not standard. Observations about the classical homotopy theories of small categories and topological spaces, which are well known, are included for comparison with their directed counterparts.

### Abstract

The simplest way to discuss the variety of homotopy theories of interest is in terms of abstract interval objects. The purpose of this section is to fix notation and terminology for standard concepts at this level of abstraction. Fix a closed monoidal category \(\mathscr{X}\) with terminal unit. Fix an _interval object_ \(\mathfrak{i}\) in \(\mathscr{X}\), which we take in this paper to mean a functor \(\square_{1}\to\mathscr{X}\) preserving terminal objects. **Example 4.1**.: The interval object in \(\mathbf{Top}\) naturally sending \(\delta_{\pm}\) to the functions \[\{\nicefrac{1}{2}\pm\nicefrac{1}{2}\}\hookrightarrow\mathbb{I}\] is the prototypical example of an interval object. Much of homotopy theory on \(\mathbf{Top}\) generalizes to a category equipped with an interval object. We fix some general terminology for standard concepts, like relative homotopy and homotopy equivalences, in terms of the interval object \(\mathfrak{i}\). For a pair of parallel \(\mathscr{X}\)-morphisms \(\zeta_{1},\zeta_{2}:o_{1}\to o_{2}\), _left and right \(\mathfrak{i}\)-homotopies_ from \(\zeta_{1}\) to \(\zeta_{2}\) are choices of dotted \(\mathscr{X}\)-morphisms respectively making I, II commute in Write \(\zeta_{1}\leadsto_{\mathfrak{i}}\zeta_{2}\) to denote a (left or right) \(\mathfrak{i}\)-homotopy from \(\zeta_{1}\) to \(\zeta_{2}\) or the existence of such an \(\mathfrak{i}\)-homotopy. Say that the dotted right \(\mathfrak{i}\)-homotopy on the right side is _relative_ a morphism \(\iota:o\to o_{1}\) if additionally III commutes for \(\zeta=\zeta_{1}\) or equivalently for \(\zeta=\zeta_{2}\). We will repeatedly use the formal fact that an \(\mathfrak{i}\)-homotopy (relative a \(\mathscr{X}\)-morphism \(\zeta\) to \(o_{1}\)) between a pair of parallel \(\mathscr{X}\)-morphisms \(\zeta_{1},\zeta_{2}:o_{1}\to o_{2}\) (whose precomposites with \(\zeta\) coincide) corresponds, naturally in \(\zeta_{1},\zeta_{2}\), to a choice of dotted lift making IV (and V) commute in An \(\mathscr{X}\)-morphism \(\alpha:o_{1}\to o_{2}\) is an \(\mathfrak{i}\)-_equivalence_ if there exists an \(\mathscr{X}\)-morphism \(\beta:o_{2}\to o_{1}\) with \(1_{o_{1}}\leadsto_{\mathfrak{i}}\beta\alpha\) and \(1_{o_{2}}\leadsto_{\mathfrak{i}}\alpha\beta\). Define the interval object \(\mathfrak{i}_{n}\), informally the \(n\)-fold zig-zag of \(\mathfrak{i}\), by \(\mathfrak{i}_{0}=\mathfrak{i}\) and the following commutative diagrams among which the first is co-Cartesian: An \(\mathfrak{i}_{*}\)_-homotopy_ is an \(\mathfrak{i}_{n}\)-homotopy for some \(n\).
Write \(\zeta_{1}\leftrightsquigarrow_{\mathfrak{i}}\zeta_{2}\) to denote an \(\mathfrak{i}_{*}\)-homotopy from \(\zeta_{1}\) to \(\zeta_{2}\) or the existence of such an \(\mathfrak{i}_{*}\)-homotopy. In other words, \(\leftrightsquigarrow_{\mathfrak{i}}\) is the congruence on \(\mathscr{X}\) generated by the relation \(\leadsto_{\mathfrak{i}}\) on morphisms. An \(\mathfrak{i}_{*}\)_-equivalence_ is an \(\mathfrak{i}_{n}\)-equivalence for some \(n\), or equivalently an \(\mathscr{X}\)-morphism representing an isomorphism in the quotient category \(\mathscr{X}/\!\leftrightsquigarrow_{\mathfrak{i}}\). **Lemma 4.2**.: _Localization of \(\mathscr{X}\) by the \(\mathfrak{i}_{*}\)-equivalences is given by the quotient functor_ \[\mathscr{X}\to\mathscr{X}/\!\leftrightsquigarrow_{\mathfrak{i}}, \tag{7}\] _for each closed monoidal category \(\mathscr{X}\) with terminal unit and interval object \(\mathfrak{i}\) in \(\mathscr{X}\)._ Proof.: Fix a functor \(F:\mathscr{X}\to\mathscr{Y}\) mapping the \(\mathfrak{i}_{*}\)-equivalences to isomorphisms. Consider a pair of \(\leftrightsquigarrow_{\mathfrak{i}}\)-equivalent \(\mathscr{X}\)-morphisms \(\alpha,\beta:o_{1}\to o_{2}\). Then there exists \(n\gg 0\) and \(\eta_{n}:\alpha\leadsto_{\mathfrak{i}_{n}}\beta\). In the diagram the left triangle commutes because \(\delta_{\pm}\) both admit \(\sigma\) as a retraction and hence the two solid diagonal morphisms, isomorphisms, admit a common retraction and hence coincide. The top and bottom triangles commute by our choice of \(\eta_{n}\). The right triangle, degenerate, commutes. Thus the outer square commutes and hence \(F\alpha=F\beta\). Thus \(F\) factors through the quotient functor (7). Let \([o_{1},o_{2}]_{\mathfrak{i}}=\mathscr{X}(o_{1},o_{2})/\!\leftrightsquigarrow_{\mathfrak{i}}\), the \(\mathbf{Set}\)-coequalizer A natural transformation \(\mathfrak{i}^{\prime}\to\mathfrak{i}^{\prime\prime}\) of interval objects implies that \[\operatorname{graph}\left(\leadsto_{\mathfrak{i}^{\prime}}\right)\subset\operatorname{graph}\left(\leadsto_{\mathfrak{i}^{\prime\prime}}\right).\] **Example 4.3**.: We have the following chain \[\operatorname{graph}\left(\leadsto_{\mathfrak{i}_{0}}\right)\subset\operatorname{graph}\left(\leadsto_{\mathfrak{i}_{1}}\right)\subset\operatorname{graph}\left(\leadsto_{\mathfrak{i}_{2}}\right)\subset\cdots\subset\operatorname{graph}\left(\leftrightsquigarrow_{\mathfrak{i}}\right)\] for each interval object \(\mathfrak{i}\) in a cocomplete closed monoidal category.
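For orientation, here is a sketch of how these abstract notions specialize, under the standard reading of diagrams I and II and with notation (\(h\)) introduced only for illustration. Take \(\mathscr{X}=\mathbf{Top}\) with the Cartesian monoidal structure and the interval object of Example 4.1. A left \(\mathfrak{i}\)-homotopy from \(\zeta_{1}\) to \(\zeta_{2}:o_{1}\to o_{2}\) then unwinds to a continuous map
\[h:o_{1}\times\mathbb{I}\to o_{2},\qquad h(-,0)=\zeta_{1},\qquad h(-,1)=\zeta_{2},\]
and an \(\mathfrak{i}_{n}\)-homotopy unwinds to a finite chain \(\zeta_{1}=\alpha_{0},\alpha_{1},\dots,\alpha_{m}=\zeta_{2}\) in which consecutive morphisms are related by \(\leadsto_{\mathfrak{i}}\) in alternating directions; in this way \(\leftrightsquigarrow_{\mathfrak{i}}\) recovers the usual homotopy relation on each hom-set (compare Example 4.4 below).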
Define the interval object \(\mathfrak{d}\) by the commutative diagram **Example 4.4**.: The interval object defining classical homotopy [Example 4.1] is \[|\mathfrak{d}|:\square_{1}\to\mathbf{Top}.\] The different homotopies in the classical setting coincide: \(|\mathfrak{d}|\cong|\mathfrak{d}|_{1}\cong|\mathfrak{d}|_{2}\cong\cdots\) and \[\leadsto_{|\mathfrak{d}|}=\leadsto_{|\mathfrak{d}|_{1}}=\leadsto_{|\mathfrak{d}|_{2}}=\cdots=\leftrightsquigarrow_{|\mathfrak{d}|}.\] We recall and compare homotopy theories based on the interval objects \(\mathfrak{d}\), \(|\mathfrak{d}|\), \(|\mathfrak{d}|\) [Example 4.1], \(\mathfrak{h}=(\mathbf{Top}\hookrightarrow\mathbf{DiTop})|\mathfrak{d}|\), \(\mathrm{T}_{1}\mathfrak{d}:\square_{1}\hookrightarrow\mathbf{Cat}\), \(\Pi_{1}\mathfrak{d}\).

### Continuous

We recall some homotopy theories for the continuous setting. Let \(\pi_{0}X\) denote the set, natural in topological spaces \(X\), of path-components in \(X\).

#### 4.2.1. Classical

We have the natural identification \[[X,Y]_{|\mathfrak{d}|}=\pi_{0}Y^{X}.\] A continuous function \(f:X\to Y\) is a classical weak equivalence if \[\pi_{0}f^{|C|}:\pi_{0}X^{|C|}\cong\pi_{0}Y^{|C|}\] for all cubical sets \(C\). The classical weak equivalences and maps having the right lifting property against all maps of the form \(|\square[\delta_{+}\otimes 1_{[1]^{n}}]|:\mathbb{I}^{n}\to\mathbb{I}^{n+1}\) define the weak equivalences and fibrations of the _q-model structure_ on \(\mathbf{Top}\).

#### 4.2.2. Directed

We can make, by cocontinuity of \(|-|\), the identifications \[|\mathfrak{d}_{n}|=|\mathfrak{d}|_{n},\quad n=0,1,\ldots.\] A \(|\mathfrak{d}|_{*}\)-homotopy is sometimes referred to in the literature as a _d-homotopy_ (e.g. [30]). Intuitively, a d-homotopy is a homotopy through stream maps that is additionally piecewise monotone and anti-monotone in its homotopy coordinate. The following natural convexity structure on directed hypercubes makes it possible to construct d-homotopies. **Lemma 4.5**.: _There exists a \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy between both projections of the form_ \[\left\lvert\square[1]^{n}\right\rvert^{2}\rightarrow\left\lvert\square[1]^{n}\right\rvert\] _natural in \(\square\)-objects \([1]^{n}\)._ Proof.: Let \(\pi_{1;n}\) and \(\pi_{2;n}\) denote the projections \[\left\lvert\square[1]^{n}\right\rvert^{2}\rightarrow\left\lvert\square[1]^{n}\right\rvert\] onto first and second factors, respectively. Linear interpolation defines \(\left\lvert\mathfrak{d}\right\rvert\)-homotopies \[\pi_{1;n}\wedge_{\left\lvert\square[1]^{n}\right\rvert}\pi_{2;n}\leadsto_{\left\lvert\mathfrak{d}\right\rvert}\pi_{1;n},\pi_{2;n}\] natural in \(\square\)-objects \([1]^{n}\) because \(\left\lvert\square[-]\right\rvert\colon\square\rightarrow\mathbf{DiTop}\) sends each \(\square\)-morphism to a linear map of hypercubes that defines a lattice homomorphism between compact Hausdorff connected topological lattices in \(\mathbf{Pos}\). Concatenating these \(\left\lvert\mathfrak{d}\right\rvert\)-homotopies yields the desired \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy. A simple consequence is that \(\epsilon_{C}:\mathfrak{so}_{3}C\to C\) defines a natural cubical approximation to \(\varphi_{C;3}\).
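Concretely, one possible choice of the linear interpolation invoked in the proof of Lemma 4.5 (a sketch; the proof only asserts that such an interpolation exists, and the map \(H\) below is illustrative notation) is, in the coordinates of Example 3.25,
\[H:\vec{\mathbb{I}}^{n}\times\vec{\mathbb{I}}^{n}\times\vec{\mathbb{I}}\to\vec{\mathbb{I}}^{n},\qquad H(x,y,t)=(1-t)\,(x\wedge y)+t\,x,\]
where \(\wedge\) denotes the coordinatewise minimum. Then \(H(-,-,0)=\pi_{1;n}\wedge_{\left\lvert\square[1]^{n}\right\rvert}\pi_{2;n}\) and \(H(-,-,1)=\pi_{1;n}\), each coordinate of \(H\) is continuous and monotone in \(x\), \(y\), and \(t\) (monotone in \(t\) because \(x_{i}\geqslant\min(x_{i},y_{i})\)), and monotonicity in \(t\) reflects the directedness of the homotopy coordinate described above; the analogous formula with \(y\) in place of the final \(x\) gives the interpolation onto \(\pi_{2;n}\).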
**Lemma 4.6**.: _There exists a \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy_ \[\left\lvert\epsilon_{C}\right\rvert\leadsto_{\left\lvert\mathfrak{d}_{1}\right\rvert}\varphi_{C;3}:\left\lvert\mathfrak{so}_{3}C\right\rvert\rightarrow\left\lvert C\right\rvert\] _natural in cubical sets \(C\)._ Proof.: There exists the desired \(\left\lvert\mathfrak{d}_{1}\right\rvert\)-homotopy natural in representable cubical sets \(C\) [Lemma 4.5] and hence natural in general cubical sets \(C\) by naturality of \(\left\lvert\epsilon_{C}\right\rvert\) and \(\varphi_{C;3}\). Nearby stream maps to directed realizations are \(\left\lvert\mathfrak{d}_{*}\right\rvert\)-homotopic. **Lemma 4.7**.: _There exists a \(\left\lvert\mathfrak{d}_{*}\right\rvert\)-homotopy between stream maps_ \[f,g:X_{(f,g)}\rightarrow\left\lvert\mathfrak{so}_{9}C_{(f,g)}\right\rvert,\] _natural in objects \(f\times g\) in the full subcategory of \((\mathbf{Str}/\left\lvert\mathfrak{so}_{9}-\right\rvert^{2})\) consisting of those objects \(f\times g:X_{(f,g)}\rightarrow\left\lvert\mathfrak{so}_{9}C_{(f,g)}\right\rvert^{2}\) for which \(X_{(f,g)}\) is covered by open substreams each of which has images under \(f\) and \(g\) that lie in the open star of the same vertex._ Proof.: For a stream map \(e:X\rightarrow\left\lvert\mathfrak{so}_{9}C\right\rvert\) and substream \(U\subset X\), let \[e_{U}=e(U\hookrightarrow X):U\rightarrow\left\lvert\mathfrak{so}_{9}C\right\rvert.\] Let \(\mathscr{X}\) denote the category defined by the lemma. Let \(f\times g:X_{(f,g)}\rightarrow\left\lvert\mathfrak{so}_{3}^{2}C_{(f,g)}\right\rvert^{2}\) denote an \(\mathscr{X}\)-object. Let \(\mathscr{O}_{(f,g)}\) be the category whose objects are all substreams of \(X_{(f,g)}\) whose images under \(f\) and \(g\) lie in the open star of the same vertex and whose morphisms are all inclusions between such substreams. Consider a commutative square of the form in which the vertical arrows are \(\mathscr{X}\)-objects. The image of each \(\mathscr{O}_{(f_{1},g_{1})}\)-object \(U\) under the top horizontal stream map is an \(\mathscr{O}_{(f_{2},g_{2})}\)-object because the bottom horizontal stream map, the directed realization of a cubical function, maps open stars of vertices into open stars of vertices. It is in this sense that the subcategory \(\mathscr{O}_{(f,g)}\) of \(\mathbf{DiTop}\) is natural in \(\mathscr{X}\)-objects \(f\times g\). Let \(U\) denote an \(\mathscr{O}_{(f,g)}\)-object. Thus \(f_{U},g_{U}\) corestrict to directed realizations of closed stars in \(\mathfrak{so}_{9}C_{(f,g)}\) [Proposition 3.26].
Define \(\mathscr{R}\)-morphisms \(\theta_{f;k}\) and \(\rho_{f;k}\), natural in \(f\), by commutative diagrams There exists a unique dotted stream map \(f_{k}\), natural in \(f\) by uniqueness, making the following diagram, in which \(f_{I}\) is a suitable restriction and corestriction of the composite of the bottom row, commute: (8) _convexity structure on \(|C^{*}_{f;k}|\)_: Let \(\pi_{f;k;1}\) and \(\pi_{f;k;2}\) denote the respective projections \[\pi_{f;k;1},\pi_{f;k;2}:|C^{*}_{f;k}|^{2}\rightarrow|C^{*}_{f;k}|\] onto first and second factors. Define \(s_{f;k}\) by the commutative diagram (9) natural in \(f\). For each \(x\in|C^{*}_{f;k}|\), \(s_{f;k}(x)\) and \(x\) both lie in the same closed cell in \(|C^{*}_{f;k}|\), the directed realization of an atomic subpresheaf of \(C^{*}_{f;k}\) and hence the directed realization of a representable up to isomorphism by our assumption on \(C\). Thus there exists a \(|\mathfrak{d}_{1}|\)-homotopy \(s_{f;k}\leftrightsquigarrow_{|\mathfrak{d}|}1_{|C^{*}_{f;k}|}\) natural in \(f\) [Lemma 4.5]. The stream maps \(\pi_{f;k;1}s_{f;k},\pi_{f;k;2}s_{f;k}\) both naturally factor through \(|\square[1]^{n_{f}}|\). Thus there exists a \(|\mathfrak{d}_{1}|\)-homotopy \(\pi_{f;k;1}s_{f;k}\leftrightsquigarrow_{|\mathfrak{d}|}\pi_{f;k;2}s_{f;k}\) natural in \(f\) [Lemma 4.5].
Concatenating the \(|\mathfrak{d}_{1}|\)-homotopies \[\pi_{f;k;1}\leftrightsquigarrow_{|\mathfrak{d}|}\pi_{f;k;1}s_{f;k}\leftrightsquigarrow_{|\mathfrak{d}|}\pi_{f;k;2}s_{f;k}\leftrightsquigarrow_{|\mathfrak{d}|}\pi_{f;k;2}\] yields a \(|\mathfrak{d}|_{3}\)-homotopy \(h^{*}_{f;k}:\pi_{f;k;1}\leftrightsquigarrow_{|\mathfrak{d}|}\pi_{f;k;2}\) natural in \(f\). _constructing the requisite directed homotopy_: Consider the solid arrows in the diagram (10) The top triangle commutes by construction of \(h^{*}_{f;k}\) and the left triangle commutes up to \(|\mathfrak{d}|_{*}\)-homotopy.
\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, \mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\,\mathord{\restriction}\, 
#### 4.3.1. Classical Call the localization of \(\hat{\square}\) by the classical weak equivalences the _classical homotopy category of \(\hat{\square}\)_. Call the fibrant cubical sets in the test model structure simply _fibrant_. The fundamental groupoid \(\Pi_{1}\) is a classical homotopy invariant in the sense of the following proposition, whose proof is given at the end of SS4.5.1. **Proposition 4.10**.: _For each classical weak equivalence \(\psi:A\to B\) of cubical sets,_ \[\Pi_{1}\psi:\Pi_{1}A\to\Pi_{1}B\] _is a categorical equivalence._ As a consequence, cubical nerves of small groupoids are fibrant (cf. Proposition 3.36.) **Corollary 4.11**.: _For each small groupoid \(\mathcal{G}\), \(\mathfrak{ner}\,\mathcal{G}\) is fibrant._ Proof.: Consider the solid functors in the left of the diagrams Suppose \(A\hookrightarrow B\) is an acyclic cofibration in the test model structure. There exists a dotted functor \(\phi\) making the entire right diagram commute by \(\Pi_{1}(A\hookrightarrow B)\) a faithful equivalence of small categories. Therefore there exists a dotted functor making the left diagram commute. Classical weak equivalences and monos form the respective weak equivalences and cofibrations of a model structure on presheaves over the minimal variant of \(\square\). In this model structure, the _set_ of inclusions (2) generates the acyclic cofibrations [12, SS8.4.34]. It therefore follows that each inclusion (2) is an acyclic cofibration in the test model structure on \(\hat{\square}\). For each fibrant cubical set \(C\) having vertex \(v\), let \[\pi_{n}(C,v)=\pi_{0}\Omega^{n}(C,v).\] The set \(\pi_{n+1}(C,v)\) naturally admits the extra structure of a group whose operations come from the right lifting properties of the fibration \(C\to\star\) against (2). The groups \(\pi_{1}(C,v),\pi_{2}(C,v),\dots\) are analogous to combinatorial homotopy groups on Kan simplicial sets [41]. **Example 4.12**.: For each group \(G\), \(\pi_{n}(\mathfrak{ner}\,G,\star)=\begin{cases}G&n=1\\ 0&n\neq 1\end{cases}\). Write \(H^{1}(C;\pi)\) for classical cubical 1-cohomology \[H^{1}(C;\pi)=[C,\mathfrak{ner}\,\pi]_{\mathfrak{d}}=\pi_{0}(\mathfrak{ner}\,\pi)^{C},\] an Abelian group natural in Abelian groups \(\pi\) and cubical sets \(C\). Classical cubical 1-cohomology sends classical weak equivalences to isomorphisms by \(\mathfrak{ner}\,\pi\) fibrant [Corollary 4.11]. The higher cubical cohomology groups are obtained by generalizing the cubical nerve of a (discrete cubical) Abelian group \(\pi\) to a suitable iterated fibrant cubical delooping construction \(W^{n}\pi\) (cf. [41].) #### 4.3.2. Directed Just as classical cubical homotopy theory is the \(\mathfrak{d}\)-homotopy theory of fibrant cubical sets, we can take _directed cubical homotopy_ to mean the \(\mathfrak{d}\)-homotopy theory of cubcats. This cubical directed theory extends classical cubical homotopy theory by the following proposition, whose proof is given in SS4.5.2. **Proposition 4.13**.: _For each fibrant cubical set \(B\), there exists a monic cubical function_ \[B\to C\] _and retraction \(\rho:C\to B\) such that \(1_{C}\rightsquigarrow(B\hookrightarrow C)\rho\)._ We can generalize \(\pi_{n}\) as follows.
For each cubcat \(C\) having vertex \(v\), let \[\tau_{n}(C,v)=\pi_{0}\Omega^{n}(C,v).\] The set \(\tau_{n+1}(C,v)\) admits the extra structure of a monoid whose products are induced by \(\infty\)-fold compositions on \(C\) compatible with the extension of \(C\) to \(\blacksquare\)\({}^{\operatorname{op}}\). **Example 4.14**.: For each monoid \(M\), \(\tau_{n}(\operatorname{\mathfrak{n}\mathfrak{e}\mathfrak{r}}M,\star)=\begin{cases} M&n=1\\ 0&n\neq 1\end{cases}\). Extend first classical cohomology to a first _directed \(1\)-cohomology_ \[H^{1}(C;\tau)=[C,\operatorname{\mathfrak{n}\mathfrak{e}\mathfrak{r}}\tau]_{ \mathfrak{d}}=\pi_{0}(\operatorname{\mathfrak{n}\mathfrak{e}\mathfrak{r}} \tau)^{C},\] a commutative monoid natural in commutative monoids \(\tau\) and cubical sets \(C\). **Example 4.15**.: For a cubical model \(\square[1]/\partial\square[1]\) of the circle, \[H^{1}(\square[1]/\partial\square[1];\tau)=[\mathbb{N},\tau]_{\operatorname{ T}_{1}\mathfrak{d}}=\tau\,/_{\equiv}\] where \(\equiv\) is the smallest monoid congruence on \(\tau\) equating two elements if they coincide after adding a common element to both of them. This congruence \(\equiv\) is trivial precisely when \(\tau\) is cancellative. Group-completion induces a monoid homomorphism \[H^{1}(C;\tau\to\tau[\tau]^{-1}):H^{1}(C;\tau)\to H^{1}(C;\tau[\tau]^{-1})\] from directed cohomology to classical cohomology, natural in commutative monoid coefficients \(\tau\). Directed \(1\)-cohomology and this natural comparison homomorphism generalize to higher \(n>1\) by representing \(H^{n}(-;\tau)\) with a suitable iterated delooping construction \(W^{n}\tau\) on the (discrete cubical) commutative monoid \(\tau\). ### Algebraic We recall three homotopy theories on the category \(\mathbf{Cat}\) of small categories and functors between them, in order of increasing refinement. All three of these homotopy theories coincide on the full subcategory \(\mathbf{Gpd}\) of small groupoids. #### 4.4.1. Classical The class of _Thomason weak equivalences_ is the smallest retract-closed class \(\mathscr{W}\) of \(\mathbf{Cat}\)-morphisms having the \(2\)-out-of-\(3\) property and containing all terminal functors such that a functor \(\alpha:\mathcal{X}\to\mathcal{Y}\) lies in \(\mathscr{W}\) whenever the induced functor \(\beta\alpha/o\to\beta/o\) lies in \(\mathscr{W}\) for each functor \(\beta:\mathcal{Y}\to\mathcal{Z}\) and \(\mathcal{Z}\)-object \(o\)[11, Theorem 2.2.11]. The localization of \(\mathbf{Cat}\) by the Thomason weak equivalences exists [65] and will be referred to as the _classical homotopy category of \(\mathbf{Cat}\)_. **Example 4.16**.: A sufficient and intrinsic condition for a \(\mathbf{Cat}\)-morphism \[\zeta:\mathcal{X}\to\mathcal{Y}\] to be a Thomason weak equivalence is if \(o/\zeta\) has a terminal object for each \(\mathcal{X}\)-object \(o\) by Quillen's Theorem A. It is difficult to give a complete characterization of the Thomason weak equivalences that is at once explicit and intrinsic, at least without reference to the simplex category \(\Delta\) (cf. [16].) We write \(h(\mathbf{Cat})\) and \(h(\mathbf{Gpd})\) for the respective localizations of \(\mathbf{Cat}\) and \(\mathbf{Gpd}\) by their Thomason weak equivalences. Thomason weak equivalences can be defined more generally for \(n\)-fold functors between \(n\)-fold categories. 
These weak equivalences, part of Thomason model structures on categories of \(n\)-fold small categories for each \(n=1,2,\ldots\)[22], model classical homotopy theory in terms of strict (higher) categorical structure. #### 4.4.2. Directed Let \(\mathrm{T}_{1}\mathfrak{d}_{n}\) denote the interval object \[\mathrm{T}_{1}\mathfrak{d}_{n}=\mathrm{T}_{1}(\mathfrak{d}_{n})=(\mathrm{T}_{ 1}\mathfrak{d})_{n}:\square_{1}\to\mathbf{Cat}.\] In particular, \(\mathrm{T}_{1}\mathfrak{d}\) is the cannonical interval object \[\mathrm{T}_{1}\mathfrak{d}:\square_{1}\hookrightarrow\mathbf{Cat}.\] The homotopy theory in which weak equivalences are the \((\mathrm{T}_{1}\mathfrak{d})_{*}\)-equivalences [56], as well a slightly weaker homotopy theory [36] in which homotopy is defined by a single path object in terms of \(\mathfrak{d}_{1},\mathfrak{d}_{2},\ldots\), have been studied previously. The \((\mathrm{T}_{1}\mathfrak{d})_{*}\)-equivalences, while not the weak equivalences of a model structure, are the weak equivalences of a _\(\Lambda\)-cofibration category_[36] structure on \(\mathbf{Cat}\). While each \((\mathrm{T}_{1}\mathfrak{d})_{*}\)-equivalence is a Thomason weak equivalence, not each Thomason weak equivalence is a \((\mathrm{T}_{1}\mathfrak{d})_{*}\)-equivalence. Write \(d(\mathbf{Cat})\) for the quotient of \(\mathbf{Cat}\) by the congruence relation \(\rightsquigarrow_{\mathrm{T}_{1}\mathfrak{d}}\). **Example 4.17**.: For parallel \(\mathbf{Cat}\)-morphisms \(\alpha,\beta\), a \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy \[\alpha\leadsto\beta\] is exactly a natural transformation \(\alpha\to\beta\). In particular, a (left or right) adjoint in \(\mathbf{Cat}\) is a \(\mathrm{T}_{1}\mathfrak{d}_{1}\)-equivalence. **Example 4.18**.: Consider a functor of small categories \[F:\mathcal{X}\to\mathcal{Y}.\] The functor \(F\) is sometimes referred to as a _future equivalence_[31] if \(F\) is a \(\mathrm{T}_{1}\mathfrak{d}\)-equivalence and a _past equivalence_[31] if \(F\) is a \((\mathrm{T}_{1}\mathfrak{d})^{\mathrm{op}}\)-equivalence. Future equivalences and past equivalences preserve certain properties of interest in state space analyses, such as terminal objects and initial objects respectively [27]. #### 4.4.3. Categorical There exist natural isomorphisms \[\Pi_{1}\mathfrak{d}\cong\Pi_{1}(\mathfrak{d}_{n})=(\Pi_{1}\mathfrak{d})_{n},\quad n=0,1,2,\ldots\] A categorical equivalence between small categories is exactly a \(\Pi_{1}\mathfrak{d}\)-equivalence. Every categorical equivalence is a \(\mathrm{T}_{1}\mathfrak{d}\)-equivalence because localization defines a natural transformation \(\mathrm{T}_{1}\mathfrak{d}\to\Pi_{1}\mathfrak{d}\). ### Comparisons The different homotopy theories can be compared. The classical homotopy theories of small categories, cubical sets, simplicial sets, and topological spaces are all equivalent to one another. The directed homotopy theories of cubical sets and streams are equivalent to one another, with the directed homotopy theory of small categories acting as a special case. #### 4.5.1. Classical We can compare different classical homotopy theories. **Proposition 4.19**.: _Topological realization defines the left map of a Quillen equivalence_ \[|-|:\hat{\Box}\leftrightarrow\mathbf{Top}\] _between \(\hat{\Box}\) equipped with its test model structure and \(\mathbf{Top}\) equipped with its q-model structure._ A simple consequence is a cubical description of homotopy groups. 
**Corollary 4.20**.: _For each \(n\), the function_ \[\tau_{n}(C,v)\to\tau_{n}(|C|,|v|)\] _induced from the unit of the adjunction with left adjoint \(|-|:\hat{\Box}\to\mathbf{Top}\) is bijective, and in particular is a group isomorphism in the case \(n>0\), for all fibrant cubical sets \(C\) and vertices \(v\in C_{0}\)._ Previously established equivalences [[11, Theorem 2.2.11], [60, SSII.3, SSVI.3.3.1]] and a categorical equivalence between classical homotopy categories of simplicial sets and cubical sets in the sense of this paper [Proposition C.4] imply the following. **Corollary 4.21**.: _The functor \(\mathfrak{ner}\) induces a categorical equivalence_ \[h(\mathbf{Cat})\simeq h(\hat{\Box}).\] proof of Proposition 4.10.: Consider a cubical function \(\psi:A\to B\). The diagram in which \(\Pi_{1}\) in the bottom row denotes the fundamental groupoid of a topological space and the vertical arrows are inclusions of fundamental groupoids induced by topological realization, commutes. The vertical arrows are categorical equivalences because every point in a topological realization is path-connected to a vertex. Thus if \(\psi\) is a classical weak equivalence, \(|\psi|\) is a classical homotopy equivalence [Proposition 4.19], hence \(\Pi_{1}|A|\to\Pi_{1}|B|\) is a categorical equivalence, and hence \(\Pi_{1}\psi:\Pi_{1}A\to\Pi_{1}B\) is a categorical equivalence. #### 4.5.2. Directed We now give the main results. **Theorem 4.22**.: _There exist \(\left\lvert\mathfrak{hol}\right\rvert_{*}\)-homotopies_ \[\left\lvert\mathfrak{hol}\right\rvert\,\epsilon_{\left\lvert C\right\rvert}\;\leftrightsquigarrow_{\left\lvert\mathfrak{hol}\right\rvert_{*}}\;1_{\left\lvert\mathfrak{sing}\,\left\lvert C\right\rvert\right\rvert}\] _natural in cubical sets \(C\)._ Proof.: Write \(S\) for \(\mathfrak{sing}\). Write \(\eta^{\prime},\epsilon^{\prime}\) for the respective unit and counit of \(\left\lvert-\right\rvert\dashv\mathfrak{sing}\). Let \(\mathscr{R}\) denote the full subcategory of \(\hat{\Box}\) consisting of cubical sets whose atomic subpresheaves are all isomorphic to representables. Consider the solid stream maps in the diagram Let \(\theta\) denote a \((\square/S\left\lvert-\right\rvert)\)-object. For \(k\gg 0\), \(\mathfrak{adj}(\theta)\varphi_{\square[1]^{n_{\theta}};2^{k}}:\cdots\) 1. \(\mathsf{sing}\,f\) _is a_ \(\mathfrak{d}_{*}^{\mathscr{G}}\)_-equivalence_ 2. \(\left\lvert\mathsf{sing}\,f\right\rvert\) _is a_ \(\left\lvert\mathfrak{d}\right\rvert^{\mathscr{G}}\)_-equivalence_ Proof.: Let \(\eta\) denote the unit of the adjunction \(\left\lvert-\right\rvert^{\mathscr{G}}\dashv\mathsf{sing}^{\mathscr{G}}\).
Proof.: If (3) then \([\mathfrak{ner}\,\zeta,C]_{\mathfrak{d}^{\mathscr{G}}}\) is a bijection for each \(\mathscr{G}\)-cubcat \(C\) and hence (2) [Corollary 4.25] because cubical nerves are cubcats [Proposition 3.36]. If (2) then (1) because \(\mathrm{T}_{1}^{\mathscr{G}}\) sends \(\mathfrak{d}_{*}^{\mathscr{G}}\)-equivalences to \((\mathrm{T}_{1}\mathfrak{d})_{*}^{\mathscr{G}}\)-equivalences. If (1) then (3) because \(\left\lvert\mathfrak{ner}\,-\right\rvert^{\mathscr{G}}\) sends \((\mathrm{T}_{1}\mathfrak{d})_{*}^{\mathscr{G}}\)-equivalences to \(\left\lvert\mathfrak{d}\right\rvert_{*}^{\mathscr{G}}\)-equivalences. Our main result, when specialized for the case \(\mathscr{G}=\star\) of trivial diagrams, is a directed analogue of the classical Quillen equivalence between cubical sets and topological spaces. Recall that a class \(\mathscr{W}\) of morphisms in a category \(\mathscr{X}\) for which the localization \(\mathscr{X}[\mathscr{W}^{-1}]\) exists is _saturated_ if it coincides with the isomorphisms in the localization \(\mathscr{X}[\mathscr{W}^{-1}]\) of \(\mathscr{X}\) by \(\mathscr{W}\). **Corollary 4.27**.: _There exist dotted localizations in the diagram_ _by the following respective saturated classes of morphisms: the \(\mathrm{T}_{1}\mathfrak{d}^{\mathscr{G}}\)-equivalences; those \(\mathscr{G}\)-cubical functions \(\psi\) for which \(\left\lvert\psi\right\rvert^{\mathscr{G}}\) is a \(\left\lvert\mathfrak{d}\right\rvert^{\mathscr{G}}\)-equivalence; and those \(\mathscr{G}\)-stream maps \(f\) for which \(\mathsf{sing}\,f\) is a \(\mathfrak{d}_{*}^{\mathscr{G}}\)-equivalence. There exist dotted horizontal functors making the entire diagram commute up to natural isomorphism with the left dotted horizontal functor a fully faithful embedding and the right dotted horizontal functor an adjoint categorical equivalence.
A \(\mathscr{G}\)-cubical function \(\psi\) represents an isomorphism in \(d(\hat{\Box}^{\mathscr{G}})\) if and only if \([\psi,C]_{\mathfrak{d}^{\mathscr{G}}}\) is a bijection for all cubcats \(C\)._ Proof.: Let \(\hat{d}(\hat{\Box}^{\mathscr{G}})\) and \(\hat{d}(\mathbf{DiTop}^{\mathscr{G}})\) denote the corresponding quotient categories.
**Corollary 4.28**.: _The unit of the adjunction_ \(\left\lvert-\right\rvert\dashv\mathsf{sing}\) \(\cdots\)
**Example 4.33**.: Fix a cancellative commutative monoid \(\tau\). Then \[H^{1}(T;\tau)=\tau^{2},\] where \(T\) is the unique underlying stream of a time-oriented Lorentzian torus [Figure 2], by identifying \(T\) with the directed realization of \((\square[1]/\partial\square[1])^{\otimes 2}\), whose fundamental category is \(\mathbb{N}^{2}\). **Example 4.34**.: Fix a cancellative commutative monoid \(\tau\). Then \[H^{1}(K;\tau)=\tau\times_{2\tau}\tau,\] where \(K\) is the unique underlying stream of a time-oriented Lorentzian Klein bottle [Figure 2], by identifying \(K\) with the directed realization of the quotient \(C\) of \(\square[1]^{2}\) by the smallest equivalence relation identifying \(\delta_{\pm 1;2}\) with \(\delta_{\mp 2;2}\) and calculating \(\operatorname{T}_{1}C\) to be the monoid \(\langle x,y\mid x^{2}=y^{2}\rangle\). Small categories are insufficient for modelling all directed homotopy types. **Example 4.35**.: For each \(n>1\) and small category \(\mathcal{X}\), every stream map \[\mathopen{|}\square[1]^{n}/\partial\square[1]^{n}\mathclose{|}\to\mathopen{|}\mathfrak{ner}\,\mathcal{X}\mathclose{|}\] is \(\leftrightsquigarrow_{\mathopen{|}\mathfrak{d}\mathclose{|}}\)-homotopic to a constant stream map. It therefore follows that higher directed spheres \(\mathopen{|}\square[1]^{n}/\partial\square[1]^{n}\mathclose{|}\) do not have the h-homotopy type, much less the d-homotopy type, of directed realizations of cubical nerves of small categories. Intuitively, the cubical model \(\square[1]^{n}/\partial\square[1]^{n}\) of a directed sphere presents a cubcat freely generated by a single \(n\)-cell between a single vertex. In fact, these higher directed spheres likely do not have the h-homotopy type of directed realizations of cubical models of \((1,\infty)\)-categories [9, 17]. Thus directed homotopy types encode higher categories, albeit up to directed homotopy, more general than \((1,\infty)\)-categories (cf. [18]). The theorem in SS1 follows from Corollary 4.27, Example 4.35, and equivalent formulations of the classical homotopy category. ## 5. Conclusion Much early work in directed homotopy theory went into generalizing categorical equivalences between groupoids to notions of equivalences between small categories that preserve computational behavior of interest (eg. [21, 26, 27] and [Example 4.18]). These directed equivalences are stronger than \(\operatorname{T}_{1}\mathfrak{d}_{*}\)-equivalences but weaker than categorical equivalences. Unfortunately, these directed equivalences have poor formal properties compared to \(\operatorname{T}_{1}\mathfrak{d}_{*}\)-equivalences; \(\mathbf{Cat}\) admits a localization with respect to the latter but not the former.
The typical application was to capture the behavior of executions in a concurrent program having a directed state space \(X\) by computing a minimal model of \(\operatorname{T}_{1}\mathsf{sing}X\) with respect to the relevant notion of directed equivalence. It is in this sense that many original applications of directed homotopy were \(1\)-categorical in nature, albeit up to generalizations of \(1\)-categorical equivalence. It is also in this sense that later applications have often [23, 62] been \((1,\infty)\)-categorical in nature. For example, more subtle computational behavior of executions in a concurrent program having directed state space \(X\) appears in the properties of _Moore path categories_ on \(X\), topological categories in which the morphisms form spaces of directed paths on \(X\). In fact, directed state spaces \(X\) have sometimes been _defined_ as topological categories of some sort [23]. Moore path categories on directed spaces in the applications have minimal models with tractable descriptions [62] and are in fact conjectured to model all \((1,\infty)\) categories [18]. Unfortunately, the class of stream maps preserving the relevant \((1,\infty)\)-categories of interest also have poor formal properties compared to stream maps \(f\) for which \(\mathsf{sing}\,f\) are \(\mathfrak{d}_{*}\)-equivalences; **DiTop** admits a localization with respect to the latter but likely not the former. Recent years have seen computations modelled abstractly by homotopy types. A (dependently) typed higher order programming language for reversible computations has been shown to admit semantics in \(\infty\)-groupoids (fibered over other \(\infty\)-groupoids) [3]. Objects represent states, 1-morphisms represent reversible executions, and higher order morphisms represent reversible transformations of those executions, or equivalently, concurrent executions of sequential computations. Since \(\infty\)-equivalences between \(\infty\)-groupoids ignore differences like subdivisions, state space reduction is built into the very syntax of the language. This language, higher order, can thus be used to reason efficiently about computations expressed in the same language. The recent literature has seen extensions of (dependent) type theory to synthetic theories of (fibered) higher categories [50, 64]. These more expressive languages model irreversible computations [50] because the morphisms in higher categories need not be invertible. Ideally, (dependent) type theory can be alternatively extended so that edges in (fibered) cubcats represent computations, higher cubes in (fibered) cubcats represent higher order transformations, and directed homotopy invariance is built into the syntax of the language (cf. [58]). Such a language ought to share both some of the efficiency of automated reasoning within dependent type theory as well as some of the expressiveness of synthetic higher category theory. ## 6. Acknowledgements This work was supported by AFOSR grant FA9550-16-1-0212. The author is grateful to Robert Ghrist for conceiving of and producing the visualizations of conal manifolds behind Figure 2. The author would like to thank Emily Rudman for pointing out some simplifications in earlier proofs of Lemmas 3.8 and 3.9. ## Appendix A Lattices A lattice \(L\) is _modular_ if for all \(x,y,z\in L\), \[(x\wedge_{L}y)\vee_{L}(x\wedge_{L}z)=((x\wedge_{L}y)\vee_{L}z)\wedge_{L}x.\] **Example A.1**.: Distributive lattices are modular. The following _Diamond Isomorphism Theorem_ characterizes modular lattices. 
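Before stating the theorem, a short check of Example A.1 may be worth recording (this one-line derivation is an addition to the text and uses only the distributive law): for all \(x,y,z\) in a distributive lattice \(L\), \[((x\wedge_{L}y)\vee_{L}z)\wedge_{L}x=((x\wedge_{L}y)\wedge_{L}x)\vee_{L}(z\wedge_{L}x)=(x\wedge_{L}y)\vee_{L}(x\wedge_{L}z),\] which is precisely the modular law, so every distributive lattice is indeed modular.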
**Diamond Isomorphism Theorem**.: _The following are equivalent for a lattice \(L\)._ 1. \(L\) _is modular._ 2. _For each_ \(x,y\in L\)_, the rules_ \(x\vee_{L}-\) _and_ \(y\wedge_{L}-\) _define respective bijections_ \([x\wedge_{L}y,y]\cong[x,x\vee_{L}y]\) _and_ \([x,x\vee_{L}y]\cong[x\wedge_{L}y,y]\)_, where_ \([x^{\prime},y^{\prime}]\) _denotes the smallest interval in_ \(L\) _containing_ \(x^{\prime}\) _as its minimum and_ \(y^{\prime}\) _as its maximum._ **Theorem**, [49].: _The following are equivalent for a finite lattice \(L\)._ 1. \(L\) _is distributive_ 2. _For all_ \(x,y,z\in L\) _with_ \(y,z\) _either both immediate successors to_ \(x\) _or both immediate predecessors to_ \(x\) _in_ \(L\)_,_ \(\{y\wedge_{L}z,y\lor_{L}z,y,z\}\) _is a Boolean interval in_ \(L\)_._ 3. _The smallest interval in_ \(L\) _containing Boolean intervals_ \(I,J\) _in_ \(L\) _with_ \(\max\,I=\max\,J\) _or_ \(\min\,I=\min\,J\) _is also Boolean._ **Lemma A.2**.: _For Boolean intervals \(I,J\) in a finite distributive lattice \(L\), the images_ \[I\vee_{L}J,I\wedge_{L}J\] _of \(I\times J\) under \(\vee_{L},\wedge_{L}\) are Boolean intervals in \(L\)._ Proof.: The intervals \(I\vee_{L}\min\,J,J\vee_{L}\min\,I\) are Boolean by the Diamond Isomorphism for Modular Lattices and hence \(I\vee_{L}J\) is Boolean [Theorem, [49]]. Thus \(I\wedge_{L}J\) is also Boolean by duality. While every finite poset, including every finite lattice, is a colimit of its (1-dimensional) Boolean intervals in the category of posets and monotone functions, not every finite lattice is such a colimit _in the category_ **Cat**_._ **Lemma A.3**.: _Every finite distributive lattice is a_ **Cat**_-colimit of its Boolean intervals._ Proof.: Consider a finite distributive lattice \(L\). Let \(\mathcal{X}\) be the **Cat**-colimit \[\mathcal{X}=\operatorname{colim}_{I\to L}I\] over the Boolean intervals \(I\) in \(L\). The object sets of \(\mathcal{X}\) and \(L\) coincide. There exists a relation \(x\leqslant_{L}y\) if and only if there exists a \(\mathcal{X}\)-morphism \(x\to y\) because \(\mathcal{X}\) admits as generators relations of the form \(x\leqslant_{L}y\) with \(y\) an immediate successor to \(x\) in \(L\). Consider parallel \(\mathcal{X}\)-morphisms \(\alpha,\beta:x\to y\). It thus suffices to show \[\alpha=\beta.\] We induct on the length \(k\) of a maximal chain in \(L\) having minimum \(x\) and maximum \(y\). In the base case \(k=1\), \(\alpha,\beta\) are identities and hence \(\alpha=\beta\). Inductively assume that \(\alpha=\beta\) when there exists a maximal chain in \(L\) having minimum \(x\) and maximum \(y\) with length less than \(k\). Consider the case \(k>1\). Then \(\alpha,\beta\) both factor as composites \(x\to a\to y\) and \(x\to b\to y\) with \(a\) and \(b\) both immediate successors to \(x\) in \(L\). Then \(\alpha\) and \(\beta\) are choices of dotted monotone function respectively making the left and bottom triangles commute in a diagram in \(\mathcal{X}\). There exists a dotted morphism making the top and right triangles commute by the inductive hypothesis. The outer square, whose elements form a Boolean interval in \(L\) [Theorem, [49]], commutes in \(\mathcal{X}\). Thus \(\alpha=\beta\). We can now give a proof of Lemma 3.12. proof of Lemma 3.12.: Suppose (1). Let \(I\) be a Boolean interval in \(L\). The restriction of \(\phi\) to \(I\) corestricts to a surjection \(\phi_{I}:I\to J_{I}\) with \(J_{I}\) a Boolean interval in \(M\) because \(\phi\) preserves Boolean intervals. 
The function \(\phi_{I}:I\to J_{I}\), surjective by construction, is a lattice homomorphism by \(I\hookrightarrow L\) and \(J\hookrightarrow M\) both inclusions of sublattices into lattices. Thus (2). Suppose (2). Consider \(x,y\in L\). It suffices to show that \[\phi(x\vee_{L}y)=\phi(x)\vee_{M}\phi(y). \tag{11}\] by double induction on the minimal lengths \(m,n\) of maximal chains in \(L\) having as their extrema \(x\wedge_{L}y\) and, respectively, \(x\) and \(y\). For then \(\phi\) preserves binary suprema, hence also binary infima by duality, and hence \(\phi\) is a lattice homomorphism, mapping Boolean intervals onto Boolean intervals by (2). In the case \(m=1\), \(x\wedge_{L}y=x\), hence \(x\lor_{L}y=y\), hence \(\phi(x)\leqslant_{M}\phi(x\lor_{L}y)=\phi(y)\), and consequently (11). The case \(n=1\) follows by symmetry. Consider the case \(m=n=2\). Then \(x,y,x\wedge_{L}y,x\lor_{L}y\) form the elements of a Boolean interval \(I\) in \(L\) [Theorem, [49]]. Then the restriction of \(\phi\) to \(I\) corestricts to a Boolean interval \(J_{I}\) in \(M\). It therefore follows from (2) and the preservation of finite non-empty suprema and infima by \(I\hookrightarrow L\) and \(J_{I}\hookrightarrow M\) that \[\phi(x\lor_{L}y)=\phi(x\lor_{I}y)=\phi(x)\lor_{J_{I}}\phi(y)=\phi(x)\lor_{M} \phi(y).\] Consider the case \(m\leqslant 2\). Suppose \(n>2\). Then there exists an immediate successor \(y^{\prime}\neq y\) to \(x\wedge_{L}y\) such that \(y^{\prime}\leqslant_{L}y\). Then \(y\wedge_{L}(x\lor_{L}y^{\prime})=(x\wedge_{L}y)\lor_{L}y^{\prime}=y^{\prime}\) by \(L\) distributive and hence the length of a maximal chain in \(L\) having as its extrema \(y\wedge_{L}(x\lor_{L}y^{\prime})\) and \(y\) is strictly less than \(n\). And \(x\wedge_{L}y^{\prime}=x\wedge_{L}y\) and hence the length of a maximal chain in \(L\) having as its extrema \(x\wedge_{L}y^{\prime}\) and \(y^{\prime}\) is \(m=2\). Inductively assume \(\phi(x\lor_{L}y^{\prime})=\phi(x)\lor_{L}\phi(y^{\prime})\) and \(\phi(y\lor_{L}(x\lor_{L}y^{\prime}))=\phi(y)\lor_{M}\phi(x\lor_{L}y^{\prime})\). It therefore follows that \(\phi(x\lor_{L}y)=\phi(x\lor_{L}y^{\prime}\lor_{L}y)=\phi(x\lor_{L}y^{\prime}) \lor_{M}\phi(y)=\phi(x)\lor_{M}\phi(y)\). Then (11) follows from induction on \(n\) for the case \(m\leqslant 2\). Thus (11) holds whenever \(\min(m,n)\leqslant 2\) by symmetry. Consider the general case. To show (11), it suffices to take the case \(m>2\). Then there exists an immediate successor \(x^{\prime}\neq x\) to \(x\wedge_{L}y\) such that \(x^{\prime}\leqslant_{L}x\). Then \(x\wedge_{L}(x^{\prime}\lor_{L}y)=(x\wedge_{L}y)\lor_{L}x^{\prime}=x^{\prime}\) by \(L\) distributive and hence the length of a maximal chain in \(L\) having as its extrema \(x\wedge_{L}(x^{\prime}\lor_{L}y)\) and \(x\) is strictly less than \(m\). And \(x^{\prime}\wedge_{L}y=x\wedge_{L}y\) and hence the length of a maximal chain from \(x^{\prime}\wedge_{L}y\) to \(x\) is \(2\). Inductively assume \(\phi(x\lor_{L}(x^{\prime}\lor_{L}y))=\phi(x)\lor_{M}\phi(x^{\prime}\lor_{L}y)\). Then \(\phi(x\lor_{L}y)=\phi(x)\lor_{L}x^{\prime}\lor_{L}y)=\phi(x)\lor_{M}\phi(x \lor_{L}y^{\prime})=\phi(x)\lor_{M}\phi(y)=\phi(x)\lor_{M}\phi(y)\). Hence (11). Besides the previous observations, the preservation of fully faithful embeddings by **C**at-pushouts[66] is used in the following proof of Proposition 3.13. proof of Proposition 3.13.: Let \(F_{k}\) denote the bottom left Kan extension. Uniqueness follows by the right vertical arrow an inclusion. 
To show existence, it suffices to show \(F_{k}\) preserves **Dis**-objects and **Dis**-morphisms. \(F_{k}\) _preserves_ **Dis**_-objects:_ Let \(I\) denote a Boolean interval in \(L\). Inclusions of the forms \((I\hookrightarrow L)^{[k]}:I^{[k]}\to L^{[k]}\) are fully faithful embeddings. It follows that the natural functor \(F_{k}L\to L^{[k]}\), an iterated pushout of inclusions of the form \((I\to L)^{[k]}\) [Lemma A.3], is a full and faithful embedding and hence can henceforth be regarded as an inclusion of posets. In other words, we can identify \(\mathfrak{so}_{k+1}L\) with the poset of all monotone functions \([k]\to L\) which corestrict to Boolean intervals in \(L\), with partial order \(\leqslant_{\mathfrak{so}_{k+1}L}\) defined by \(\alpha\leqslant_{\mathfrak{so}_{k+1}L}\beta\) if and only if \(\alpha(i)\leqslant_{L}\beta(i)\) for each \(0\leqslant i\leqslant k\). Consider \(\alpha,\beta\in\mathfrak{so}_{k+1}L\). The monotone functions \(\alpha\lor_{L}\beta\) and \(\alpha\wedge_{L}\beta\) corestrict to Boolean intervals in \(L\) [Lemma A.2]. Thus \(F_{k}L\) is a sublattice of the finite distributive lattice \(L^{[k]}\) and hence finite distributive. \(F_{k}\) _preserves_ **Dis**_-morphisms:_ Consider a general **Dis**-morphism \(\phi:L\to M\). To show that \(F_{k}\phi\) is a **Dis**-morphism, it suffices to take the case \(\phi\) a \(\square\)-morphism [Lemma 3.12]. Then \(\phi\) is an iterated a Cartesian monoidal product in **Cat** of \(\delta_{\pm},\sigma\). Then \(\phi^{[k]}\) is an iterated Cartesian monoidal product in **Cat** of \(\delta_{\pm}^{[k]}\) and \(\sigma^{[k]}\) by \((-)^{[k]}\) a right adjoint and hence product-preserving. The functions \(\delta_{\pm}^{[k]}\) and \(\sigma^{[k]}\) are monotone functions to or from a terminal object and hence **Dis**-morphisms. Hence \(\phi\) is a **Dis**-morphism. _last claim:_ In order to show that the natural transformation \(F_{m}\to F_{n}\) induced from \(\phi\) component-wise corestricts to the desired natural transformation \(\mathfrak{so}_{m+1}\to\mathfrak{so}_{n+1}\), it suffices to show that \(J^{\phi}\) is a **Dis**-morphism for each \(\square\)-object \(J\) [Lemma 3.12]. It therefore suffices to take the case \(J=[1]\) because \((-)^{\phi}\) is a Cartesian monoidal natural transformation \(\mathbf{Cat}^{\mathrm{op}}\to\mathbf{Cat}\). In that case, non-singleton Boolean intervals in \(J^{[m]}=[m+1]\) and \(J^{[n]}=[n+1]\) are intervals between elements and their immediate successors. Consider a non-singleton Boolean interval \(I\) in \(J^{[m]}=[1]^{[m]}\). Let \(\zeta_{-}=\min I\) and \(\zeta_{+}=\max I\). Then there exists \(0\leqslant j\leqslant m\) such that \(\zeta_{-}(i)=\zeta_{-}(i+1)=\zeta_{+}(i)=0\) for all \(i<j\), \(\zeta_{+}(i)=1\) for all \(i\geqslant j\), and \(\zeta_{-}(i)=1\) for all \(i>j\). The preimage of \(j\) under \(\phi\) is either a singleton or empty by \(\phi\) injective. In the case that the preimage is empty, then \(\zeta_{-}\phi=\zeta_{+}\phi\). In the case that the preimage contains the unique element \(j^{*}\), then \(\phi(i)<j\) for all \(i<j^{*}\), \(\phi(i)\geqslant j\) for all \(i\geqslant j^{*}\), and consequently \(\zeta_{-}\phi(i)=\zeta_{-}\phi(i+1)=\zeta_{+}(i)=0\) for all \(i<j^{*}\), \(\zeta_{+}\phi(i)=1\) for all \(i\geqslant j^{*}\), \(\zeta_{-}\phi(i)=1\) for all \(i>j^{*}\), and consequently \(\zeta_{+}\phi\) is an immediate successor to \(\zeta_{-}\phi\) in \([1]^{[n]}\). In either case, \(\{\phi\zeta_{-},\phi\zeta_{+}\}\) is a Boolean interval in \([1]^{[n]}\). 
Thus \(J^{\phi}\) is a **Dis**-morphism. ## Appendix B Pro-objects We recall a characterization of the data of pro-morphisms as follows. **Lemma B.1**.: _Fix a category \(\mathscr{X}\). Consider the following data:_ 1. _cofiltered diagrams_ \(X:\mathcal{X}\to\mathscr{X}\) _and_ \(Y:\mathcal{Y}\to\mathscr{X}\)_._ 2. _choices of_ \(\mathcal{X}\)_-object_ \(x_{y}\) _and_ \(\mathscr{X}\)_-morphism_ \(\zeta_{y}:X(x_{y})\to Y(y)\) _for each choice_ \(y\) _of_ \(\mathcal{Y}\)_-object such that for each_ \(\mathcal{Y}\)_-morphism_ \(v:y_{1}\to y_{2}\)_, there exist_ \(\mathcal{X}\)_-morphisms_ \(\chi_{1}:x\to x_{y_{1}}\) _and_ \(\chi_{2}:x\to x_{y_{2}}\)__ _Suppose that for each \(\mathcal{Y}\)-morphism \(v:y_{1}\to y_{2}\), there exist \(\mathcal{X}\)-morphisms \(\chi_{1}:x\to x_{y_{1}}\) and \(\chi_{2}:x\to x_{y_{2}}\) such that the left of the diagrams below commutes. Then there exists a unique \((\mathbf{pro}\)-\(\mathscr{X})\)-morphism \(\zeta:\lim X\to\lim Y\) such that the following diagram, in which the vertical arrows are canonically defined, commutes for each \(\mathcal{Y}\)-object \(y\)._ ## Appendix C Test Categories A _test model structure_ on a presheaf category \(\hat{\bigcirc}\) is a model structure on \(\hat{\bigcirc}\) in which the cofibrations are the monos and the weak equivalences are those \(\hat{\bigcirc}\)-morphisms \(\psi:A\to B\) for which \(\bigcirc/\psi\) are Thomason weak equivalences. A small category \(\hat{\bigcirc}\) is a _test category_ if there is a Thomason weak equivalence \(\hat{\bigcirc}\to\star\) and \(\hat{\bigcirc}\) admits a test model structure [Theorem 1.4.3, [12]]. The reader is referred elsewhere [12] for details. Test categories can be recognized by the following criteria. **Proposition, p.86 44(d), [35].**_A small category \(\bigcirc\) is a test category if there exist functors_ \[\zeta:\bigcirc\to\mathbf{Cat},\quad\mathfrak{i}:\mathscr{I}\to\hat{\bigcirc},\] _with \(\mathfrak{i}\) an interval object in \(\hat{\bigcirc}\), satisfying the following:_ 1. \(\bigcirc\to\star\) _is a Thomason weak equivalence_ 2. \(\zeta(o)\) _has a terminal object for each_ \(\bigcirc\)_-object_ \(o\)__ 3. _the equalizer of_ \(\bigcirc\![\mathfrak{i}(\delta_{-})],\bigcirc\![\mathfrak{i}(\delta_{+})]\) _is initial in_ \(\hat{\bigcirc}\)__ 4. \((\bigcirc\!/\mathfrak{i}([1]))\to\star\) _is a Thomason weak equivalence_ 5. _there exists a natural transformation_ \(\zeta_{\mathfrak{i}}\mathfrak{i}\to(\square_{1}\hookrightarrow\mathbf{Cat})\)_, where_ \(\zeta_{\mathfrak{i}}\) _denotes the left Kan extension of_ \(\zeta\) _along the Yoneda embedding_ \(\bigcirc\![-]\)_._ In abstract homotopical parlance [35], conditions (1), (4) require that \(\bigcirc,\mathfrak{i}\) are _aspherical_ and condition (3) requires that \(\mathfrak{i}\) be _separated_. **Proposition C.1**.: _Consider a small category \(\bigcirc\) contained in a chain_ \[\square\subset\bigcirc\subset\mathbf{Cat}\] _of subcategories, with \(\square\) wide in \(\bigcirc\). Then \(\bigcirc\) is a test category. In particular, \(\square\) is a test category._ Proof.: Let \(\zeta\) denote inclusion \(\bigcirc\hookrightarrow\mathbf{Cat}\). Let \(\mathfrak{i}\) be the interval object \(\bigcirc\![-](\mathscr{I}\hookrightarrow\bigcirc)\) in \(\hat{\bigcirc}\). The functor \(\bigcirc\to\star\) is a Thomason weak equivalence by \([0]\) a terminal object in \(\square\), \(\mathbf{Cat}\), and hence \(\bigcirc\). 
Each small category \(\zeta([1]^{n})=[1]^{n}\) has terminal object \((1,\cdots,1)\) for each \(\bigcirc\)-object \([1]^{n}\). The equalizer of \(\mathfrak{i}(\delta_{-})\) and \(\mathfrak{d}(\delta_{+})\), whose restriction to a cubical set is the initial cubical set because it defines the empty equalizer of \(\mathfrak{d}(\delta_{-})\) and \(\mathfrak{d}(\delta_{+})\), is the initial presheaf on \(\hat{\bigcirc}\). The category \((\bigcirc\!/\bigcirc\![1])\) has final object \(1_{\bigcirc\![1]}:\bigcirc\![1]\to\bigcirc\![1]\) by \(\square\) wide in \(\bigcirc\) and therefore admits a Thomason weak equivalence to \(\star\). There exists natural isomorphisms \(\zeta_{\mathfrak{i}}\mathfrak{i}\cong(\square\hookrightarrow\mathbf{Cat})_{ \mathfrak{i}}\mathfrak{d}\cong\mathrm{T}_{1}\mathfrak{d}:\square_{1} \hookrightarrow\mathbf{Cat}\). The desired conclusion follows [Proposition, p.86 44(d), [35]]. **Lemma C.2**.: _There exists a \(\mathrm{T}_{1}\mathfrak{d}_{1}\)-equivalence, natural in cubical sets \(C\), of the form_ \[(\Delta/(\mathfrak{tri}_{\bigcirc}C))\simeq(\bigcirc\!/C)\] _for each small subcategory \(\bigcirc\subset\mathbf{Cat}\) containing \(\square\) as a wide subcategory such that each \(\bigcirc\)-morphism admits a factorization, unique up to isomorphism, into a surjective \(\bigcirc\)-morphism followed by an injective \(\bigcirc\)-morphism and the poset of subobjects of every \(\bigcirc\)-object is a lattice._ Proof.: Let \(\bigtimes\) denote one of \(\Delta,\bigcirc\). It is possible to define an endofunctor \[E_{P}:(\bigcirc\!/P)\to(\bigcirc\!/P),\] natural in \(\hat{\bigtimes}\)-objects \(P\), naturally sending each \((\bigtimes\!/P)\)-object \(\theta\) to the terminal \((\bigtimes\!/P)\)-object having the same image as the \(\hat{\bigtimes}\)-morphism \(\theta\) because each \(\bigtimes\)-morphism admits a unique factorization up to isomorphism into a surjection followed by an injection. Then \(E_{P}\) is pointed, uniquely and hence naturally in \(\hat{\bigtimes}\)-objects \(P\). Thus there exists a \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy \(1_{\bigtimes\!/P}\rightsquigarrow_{\mathrm{T}_{1}\mathfrak{d}}E_{P}\). Define functors \(F_{C},G_{C}\), natural in \(\hat{\bigcirc}\)-objects \(C\), of the forms \[F_{C}:(\Delta/(\mathfrak{tri}_{\bigcirc}C))\to(\bigcirc\!/C)\quad G_{C}:( \bigcirc\!/C)\to\Delta/(\mathfrak{tri}_{\bigcirc}C)\] as follows. We can take the \((\bigcirc/C)\)-object \(F_{C}\psi\), natural in \((\Delta/(\mathsf{tri}_{\bigcirc}C))\)-objects \(\psi\), to be terminal among all \((\bigcirc/C)\)-objects \(\theta\) with \(\mathsf{im}\,\psi\subset\mathsf{im}\,\mathsf{tri}\,\theta\) because the poset of subobjects of each \(\bigcirc\)-object is a finite and hence complete lattice. The \((\Delta/(\mathsf{tri}_{\bigcirc}C))\)-object \(G_{C}\theta\), natural in \((\bigcirc/C)\)-objects \(\theta:\bigcirc[1]^{n}\to C\), is defined by the commutative diagram Dotted simplicial nerves of extrema-preserving monotone functions \([1]\to[n]\) make commute and therefore define the components of a natural transformation or equivalently a \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy \(G_{C}F_{C}\leadsto_{\mathrm{T}_{1}\mathfrak{d}}E_{\mathsf{tri}_{\bigcirc}C}\). 
Concatenating this \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy with \(1_{\bigcirc/\mathsf{tri}_{\bigcirc}C}\leadsto_{\mathrm{T}_{1}\mathfrak{d}}E_{ \mathsf{tri}_{\bigcirc}C}\) and the \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy \(1_{\bigcirc/C}\leadsto_{\mathrm{T}_{1}\mathfrak{d}}E_{C}=F_{C}G_{C}\) with a constant \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy yield the desired \(\mathrm{T}_{1}\mathfrak{d}_{1}\)-homotopies. **Lemma C.3**.: _There exists a \(\mathrm{T}_{1}\mathfrak{d}_{2}\)-equivalence, natural in simplicial sets \(S\), of the form_ \[(\mathsf{tri}_{\bigcirc}\bigcirc[-]/S)\simeq(\Delta/S)\] _for each small subcategory \(\bigcirc\subset\mathbf{Cat}\) containing \(\square\) as a wide subcategory._ Proof.: Define functors \(F_{S},G_{S}\), natural in simplicial sets \(S\), of the forms \[F_{S}:(\mathsf{tri}_{\bigcirc}\bigcirc[-]/S)\to(\Delta/S)\quad G_{S}:(\Delta/ S)\to(\mathsf{tri}_{\bigcirc}\bigcirc[-]/S)\] by natural commutative diagrams of the following forms: Let \(\mathfrak{diag}_{[1]}=1_{[1]}^{\times_{\mathbf{Cat}}n}:[1]\to[1]^{n}\) and \(\delta_{++}=\delta_{+1}\cdots\delta_{+1}:[1]\to[1]^{n}\). For all \(x\in[1]\), \[\mathfrak{diag}_{[1]}(x) =(x,\ldots,x)\] \[\leqslant_{[1]^{n}}(1,\ldots,1,x)\] \[=\delta_{++}(x)\] Therefore the function \(\phi:[1]^{2}\to[1]^{n}\) characterized by \[\phi\delta_{-1}=\mathfrak{diag}_{[1]}\quad\phi\delta_{+}=\delta_{++}\] is monotone. Hence there exists a dotted simplicial nerve of \(\phi\) making commute. The arrows in the top row define the components of \(3\)\(\mathrm{T}_{1}\mathfrak{d}\)-homotopies, which, when concatenated with a constant \(\mathrm{T}_{1}\mathfrak{d}\)-homotopy, yields a \(\mathrm{T}_{1}\mathfrak{d}_{2}\)-homotopy \(G_{S}F_{S}\leftrightsquigarrow_{\mathrm{T}_{1}\mathfrak{d}}1_{\bigcirc/S}\) Dotted simplicial nerves of extrema-preserving monotone functions \([1]\to[n]\) make commute and hence define the components of natural transformations \(F_{S}G_{S}\to 1_{(\Delta/S)}\) or equivalently \(\mathrm{T}_{1}\mathfrak{d}\)-homotopies \(F_{S}G_{S}\leftrightsquigarrow_{\mathrm{T}_{1}\mathfrak{d}}1_{(\Delta/S)}\). Fix a subcategory \(\bigcirc\) of \(\mathbf{Cat}\) defining a test category. Even though there exists a zig-zag of Quillen equivalences between \(\hat{\bigcirc}\) and \(\hat{\Delta}\), it is not necessarily the case that triangulation directly defines a Quillen equivalence between them. **Proposition C.4**.: _Triangulation defines the left map of a Quillen equivalence_ \[\mathsf{tri}_{\bigcirc}:\hat{\bigcirc}\leftrightarrows\hat{\Delta}\] _between presheaf categories equipped with test model structures, where \(\bigcirc\) is a subcategory of \(\mathbf{Cat}\) containing \(\square\) as a wide subcategory such that each \(\bigcirc\)-morphism admits a factorization, unique up to isomorphism, into a surjective \(\bigcirc\)-morphism followed by an injective \(\bigcirc\)-morphism and the poset of subobjects of each \(\bigcirc\)-object is a complete lattice._ Proof.: Both \(\hat{\Delta}\) and \(\hat{\bigcirc}\) admit test model structures [Proposition C.1]. There exist dotted Thomason weak equivalences making the left [Lemma C.2] and right [Lemma C.3] diagrams below commute for maps \(\psi\) of presheaves, where \(\mathsf{qua}\vdash\mathsf{tri}_{\bigcirc}\): In each of the commutative diagrams, the top horizontal arrow is a classical weak equivalence if and only if the bottom horizontal arrow is a classical weak equivalence. 
Thus \(\mathsf{tri}_{\bigcirc}\) and its right adjoint both preserve and reflect weak equivalences in the test model structures. Additionally, \(\mathsf{tri}_{\bigcirc}\) preserves monos, and hence cofibrations.
2309.11176
Effects of electrons on nuclear clock transition frequency in $^{229}$Th ions
We perform calculations of the energy shift of the nuclear clock transition frequency $^{229}$Th as a function of the number of electrons in Th ion. We demonstrate that the dependence of the nuclear frequency on electron configuration is significant. E.g., removing one electron from the atom leads to relative shift of the nuclear frequency $\sim 10^{-7}$, which is twelve orders of magnitude larger than expected relative uncertainty of the nuclear clock transition frequency ($\sim 10^{-19}$). This leads to difference of the nuclear clock frequencies in Th~IV, Th~III, Th~II and Th~I. The relative change of the nuclear frequency between neutral Th and its bare nucleus is 1\%. We also calculate the field shift constants for isotopic and isomeric shifts of atomic electron transitions in Th ions.
V. A. Dzuba, V. V. Flambaum
2023-09-20T09:52:14Z
http://arxiv.org/abs/2309.11176v1
# Effects of electrons on nuclear clock transition frequency in \({}^{229}\)Th ions ###### Abstract We perform calculations of the energy shift of the nuclear clock transition frequency \({}^{229}\)Th as a function of the number of electrons in Th ion. We demonstrate that the dependence of the nuclear frequency on electron configuration is significant. E.g., removing one electron from the atom leads to relative shift of the nuclear frequency \(\sim 10^{-7}\), which is twelve orders of magnitude larger than expected relative uncertainty of the nuclear clock transition frequency (\(\sim 10^{-19}\)). This leads to difference of the nuclear clock frequencies in Th IV, Th III, Th II and Th I. The relative change of the nuclear frequency between neutral Th and its bare nucleus is 1%. We also calculate the field shift constants for isotopic and isomeric shifts of atomic electron transitions in Th ions. Nucleus of the \({}^{229}\)Th isotope has a unique feature of having very low-energy excitation connected to the ground state by the magnetic dipole (M1) transition (see, e.g. Reviews [1; 2] and references therein). The latest, most precise measurements, give the value of 8.338(24) eV [3] (see also [4; 5; 6; 7; 8]) for the energy of this excitation, which is very small on nuclear scale. This feature attracted many researches for plans to build nuclear clock of exceptionally high accuracy - see e.g. [9; 10]. The projected relative uncertainty is expected to reach \(10^{-19}\)[11]. In addition, there are strong arguments that this nuclear clock would be very sensitive to physics beyond standard model including space-time variation of the fundamental constants, violation of the Lorentz invariance and Einstein equivalence principle, and search for scalar and axion dark matter fields [12; 13; 14; 15; 16; 17; 18; 19; 20]. There are plans to use Th ions of different ionisation degree [11; 21; 22] and even solid-state Th nuclear clock [23; 24; 25]. In this work we show that in all these systems the frequency of the nuclear clock will be different. This is due to the Coulomb interaction of atomic electrons with the nucleus, leading to the significant electronic shift of the nuclear transition frequency. There is also a smaller shift due to the magnetic interaction. This electronic shift depends on electron configuration and it is different in different systems, like Th IV, Th III, Th II and Th I, leading to different nuclear frequencies. This shift for electronic state \(a\) is given by \[\Delta E_{a}=F_{a}\delta\langle r^{2}\rangle, \tag{1}\] where \(F_{a}\) is the field shift constant of state \(a\) which can be obtained from atomic calculations; \(\delta\langle r^{2}\rangle\) is the change of the nuclear root-mean square radius between the excited and ground nuclear states. The most accurate value for \(\delta\langle r^{2}\rangle\) was recently derived in Ref. [22], \({}^{229m,229}\delta\langle r^{2}\rangle=0.0105(13)\) fm\({}^{2}\). This enables us to determine the electronic shift of nuclear transition frequency for different thorium systems by calculating the field shift constants \(F_{a}\) and using (1). For example, difference of the nuclear frequencies between Th III and Th IV is given by \[\Delta\omega_{N}=(F_{a}(\mbox{Th III})-F_{a}(\mbox{Th IV}))\delta\langle r^{2}\rangle, \tag{2}\] State \(a\) in this case is the ground electronic state of the ion. 
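As a rough numerical illustration of Eq. (1) (a sketch added here, not part of the original text; it assumes a difference of field shift constants of order 50 GHz/fm\({}^{2}\), comparable to the Th II ground-state value quoted below, together with \(\delta\langle r^{2}\rangle\approx 0.011\) fm\({}^{2}\)), the electronic shift of the 8.338 eV nuclear line is of order 0.5 GHz, i.e. a relative shift of a few parts in \(10^{7}\):

```python
# Order-of-magnitude check of the electronic shift of the nuclear clock line.
# Illustrative values only: F ~ 50 GHz/fm^2 (cf. the Th II ground-state constant below),
# delta<r^2> ~ 0.011 fm^2 (isomeric change of the mean-square charge radius).
PLANCK_EV_S = 4.135667696e-15      # Planck constant in eV*s
E_NUCLEAR_EV = 8.338               # nuclear clock transition energy in eV

F_GHZ_PER_FM2 = 50.0               # assumed (difference of) field shift constants, GHz/fm^2
DELTA_R2_FM2 = 0.0112              # delta<r^2> between isomer and ground state, fm^2

nu_nuclear_hz = E_NUCLEAR_EV / PLANCK_EV_S          # ~2.0e15 Hz
delta_nu_hz = F_GHZ_PER_FM2 * 1e9 * DELTA_R2_FM2    # Eq. (1): F * delta<r^2>

print(f"nuclear frequency        : {nu_nuclear_hz:.3e} Hz")
print(f"electronic shift         : {delta_nu_hz:.3e} Hz")
print(f"relative electronic shift: {delta_nu_hz / nu_nuclear_hz:.1e}")  # ~3e-7
```

This reproduces the order of magnitude quoted above for the change of the nuclear frequency when a single electron is removed.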
Note that these field shift constants \(F\) also appear in the calculations of the isotopic and isomeric field shifts of electronic transition frequencies. The difference is that for the isotopic and isomeric shifts we need the difference of \(F\) between the final state \(b\) and the initial state \(a\) of the electronic transition. The nuclear state does not change in this electronic transition. For the isotope shift this is usually the ground nuclear state; for the isomeric shift it is the isomeric (excited) state or the ground state of the same nucleus. The isotopic and isomeric field shifts of the electronic transition frequency are given by \[\Delta\omega_{ab}=(F_{b}-F_{a})\delta\langle r^{2}\rangle. \tag{3}\] Numerical values of \(\Delta\omega_{N}\) and \(\Delta\omega_{ab}\) can be calculated using the values of the constants \(F\) for different electron states in Th IV, Th III, Th II and Th I presented in Table 1. Note that we do not include the contribution of core electrons, which cancels out in the difference of the values of \(F\). For the isomeric shifts one may use \({}^{229m,229}\delta\langle r^{2}\rangle=0.0105(13)\) fm\({}^{2}\) measured in Ref. [22]. We use the combination of the single-double coupled cluster and configuration interaction methods (SD+CI, [26]) and the random-phase approximation (RPA) method to perform the calculations. The SD+CI method gives us the wave functions, while the RPA method gives an effective operator of the field shift. The corresponding equations have the form (see, e.g., [27]) \[(\hat{H}^{\rm HF}-\epsilon_{c})\delta\psi_{c}=-(\hat{F}+\delta V_{\rm core})\psi_{c}, \tag{4}\] where \(\hat{H}^{\rm HF}\) is the relativistic Hartree-Fock operator for the atomic core, the index \(c\) enumerates single-electron states in the core, \(\psi_{c}\) and \(\delta\psi_{c}\) are the corresponding single-electron functions and their corrections due to the field shift operator \(\hat{F}\), and \(\delta V_{\rm core}\) is the change of the self-consistent Hartree-Fock potential due to the change in all core functions. Solving Eqs. (4) self-consistently allows us to determine \(\delta V_{\rm core}\). Note that the core is the same for Th IV, Th III, Th II and Th I. Therefore the SD+CI and RPA equations need to be solved only once. Then the field shift constant is given by \[F_{a}=\langle a|\hat{F}+\delta V_{\rm core}|a\rangle. \tag{5}\] We use a hat to distinguish between the field shift constant \(F\) and the field shift operator \(\hat{F}=\delta V_{\rm nuc}/\delta\langle r^{2}\rangle\). The wave function \(|a\rangle\) in (5) is the many-electron wave function for valence electrons found in the SD+CI calculations. It has one, two, three or four valence electrons. The results of the calculations are presented in Table 1. We present energy levels and field shift constants for the ground and some excited states of Th IV, Th III, Th II, and Th I. We have chosen low-energy excited states and also some other states of Th III and Th I for which other calculations and experimental data on the isotope shift are available [22]. The values of the field shift constants are compared with earlier calculations in Ref. [22]. The difference of the field shift constants between our calculations and the calculations in Ref. [22] is a few per cent. This difference may be used as an accuracy estimate since the calculations have been done by different methods. The largest difference is for the ground state of Th II, which is 10%.
However, our number leads to more consistent results for the values of \(\delta\langle r^{2}\rangle\) extracted from the isotope shift measurements in the ions Th II and Th III. Indeed, using our numbers, \(F\)=49.6 GHz/fm\({}^{2}\) for the ground state and \(F\)=-29.1 GHz/fm\({}^{2}\) for the state at \(E\)=17122 cm\({}^{-1}\), to extract the difference in mean-square radii \(\delta\langle r^{2}\rangle^{232,229}\) from the isotope shift data [22] leads to the value \(\delta\langle r^{2}\rangle^{232,229}=0.321(32)\) fm\({}^{2}\) (we assume a 10% uncertainty for the values of \(F\)), which is closer to the data extracted from four transitions in Th III (0.315(32), 0.312(42), 0.338(44), 0.322(53); see Table 1 in [22]). When all five numbers are taken into account, the four numbers for Th III from Ref. [22] and our number for Th II, 0.321(32), the final result is \(\delta\langle r^{2}\rangle^{232,229}=0.320(15)\) fm\({}^{2}\) (the final value of [22] is \(\delta\langle r^{2}\rangle^{232,229}=0.299(15)\) fm\({}^{2}\)). Our result is in better agreement with the latest, most accurate literature value \(\delta\langle r^{2}\rangle^{232,229}\)=0.334(8) fm\({}^{2}\) presented in Ref. [29]. The new value of \(\delta\langle r^{2}\rangle^{232,229}\) leads to a slightly different value of \(\delta\langle r^{2}\rangle^{229m,229}\). Using the ratio of the isomeric and isotopic shifts from Ref. [10] we get \(\delta\langle r^{2}\rangle^{229m,229}=0.0112(13)\) fm\({}^{2}\). It is 7% larger but agrees within error bars with the value \(\delta\langle r^{2}\rangle^{229m,229}=0.0105(13)\) fm\({}^{2}\) presented in [22]. We use our new number in the further analysis. It is instructive to explain why the field shift constants \(F\) have different signs for different electron states. The \(s_{1/2}\) and \(p_{1/2}\) orbitals penetrate the nucleus and are highly sensitive to the nuclear radius (we remind the reader that the lower component of the Dirac spinor of the relativistic \(p_{1/2}\) orbital has the angular quantum numbers of an \(s_{1/2}\) orbital). An increase of the nuclear radius leads to a decrease of the attraction to the nucleus; therefore the \(s_{1/2}\) and \(p_{1/2}\) energies move up and the constant \(F\) is positive. Higher orbitals \(p_{3/2}\), \(d\) and \(f\) do not penetrate the nucleus, so the direct term \(\hat{F}\) in Eq. (5) is negligible. The effect comes from the correction to the electron core potential \(\delta V_{\rm core}\), which is dominated by the Coulomb field of the \(s_{1/2}\) electrons. An increase of the nuclear radius makes the attraction to the nucleus weaker, increases the radii of the \(s_{1/2}\) orbitals, and makes a negative correction \(\delta V_{\rm core}\) to the core-electron Coulomb potential. This is why \(F\) for \(p_{3/2}\), \(d\) and \(f\) electrons is negative. We may also explain this sign from another perspective. Adding a valence \(p_{3/2}\), \(d\) or \(f\) electron increases the positive Coulomb energy of the electron repulsion. As a result, the \(s_{1/2}\) electron energies and distances from the nucleus increase and their sensitivity to the change of the nuclear radius decreases. Thus, the effect of a higher-wave valence electron is negative. Using the field shift constants for the ground state of each ion from Table 1 (we use our numbers for consistency), the value \(\delta\langle r^{2}\rangle^{229m,229}=0.0112(13)\) fm\({}^{2}\) (see above) and a formula similar to (2), we obtain the differences between the nuclear frequencies in different thorium ions. The results are presented in Table 2.
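A minimal sketch of the averaging described above, assuming a plain inverse-variance weighted mean of the four Th III values from Ref. [22] and the Th II value derived here (the exact uncertainty treatment is not spelled out in the text, which is why this simple estimate returns 0.017 rather than the quoted 0.015):

```python
import numpy as np

# Four Th III determinations from Ref. [22] plus the Th II value derived above (fm^2).
values = np.array([0.315, 0.312, 0.338, 0.322, 0.321])
sigmas = np.array([0.032, 0.042, 0.044, 0.053, 0.032])

weights = 1.0 / sigmas**2
mean = np.sum(weights * values) / np.sum(weights)
error = 1.0 / np.sqrt(np.sum(weights))
print(f"delta<r^2>(232,229) ~ {mean:.3f} +/- {error:.3f} fm^2")   # ~0.320 +/- 0.017
```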
We see that the difference is huge. It exceeds the projected relative uncertainty of the nuclear clocks by many orders of magnitude. It is worth noting that the shift does not contribute to the uncertainty budget. It only means that the frequency of the nuclear transition is different in different thorium systems. It is interesting to determine the nuclear frequency difference between neutral (or nearly neutral) \({}^{229}\)Th and the bare \({}^{229}\)Th nucleus. This difference is strongly dominated by contributions from \(1s\) electrons. Using the RPA calculation (4) we get \(F(1s)=8.23\times 10^{8}\) MHz/fm\({}^{2}\). The total energy shift caused by two \(1s\) electrons is \(1.73\times 10^{7}\) MHz; the total shift from all core electrons is \(2.07\times 10^{7}\) MHz \(=8.57\times 10^{-2}\) eV, which is \(\sim 1\%\) of the nuclear frequency. \begin{table} \begin{tabular}{l l l c c c} \hline Atom & \multicolumn{2}{l}{State} & Expt. energy & \multicolumn{2}{c}{\(F\) (GHz/fm\({}^{2}\))} \\ or ion & & & (cm\({}^{-1}\)) [28] & Present & Ref. [22] \\ \hline Th IV & \(5f\) & \({}^{2}\)F\({}^{0}_{5/2}\) & 0 & -55.0 & \\ & \(5f\) & \({}^{2}\)F\({}^{0}_{7/2}\) & 4325 & -53.0 & \\ & \(6d\) & \({}^{2}\)D\({}_{3/2}\) & 9193 & -23.3 & \\ & \(6d\) & \({}^{2}\)D\({}_{5/2}\) & 14586 & -20.5 & \\ & \(7s\) & \({}^{2}\)S\({}_{1/2}\) & 23130 & 92.1 & \\ & \(7p\) & \({}^{2}\)P\({}^{0}_{1/2}\) & 60239 & 2.7 & \\ & \(7p\) & \({}^{2}\)P\({}^{0}_{3/2}\) & 73055 & -5.3 & \\ Th III & \(5f6d\) & \({}^{3}\)H\({}^{0}_{4}\) & 0 & -68.0 & -68.7 \\ & \(6d^{2}\) & F\({}_{2}\) & 63 & -39.9 & -36.6 \\ & \(5f^{2}\) & H\({}_{4}\) & 15148 & -83.3 & -89.5 \\ & \(5f6d\) & \({}^{1}\)P\({}^{0}_{1}\) & 20711 & -62.2 & -63.6 \\ & \(5f^{2}\) & \({}^{3}\)F\({}_{4}\) & 21784 & -86.5 & -85.5 \\ & \(5f^{2}\) & \({}^{3}\)P\({}_{0}\) & 29300 & -82.2 & -84.1 \\ Th II & \(6d^{2}7s\) & \({}^{2}\)D\({}_{3/2}\) & 0 & 49.6 & 54.6 \\ & \(5f6d^{2}\) & \({}^{*}\)\({}_{3/2}\) & & -65.0 & \\ & \(5f6d^{2}\) & \({}^{*}\)\({}_{3/2}\) & 15145 & -45.8 & \\ & \(5f6d7s\) & \({}^{*}\)\({}_{3/2}\) & 15711 & -36.9 & \\ & \(5f6d7s\) & \({}^{*}\)\({}_{3/2}\) & 17122 & -29.1 & -31.6 \\ & \(5f6d7s\) & \({}^{2}\)F\({}_{5/2}\) & 12472 & -18.3 & \\ & \(5f6d7s\) & \({}^{*}\)\({}_{5/2}\) & & -36.3 & \\ & \(5f6d7s\) & \({}^{4}\)D\({}^{*}_{5/2}\) & 14545 & -63.9 & \\ & \(5f6d7s\) & \({}^{*}\)\({}_{5/2}\) & 16033 & -46.8 & \\ Th I & \(6d^{2}7s^{2}\) & \({}^{3}\)F\({}_{2}\) & 0 & 58.6 & \\ \hline \end{tabular} \end{table} Table 1: Field shift constant \(F\) for the ground and some excited states of Th IV, Th III, Th II, and Th I. An electronic correction to the nuclear frequency also arises from the magnetic interaction between the electrons and the nucleus. The first order gives the ordinary magnetic hyperfine splitting of the transition frequencies. The magnetic shift is given by the second-order magnetic dipole hyperfine correction to the energy \[\delta E_{g}^{\rm hfs}=\sum_{n}\frac{\langle g|\hat{H}^{\rm hfs}|n\rangle^{2}}{E_{g}-E_{n}}. \tag{6}\] Here the index \(g\) stands for the ground electronic state and \(\hat{H}^{\rm hfs}\) is the magnetic dipole hyperfine structure operator. The values of \(\delta E_{g}^{\rm hfs}\) are different for the ground and isomeric nuclear states since their magnetic moments and spins are different. Magnetic moment values can be found in Ref. [10]. In addition, there is a second-order contribution from the mixing of the ground and isomeric nuclear states by the magnetic field of the electrons.
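Before turning to the magnetic corrections, the core-electron numbers quoted above can be verified with simple arithmetic; a minimal check (note that the quoted \(1.73\times 10^{7}\) MHz is reproduced with the Ref. [22] value \(\delta\langle r^{2}\rangle=0.0105\) fm\({}^{2}\)):

```python
# Consistency check of the quoted core-electron field shifts (approximate).
F_1s = 8.23e8                     # MHz/fm^2, RPA field shift constant of one 1s electron
dr2 = 0.0105                      # fm^2
print(f"two 1s electrons: {2 * F_1s * dr2:.2e} MHz")       # ~1.73e7 MHz

total_core_MHz = 2.07e7           # quoted total shift from all core electrons
eV_per_MHz = 4.135667e-9          # Planck constant h in eV/MHz
total_core_eV = total_core_MHz * eV_per_MHz
print(f"all core electrons: {total_core_eV:.3f} eV, "
      f"{total_core_eV / 8.338:.1%} of the 8.338 eV nuclear transition")   # ~0.086 eV, ~1%
```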
Preliminary estimates show that the second-order magnetic shift is significantly smaller than the electronic shift considered in the present work. A more detailed analysis may be the subject of future work. ###### Acknowledgements. This work was supported by the Australian Research Council Grants No. DP230101058 and DP200100150.
2309.05103
AGent: A Novel Pipeline for Automatically Creating Unanswerable Questions
The development of large, high-quality datasets and high-performing models has led to significant advancements in the domain of Extractive Question Answering (EQA). This progress has sparked considerable interest in exploring unanswerable questions within the EQA domain. Training EQA models with unanswerable questions helps them avoid extracting misleading or incorrect answers for queries that lack valid responses. However, manually annotating unanswerable questions is labor-intensive. To address this, we propose AGent, a novel pipeline that automatically creates new unanswerable questions by re-matching a question with a context that lacks the necessary information for a correct answer. In this paper, we demonstrate the usefulness of this AGent pipeline by creating two sets of unanswerable questions from answerable questions in SQuAD and HotpotQA. These created question sets exhibit low error rates. Additionally, models fine-tuned on these questions show comparable performance with those fine-tuned on the SQuAD 2.0 dataset on multiple EQA benchmarks.
Son Quoc Tran, Gia-Huy Do, Phong Nguyen-Thuan Do, Matt Kretchmar, Xinya Du
2023-09-10T18:13:11Z
http://arxiv.org/abs/2309.05103v1
# _AGent_: A Novel Pipeline for Automatically Creating Unanswerable Questions ###### Abstract The development of large, high-quality datasets and high-performing models has led to significant advancements in the domain of Extractive Question Answering (EQA). This progress has sparked considerable interest in exploring unanswerable questions within the EQA domain. Training EQA models with unanswerable questions helps them avoid extracting misleading or incorrect answers for queries that lack valid responses. However, manually annotating unanswerable questions is labor-intensive. To address this, we propose _AGent_, a novel pipeline that automatically creates new unanswerable questions by re-matching a question with a context that lacks the necessary information for a correct answer. In this paper, we demonstrate the usefulness of this _AGent_ pipeline by creating two sets of unanswerable questions from answerable questions in SQuAD and HotpotQA. These created question sets exhibit low error rates. Additionally, models fine-tuned on these questions show comparable performance with those fine-tuned on the SQuAD 2.0 dataset on multiple EQA benchmarks. 1 Footnote 1: Our code is publicly available at [https://github.com/sonqt/agent-unanswerable](https://github.com/sonqt/agent-unanswerable). ## 1 Introduction Extractive Question Answering (EQA) is an important task of Machine Reading Comprehension (MRC), which has emerged as a prominent area of research in natural language understanding. Research in EQA has made significant gains thanks to the availability of many challenging, diverse, and large-scale datasets Rajpurkar et al. (2016, 2018); Kwiatkowski et al. (2019); Yang et al. (2018); Trivedi et al. (2022). Moreover, recent advancements in datasets have also led to the development of multiple EQA systems Huang et al. (2018); Zaheer et al. (2020) that achieve remarkable results, approaching or even surpassing human-level performance across various benchmark datasets. Matching the rapid progress in EQA, the sub-field of unanswerable questions has emerged as a new research area. Unanswerable questions are those that cannot be answered based only on the information provided in the corresponding context. Unanswerable questions are a critical resource in training EQA models because they allow the models to learn how to avoid extracting misleading answers when confronted with queries that lack valid responses. Incorporating unanswerable questions in the training set of EQA models enhances the overall reliability of these models for real-world applications Tran et al. (2023). Nevertheless, the manual annotation of unanswerable questions in EQA tasks can be prohibitively labor-intensive. Consequently, we present a novel pipeline to automate the creation of high-quality unanswerable questions given a dataset comprising answerable questions. Figure 1: Examples of an answerable question \(Q1\) from SQuAD 1.1, and two unanswerable questions \(Q2\) from SQuAD 2.0 and \(Q3\) from SQuAD _AGent_. In SQuAD 2.0, crowdworkers create unanswerable questions by replacing “large numbers” with “decimal digits.” On the other hand, our automated _AGent_ pipeline matches the original question \(Q1\), now \(Q3\), with a new context \(C3\). The pair \(C3\)-\(Q3\) is unanswerable as context \(C3\) does not indicate whether the **trial division** can **conveniently** test the primality of **large** numbers.
This pipeline uses a retriever to re-match questions with paragraphs that lack the necessary information to answer them adequately. Additionally, it incorporates the concept of adversarial filtering for identifying challenging unanswerable questions. The key contributions of our work can be summarized as follows: 1. We propose _AGent_, a novel pipeline for automatically creating unanswerable questions. In order to prove the utility of _AGent_, we apply our pipeline to two datasets with different characteristics, SQuAD and HotpotQA, to create two different sets of unanswerable questions. In our study, we show that the two unanswerable question sets created using the _AGent_ pipeline exhibit a low error rate. 2. Our experiments show that the two unanswerable question sets created using our proposed pipeline are challenging for models fine-tuned using human-annotated unanswerable questions from SQuAD 2.0. Furthermore, our experiments show that models fine-tuned using our automatically created unanswerable questions show comparable performance to those fine-tuned on the SQuAD 2.0 dataset on various EQA benchmarks, such as SQuAD 1.1, HotpotQA, and Natural Questions. ## 2 Related Work ### Unanswerable Questions In the early research on unanswerable questions, Levy et al. (2017) re-defined the BiDAF model (Seo et al., 2017) to allow it to output whether the given question is unanswerable. Their primary objective was to utilize MRC as indirect supervision for relation extraction in zero-shot scenarios. Subsequently, Rajpurkar et al. (2018) introduced a crowdsourcing process to annotate unanswerable questions, resulting in the creation of the SQuAD 2.0 dataset. This dataset later inspired similar works in other languages, such as French (Heinrich et al., 2022) and Vietnamese (Nguyen et al., 2022). However, recent research has indicated that models trained on SQuAD 2.0 exhibit poor performance on out-of-domain samples (Sulem et al., 2021). Furthermore, apart from the adversarially-crafted unanswerable questions introduced by Rajpurkar et al. (2018), Natural Questions (Kwiatkowski et al., 2019) and TyDi QA (Clark et al., 2020) present more naturally constructed unanswerable questions. While recent language models surpass human performance on the adversarial unanswerable questions of SQuAD 2.0, the natural unanswerable questions in Natural Questions and TyDi QA remain challenging (Asai and Choi, 2021). In a prior work, Zhu et al. (2019) introduce a pair-to-sequence model for generating unanswerable questions. However, this model requires a substantial number of high-quality unanswerable questions from SQuAD 2.0 during the training phase to generate its own high-quality unanswerable questions. Therefore, the model introduced by Zhu et al. (2019) cannot be applied to the HotpotQA dataset for generating high-quality unanswerable questions. In contrast, although our _AGent_ pipeline cannot generate questions from scratch, it distinguishes itself by its ability to create high-quality unanswerable questions without any preexisting set of unanswerable questions. ### Robustness of MRC Models The evaluation of Machine Reading Comprehension (MRC) model robustness typically involves assessing their performance against adversarial attacks and distribution shifts. The research on adversarial attacks in MRC encompasses various forms of perturbations (Si et al., 2021).
These attacks include replacing words with WordNet antonyms (Jia and Liang, 2017), replacing words with words having similar representations in vector space (Jia and Liang, 2017), substituting entity names with other names (Yan et al., 2022), paraphrasing questions (Gan and Ng, 2019; Ribeiro et al., 2018), or injecting distractors into sentences (Jia and Liang, 2017; Zhou et al., 2020). Recently, multiple innovative studies have focused on enhancing the robustness of MRC models against adversarial attacks (Chen et al., 2022; Zhang et al., 2023; Tran et al., 2023). On the other hand, in the research line of robustness under distribution shift, researchers study the robustness of models in out-of-domain settings using test datasets different from the training dataset (Miller et al., 2020; Fisch et al., 2019; Sen and Saffari, 2020). ## 3 Tasks and Models In the task of EQA, models are trained to extract a list of prospective outputs (answers), each accompanied by a probability (the output of a softmax function) that represents the machine's confidence in the answer's accuracy. When the dataset includes unanswerable questions, a valid response in the extracted list can be an "empty" response indicating that the question is unanswerable. The evaluation metric commonly used to assess the performance of an EQA system is the F1-score, which measures the average overlap between the model's predictions and the correct answers (gold answers) in the dataset. For more detailed information, please refer to the work by Rajpurkar et al. (2016). ### Datasets In our work, we utilize three datasets: SQuAD Rajpurkar et al. (2016, 2018), HotpotQA Yang et al. (2018), and Natural Questions Kwiatkowski et al. (2019). In the SQuAD dataset, each question is associated with a short paragraph from Wikipedia. HotpotQA is a dataset designed for multi-hop reasoning question answering, where each question requires reasoning over multiple supporting paragraphs. Additionally, the Natural Questions dataset comprises real queries from the Google search engine, and each question is associated with a Wikipedia page. ### Models We employ three transformer-based models in our work: BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), and SpanBERT Joshi et al. (2020). **BERT** is considered the pioneering application of the Transformer model architecture Vaswani et al. (2017). BERT is trained on a combination of English Wikipedia and BookCorpus using masked language modeling and next-sentence prediction (NSP) as pre-training tasks. Later, a replication study by Liu et al. (2019) found that BERT was significantly under-trained. Liu et al. (2019) built **RoBERTa** from BERT by extending the pre-training time and increasing the size of the pre-training data. Joshi et al. (2020) developed **SpanBERT** by enhancing BERT's ability to represent and predict text spans by masking random contiguous spans and replacing NSP with a span boundary objective. Each of these three models has two versions: base and large. Our study uses all six of these models. ## 4 Automatically Creating Unanswerable Questions ### Criteria In order to guarantee the quality of our automatically created unanswerable questions, we design our pipeline to adhere to the following criteria: **Relevance.** The created unanswerable questions should be closely related to the subject matter discussed in the corresponding paragraph.
This criterion ensures that the unanswerability of the question is not easily recognizable by simple heuristic methods and that the created question "makes sense" regarding the provided context. **Plausibility.** Our pipeline also ensures that the created unanswerable questions have at least one plausible answer. For instance, when considering a question like "What is the name of one algorithm useful for conveniently testing the primality of large numbers?", there should exist a plausible answer in the form of the name of an algorithm in mathematics that is closely linked to primality within the corresponding context. See Figure 1 for an example showcasing an unanswerable question with strong plausible answer(s). **Fidelity.** Our pipeline adds an additional step to ensure a minimal rate of error or noise in the set of automatically created unanswerable questions. It is important that the newly created questions are genuinely unanswerable. This quality control measure bolsters the reliability of the pipeline. The effectiveness of this step is verified in the study in Section 4.3. ### _AGent_ Pipeline Figure 2 provides a summary of all the steps in the _AGent_ pipeline for automatically creating unanswerable questions corresponding to each dataset of answerable questions. Our proposed _AGent_ pipeline consists of three steps, which align with the three criteria discussed in Section 4.1: **Step 1** **Matching questions with new contexts.** In the EQA task, the input consists of a question and a corresponding context. By matching the question with a new context that differs from the original context, we can create a new question-context pair that is highly likely to be unanswerable. This step prioritizes the criterion of **relevance**. We employ the term frequency-inverse document frequency (TF-IDF) method to retrieve the \(k\) most relevant paragraphs from the large corpus containing all contexts from the original dataset (while obviously discarding the context that was originally matched with this question). The outcome of this step is a set of **unanswerable candidates**. It is important to note that the unanswerable candidates created in this step may include some answerable questions, and these answerable questions will be filtered out in step 3 of the pipeline. **Step 2** **Identifying hard unanswerable questions.** In this step, we give priority to both the **relevance** and **plausibility** criteria. We aim to identify unanswerable questions with a highly relevant corresponding context and at least one strong plausible answer. To achieve this, we leverage the concept of adversarial filtering, where adversarial model(s) are applied to filter out easy examples (Yang et al., 2018; Zellers et al., 2018; Zhang et al., 2018). We first fine-tune six models using a dataset comprising answerable questions from the original dataset and randomly selected unanswerable candidates. We acknowledge that some unanswerable questions in this training set may be answerable. Nevertheless, the percentage of answerable questions among the unanswerable candidates is minimal and within an acceptable range (Appendix A.2). To ensure training integrity, we then exclude all unanswerable questions utilized for training these six models from the set of unanswerable candidates. Then, we employ the six fine-tuned models to evaluate the difficulty of each sample in the set of unanswerable candidates.
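As an aside before the remaining filtering steps, a minimal sketch of the Step 1 retrieval described above, assuming scikit-learn's TfidfVectorizer as the TF-IDF implementation and a hypothetical value of \(k\) (neither is specified in the text):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_candidates(question, original_context, corpus, k=5):
    """Step 1 sketch: return the k contexts most TF-IDF-similar to the question,
    excluding the context originally paired with it."""
    pool = [c for c in corpus if c != original_context]
    vectorizer = TfidfVectorizer(stop_words="english")
    context_matrix = vectorizer.fit_transform(pool)
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, context_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [pool[i] for i in top]

# Each (question, new_context) pair produced here is an "unanswerable candidate".
```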
If at least two of the six models predict that a given question is answerable, we consider it to be a challenging unanswerable question and include it in our set of **challenging unanswerable candidates**. **Step 3** **Filtering out answerable questions.** The set of challenging unanswerable questions consists of questions that at least two out of the six models predict as answerable. Consequently, there may be a considerable percentage of questions that are indeed answerable. Therefore, this specific step in our pipeline aims to ensure the **fidelity** of the _AGent_ pipeline, ensuring that all questions created by our pipeline are genuinely unanswerable. We leverage the predicted answers and confidence scores from the six deployed models in the previous step to achieve this. Subsequently, we devise a filtering model with four inputs: \(c_{a}\), representing the cumulative confidence scores of the models attempting to answer (or predicting as answerable); \(c_{u}\), representing the cumulative confidence scores of the models not providing an answer (or predicting as unanswerable); \(n_{a}\), denoting the number of models attempting to answer; and \(n_{u}\), indicating the number of models not providing an answer. The output of this filtering model is a value \(V(q)\) for each question \(q\). The filtering models must be developed independently for different datasets. In order to determine the filtering threshold and develop the filtering model, we manually annotate \(200\) challenging unanswerable candidates from each dataset. The filtering threshold is established by identifying the minimum value \(V(q_{a})\), where \(q_{a}\) represents an answerable question from our annotated set. This approach ensures a precision of \(100\%\) in identifying unanswerable questions on the annotated 200 questions. The filtering model then acts to minimize the number of false positives (the number of unanswerable candidates that are answerable) at the expense of tossing out some candidate questions that are unanswerable. However, as the filtering model is applied to unseen challenging unanswerable candidates, the precision of the filtering model in this step would not be \(100\%\) as it is on the \(200\) manually annotated samples. Therefore, in the next section, we use human experts to evaluate the precision exhibited by the filtering model. Further details for the _AGent_ pipeline are outlined in Appendix A. Figure 2: The _AGent_ pipeline for generating challenging high-quality unanswerable questions in Extractive Question Answering given a dataset with answerable questions. The six models used in this pipeline are the base and large versions of BERT, RoBERTa, and SpanBERT. In step 3 of the pipeline, the blue dots represent the calculated values (using the formula discussed in §4.2) for unanswerable questions, while the red dots represent the calculated values for answerable questions. The threshold for discarding questions from the final extracted set of unanswerable questions is determined by finding the minimum value among all answerable questions. Any question with a calculated value greater than the threshold will not be included in our final extracted set. ### Human Reviewing This section presents our methodology for evaluating the data quality of unanswerable questions automatically created by _AGent_. We use three experts to validate 100 random unanswerable questions from each development set of SQuAD _AGent_ and HotpotQA _AGent_.
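To make the Step 3 logic concrete, a minimal sketch of the thresholding described above; the functional form of \(V(q)\) is not given in the text, so a simple confidence-margin score is used here purely as a hypothetical stand-in:

```python
def score(c_a, c_u, n_a, n_u):
    """Hypothetical stand-in for the filtering model V(q): larger values mean the
    six models collectively lean towards 'answerable'."""
    return (c_a - c_u) + 0.1 * (n_a - n_u)

def fit_threshold(annotated):
    """annotated: list of (c_a, c_u, n_a, n_u, is_answerable) tuples for the 200
    manually labelled candidates. The threshold is the minimum V(q) over the
    answerable ones, so every annotated answerable question is discarded
    (100% precision on the annotated set)."""
    return min(score(*x[:4]) for x in annotated if x[4])

def keep_question(candidate, threshold):
    """Keep a candidate in the final unanswerable set only if V(q) does not
    exceed the threshold (questions above the threshold are discarded)."""
    return score(*candidate) <= threshold
```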
In order to prevent an overwhelming majority of unanswerable questions in our annotation set, which could potentially undermine the integrity of the annotation, we incorporate 20 manually annotated answerable questions during step 3 of the pipeline. Consequently, we provide a total of 120 questions to each expert for each set. The process of expert evaluation involves two distinct phases. During the first phase, each of the three experts independently assesses whether a given question is answerable and provides the reasoning behind their annotation. In the second phase, all three experts are presented with the reasons provided by the other experts for any conflicting samples. They have the opportunity to review and potentially modify their final set of annotations based on the reasons from their peers. We observe that the annotations provided by our three experts demonstrate exceptional quality. Table 1 presents the Fleiss' Kappa score [17] for our three experts after the completion of both phases, as well as the error rate of the _AGent_ development set. Notably, the Fleiss' Kappa score in phase 1 is remarkably high (\(0.76\) on SQuAD _AGent_, and \(0.83\) on HotpotQA _AGent_), suggesting that the annotations obtained through this process are reliable. Moreover, after the second phase, all three experts agree that the \(20\) answerable questions we include in the annotation sets are indeed answerable. As demonstrated in Table 1, the high-quality annotations provided by the three experts indicate an exceptionally low error rate for the unanswerable questions created using _AGent_ (\(6\%\) for SQuAD and \(5\%\) for HotpotQA). For comparison, this error rate is slightly lower than that of SQuAD 2.0, a dataset annotated by humans. ## 5 Experiments and Analysis We now shift our attention from the _AGent_ pipeline to examining the effectiveness of our _AGent_ questions in training and benchmarking EQA models. ### Training Sets The models in our experiments are trained using SQuAD 2.0, SQuAD _AGent_, and HotpotQA _AGent_. It is important to note that the two _AGent_ datasets include all answerable questions from the original datasets and _AGent_ unanswerable questions. \begin{table} \begin{tabular}{c l l l} \hline \hline & & \multicolumn{1}{c}{**Phase**} & \multicolumn{1}{c}{**Phase**} \\ & & **1** & **2** \\ \hline **SQuAD** & Fleiss’ Kappa & \(0.76\) & \(0.95\) \\ _AGent_ & Data Error & \(0.10\) & **0.06** \\ \hline **HotpotQA** & Fleiss’ Kappa & \(0.83\) & \(0.97\) \\ _AGent_ & Data Error & \(0.09\) & **0.05** \\ \hline \hline \end{tabular} \end{table} Table 1: The Fleiss’ Kappa score and _AGent_ data error for the annotations collected from human experts after two distinct phases. ### Testing Sets In our experiments, we use eight sets of EQA questions as summarized in Table 2. In addition to two sets of _AGent_ unanswerable questions, we also incorporate the following six types of questions. **SQuAD.** We use all **answerable** questions from SQuAD 1.1. We use all **unanswerable** questions from SQuAD 2.0. **HotpotQA.** In preprocessing **answerable** questions in HotpotQA, we adopt the same approach outlined in MRQA 2019 (Fisch et al., 2019) to convert each dataset to the standardized EQA format. Specifically, we include only two supporting paragraphs in our answerable questions and exclude distractor paragraphs.
In preprocessing **unanswerable** questions in HotpotQA, we randomly select two distractor paragraphs provided in the original HotpotQA dataset, which are then used as the context for the corresponding question. **Natural Questions (NQ).** In preprocessing **answerable** questions in NQ, we again adopt the same approach outlined in MRQA 2019 to convert each dataset to the standardized EQA format. This format entails having a single context, limited in length. Specifically, we select examples with short answers as our answerable questions and use the corresponding long answer as the context. For **unanswerable** questions in NQ, we select questions with no answer and utilize the entire Wikipedia page, which is the input of the original NQ task, as the corresponding context. However, in line with the data collection process of MRQA 2019, we truncate the Wikipedia page, limiting it to the first 800 tokens. ### Main Results Table 2 presents the results of our experiments. Firstly, our findings demonstrate that unanswerable questions created by _AGent_ pose significant challenges for models fine-tuned on SQuAD 2.0, a dataset with human-annotated unanswerable questions. The average performance of the six models fine-tuned on SQuAD 2.0 and tested on SQuAD _AGent_ is \(49.38\); the average score for testing these models on HotpotQA _AGent_ data is \(58.98\). Notably, unanswerable questions from HotpotQA _AGent_ are considerably more challenging than their unanswerable counterparts from HotpotQA. Secondly, models fine-tuned on the two _AGent_ datasets exhibit comparable performance to models fine-tuned on SQuAD 2.0. On unanswerable questions from HotpotQA and NQ, models fine-tuned on _AGent_ datasets significantly outperform those fine-tuned on SQuAD 2.0. On answerable questions from SQuAD and HotpotQA, models fine-tuned on SQuAD _AGent_ also demonstrate significant improvement over those fine-tuned on SQuAD 2.0 (\(86.96-84.55\) on SQuAD and \(63.26-51.05\) on HotpotQA). This finding highlights the applicability of models fine-tuned on _AGent_ datasets to various question types. However, on answerable questions from NQ and unanswerable questions from SQuAD 2.0, models fine-tuned on _AGent_ datasets exhibit lower performance than those fine-tuned on SQuAD 2.0. On the one hand, the lower performance of models fine-tuned on _AGent_ datasets on unanswerable questions from SQuAD 2.0 is due to an unfair comparison: models fine-tuned on _AGent_ datasets are tested with out-of-domain samples, while models fine-tuned with SQuAD 2.0 are tested with in-domain samples. On the other hand, in the next section, we provide a comprehensive explanation for the lower performance of models fine-tuned on _AGent_ datasets on NQ answerable questions.
\begin{table} \begin{tabular}{|c|c c c|c c c|c c|} \hline _Test_\(\rightarrow\) & \multicolumn{3}{c|}{**SQuAD**} & \multicolumn{3}{c|}{**HotpotQA**} & \multicolumn{2}{c|}{**Natural Questions**} \\ _Train_\(\downarrow\) & answerable & unanswerable & _AGent_ & answerable & unanswerable & _AGent_ & answerable & unanswerable \\ \hline SQuAD 2.0 & \(84.55\pm 3.43\) & \(\textbf{79.16}\pm 5.16\) & \(49.38\pm 5.21\) & \(51.05\pm 15.15\) & \(86.28\pm 2.68\) & \(58.98\pm 4.64\) & \(\textbf{44.30}\pm 6.36\) & \(60.55\pm 12.95\) \\ \hline SQuAD _AGent_ & \(\textbf{86.96}\pm 1.86\) & \(29.63\pm 3.97\) & \(81.38\pm 4.52\) & \(63.26\pm 2.88\) & \(90.01\pm 4.20\) & \(50.61\pm 5.56\) & \(41.05\pm 6.81\) & \(78.66\pm 13.22\) \\ \hline HotpotQA _AGent_ & \(59.06\pm 5.26\) & \(46.13\pm 4.46\) & \(\textbf{87.61}\pm 2.72\) & \(\textbf{77.75}\pm 1.92\) & \(\textbf{99.70}\pm 0.06\) & \(\textbf{95.94}\pm 2.13\) & \(24.11\pm 7.04\) & \(\textbf{84.20}\pm 11.37\) \\ \hline \end{tabular} \end{table} Table 2: Performance of the six models fine-tuned on SQuAD 2.0, SQuAD _AGent_, and HotpotQA _AGent_ datasets evaluated on SQuAD, HotpotQA, and Natural Questions. Each entry in the table is the mean and standard deviation of the F1 scores of the six MRC models. The left column indicates the dataset used to train the six MRC models. The top row indicates the dataset used to test the six MRC models. _AGent_ refers to the unanswerable questions generated using the _AGent_ pipeline. For a more detailed version of this table, refer to Table 8. ### Analysis on Natural Questions To delve deeper into the underperformance of models fine-tuned on the _AGent_ datasets on answerable questions from NQ, we analyze two sets of answerable questions. The first set is \(100\) answerable questions that models fine-tuned on SQuAD _AGent_ predict as unanswerable; the second one is 100 answerable questions that models fine-tuned on SQuAD 2.0 predict as unanswerable. For the sake of simplicity, we limit our reporting in this section to the analysis of the RoBERTa-base models. Our analysis uncovers two potential issues that can arise when evaluating models with answerable questions from the NQ dataset. Table 3 summarizes our findings in this section. Firstly, there is a considerable difference in the provided context between the original NQ dataset and the version of NQ used in the EQA task following the prevailing approach in the research community. While the EQA task uses the long answer as the context (Fisch et al., 2019), NQ supplies an entire Wikipedia page as the context for a given question. This difference presents a potential problem of inadequate context for answering the question. For instance, in Table 3, we observe that the long answer associated with the question "Who dies in the lost city of z?" fails to mention "the lost city of z". Using a long answer as the context makes this question unanswerable due to the insufficient context provided. We find that most answerable questions predicted as unanswerable by models fine-tuned on SQuAD 2.0 and SQuAD _AGent_ belong to this specific question type (\(65\%\) and \(76\%\) respectively).
This finding highlights the potential unreliability of comparing models using the NQ dataset in the way commonly done in multiple EQA studies. This analysis forms the basis for our decision not to employ our _AGent_ pipeline on the NQ dataset. Secondly, the questions in the NQ dataset are sourced from real users who submitted information-seeking queries to the Google search engine under natural conditions. As a result, a small portion of these questions may inevitably contain typographical errors or misspellings. In our analysis, we observe that models fine-tuned on our _AGent_ training set tend to predict the questions of this type as unanswerable more frequently. Nevertheless, due to the relatively small proportion of questions with typographical errors in our randomly surveyed sets, we refrain from drawing a definitive conclusion at this point. Therefore, in the subsequent section, we will delve further into this matter by adopting an adversarial attack on the EQA task. \begin{table} \begin{tabular}{l|c c} \hline \hline & **SQuAD** & **SQuAD** \\ & **2.0** & _AGent_ \\ \hline \multirow{4}{*}{\begin{tabular}{l} Insufficient \\ context for \\ question \\ \end{tabular} } & \begin{tabular}{l} Murray survives and, in front of the RGS trustees, accuses Fawcett of abandoning him in \\ the jungle. Fawcett elects to resign from the society rather than apologize. World War I \\ breaks out in Europe, and Fawcett goes to France to fight. Manley dies in the trenches at \\ the Battle of the Somme, and Fawcett is temporarily blinded in a chlorine gas attack. Jack, \\ Fawcett’s eldest son \(-\) who had long accused Fawcett of abandoning the family \(-\) reconciles \\ with his father as he recovers. \\ \end{tabular} \\ \cline{2-3} & **Question**: who dies in the lost city of z? \\ \hline \multirow{3}{*}{\begin{tabular}{l} typographical \\ errors of key \\ words \\ \end{tabular} } & \begin{tabular}{l} Gimme Gimme Gimme has broadcast three series and 19 episodes in total. The first series \\ premiered on BBC Two on 8 January 1999 and lasted for six episodes, concluding on 12 \\ February 1999. [...] \\ **Question**: when did gim me gim me gim me start? \\ \hline \hline \end{tabular} & \multirow{3}{*}{ \begin{tabular}{l} 3 \\ 3 \\ 3 \\ 3 \\ \end{tabular} } \\ \end{tabular} \end{table} Table 3: Examples of two types of answerable questions in Natural Questions that can pose challenges for EQA models fine-tuned solely on unanswerable questions. We conduct a survey to measure the failure rates of RoBERTa models fine-tuned on both SQuAD 2.0 and SQuAD _AGent_ for these question types. ## 6 Robustness against Syntactic Variations In this section, we apply the adversarial attack technique TextBugger to EQA. ### TextBugger Our adversarial attack in this section is inspired by the TextBugger attack (Li et al., 2019). We use the black-box TextBugger, which means that the attack algorithm does not have access to the gradients of the model. TextBugger generates attack samples that closely resemble the typographical errors commonly made by real users. We perform adversarial attacks on questions from the SQuAD 1.1 dataset. Algorithm 1 in Appendix E provides the pseudocode outlining the process of generating attacked questions. Table 4 provides examples of how TextBugger generates bugs in a given token. ### Robustness against TextBugger We investigate the impact of TextBugger attacks on models fine-tuned using different datasets, namely SQuAD 1.1, SQuAD 2.0, and SQuAD _AGent_.
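For illustration, the four character-level bug types listed in Table 4 can be generated with a few lines of code; this is a simplified stand-in for TextBugger, which in Li et al. (2019) additionally selects which tokens to perturb based on their importance:

```python
import random

def bug_variants(token: str):
    """Return simple character-level perturbations of a token, mimicking the
    insert / delete / swap / substitute-character bugs in Table 4.
    (Simplified stand-in for TextBugger; token-importance selection is omitted.)"""
    if len(token) < 3:
        return [token]
    i = random.randrange(1, len(token) - 1)            # avoid first/last character
    sub_map = {"o": "0", "l": "1", "i": "1", "a": "@", "e": "3", "s": "5"}
    insert = token[:i] + " " + token[i:]                              # e.g. "South" -> "Sou th"
    delete = token[:i] + token[i + 1:]                                # e.g. "South" -> "Souh"
    swap = token[:i] + token[i + 1] + token[i] + token[i + 2:]        # e.g. "South" -> "Souht"
    substitute = "".join(sub_map.get(c.lower(), c) if k == i else c
                         for k, c in enumerate(token))                # e.g. "South" -> "S0uth"
    return [insert, delete, swap, substitute]

print(bug_variants("South"))
```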
To accomplish this, we generate attacked questions by modifying 1, 2, 3, and 4 tokens in the questions from the SQuAD 1.1 dataset. Figure 3 reports the performance of three RoBERTa-base models fine-tuned on SQuAD 1.1, SQuAD 2.0, and SQuAD _AGent_. Firstly, we see that the performance of the model fine-tuned on SQuAD 1.1 shows small decreases (from \(92.2\) to \(71.9\)). The TextBugger adversarial attack does not present a significant challenge to the EQA model when the model is designed only to handle answerable questions. Secondly, we can observe that the model fine-tuned on unanswerable questions from SQuAD 2.0 demonstrates significantly better robustness compared to the model fine-tuned on SQuAD _AGent_ (\(86.1-56.8\) compared to \(88.6-34.5\)). This finding confirms our initial hypothesis that the lower performance of models fine-tuned on _AGent_ datasets for answering questions in the NQ dataset is partly attributable to misspelled keywords in the questions from the NQ dataset. ## 7 Conclusion and Future Works In this work, we propose _AGent_, a novel pipeline designed to automatically generate two sets of unanswerable questions from a dataset of answerable questions. We systematically apply _AGent_ on SQuAD and HotpotQA to generate unanswerable questions. Through a two-stage process of human reviewing, we demonstrate that _AGent_ unanswerable questions exhibit a low error rate. Our experimental results indicate that unanswerable questions generated using the _AGent_ pipeline present significant challenges for EQA models fine-tuned on SQuAD 2.0. We also demonstrate that models fine-tuned using _AGent_ unanswerable questions exhibit competitive performance compared to models fine-tuned on human-annotated unanswerable questions from SQuAD 2.0 on multiple test domains. The good performance of models fine-tuned on two _AGent_ datasets with different characteristics, SQuAD _AGent_ and HotpotQA _AGent_, demonstrates the utility of _AGent_ in creating high-quality unanswerable questions and its potential for enhancing the performance of EQA models. Furthermore, our research sheds light on two potential issues when utilizing EQA models designed to handle both answerable and unanswerable questions. Specifically, we identify the problems of insufficient context and typographical errors as considerable challenges in this context. In calling for further study on typographical errors, we propose the inclusion of the TextBugger adversarial attack in EQA. Our analysis reveals that TextBugger presents a novel challenge for EQA models designed to handle both answerable and unanswerable questions. It is important to address this challenge comprehensively before the real-world deployment of EQA models. By acknowledging and effectively tackling the influence of typographical errors, we can enhance the robustness and reliability of EQA models in practical applications. ## Limitations We acknowledge certain limitations in our work.
Firstly, our study primarily focuses on evaluating the pipeline using multiple pre-trained transformer-based models in English, which can be prohibitively expensive to create, especially for languages with limited resources. Furthermore, given the empirical nature of our study, there is no guarantee that all other transformer-based models or other deep neural networks would demonstrate the same level of effectiveness when applied in the _AGent_ pipeline. Consequently, the applicability of the _AGent_ pipeline to low-resource languages may be constrained by this limitation. Potential future research could complement our findings by investigating the effectiveness of implementing the _AGent_ pipeline in other languages. Secondly, our analysis does not encompass a comprehensive examination of the models' robustness against various types of adversarial attacks in EQA when fine-tuned on _AGent_ datasets. We believe that such an analysis is crucial in determining the effectiveness of the _AGent_ pipeline in real-world applications, and its absence deserves further research. Finally, our study has not discussed the underlying factors for the observed phenomenon: a model fine-tuned on SQuAD _AGent_ is less robust against the TextBugger attack than its peer model fine-tuned on SQuAD 2.0. A study in this direction requires a remarkably intricate investigation, which we deem beyond the scope of our present research. We leave this for future work, where we will propose hypotheses that may shed light on this phenomenon and potential solutions to improve the robustness of EQA models against TextBugger. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Original** & Insert & Delete & Swap & \begin{tabular}{c} Substitute \\ Character \\ \end{tabular} \\ \hline **South** & Sou th & Souh & Souht & S0uth \\ \hline \multicolumn{5}{|c|}{What **Souh** African law **recognized** two **typ es**} \\ \multicolumn{5}{|c|}{of schools?} \\ \hline \end{tabular} \end{table} Table 4: Examples of how TextBugger generates bugs in a given token ”South” and a full question after the TextBugger attack. The attacked tokens are highlighted in red. Figure 3: Robustness of RoBERTa-base trained on SQuAD 1.1, SQuAD 2.0, and SQuAD _AGent_ against TextBugger.
2303.00020
A shared accretion instability for black holes and neutron stars
Accretion disks around compact objects are expected to enter an unstable phase at high luminosity. One instability may occur when the radiation pressure generated by accretion modifies the disk viscosity, resulting in the cyclic depletion and refilling of the inner disk on short timescales. Such a scenario, however, has only been quantitatively verified for a single stellar-mass black hole. Although there are hints of these cycles in a few isolated cases, their apparent absence in the variable emission of most bright accreting neutron stars and black holes has been a lingering puzzle. Here we report the presence of the same multiwavelength instability around an accreting neutron star. Moreover, we show that the variability across the electromagnetic spectrum, from radio to X-ray, of both black holes and neutron stars at high accretion rates can be explained consistently if the accretion disks are unstable, producing relativistic ejections during transitions that deplete or refill the inner disk. This new association allows us to identify the main physical components responsible for the fast multiwavelength variability of highly accreting compact objects.
F. M. Vincentelli, J. Neilsen, A. J. Tetarenko, Y. Cavecchi, N. Castro Segura, S. del Palacio, J. van den Eijnden, G. Vasilopoulos, D. Altamirano, M. Armas Padilla, C. D. Bailyn, T. Belloni, D. J. K. Buisson, V. A. Cuneo, N. Degenaar, C. Knigge, K. S. Long, F. Jimenez-Ibarra, J. Milburn, T. Muñoz Darias, M. Ozbey Arabaci, R. Remillard, T. Russell
2023-02-28T19:00:22Z
http://arxiv.org/abs/2303.00020v1
# A shared accretion instability for black holes and neutron stars ###### Abstract Accretion disks around compact objects are expected to enter an unstable phase at high luminosity[1]. One instability may occur when the radiation pressure generated by accretion modifies the disk viscosity, resulting in the cyclic depletion and refilling of the inner disk on short timescales[2]. Such a scenario, however, has only been quantitatively verified for a single stellar-mass black hole[3; 4; 5]. Although there are hints of these cycles in a few isolated cases[6; 7; 8; 9; 10], their apparent absence in the variable emission of most bright accreting neutron stars and black holes has been a lingering puzzle[11]. Here we report the presence of the same multiwavelength instability around an accreting neutron star. Moreover, we show that the variability across the electromagnetic spectrum--from radio to X-ray--of both black holes and neutron stars at high accretion rates can be explained consistently if the accretion disks are unstable, producing relativistic ejections during transitions that deplete or refill the inner disk. Such new association allows us to identify the main physical components responsible for the fast multiwavelength variability of highly accreting compact objects. Swift J1858.6\(-\)0814 (hereafter Swift J1858) is a low mass X-ray binary (LMXB) that was first detected in November 2018[12] and reached a maximum X-ray luminosity of \(\approx\) 10\({}^{37}\) erg s\({}^{-1}\) (0.6-79 keV)[13]. Spectral analysis showed peculiar properties, including significant obscuration[13, 14] (N\({}_{H}\)\(\approx\) 10\({}^{23}\) cm\({}^{-2}\)) and outflows in X-rays[15], optical[16] and UV[17]. Moreover, for more than a year after its discovery, the source showed remarkable flaring activity from radio to hard X-rays[13, 18, 15, 19]. The source returned to quiescence in 2020, but not before exhibiting X-ray eclipses[19] and Type-I X-ray bursts[20] indicating the presence of an accreting neutron star with an orbital inclination \(>\)70\({}^{\circ}\) at a distance of \(\approx\)13 kpc. On the 6th of August 2019, we coordinated a multiwavelength campaign to observe Swift J1858 simultaneously for \(\sim\)4 h with high time resolution in 5 bands: X-rays (3-79 keV) with _NuSTAR_ ; UV (150 nm) with the _Cosmic Origins Spectrograph_ onboard the Hubble Space Telescope; optical (_i+z sdss_ band, effective wavelength \(\lambda_{\rm eff}\)\(=\) 720 nm) with the _RISE_ at the Liverpool Telescope; near-IR (\(K_{s}\) band, \(\lambda_{\rm eff}\)\(=\) 2.2 \(\mu\)m) with HAWK-I on the Very Large Telescope; and radio (4.5 and 7.5 GHz) with the Karl G. Jansky Very Large Array. The source showed very strong variability with similar patterns in UV, optical, IR (UV/O/IR), and X-ray (see Figure 1-a-b). On long timescales, Swift J1858 exhibited a repetitive behaviour, alternating between quiet and active/variable phases (Figure 1 and Figure 2). The active phases showed oscillatory behavior on timescales of \(\approx\)100 s; we refer to these as "beats," given their visual similarity to the "heartbeat" variability pattern in GRS 1915+105[5]. On timescales of seconds, the source showed episodic fast flaring events (seen only in IR), which we refer to as "flares". To explore the multiwavelength temporal behavior, we computed the cross-correlation function (CCF) between _NuSTAR_ and HAWK-I for all the simultaneous segments in our dataset (see Methods). 
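A minimal sketch of how such a lag can be estimated from two evenly sampled, simultaneous light curves (a schematic stand-in; the exact segment selection and averaging are described in the Methods):

```python
import numpy as np

def ccf_lag(xray, ir, dt):
    """Estimate the IR lag (in seconds) relative to X-rays from two evenly
    sampled, simultaneous light curves via a normalized cross-correlation.
    Positive lag means the IR trails the X-rays. The normalization here is
    approximate (it ignores the reduced overlap at large lags)."""
    x = (xray - xray.mean()) / xray.std()
    y = (ir - ir.mean()) / ir.std()
    n = len(x)
    ccf = np.correlate(y, x, mode="full") / n
    lags = np.arange(-n + 1, n) * dt
    return lags[np.argmax(ccf)], lags, ccf

# toy example: an IR curve that is the X-ray curve delayed by ~3 s, sampled at 0.5 s
dt = 0.5
t = np.arange(0, 600, dt)
xray = np.sin(2 * np.pi * t / 100.0) + 0.1 * np.random.randn(t.size)
ir = np.interp(t - 3.0, t, xray) + 0.1 * np.random.randn(t.size)
lag, _, _ = ccf_lag(xray, ir, dt)
print(f"recovered IR lag ~ {lag:.1f} s")
```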
We measured a clear correlation between the two bands, but the IR lags the X-ray variability with a delay that changes from \(\approx\) 2.5 s to \(\approx\) 5.5 s (see Figure 1-c). The magnitude and orbital phase dependence of these lags are fully consistent with a model where the UV/O/IR beats originate from the irradiation of X-ray beats on a disk and donor star with high orbital inclination (\(\approx\) 80\({}^{\circ}\)) and the orbital period of Swift J1858 (\(\approx\)21.3 h[19]). Simple mass accretion rate variations in a hot inflow are not likely to explain the driving X-ray lightcurve[2]. The X-ray variability observed in Swift J1858 shows significant spectral evolution not compatible with the standard variability of accreting compact objects[21, 3, 4]. In addition, similar variability has been seen in the archetypal high-accretion-rate stellar-mass black holes GRS 1915+105 and V404 Cyg[13]. These sources also share other important properties with Swift J1858, such as high luminosity (40% of the Eddington luminosity for Swift J1858), obscuration and outflows[13, 14]. This association is strengthened by the remarkable similarity of the IR lightcurve of Swift J1858 and the X-ray lightcurve of the so-called "\(\beta\)" variability class of GRS 1915+105[21] (Figure 2). Even though the patterns are less discernible in the X-ray band for Swift J1858 (probably due to variable line-of-sight obscuration, given its high inclination[9, 13, 15, 16]), the irradiation origin of the UV/O/IR lightcurve strongly suggests a common physical mechanism for the driving variability in both sources. From a physical point of view, it is commonly accepted that the recurrent behaviour of GRS 1915+105 (i.e., heartbeats and other limit cycles) is due to a radiation pressure instability in the disk at high accretion rates[2, 3, 4, 5]. Although not fully confirmed by GRMHD simulations, this instability is believed to drive cyclic accretion or ejection and rebuilding of the inner disk, generating repeating patterns in X-rays on 10-1000 s timescales[3, 4, 5]. If this emission irradiates the disk and companion star, it will give rise to a delayed UV/O/IR lightcurve, such as the one observed in Swift J1858. The interpretation of beats as a disk instability can be tested: both models[4] and observations[5] of GRS 1915+105 need short-lived jet ejections near the peak luminosity (roughly coincident with the depletion of the disk). The fast IR flares in Swift J1858 appear to verify this hypothesis, giving credence to the radiation pressure instability interpretation of the limit cycles. Aligning and averaging the flares, including 200 s of data before and after each flare, reveals that they take place after the peak of the slower IR beats (see Figure 1-d). But these flares are inconsistent with a thermal origin (see Methods), and, given their red color, we interpret them as direct evidence of optically-thin synchrotron emission from transient/short-lived relativistic jet ejections expected to occur[4] during these beat oscillations. Swift J1858 also showed significant radio variability throughout our campaign[18], which requires explanation. The fast IR flares cannot be responsible for the observed low-frequency variability because their amplitude and duration would naturally lead to their radio emission being completely self-absorbed (\(\tau\gg 1\) at 10 GHz; see Methods).
However, observations of GRS 1915+105 also show "baby jets": strong radio flares (though their synchrotron emission can contribute significantly in the IR band[22, 23]) that are consistent with emission from adiabatically expanding blobs[24] (although their launching mechanism is still not clear). To search for baby jets in Swift J1858 and make a comparison to GRS 1915+105, we modeled its variable radio emission as the sum of multiple ejecta[25], performing the same analysis on an archival radio observation of GRS 1915+105 (coincident with the \(\beta\)-class X-ray lightcurve shown in Figure 2). The results presented in Figure 3 show that the radio variability of both sources is well reproduced by our modelling. For Swift J1858, the model suggests baby jet ejection times (grey shaded areas in Figure 3) near quiet/active phase transitions; most of the ejecta in GRS 1915+105 occur during quiet phases but several fall close to quiet/active transitions as well. For self-consistency, we then tested whether Swift J1858's baby jets would be detectable in the IR as for GRS 1915+105. Past studies[24, 5] show accretion instabilities in GRS 1915+105 when the X-ray and radio luminosities are \(L_{\rm BH_{x}}\approx 10^{38}\) erg s\({}^{-1}\) and \(L_{\rm BH_{radio}}\approx 10^{30}\) erg s\({}^{-1}\), respectively. For Swift J1858, we find \(L_{\rm NS_{X}}\approx 10^{37}\) erg s\({}^{-1}\) and \(L_{\rm NS_{radio}}\approx 10^{29}\) erg s\({}^{-1}\)[18]. Even under the conservative assumption that the ratio between the IR and radio flux from the jet in Swift J1858 is the same as the one observed in GRS 1915+105 during the \(\beta\)-class instability (IR/radio \(\approx 1.4\))[24], we expect an IR baby jet flux of only \(\approx\)0.24 mJy. This is almost a factor of two fainter than the reprocessed emission during the beats (\(\approx\)0.4 mJy). This indicates that the two sources share the same disk-jet coupling, despite having qualitatively different radio and IR lightcurves. More broadly, regardless of the jet launching mechanism, this shows how the appearance of accretion instabilities can depend not only on the accretion rate and disk-jet geometry, but also on the binary orbit and the mass of the compact object. There is growing evidence that high-accretion-rate black hole sources such as GRS 1915+105, V4641 Sgr, Cyg X-3, and V404 Cygni all share common X-ray spectral variability properties[14]. However, multi-wavelength parallels have proven more difficult due to their different extinctions, hampering efforts to produce a unified physical scenario for this class of sources. Yet, in line with our conclusions, Swift J1858 shows clear analogies with all these objects. Simultaneous multiwavelength observations of the 2015 outburst of V404 Cygni revealed repetitive optical/X-ray patterns with a lag consistent with reprocessing[26, 10, 27] and fast non-thermal flares[28]. Furthermore, its extreme radio variability is consistent with jet ejections taking place _during X-ray spectral transitions[25]_. Moreover, similar O-IR recurrent patterns with comparable timescales have also been observed in V4641 Sgr[29] and Cyg X-3[30]. Finally, we note that X-ray heartbeats have also been detected in sources like the LMXB IGR J17091\(-\)3624[7] and the ULX NGC 3261[31], which also shows significant line-of-sight obscuration despite having a lower inclination.
Thus, the recent association of Swift J1858 as a low-luminosity Z-source[32], and the isolated presence of X-ray "GRS 1915-like" patterns in other accreting NSs such as the Rapid Burster[6] and the Bursting Pulsar[33], strongly indicate that Swift J1858 represents the missing link for multiwavelength variability in high accretion rate sources (Figure 2, and Extended Data Figure 1). It was also noted during review that while the limit cycle timescale is similar in GRS 1915+105 and Swift J1858 (despite their very different masses; see Methods), the beat timescale is much shorter around the black hole in the example lightcurves shown in Figure 2. In fact, GRS 1915+105 exhibits a wide range of beat durations in similar limit cycles[21], which suggests that the beats may represent a second instability timescale[4] or may be affected by other factors in the accretion flow. One possibility is the jet power, which is expected to have a significant impact on the disk structure, and thus on the observed X-ray lightcurve[4, 3]. A careful comparison of the time-dependent radio/O-IR properties in states or sources with different beat timescales[34] could further elucidate the role of jets in shaping these instabilities. Our results draw a new coherent picture that links together key aspects of the multiwavelength variability of both black holes and neutron stars at high accretion rate: recurrent repetitive patterns, radio oscillations and fast flaring. At these high accretion rates, the accretion disk becomes unstable, resulting in disk-jet cycles on timescales of \(\sim 10\) s to \(\sim 1000\) s. These have historically been observed in X-rays, but our work shows that given the right conditions (e.g., inclination, orbital period, obscuration, and the relative brightness of the jet), accretion instabilities may in fact be more readily observable at UV/O/IR wavelengths. These instabilities are also observationally associated with radio-emitting discrete ejections: therefore, for the first time we can define a consistent physical scenario which can _quantitatively_ account for most of the multiwavelength variability observed from accreting compact objects at high luminosity. We argue that accretion instabilities, irradiation/obscuration, and jet ejecta should be seen as three fundamental pillars that can be used to study other classes of objects accreting near the Eddington limit. With this insight, future time-resolved multiwavelength campaigns on compact objects will lead to better constraints on the physics of these instabilities and their hosts, independently of the nature of the central object[8].
2310.00118
Transforming Materials Discovery for Artificial Photosynthesis: High-Throughput Screening of Earth-Abundant Semiconductors
We present a highly efficient workflow for designing semiconductor structures with specific physical properties, which can be utilized for a range of applications, including photocatalytic water splitting. Our algorithm generates candidate structures composed of earth-abundant elements that exhibit optimal light-trapping, high efficiency in \ce{H2} and/or \ce{O2} production, and resistance to reduction and oxidation in aqueous media. To achieve this, we use an ionic translation model trained on the Inorganic Crystal Structure Database (ICSD) to predict over thirty thousand undiscovered semiconductor compositions. These predictions are then screened for redox stability under Hydrogen Evolution Reaction (HER) or Oxygen Evolution Reaction (OER) conditions before generating thermodynamically stable crystal structures and calculating accurate band gap values for the compounds. Our approach results in the identification of dozens of promising semiconductor candidates with ideal properties for artificial photosynthesis, offering a significant advancement toward the conversion of sunlight into chemical fuels.
Sean M. Stafford, Alexander Aduenko, Marcus Djokic, Yu-Hsiu Lin, Jose L. Mendoza-Cortes
2023-09-29T20:12:08Z
http://arxiv.org/abs/2310.00118v1
# Transforming Materials Discovery for Artificial Photosynthesis: High-Throughput Screening of Earth-Abundant Semiconductors ###### Abstract We present a highly efficient workflow for designing semiconductor structures with specific physical properties, which can be utilized for a range of applications, including photocatalytic water splitting. Our algorithm generates candidate structures composed of earth-abundant elements that exhibit optimal light-trapping, high efficiency in H\({}_{2}\) and/or O\({}_{2}\) production, and resistance to reduction and oxidation in aqueous media. To achieve this, we use an ionic translation model trained on the Inorganic Crystal Structure Database (ICSD) to predict over thirty thousand undiscovered semiconductor compositions. These predictions are then screened for redox stability under Hydrogen Evolution Reaction (HER) or Oxygen Evolution Reaction (OER) conditions before generating thermodynamically stable crystal structures and calculating accurate band gap values for the compounds. Our approach results in the identification of dozens of promising semiconductor candidates with ideal properties for artificial photosynthesis, offering a significant advancement toward the conversion of sunlight into chemical fuels. ## I Introduction Alarmingly, humanity's consumption of fossil fuels continues to grow rapidly despite widespread awareness of their connection to the climate crisis. [1; 2; 3] The sun offers the best path to wean ourselves off these pollutants as it provides about as much energy to Earth every hour as humanity uses throughout an entire year. [2; 3; 4] Solar currently accounts for a discouraging 1.5% share of our energy consumption, but thanks to investment in the past decade, this share is growing exponentially. [1; 2; 3] The vast majority of investment in solar energy has been dedicated to the research and production of photovoltaic (PV) cells, primarily in the form of solar panels. As a result of this investment, the technology has matured significantly and become increasingly accessible. In fact, the price of solar panels has plummeted by over 99.6% since 1976, when their power generation capacity was a million times less than it is today. These figures are supported by multiple sources of solar panel price and uptake data. [5; 6; 7; 8] PV cells, while a promising source of renewable energy, face a significant challenge due to their inherent intermittency. [9; 10; 11; 12] As they generate electricity by converting sunlight into a potential difference between photoelectrode components, [13] they do not store energy, resulting in an output that is dependent on sunlight availability. The power output of PV cells is, therefore, subject to daily and annual oscillations, as well as fluctuations in weather conditions and regional climate differences. [9; 10; 11; 12] A promising alternative to traditional solar technology is the photo-electrolyzer. This cutting-edge system harnesses electricity generated by a PV material to power a water-splitting reaction on a catalyst. By separating the functions of trapping sunlight and generating fuel into two distinct components, the photo-electrolyzer generates Hydrogen and Oxygen fuel from sunlight indirectly. This innovative approach circumvents the intermittency problem associated with conventional solar power systems, ensuring energy remains available even when sunlight is not. However, there are still a few hurdles to overcome. For instance, the current system requires wired connections, which can result in significant energy loss. 
Additionally, the high cost of the water-splitting catalyst (typically made of Platinum or other rare-earth elements) has been a significant barrier to the scalability of photo-electrolyzer technology. A third, unrealized technology - a "no-wires" photo-electrolyzer system that performs photovoltaic and catalytic functions in a single material - shows great promise. With a cost-effective material, this groundbreaking photocatalytic water-splitting process could address the efficiency and scalability problems of photo-electrolyzers, as well as the intermittency problem of PV cells. This paper outlines our quest for a breakthrough photocatalytic water-splitting material that meets the critical requirements of stability, efficiency, and scalability. Unfortunately, no existing material is currently able to meet all these essential criteria. Our search is guided by the demanding specifications of the artificial photosynthesis process we are striving to achieve. To effectively split water, a photocatalyst must possess discrete electronic excitations, which require a semiconductor material. The material's electronic structure governs photoabsorption, with the band gap \(E_{g}\) acting as a filter for lower energy photons that are unable to promote an electron to the conduction band and initiate an excitation. To achieve maximum photoabsorption rates, an efficient photocatalyst must be sensitive to light in the high solar availability range of approximately 1-3 eV. Furthermore, the band gap must be direct to ensure optimal performance. [13; 14; 15] In addition to electronic properties, the material must also exhibit excellent stability in an aqueous solution. The photocathode may undergo a reduction reaction with itself and decompose if its reduction potential \(\phi_{red}\) is positive relative to the Normal Hydrogen Electrode (NHE). Similarly, the photoanode may decompose if its oxidation potential \(\phi_{ox}\) is less than 1.23 V with respect to the NHE, which is the oxidation potential of water. Consequently, the redox potentials of the material must be compatible with aqueous stability requirements. Finally, any successful artificial photosynthesis technology must be composed of Earth-abundant elements to keep the material cost-effective and accessible. This critical constraint ensures that the material is far cheaper than Platinum, making it more widely available for research and development.[14] In summary, our search for the ideal photocatalytic water-splitting material is restricted to Earth-abundant elements that possess compatible redox potentials and band gaps for both aqueous stability and efficient photocatalysis. In the past, searching for a material with a specific set of properties relied heavily on heuristic models, which often proved inadequate due to the vastness of structure space and the complexity of structure-property relationships. This made the search for an optimal material a daunting task. However, recent advancements in computational techniques, such as the use of modern processing power and sophisticated simulation software, have significantly improved the ability to search structure space more effectively.[16] This materials design revolution can be largely attributed to the substantial improvements in density functional theory (DFT), which can now predict the properties of previously unknown materials with reasonable reliability. Despite these recent improvements, however, a brute-force approach to materials discovery remains impractical. 
However, researchers have developed strategic improvements over brute force methods, such as the use of large databases of known materials to identify patterns and make inferences about new materials to guide the search.[17; 18] One such tool in this vein is the substitution likelihood matrix. It was introduced by Hautier _et al._[19] about a decade ago to assess the plausibility of the existence of compounds that differ from known compounds by the swap of ionic components. Recently, this tool has been enhanced and updated by Stafford et al. (2023b, in preparation). Another strategic improvement is the use of structure prediction algorithms, which can significantly improve the efficiency of materials discovery. One such algorithm is the Universal Structure Predictor: Evolutionary Xtallography (USPEX), an evolutionary structure search algorithm that interfaces with a DFT code to generate stable crystal structures for a given composition.[20; 21; 22] By utilizing structure prediction algorithms like USPEX alongside other strategies and tools, such as large databases of known materials and substitution likelihood matrices, we have designed a novel and more efficient materials discovery process. This paper aims to not only introduce our novel materials discovery process but also to showcase its practical application in the field of artificial photosynthesis. In Section II, we present SALSA, our systematic approach to materials discovery that combines database mining, substitution likelihood analysis, and evolutionary structure prediction algorithms. In Section III, we demonstrate the efficacy of SALSA by applying it to the search for a photocatalytic water-splitter, a crucial component of artificial photosynthesis. In Section IV, we analyze and contextualize the results of our application, highlighting the benefits of our approach compared to traditional methods. Furthermore, in Section V, we provide more detailed descriptions of the computational techniques used in SALSA, including density functional theory and crystal structure prediction algorithms. Finally, in Section VI, we conclude with some reflections on the potential impact of SALSA on the development of materials for photocatalytic water-splitting and other important applications in materials science. Figure 1: Introducing the SALSA workflow: A Comprehensive Approach to Materials Discovery. Our novel workflow begins with a curated dataset of compounds with known structures and properties. Leveraging an enhanced substitution matrix we constructed from the full ICSD, we generate a vast library of candidate compounds. We then filter these candidates by identifying structural interpolations with desired properties, ultimately using the USPEX algorithm to determine their structures. Lastly, we employ the high-fidelity CRYSTAL software to perform accurate calculations of both structures and properties. ## II SALSA - (S)ubstitution, (A)pproximation, evo(L)utionary (S)earch, and (A)b-initio calculations We developed a highly efficient and versatile materials discovery process, dubbed SALSA, which is an acronym for **S**ubstitution, **A**pproximation, evo**L**utionary **S**earch, and **A**b-initio calculations. An overview of SALSA is provided in Figure 1. The process starts by taking a target property or set of properties as input and returns a set of candidate structures as output. 
Instead of relying on brute-force approaches, SALSA harnesses the power of a large database of compounds with known structures and properties to rapidly search for new materials. The process begins with swapping ionic components between pairs of known compounds that have similar ionic species, as guided by a substitution likelihood matrix, to produce a dataset of hybrid compounds with defined compositions but undefined structures. We then infer approximate properties for these hybrid compounds using a weighted sum of properties of parent compounds and discard hybrids without desirable properties. Promising hybrids are then subjected to an evolutionary structure search using the USPEX algorithm, which generates stable crystal structures for a given composition whenever possible. High-fidelity DFT calculations are then used to recalculate the properties of the generated structures, and structures with undesirable properties are discarded. The process produces a set of undiscovered materials that are promising candidates for various applications, including the application to artificial photosynthesis discussed in Section III. Furthermore, SALSA is highly versatile and can be applied to other materials science problems as well. **Substitution by Chemical Similarity.** Our group reconstructed and expanded the scope of the substitution likelihood matrix introduced by Hautier _et al._[19] In our construction, we used the entirety of the Inorganic Crystal Structure Database (ICSD)[23] and do not restrict substitutions to preserve the space group of the crystal structure (Stafford et al., 2023b, in prep., will describe the details of this construction). High values of our matrix correspond to pairs of ionic species empirically observed to exist in similar chemical environments. Above a chosen threshold, a value designates substitution between an ion pair as likely. Applying these likely substitutions to compounds of our initial dataset forms a hypothetical set of new candidate compounds. The resulting candidate dataset is too large for us to feasibly calculate properties of all compounds unless we are overly restrictive with unit cell size or substitution threshold. Therefore, we narrow the scope of our investigation to a subset for which we can efficiently approximate properties. **Approximation by Linear Interpolation.** We examine the class of candidate compounds which are compositional interpolations between two initial compounds, i.e. hybrid compounds. We derive estimates for the properties of hybrids by summing the properties of parent compounds with the same ratio used in the corresponding hybrid composition. Next, we define the boundary of a target region of property space appropriate for our application. Finally, we eliminate hybrids that do not lie within this region. This step allows us to filter out the sizeable portion of our candidate compounds that are far removed from the target region before proceeding to intensive calculations. While this is an extremely simplistic model of property space, it is a computationally cheap way to approximate values close enough to eliminate most of the unsuitable candidates without a high risk of eliminating suitable ones. Note that we reduce this risk by extending the boundary of our target region beyond the ideal region of property space by enough to include some tolerance for the error that comes with our interpolation method. See Figure 2 for a summary of this scheme. 
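The substitution step described at the start of this section can be sketched in a few lines: ion pairs whose matrix entry exceeds the chosen threshold define the allowed swaps, which are then applied to known compounds to propose new compositions. The scores, ion labels and function names below are invented placeholders, and charge balancing and partial substitutions are omitted for brevity; the actual workflow uses the ICSD-derived matrix described above.

```python
# Placeholder substitution scores for a few ion pairs (illustration only).
scores = {("Ag+", "Cu+"): 0.8, ("Br-", "I-"): 0.9, ("Br-", "Cl-"): 0.7,
          ("S2-", "O2-"): -0.3}

def likely_swaps(threshold=0.0):
    """Ion pairs whose substitution score exceeds the chosen threshold."""
    return [pair for pair, s in scores.items() if s > threshold]

def candidate_compositions(compound, swaps):
    """Apply single, full ion swaps to a known compound (ion -> count dict)
    and return the resulting hypothetical compositions."""
    out = []
    for a, b in swaps:
        for old, new in ((a, b), (b, a)):
            if old in compound:
                comp = dict(compound)
                comp[new] = comp.pop(old)
                out.append(comp)
    return out

agbr = {"Ag+": 1, "Br-": 1}
print(candidate_compositions(agbr, likely_swaps()))
# e.g. CuBr, AgI and AgCl analogues of AgBr (charge balancing omitted here)
```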
**Evolutionary Search of Structure Space.** Until this point, we have defined our hybrid compounds by their composition alone, but reliable property calculations require structural information. Crystal structure prediction from first principles is prohibitively difficult using just composition. Instead, we turn to an evolutionary structure search code, USPEX, to generate crystal structures for our hybrids. We provide USPEX with a hybrid composition and enable all available stochastic variation operations, which include variation of the space group. If USPEX is unable to converge a structure for a given composition, that indicates the composition is unlikely to have a thermodynamically stable structure and is eliminated from further consideration. See Section V.5 for a more detailed look at our USPEX methodology. Figure 2: SALSA’s composition-property interpolation scheme illustrated for generic properties \(\alpha\), \(\beta\) and \(\gamma\). Parent and hybrid compounds are represented by points outside and within a target region, respectively. Target region represented by green cuboid. For simplicity in depiction, each property has an upper and lower bound here, but this is not required. **Ab-initio Property Calculations.** Our candidate set is now vastly narrowed down and contains structural information so high-fidelity property calculations are computationally feasible. Therefore we perform geometry optimization and property calculation with another DFT code, CRYSTAL17, at the hybrid functional level of theory.[24; 25] Some candidate compounds located within the target region according to interpolation-inferred values shift outside the region upon replacement by CRYSTAL17-calculated values while others do not converge with CRYSTAL17 at all. We discard these and are left with the final products of SALSA - the structures which CRYSTAL17 converges and determines to have properties in the target region. ## III SALSA applied to photocatalytic water-splitting We found that millions of candidate compounds could be generated from our initial dataset with the ion exchanges suggested by our substitution matrix. Of these, about 13,600 were compatible with our structural interpolation scheme, that is, they could be constructed as hybrids of compounds within our initial dataset of known semiconductors. See Section V.2 for details on this dataset construction. 
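Both the interpolation filter and the final ab-initio check reduce to a box test in (\(E_{g}\), \(\phi_{ox}\), \(\phi_{red}\)) space. A compact sketch is shown below, using the numerical bounds given later in Section V.3; the function name and structure are illustrative, not taken from the SALSA code base.

```python
def region(band_gap_ev, phi_ox_v, phi_red_v, tol=0.20):
    """Classify a compound by its (interpolated or calculated) properties.
    'ideal'  : 1.23 eV <= Eg <= 2.80 eV, phi_ox > 1.23 V, phi_red < 0.00 V
    'target' : the same box widened by `tol` (0.20 eV / 0.20 V), the extra
               margin used before the expensive structure searches
    'out'    : everything else"""
    def inside(extra):
        return (1.23 - extra <= band_gap_ev <= 2.80 + extra
                and phi_ox_v > 1.23 - extra
                and phi_red_v < 0.00 + extra)
    if inside(0.0):
        return "ideal"
    if inside(tol):
        return "target"
    return "out"

# The interpolated PbCuSeCl of Section V.3 sits just inside the target box:
print(region(1.84, 1.23, -0.25))   # 'target'
```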
\begin{table} \begin{tabular}{l c c c} Compound & Band gap (eV) & Oxidation (V) & Reduction (V) \\ \hline Ag\({}_{2}\)Te - AgBr & & & \\ \hline Ag\({}_{3}\)TeBr & 1.26 & 1.69 & \(-\)0.41 \\ Ag\({}_{4}\)TeBr\({}_{2}\) & 1.72 & 1.83 & \(-\)0.27 \\ Ag\({}_{5}\)TeBr\({}_{3}\) & 1.98 & 1.90 & \(-\)0.20 \\ \hline Ag\({}_{2}\)S - AgBr & & & \\ \hline Ag\({}_{5}\)SBr\({}_{3}\) & 2.23 & 1.61 & 0.03 \(\mathbf{\downarrow}\) \\ Ag\({}_{3}\)SBr & 1.70 & 1.17 \(\mathbf{\parallel}\) & \(-\)0.01 \\ Ag\({}_{4}\)SBr\({}_{2}\) & 2.04 & 1.45 & 0.01 \(\mathbf{\downarrow}\) \\ \hline TiO\({}_{2}\) - CuO & & & \\ \hline Ti\({}_{2}\)CuO\({}_{5}\) & 2.55 & 1.30 & \(-\)0.48 \\ Ti\({}_{3}\)CuO\({}_{7}\) & 2.67 & 1.42 & \(-\)0.58 \\ TiCuO\({}_{3}\) & 2.28 & 1.03 \(\mathbf{\parallel}\) & \(-\)0.28 \\ \hline TiO\({}_{2}\) - PbO & & & \\ \hline Ti\({}_{2}\)PbO\({}_{5}\) & 2.93 \(\mathbf{\parallel}\) & 1.58 & \(-\)0.56 \\ TiPb\({}_{2}\)O\({}_{4}\) & 2.83 \(\mathbf{\parallel}\) & 1.36 & \(-\)0.22 \\ TiPbO\({}_{3}\) & 2.88 \(\mathbf{\parallel}\) & 1.48 & \(-\)0.40 \\ \end{tabular} \end{table} Table 1: A selection of ternary hybrid compounds including silver telluride-bromides, silver sulfide-bromides, titanium cuprates and titanium-lead oxides. All interpolated band gaps and redox potentials lie within target ranges. One \(\mathbf{\downarrow}\)-symbol appears next to a value for each 0.05 eV/V it lies outside of the ideal range (rounded down). Figure 3: a) A visualization of band gap - oxidation potential - reduction potential space from a perspective that highlights possible interpolations into the ideal property space. Any compound in our initial dataset that could produce one or more interpolations of interest is represented here. Those which had suitable \(\phi_{ox}\), \(\phi_{red}\) or both are labeled with “O”, “R” and “&”, respectively. Lines represent interpolations, with line thickness proportional to a distance within the ideal region. Dashed oval identifies an influential high-\(\phi_{ox}\) cluster. Extra 0.2 eV/V boundary region not depicted here. b) A “top-down” 2D projection of this space excluding the \(\phi_{red}\) dimension. “R” indicates a compound with suitable \(\phi_{red}\). ### Candidate Compounds Overall, we found about 1250 hybrid compounds within our target region, including 484 within our ideal region. This corresponds to roughly one out of every 10 and 30 of all possible hybrids, respectively. Most interpolation pairings involved binary compounds with no elements in common so more hybrids were quaternary rather than ternary. Furthermore, the binary parents of ternary compounds tended to be located more closely to each other in property space, without any portion of the target region between them, so ternary compounds were relatively underrepresented in the regions of interest. The quaternary:ternary ratio was about 5:1 overall, 7:1 in the target region, and 8:1 in the ideal region. Figure 3 provides insight into how certain interpolation patterns emerged as dominant. These patterns can be understood in relation to the initial distribution of compounds in property space. Few initial compounds had acceptable \(E_{g}\) or \(\phi_{ox}\) and none had both simultaneously; however, acceptable \(\phi_{red}\) was much more common. 
This combination advantageously positioned those with relatively high \(\phi_{ox}\), especially the circled cluster containing the five highest \(\phi_{ox}\) compounds, because many partners were located across the ideal region from them. In fact, compounds from this cluster constituted one partner in nearly all interpolation pairings depicted in Figure 3, with the other partner being out-of-cluster and usually low \(\phi_{red}\). These pairings had the largest interpolation distance within the ideal region when the out-of-cluster partner was among the highest \(\phi_{ox}\) of the low-\(E_{g}\) compounds. Larger interpolation distance correlates with a greater number of possible hybrid compounds so this was the most dominant type of interpolation in our hybrid compound dataset. Thus we can roughly understand the interpolation opportunities available to our dataset by focusing on just a small subset of low-\(E_{g}\) and high-\(E_{g}\) compounds which are least oxidizable. The four highest \(\phi_{ox}\) compounds in the high-\(E_{g}\) cluster were AgBr, TiO\({}_{2}\), AgCl, and CuCl, ordered by the number of hybrids derived from them. 95% of hybrid compounds had a parent in this group, including 42% from AgBr alone. The four highest \(\phi_{ox}\) compounds with low-\(E_{g}\) were the binary combinations of Pb and Ag with Se and Te. 40% of hybrids had a parent in this group and 36% had one parent from each group. Table 1 provides some example hybrid compounds from the target region including three hybrids of different composition from pairs of AgBr and TiO\({}_{2}\) with lower \(E_{g}\) compounds. The variety of hybrids included represents how different parents produced hybrids in different regions, e.g. TiO\({}_{2}\) - PbO hybrids tended to have high \(E_{g}\). ### Candidate Structures We used the procedure for USPEX and VASP laid out in Section V.5 to search for the crystal structures of hybrids in our target region. USPEX was able to converge structures for about 50 hybrid compounds. The elemental composition of these structures mostly coincides with the composition in the hybrid compounds highlighted in the previous Section. For example, Ag has the greatest occurrence by far, due to its presence in both the low and high \(E_{g}\) groups. However, Br has a surprisingly low occurrence and Cd has a relatively higher occurrence. Figure 4 and Table 2 show example results of USPEX converged structures. Figure 4 also connects shifts in composition to changes in structure and property space. Figure 4: a) and d) A visualization of band gap - oxidation potential - reduction potential space from a perspective which highlights some interpolations that yielded USPEX-converged structures. Depicted hybrid compounds are represented by blue points. Initial compounds that were parents to the depicted hybrid compounds are represented by peach-colored points. Extra 0.2 eV/V boundary region depicted in translucent green. b) and c) Crystal structures we found for the hybrid compounds. Atom sizes are consistent throughout figure for a given element, except atoms in the legend, which are 2 times as large in diameter. For each structure, dashed gray lines indicate the extent of a single conventional unit cell. ## IV Discussion Figure 6 (a) presents the interpolations into property space from our initial compounds which yielded our final structures, as well as shifts from interpolated predictions. 
Trends within this subset of interpolations suggest certain paths are favored for producing a photocatalytic water-splitter. Our final materials can be divided into two groups. One group is made up of materials containing Silver, halides and group 16 elements. Among the few compounds in our initial dataset which had good oxidation potentials, most contained Silver, so this group emerged from the interpolation between a pair of materials which had good oxidation potentials, but which had band gaps that were too low and too high respectively. Consequently, these materials are robust to hole oxidation - all interpolated oxidation potentials are at least 0.2 V greater than the ideal minimum. However, their interpolated reduction potentials lie close to the threshold for rejection - none are more than 0.01 V under the ideal maximum. Additionally, these structures have low symmetry and are expensive due to their Silver content. The other group contains Lead instead of Silver. Redox suitability of these Lead compounds is inverted relative to the Silver group. That is, these compounds are robust to electron reduction due to Lead's especially negative reduction potential - all have reduction potentials more than 0.2 V under the ideal maximum. However, none have an interpolated oxidation potential more than 0.03 V above the ideal minimum. The Lead structures are also higher in symmetry and relatively cheap. Figure 6 (b) highlights how compounds from different groups lie near different planes of the desired property space, demonstrating the strengths and weaknesses of these groups. The Lead group is about 50 times cheaper so it may offer more scalability.[27, 28, 29, 30, 31] However, the Silver group follows a more regular compositional formula. This means it may be easier to find more compounds in this group with different interpolation ratios if the ones we have discovered do not prove to be as effective as they appear to be. Both paths should be investigated experimentally. Furthermore, we envision the materials design approach we used to be generalizable. In a different scheme, we would see a picture similar to this, but with different starting compounds and different boundaries for the target property space. Figure 5: Crystal structures of final compounds with properties suitable for photocatalytic water-splitting. Atom sizes are consistent throughout figure for a given element, except atoms in the legend, which are 2 times as large in diameter. For each structure, dashed gray lines indicate the extent of a single conventional unit cell. \begin{table} \begin{tabular}{c c c c c c} Compound & Band Gap (eV) & Oxidation Potential (V) & Reduction Potential (V) & Space Group & Price (USD/kg) \\ \hline Ti\({}_{2}\)O\({}_{4}\)Pb\({}_{3}\)Se\({}_{3}\) & 2.333 & 1.257 & \(-\)0.717 & 1 & 8 \\ PbCuSeCl & 1.512 & 1.225 \(\pm\) & \(-\)0.246 & 156 & 7 \\ Ag\({}_{4}\)Br\({}_{2}\)S & 2.741 & 1.451 & 0.014 \(\pm\) & 1 & 307 \\ Ag\({}_{4}\)Cl\({}_{2}\)Se & 1.058 \(\pm\) & 1.907 & \(-\)0.007 & 1 & 299 \\ Ag\({}_{4}\)Cl\({}_{2}\)S & 1.060 \(\pm\) & 1.527 & 0.099 \(\pm\) & 1 & 301 \\ \end{tabular} \end{table} Table 3: Final compounds with band gaps and redox potentials suitable for photocatalytic water-splitting. One \(\pm\)-symbol appears next to a value for each 0.05 eV/V it lies outside of the ideal range (rounded down). ## V Methods ### Initial Dataset We first collected a dataset of experimentally determined semiconductor band gaps. We then applied the method described in Stafford et al. 
2023c (in prep.) to calculate reduction and oxidation potentials. This formed an initial dataset containing 34 compounds. We sought compounds with band gaps between 1.23 and 2.80 eV to enable efficient photoabsorption. We also sought reduction potentials below 0.00 V and oxidation potentials above 1.23 V, with respect to the NHE, for materials which are stable in an aqueous environment. None of our original materials had suitable values for all three properties. Figure 7 presents an overview of the collection of initial compounds and the region of property space described above. Table 4 lists each initial compound, its band gap and its redox potentials. Figure 8 in the Supplementary Material presents a closer look at the swarm of compounds that hover just outside of the target space. Few compounds are stable to photo-catalyzed decomposition. This is mainly because most have oxidation potentials that are too low, leaving them prone to hole oxidation (Figure 8 (d)). However, some of the few with acceptable oxidation potentials have reduction potentials that are too high (Figure 8 (b)). Additionally, compounds are roughly evenly divided into three groups which cannot absorb sunlight efficiently, cannot absorb it at all in the regions of higher solar intensity, and which have an acceptable band gap (Figure 8 (e) and (c)). No matter which angle we look at this property space, we see there is great room for improvement. \begin{table} \begin{tabular}{l c c c} \hline \hline Compound & Band gap (eV) & Oxidation (V) & Reduction (V) \\ \hline Figure 5 & & & \\ \hline Ag\({}_{2}\)S & 0.90 \(\downarrow\) & 0.50 \(\downarrow\) & \(-\)0.07 \(\checkmark\) \\ Ag\({}_{2}\)Se & 0.15 \(\downarrow\) & 1.39 \(\checkmark\) & \(-\)0.31 \(\checkmark\) \\ Ag\({}_{2}\)Te & 0.17 \(\downarrow\) & 1.38 \(\checkmark\) & \(-\)0.74 \(\checkmark\) \\ AgBr & 2.89 \(\uparrow\) & 2.16 \(\checkmark\) & 0.07 \(\uparrow\) \\ AgCl & 3.28 \(\uparrow\) & 2.30 \(\checkmark\) & 0.22 \(\uparrow\) \\ CuCl & 3.40 \(\uparrow\) & 1.69 \(\checkmark\) & 0.12 \(\uparrow\) \\ PbSe & 0.27 \(\downarrow\) & 0.76 \(\downarrow\) & \(-\)0.61 \(\checkmark\) \\ TiO\({}_{2}\) & 3.00 \(\uparrow\) & 1.75 \(\checkmark\) & \(-\)0.83 \(\checkmark\) \\ \hline Other & & & \\ \hline AlAs & 2.20 \(\checkmark\) & \(-\)1.11 \(\downarrow\) & 0.64 \(\uparrow\) \\ AlN & 6.00 \(\uparrow\) & \(-\)0.53 \(\downarrow\) & \(-\)0.90 \(\checkmark\) \\ AlPb & 2.80 \(\uparrow\) & \(-\)0.94 \(\downarrow\) & \(-\)0.62 \(\checkmark\) \\ BN & 6.00 \(\uparrow\) & \(-\)0.06 \(\downarrow\) & \(-\)0.70 \(\checkmark\) \\ CdS & 2.58 \(\checkmark\) & 0.35 \(\downarrow\) & \(-\)0.67 \(\checkmark\) \\ CdSe & 1.85 \(\checkmark\) & 0.78 \(\downarrow\) & \(-\)0.83 \(\checkmark\) \\ CdTe & 1.61 \(\checkmark\) & 0.51 \(\downarrow\) & \(-\)0.99 \(\checkmark\) \\ Cu\({}_{2}\)O & 2.10 \(\checkmark\) & 0.64 \(\downarrow\) & 0.44 \(\uparrow\) \\ Cu\({}_{2}\)S & 1.20 \(\downarrow\) & 0.89 \(\downarrow\) & \(-\)0.30 \(\checkmark\) \\ CuO & 1.20 \(\downarrow\) & \(-\)0.05 \(\downarrow\) & 0.54 \(\uparrow\) \\ GaN & 3.40 \(\uparrow\) & \(-\)0.06 \(\downarrow\) & \(-\)0.36 \(\checkmark\) \\ GaSb & 0.73 \(\downarrow\) & \(-\)0.38 \(\downarrow\) & \(-\)0.65 \(\checkmark\) \\ In\({}_{2}\)S\({}_{3}\) & 1.98 \(\checkmark\) & 0.49 \(\downarrow\) & \(-\)0.57 \(\checkmark\) \\ InAs & 0.41 \(\downarrow\) & \(-\)0.03 \(\downarrow\) & \(-\)0.42 \(\checkmark\) \\ InP & 1.42 \(\checkmark\) & 0.05 \(\downarrow\) & \(-\)0.31 \(\checkmark\) \\ InSb & 0.23 \(\downarrow\) & \(-\)0.13 \(\downarrow\) & \(-\)0.60 \(\checkmark\) \\ MgO & 7.80 \(\uparrow\) & 0.18 \(\downarrow\) & \(-\)1.73 \(\checkmark\) \\ PbO & 2.70 \(\checkmark\) & 1.07 \(\downarrow\) & 0.24 \(\uparrow\) \\ PbS & 0.37 \(\downarrow\) & 0.29 \(\downarrow\) & \(-\)0.37 \(\checkmark\) \\ PbTe & 0.32 \(\downarrow\) & 0.60 \(\downarrow\) & \(-\)0.88 \(\checkmark\) \\ SnO\({}_{2}\) & 3.50 \(\uparrow\) & 1.56 \(\checkmark\) & \(-\)0.12 \(\checkmark\) \\ SnS & 1.00 \(\downarrow\) & 0.42 \(\downarrow\) & \(-\)0.37 \(\checkmark\) \\ ZnO & 3.37 \(\uparrow\) & 0.48 \(\downarrow\) & \(-\)0.45 \(\checkmark\) \\ ZnS & 3.84 \(\uparrow\) & 0.35 \(\downarrow\) & \(-\)0.90 \(\checkmark\) \\ ZnSe & 2.83 \(\uparrow\) & 0.40 \(\downarrow\) & \(-\)0.93 \(\checkmark\) \\ ZnTe & 2.39 \(\checkmark\) & 0.29 \(\downarrow\) & \(-\)1.25 \(\checkmark\) \\ \hline \hline \end{tabular} \end{table} Table 4: The compounds of our initial dataset with their known band gaps and redox potentials. \(\checkmark\), \(\downarrow\), and \(\uparrow\) symbols indicate whether property values are suitable, too low or too high for photocatalytic water splitting, respectively. Figure 6: a) Property space diagram with undiscovered semiconductors with desirable band gaps (eV), oxidation potentials (V) and reduction potentials (V) produced by SALSA depicted in blue. Interpolated predictions depicted as unfilled circles. Original compounds are depicted as peach-colored points. b) A 2D \(\phi_{ox}\)-\(\phi_{red}\) projection that demonstrates which boundaries final Pb and Ag compounds lie near. Final compounds are lettered in correspondence with a) to conserve space. This projection is 1.0 V\(\times\)1.0 V in extent. 
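The \(\checkmark\)/\(\downarrow\)/\(\uparrow\) annotations in Table 4 follow directly from the three property windows stated above; a small sketch that reproduces them, with an illustrative function name, is given below.

```python
def classify(value, low, high=None):
    """Return '✓', '↓' or '↑' for a value that is inside, below or above
    the acceptable window (high=None means no upper bound)."""
    if value < low:
        return "↓"
    if high is not None and value > high:
        return "↑"
    return "✓"

def annotate(band_gap_ev, phi_ox_v, phi_red_v):
    """Symbols used in Table 4: the gap must lie in 1.23-2.80 eV, the
    oxidation potential must exceed 1.23 V and the reduction potential must
    be below 0.00 V (so any non-negative reduction potential is 'too high')."""
    red = "✓" if phi_red_v < 0.0 else "↑"
    return classify(band_gap_ev, 1.23, 2.80), classify(phi_ox_v, 1.23), red

# Two rows of Table 4:
print(annotate(0.90, 0.50, -0.07))   # Ag2S -> ('↓', '↓', '✓')
print(annotate(3.28, 2.30,  0.22))   # AgCl -> ('↑', '✓', '↑')
```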
### Parameters Used for Candidate Generation **Substitution Threshold.** We used a substitution threshold of 0, that is, we did not consider substitutions associated with negative values in our substitution likelihood matrix. This parameter can be adjusted as governed by the computational resources available to a search. A lower threshold enables a more thorough exploration of composition space, but is more computationally expensive and less efficient at finding suitable materials. **Substitution Implementation.** We allowed substitution to constitute a complete or partial replacement of the original ion. For example, \(\mathrm{Br}^{-}\leftrightarrow\mathrm{I}^{-}\) is a matrix-allowed substitution and AgBr is in our initial dataset, so compounds of the form \(\mathrm{Ag}_{n}\,\mathrm{Br}_{n-m}\,\mathrm{I}_{m}\) with \(n,m\in\mathbb{Z}\) are in our candidate dataset. We limited substitutions to be first or second order, i.e., at most two substitutions could be used to generate an individual candidate. In Section III, first and second-order substitutions correspond to ternary and quaternary compounds, respectively. Theoretically, a second-order substitution could consist of exchanging a single, original ionic component for two new ions. However, second-order substitutions that formed hybrid compounds consisted of a single substitution of each of the original components, as this is the only way a second-order substitution could correspond to interpolation between two binary compounds. Building on the previous example, the substitution \(\mathrm{Ag}^{+}\leftrightarrow\mathrm{Cu}^{+}\) could be used in second-order substitutions to produce quaternary compounds of the form \(\mathrm{Ag}_{n-p}\,\mathrm{Cu}_{p}\,\mathrm{Br}_{n-m}\,\mathrm{I}_{m}\) with \(n,p,m\in\mathbb{Z}\). For the purpose of enumerating a complete dataset, the new components of candidate compounds were limited to half or less of the final composition. So all second-order substitutions were partial and one could not generate CuI (\(n=p=m=1\)) from just AgBr. However, this limitation does not affect hybrid compounds. **Unit Cell Size Limit.** In practice, the enumeration of candidate compounds requires some constraint on the values of \(n,m,p\). Results presented in Section III implemented this constraint by imposing a maximum of 20 atoms in a unit cell. This is equivalent to the constraints \(1\leq n,p,m\leq 10\) in the previous example. ### Property Space Selection Criteria With our interpolation scheme we filtered compounds that did not meet the following criteria: \(1.03<\) band gap (\(\mathrm{eV}\)) \(<\) 3.00, oxidation potential (\(\mathrm{V}\)) \(>\) 1.03, and reduction potential (\(\mathrm{V}\)) \(<\) 0.20. This includes an extra window of 0.20 eV for the band gaps and 0.20 V for the potentials to allow for materials that might ultimately arrive in the desired region of property space by deviating slightly from their linear interpolation. To illustrate this process, consider PbSe and CuCl. PbSe's band gap is too small, at 0.27 eV, and its oxidation potential is too low, at 0.76 V, while CuCl has too high a band gap at 3.40 eV. However, the 50:50 interpolation between these two, PbCuSeCl, has band gap, oxidation and reduction potentials of 1.84 eV, 1.23 V and \(-\)0.25 V, respectively, which places it just inside the threshold of our target region. ### Interpolation We construct hybrid compositions which are integer ratios of two parent compositions. We then estimate the properties of the corresponding hybrid compounds to be linear interpolations of the parent compounds on a per-atom basis. In other words, we weight the initial property values by the number of atoms contributed to the hybrid. Furthermore, we don't restrict our interpolations to be single-substitution. For example, both \(\mathrm{Pb}^{2+}\leftrightarrow\mathrm{Cu}^{+}\) and \(\mathrm{Se}^{2-}\leftrightarrow\mathrm{Cl}^{-}\) are matrix-allowed substitutions so if we start with initial compounds PbSe and CuCl, we generate interpolated compositions such as PbCuSeCl. To better understand the per-atom weighting procedure, consider an illustrative example in which we have initial compounds Ag\({}_{2}\)S and AgBr that have a property with values 0 and \(P\). \(\mathrm{S}^{2-}\leftrightarrow\mathrm{Br}^{-}\) is a matrix-allowed substitution, so we consider composition ratios of Ag\({}_{2}\)S and AgBr such as 2:1, 1:1, and 1:2, which correspond to Ag\({}_{5}\)BrS\({}_{2}\), Ag\({}_{3}\)BrS, and Ag\({}_{4}\)Br\({}_{2}\)S, respectively. According to our interpolation procedure, these new candidate compounds have estimated property values of \(\frac{1}{4}P\), \(\frac{2}{5}P\), and \(\frac{4}{7}P\), respectively. Note that Ag\({}_{3}\)BrS has a property value of \(\frac{2}{5}P\) rather than \(\frac{1}{2}P\), despite being a 1:1 ratio of initial compositions. To understand this potentially nonintuitive result, recognize that 2 of the 5 atoms in Ag\({}_{3}\)BrS were contributed by AgBr so its interpolation weight is \(\frac{2}{5}\) and accordingly, the interpolation weight of Ag\({}_{2}\)S is \(\frac{3}{5}\). Therefore, \(P_{new}=\frac{2}{5}\times P+\frac{3}{5}\times 0=\frac{2}{5}P\). 
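To make the per-atom weighting concrete, the snippet below reproduces the 2/5 weight of AgBr in Ag3BrS and the interpolated values quoted in Section V.3 for PbCuSeCl, using the parent properties listed in Table 4. It is only a worked check of the arithmetic, not part of the SALSA code base.

```python
from fractions import Fraction

def per_atom_weights(n_atoms_a, n_atoms_b):
    """Interpolation weights of two parents that contribute n_atoms_a and
    n_atoms_b atoms to the hybrid formula unit."""
    tot = n_atoms_a + n_atoms_b
    return Fraction(n_atoms_a, tot), Fraction(n_atoms_b, tot)

# Ag2S : AgBr mixed 1:1 -> Ag3BrS (5 atoms, 2 of them contributed by AgBr).
w_ag2s, w_agbr = per_atom_weights(3, 2)
print(w_agbr)                          # 2/5, the weight quoted above

# PbSe : CuCl mixed 1:1 -> PbCuSeCl (the Section V.3 example).
w_pbse, w_cucl = per_atom_weights(2, 2)
eg  = w_pbse * 0.27 + w_cucl * 3.40    # parent band gaps (eV)
ox  = w_pbse * 0.76 + w_cucl * 1.69    # parent oxidation potentials (V)
red = w_pbse * (-0.61) + w_cucl * 0.12 # parent reduction potentials (V)
print(eg, ox, red)                     # 1.835, 1.225, -0.245
```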
### USPEX Settings We provide USPEX with a composition and allow it to perform all stochastic modifications it has at its disposal. We do not constrain the structure by space group. For energy evaluation, we elect USPEX's option to interface with the DFT code, Vienna Ab initio Simulation Package (VASP) [32, 33, 34]. All VASP calculations were performed in the plane-wave DFT framework at the Generalized Gradient Approximation (GGA) level of theory and used the Perdew, Burke, and Ernzerhof (PBE) functional [35]. Projector-augmented wave (PAW) pseudopotentials were used to represent the core electrons and ion-electron interactions [36, 37]. We used a plane-wave cutoff of 500 eV, an energy convergence criterion of \(10^{-4}\) eV, and force convergence of 0.02 eV/Å. Dispersive interactions were accounted for using DFT-D3 corrections [38] with Becke-Johnson damping [39]. We also included spin polarization effects. Figure 7: An overview of the initial dataset of compounds used in our application of SALSA to artificial photosynthesis. Here they are depicted in a band gap – oxidation potential – reduction potential property space. The region of property space suitable for photocatalytic water-splitting is indicated. All 34 initial compounds are depicted. ### CRYSTAL17 Settings We used the hybrid DFT code CRYSTAL17 to conduct higher fidelity geometry optimization on our candidate structures. CRYSTAL17 uses basis sets of atom-centered Gaussian-type functions [24, 25]. We used the hybrid Heyd-Scuseria-Ernzerhof (HSE06) functional [40, 41]. We also considered spin-polarization effects and used relativistic compact effective potentials and efficient, shared-exponent basis sets [42, 43]. The effective potentials were used for O, Cu, Se, Ag, Te and Pb. We included full geometry optimization of atomic positions and lattice parameters. We sampled the reciprocal space for all the structures using a \(\Gamma\)-centered Monkhorst-Pack scheme with a resolution of \(a_{i}n_{k_{i}}\geq 40\) Å, where \(a_{i}\) and \(n_{k_{i}}\) represent a lattice constant along the \(i^{th}\) axis in real space and the number of k-points along the \(i^{th}\) axis in reciprocal space, respectively. We optimized geometry with an SCF energy convergence criterion of \(2.72\times 10^{-6}\) eV, an RMS force criterion of \(1.54\times 10^{-2}\) eV/Å, a max force criterion of \(2.31\times 10^{-2}\) eV/Å, an RMS displacement of \(6.35\times 10^{-4}\) Å, a max displacement criterion of \(9.53\times 10^{-4}\) Å and a between-geometry energy convergence criterion of \(2.72\times 10^{-6}\) eV. For this application we also performed a single-point SCF optimization on the converged geometry to acquire a band gap, although this is not necessary for the SALSA workflow in general. 
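The reciprocal-space criterion \(a_{i}n_{k_{i}}\geq 40\) Å translates into a simple rule for choosing the Monkhorst-Pack subdivisions; a sketch with invented lattice constants is shown below.

```python
import math

def kgrid(lattice_constants_angstrom, min_product=40.0):
    """Smallest subdivisions n_k_i satisfying a_i * n_k_i >= min_product
    (in Angstrom), as in the reciprocal-space sampling described above."""
    return [max(1, math.ceil(min_product / a)) for a in lattice_constants_angstrom]

# Hypothetical orthorhombic cell with a = 5.2 Å, b = 6.8 Å, c = 11.9 Å:
print(kgrid([5.2, 6.8, 11.9]))   # -> [8, 6, 4]
```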
## VI Conclusions We have introduced a general materials design process that can be used for many applications. The process only requires a dataset of known compounds with known properties and the ability to calculate some of the properties from first-principles for a small set of structures. We applied our new process to an unrealized artificial photosynthesis technology and were able to discover materials that are good candidates for photocatalytic water-splitting. This includes PbCuSeCl, a material with a novel structure, which we were able to discover because our process allows for an expansive search of structure space. It also includes Ti\({}_{2}\)O\({}_{4}\)Pb\({}_{3}\)Se\({}_{3}\), which has a band gap and interpolated redox potentials within the ideal range for photocatalytic water-splitting. Furthermore, work is underway to improve several methods used in the SALSA process. We may further expand and enhance the substitution matrix. We are also working on a way to generalize the redox potential calculation method with larger datasets. ###### Acknowledgements. SMS is supported by the Mendoza Lab start-up funds. JLMC acknowledges start-up funds from Michigan State University. This work was supported in part by computational resources and services provided by the Institute for Cyber-Enabled Research at Michigan State University. **Author Contributions**. AA and JLMC started the project in 2012-2013. JLMC conceived the idea and executed the first iterations of the search algorithms. AA and JLMC wrote the first draft. AA and JLMC implemented and developed the first iteration of the algorithms. SMS, MD, YL continued and finished the project. SMS implemented the next generation of the algorithm. Conceptualization: AA, JLMC. Methodology: AA, SMS, MD, YL, JLMC. Software: AA, SMS, MD, YL, JLMC. Validation: AA, SMS, MD, YL, JLMC. Formal Analysis: SMS, MD, JLMC. Investigation: AA, SMS, MD, JLMC. Resources: JLMC. Writing--original draft preparation: AA, JLMC. Writing--review and editing: SMS, AA, MD, YL, JLMC. Visualization: SMS, MD, JLMC. Supervision: JLMC. Project administration: JLMC. Funding Acquisition: JLMC. All authors have read and agreed to the published version of the manuscript. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.03572
Operator relations characterizing higher-order differential operators
Let $r$ be a positive integer, $N$ be a nonnegative integer and $\Omega \subset \mathbb{R}^{r}$ be a domain. Further, for all multi-indices $\alpha \in \mathbb{N}^{r}$, $|\alpha|\leq N$, let us consider the partial differential operator $D^{\alpha}$ defined by \[ D^{\alpha}= \frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots \partial x_{r}^{\alpha_{r}}}, \] where $\alpha= (\alpha_{1}, \ldots, \alpha_{r})$. Here by definition we mean $D^{0}\equiv \mathrm{id}$. An easy computation shows that if $f, g\in \mathscr{C}^{N}(\Omega)$ and $\alpha \in \mathbb{N}^{r}, |\alpha|\leq N$, then we have \[ \tag{$\ast$} D^{\alpha}(f\cdot g) = \sum_{\beta\leq \alpha}\binom{\alpha}{\beta}D^{\beta}(f)\cdot D^{\alpha - \beta}(g). \] This paper is devoted to the study of identity $(\ast)$ in the space $\mathscr{C}(\Omega)$. More precisely, if $r$ is a positive integer, $N$ is a nonnegative integer and $\Omega \subset \mathbb{R}^{r}$ is a domain, then we describe those mappings $T_{\alpha} \colon \mathscr{C}(\Omega)\to \mathscr{C}(\Omega)$, $\alpha \in \mathbb{N}^{r}, |\alpha|\leq N$ that satisfy identity $(\ast)$ for all possible multi-indices $\alpha\in \mathbb{N}^{r}$, $|\alpha|\leq N$. Our main result says that if the domain is $\mathscr{C}(\Omega)$, then the mappings $T_{\alpha}$ are of a rather special form. Related results in the space $\mathscr{C}^{N}(\Omega)$ are also presented.
Włodzimierz Fechner, Eszter Gselmann, Aleksandra Świątczak
2023-09-07T09:05:01Z
http://arxiv.org/abs/2309.03572v1
# Operator relations characterizing higher-order differential operators ###### Abstract Let \(r\) be a positive integer, \(N\) be a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Further, for all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\), let us consider the partial differential operator \(D^{\alpha}\) defined by \[D^{\alpha}=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots \partial x_{r}^{\alpha_{r}}},\] where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\). Here by definition we mean \(D^{0}\equiv\mathrm{id}\). An easy computation shows that if \(f,g\in\mathscr{C}^{N}(\Omega)\) and \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\), then we have \[D^{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}D^{\beta}(f) \cdot D^{\alpha-\beta}(g).\] ( \[\ast\] ) This paper is devoted to the study of identity \((\ast)\) in the space \(\mathscr{C}(\Omega)\). More precisely, if \(r\) is a positive integer, \(N\) is a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) is a domain, then we describe those mappings \(T_{\alpha}\colon\mathscr{C}(\Omega)\to\mathscr{C}(\Omega)\), \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\) that satisfy identity \((\ast)\) for all possible multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\). Our main result says that if the domain is \(\mathscr{C}(\Omega)\), then the mappings \(T_{\alpha}\) are of a rather special form. Related results in the space \(\mathscr{C}^{N}(\Omega)\) are also presented. ## 1 Introduction and preliminaries In this paper the set of real numbers is denoted by \(\mathbb{R}\), the set of complex numbers by \(\mathbb{C}\), and the set of nonnegative integers by \(\mathbb{N}\). Let \(r\) be a positive integer. Elements of \(\mathbb{N}^{r}\) will be called \(r\)-dimensional multi-indices. Sums and differences of multi-indices (of the same dimension) are meant to be componentwise, i.e., if \(\alpha,\beta\in\mathbb{N}^{r}\), then \[\alpha\pm\beta=(\alpha_{1}\pm\beta_{1},\ldots,\alpha_{r}\pm\beta_{r})\] Further, if \(\alpha,\beta\in\mathbb{N}^{r}\), then we write \(\alpha\leq\beta\) if for all \(i=1,\ldots,r\) we have \(\alpha_{i}\leq\beta_{i}\), where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\) and \(\beta=(\beta_{1},\ldots,\beta_{r})\). If for the multi-indices \(\alpha,\beta\in\mathbb{N}^{r}\) we have \(\alpha\leq\beta\) and \(\alpha\neq\beta\), we will write \(\alpha<\beta\). By the height of a multi-index \(\alpha\in\mathbb{N}^{r}\) we understand \(|\alpha|=\sum_{i=1}^{r}\alpha_{i}\), where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\). Finally, we will also use the notion of factorial and binomial coefficients in this multi-index setting. If \(\alpha,\beta\in\mathbb{N}^{r}\), then \[\alpha!=\alpha_{1}!\cdots\alpha_{r}!\] and \[\binom{\alpha}{\beta}=\binom{\alpha_{1}}{\beta_{1}}\cdots\binom{\alpha_{r}}{ \beta_{r}}=\frac{\alpha!}{\beta!(\alpha-\beta)!},\] where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\) and \(\beta=(\beta_{1},\ldots,\beta_{r})\). Let \(r\) be a positive integer, \(N\) be a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain (i.e. a nonempty, open and connected set). For all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\), let us consider the partial differential operator \(D^{\alpha}\) defined by \[D^{\alpha}=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots \partial x_{r}^{\alpha_{r}}},\] where \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\). Here by definition we mean \(D^{0}=\mathrm{id}\). 
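For readers who like to experiment numerically, the multi-index conventions above translate directly into code; the following is a small, purely illustrative Python transcription (it is not part of the paper's formal development).

```python
from math import comb, factorial

def height(alpha):
    """|alpha| = alpha_1 + ... + alpha_r."""
    return sum(alpha)

def leq(beta, alpha):
    """Componentwise ordering beta <= alpha."""
    return all(b <= a for b, a in zip(beta, alpha))

def multi_factorial(alpha):
    """alpha! = alpha_1! * ... * alpha_r!."""
    out = 1
    for a in alpha:
        out *= factorial(a)
    return out

def multi_binom(alpha, beta):
    """binom(alpha, beta) = prod_i binom(alpha_i, beta_i)."""
    out = 1
    for a, b in zip(alpha, beta):
        out *= comb(a, b)
    return out

alpha, beta = (2, 1, 3), (1, 0, 2)
print(height(alpha), leq(beta, alpha), multi_binom(alpha, beta))  # 6 True 6
```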
Let further \[\mathcal{C}^{N}(\Omega)=\{f\colon\Omega\to\mathbb{R}\,|\,f\text{ is $N$ times continuously differentiable}\}\,.\] An easy computation shows that if \(f,g\in\mathcal{C}^{N}(\Omega)\) and \(\alpha\in\mathbb{N}^{r},|\alpha|\leq N\), then we have \[D^{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}D^{\beta}(f)\cdot D^{\alpha-\beta}(g). \tag{1}\] The main aim of this paper is, in some sense, the converse of this observation. More precisely, in this paper, we will study the solutions \(T_{\alpha}\colon\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\) of the operator equation \[T_{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)T_{\alpha-\beta}(g)\qquad\big{(}f,g\in\mathcal{C}^{N}(\Omega)\big{)}\] for all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\). Equations analogous to (1) have an important role not only in connection with characterization theorems related to differential operators but also in harmonic and spectral analysis. In the following, we will use the notation and terminology of the monographs Szekelyhidi [7, 8] and, while considering moment sequences of higher rank, the terminology of [2]. Let \((G,\cdot)\) be an Abelian semigroup. A nonzero function \(m\colon G\to\mathbb{C}\) is called _exponential_, if \[m(x\cdot y)=m(x)m(y)\] holds for all \(x,y\) in \(G\). Let \(N\) be a nonnegative integer. A function \(\varphi\colon G\to\mathbb{C}\) is termed a _moment function of order \(N\)_, if there exist functions \(\varphi_{k}\colon G\to\mathbb{C}\) such that \(\varphi_{0}\equiv 1\), \(\varphi_{N}=\varphi\) and \[\varphi_{k}(x\cdot y)=\sum_{j=0}^{k}\binom{k}{j}\varphi_{j}(x)\varphi_{k-j}(y) \tag{2}\] for all \(x\) and \(y\) in \(G\) and \(k=0,1,\ldots,N\). If \(G\) is a monoid with the neutral element \(1\), then this concept can be extended by relaxing the assumption \(\varphi_{0}\equiv 1\) to \(\varphi_{0}(1)=1\). In this case, \(\varphi_{0}\) is an arbitrary exponential function and we say that \(\varphi_{0}\)_generates the generalized moment sequence of order \(N\)_ and the function \(\varphi_{k}\) is a _generalized moment function of order \(k\)_, or, if we want to specify the exponential \(\varphi_{0}\), then we say that \(\varphi_{k}\) is a _generalized moment function of order \(k\) associated with the exponential \(\varphi_{0}\)_. **Definition 1**.: Let \(G\) be an Abelian semigroup, \(r\) a positive integer, and for each multi-index \(\alpha\) in \(\mathbb{N}^{r}\) let \(f_{\alpha}\colon G\to\mathbb{C}\) be a function. We say that \((f_{\alpha})_{\alpha\in\mathbb{N}^{r}}\) is a _generalized moment sequence of rank \(r\)_, if \[f_{\alpha}(x\cdot y)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}f_{\beta}(x)f_{\alpha-\beta}(y) \tag{3}\] holds whenever \(x,y\) are in \(G\). The function \(f_{\mathbf{0}}\), where \(\mathbf{0}\) is the zero element in \(\mathbb{N}^{r}\), is called the _generating function_ of the sequence. _Remark 1_.: For \(r=1\), instead of multi-indices, we have nonnegative integer indices. Thus generalized moment functions of rank \(1\) are simply moment sequences. _Remark 2_.: Assume now that \((G,\cdot)\) is an Abelian group (not only a semigroup). For \(\alpha=\mathbf{0}\) we have \[f_{\mathbf{0}}(x\cdot y)=f_{\mathbf{0}}(x)\cdot f_{\mathbf{0}}(y)\] for each \(x,y\) in \(G\), hence \(f_{\mathbf{0}}=m\) is always an exponential, or identically zero. It can be proved by induction on the height of the multi-index \(\alpha\in\mathbb{N}^{r}\) that if \(f_{\mathbf{0}}\) is the identically zero function, then for every multi-index \(\alpha\) the mapping \(f_{\alpha}\) must be identically zero, too. 
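Identity (1), and hence the fact that the operators \(D^{\alpha}\) satisfy the defining relation (3) of a generalized moment sequence, can be checked symbolically for any concrete choice of \(\alpha\), \(f\) and \(g\). The following SymPy sketch (an illustration only, assuming SymPy is available) verifies it for \(r=2\) and \(\alpha=(2,1)\).

```python
import sympy as sp
from itertools import product
from math import comb

x, y = sp.symbols('x y')
f = sp.sin(x * y) + x**2
g = sp.exp(x) * y**3

def D(h, alpha):
    """D^alpha h for a two-variable multi-index alpha = (a1, a2)."""
    return sp.diff(h, x, alpha[0], y, alpha[1])

alpha = (2, 1)
lhs = D(f * g, alpha)
rhs = sum(comb(alpha[0], b0) * comb(alpha[1], b1) *
          D(f, (b0, b1)) * D(g, (alpha[0] - b0, alpha[1] - b1))
          for b0, b1 in product(range(alpha[0] + 1), range(alpha[1] + 1)))
print(sp.simplify(lhs - rhs))   # 0
```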
In a rather natural way, the above notions can be extended from complex-valued mappings to mappings whose range is a (commutative) ring. Indeed, if \((G,\cdot)\) is an Abelian semigroup and \(Q\) is a commutative ring, \(r\) is a positive integer, and \(\alpha\in\mathbb{N}^{r}\) is a multi-index, then a function \(f\colon G\to Q\) is a generalized moment function of rank \(r\) and of order \(N\), where \(N=|\alpha|\), if for every multi-index \(\beta\in\mathbb{N}^{r}\) with \(|\beta|\leq N\) there exists a function \(f_{\beta}\colon G\to Q\) such that \(f=f_{\alpha}\) and \[f_{\beta}(x\cdot y)=\sum_{\gamma\leq\beta}\binom{\beta}{\gamma}f_{\gamma}(x)f_{\beta-\gamma}(y) \tag{4}\] holds for all \(x,y\in G\). _Remark 3_.: In the language of the above definition this means the following: if \(N\geq 1\), we consider \(\mathcal{C}^{N}(\Omega)\) with the pointwise product of functions (which makes it an Abelian semigroup), and we take \(\mathcal{C}(\Omega)\) to be the range, then the sequence of mappings \((D^{\alpha})_{|\alpha|\leq N}\) forms a moment sequence of rank \(r\). ## 2 Characterizations of higher order differential operators The main aim of this paper is to investigate the following problem: Let \(r\) be a positive integer, \(N\) be a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain (i.e. a nonempty, open and connected set). Determine the mappings \(T_{\alpha}\colon\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\), \(\alpha\in\mathbb{N}^{r},|\alpha|\leq N\) if they fulfill \[T_{\beta}(f\cdot g)=\sum_{\gamma\leq\beta}\binom{\beta}{\gamma}T_{\gamma}(f)T_{\beta-\gamma}(g) \tag{5}\] for all \(f,g\in\mathcal{C}^{N}(\Omega)\) and for all multi-indices \(\beta\in\mathbb{N}^{r}\), \(|\beta|\leq N\). Observe that if \(\beta=\mathbf{0}=(0,\ldots,0)\), then the above identity becomes \[T_{\mathbf{0}}(f\cdot g)=T_{\mathbf{0}}(f)\cdot T_{\mathbf{0}}(g)\qquad\left(f,g\in\mathcal{C}^{N}(\Omega)\right).\] This means that, similarly to the group case, the first element of the sequence, i.e. \(T_{\mathbf{0}}\), is an 'exponential'. Recall again that if \((G,\cdot)\) is an Abelian _group_, then a nonzero function \(m\colon G\to\mathbb{C}\) is an exponential, if \[m(x\cdot y)=m(x)m(y)\] holds for all \(x,y\) in \(G\). For this concept, the fact that the range of \(m\) is the set of complex numbers plays a key role. Indeed, if \(m\) is an exponential on the Abelian group \(G\), then either \(m\) is identically zero, or nowhere zero. At the same time, as we will see below, analogous statements are not true for mappings \(T\colon\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\). The study of multiplicative maps between function spaces has quite extensive literature. Here we quote only a few of them; the interested reader can consult e.g. Artstein-Avidan-Faifman-Milman [1], Milgram [4], Mrcun [5] and Mrcun-Semrl [6]. 
A result from [6] concerning _bijective_ multiplicative mappings between the function spaces \(\mathcal{C}(X)\) and \(\mathcal{C}(Y)\) says that if we are given compact Hausdorff spaces \(X\) and \(Y\), \(\tau\colon\,Y\to X\) is a homeomorphism and \(p\in\mathcal{C}(Y)\) is a positive function, then the mapping \(\mathcal{M}\colon\,\mathcal{C}(X)\to\mathcal{C}(Y)\) defined by \[\mathcal{M}(f)(x)=|f(\tau(x))|^{p(x)}\cdot\mathrm{sgn}\left(f(\tau(x))\right)\qquad(x\in Y,f\in\mathcal{C}(X))\] is a bijective and multiplicative map, i.e. we have \[\mathcal{M}(f\cdot g)(x)=\mathcal{M}(f)(x)\cdot\mathcal{M}(g)(x)\] for all \(x\in Y\) and \(f,g\in\mathcal{C}(X)\). In view of this, if \(K\subset\mathbb{R}^{r}\) is a _compact_ set, \(\tau\colon\,K\to K\) is a homeomorphism and \(p\in\mathcal{C}(K)\) is a positive function, then the mapping \(\mathcal{M}\colon\,\mathcal{C}(K)\to\mathcal{C}(K)\) defined by \[\mathcal{M}(f)(x)=|f(\tau(x))|^{p(x)}\cdot\mathrm{sgn}\left(f(\tau(x))\right)\qquad(x\in K,f\in\mathcal{C}(K))\] is a bijective and multiplicative map. First, observe that this is only one direction and not an 'if and only if' statement. Further, in general, we intend to work on a _domain_ \(\Omega\subset\mathbb{R}^{r}\) and we cannot a priori assume that the mapping in question is _bijective_. A corollary of a result from Mrcun [5] describes _bijective_ multiplicative self-mappings of \(\mathcal{C}^{N}(\Omega)\), where \(N\) is a fixed positive integer. Let \(N,r\) be positive integers and \(\Omega\subset\mathbb{R}^{r}\) be a \(\mathcal{C}^{N}\)-manifold. Then for any multiplicative bijection \(\mathcal{M}\colon\,\mathcal{C}^{N}(\Omega)\to\mathcal{C}^{N}(\Omega)\) there exists a unique \(\mathcal{C}^{N}\)-diffeomorphism \(\tau\colon\,\Omega\to\Omega\) such that \[\mathcal{M}(f)(x)=f(\tau(x))\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}(\Omega)\right)\] holds. In the cases we intend to study, unfortunately, the range of the mappings is not \(\mathcal{C}^{N}(\Omega)\), but the much larger function space \(\mathcal{C}(\Omega)\). In addition, in general, it cannot be guaranteed that the mapping \(T_{\mathbf{0}}\) is bijective. However, without the assumption of bijectivity, we cannot expect to be able to describe the multiplicative mappings in these spaces. Thus, in this paper, we will determine the moment functions of the spaces in question in the case of some important multiplicative mappings. ### A non-bijective case Let \(r\) be a positive integer, \(N\) be a nonnegative integer, and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Then the mapping \(T_{\mathbf{0}}\colon\,\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\) defined by \[T_{\mathbf{0}}(f)(x)=1\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}(\Omega)\right)\] is multiplicative (and non-bijective). Therefore, it is a suitable candidate to generate a moment sequence. As we will see, this mapping generates a fairly trivial moment sequence. **Theorem 1**.: _Let \(r\) be a positive integer, \(N\) be a nonnegative integer, and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Assume further that for all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leqslant N\), we are given a mapping \(T_{\alpha}\colon\,\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\) such that_ \[T_{\mathbf{0}}(f)(x)=1\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}(\Omega)\right)\] _and \((T_{\alpha})_{|\alpha|\leqslant N}\) forms a moment sequence of rank \(r\) and of order \(N\).
Then for all multi-indices \(\alpha\in\mathbb{N}^{r}\) with \(\alpha\neq\mathbf{0}\) and \(|\alpha|\leqslant N\) we have_ \[T_{\alpha}(f)(x)=0\] _for all \(x\in\Omega\) and \(f\in\mathcal{C}^{N}(\Omega)\)._ Proof.: We prove the statement by induction on the height of the multi-index \(\alpha\in\mathbb{N}^{r}\). Accordingly, let \(\alpha\in\mathbb{N}^{r}\) be an arbitrary multi-index with \(|\alpha|=1\). Then \[T_{\alpha}(f\cdot g)=T_{\boldsymbol{0}}(f)T_{\alpha}(g)+T_{\alpha}(f)T_{\boldsymbol{0}}(g)\] holds for all \(f,g\in\mathcal{C}^{N}(\Omega)\). Since \[T_{\boldsymbol{0}}(f)(x)=1\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}(\Omega)\right),\] this means that \[T_{\alpha}(f\cdot g)=T_{\alpha}(f)+T_{\alpha}(g)\] for all \(f,g\in\mathcal{C}^{N}(\Omega)\). Letting \(f\) and \(g\) be the identically zero function on \(\Omega\), we get that \[T_{\alpha}(0)=2T_{\alpha}(0),\] so \(T_{\alpha}(0)=0\). This however yields that \[T_{\alpha}(f\cdot 0)=T_{\alpha}(f)+T_{\alpha}(0)\] for all \(f\in\mathcal{C}^{N}(\Omega)\). Since \(f\cdot 0\) is the identically zero function, the left-hand side equals \(T_{\alpha}(0)=0\). Thus \[T_{\alpha}(f)(x)=0\] for all \(f\in\mathcal{C}^{N}(\Omega)\) and \(x\in\Omega\). Let now \(k\in\{1,\ldots,N-1\}\) be arbitrary, and suppose that \[T_{\beta}(f)(x)=0\qquad\left(f\in\mathcal{C}^{N}(\Omega),x\in\Omega\right)\] holds for all multi-indices \(\beta\) with \(|\beta|\leq k\). Let further \(\alpha\in\mathbb{N}^{r}\) be an arbitrary multi-index with \(|\alpha|=k+1\). Then \[T_{\alpha}(f\cdot g) =\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha-\beta}(g)\] \[=T_{\boldsymbol{0}}(f)T_{\alpha}(g)+T_{\alpha}(f)T_{\boldsymbol{0}}(g)+\sum_{0<\beta<\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha-\beta}(g)\] \[=T_{\alpha}(f)+T_{\alpha}(g)\] holds for all \(f,g\in\mathcal{C}^{N}(\Omega)\), where we used that \(T_{\boldsymbol{0}}\equiv 1\) and that the middle sum vanishes by the induction hypothesis. This is exactly the same equation that we solved above. Thus \[T_{\alpha}(f)(x)=0\] for all \(f\in\mathcal{C}^{N}(\Omega)\) and \(x\in\Omega\). ### A bijective case Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a \(\mathcal{C}^{N}\)-manifold and \(\tau\colon\Omega\to\Omega\) be a \(\mathcal{C}^{N}\)-diffeomorphism. Define \(\tilde{T}_{\boldsymbol{0}}\colon\mathcal{C}^{N}(\Omega)\to\mathcal{C}(\Omega)\) through \[\tilde{T}_{\boldsymbol{0}}(f)(x)=f(\tau(x))\qquad\left(x\in\Omega,f\in\mathcal{C}^{N}(\Omega)\right).\] Then \(\tilde{T}_{\boldsymbol{0}}\) is a multiplicative mapping. Thus it can be an appropriate candidate to generate a moment sequence on \(\mathcal{C}^{N}(\Omega)\). **Lemma 1**.: _Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a \(\mathscr{C}^{N}\)-manifold and \(\tau\colon\Omega\to\Omega\) be a \(\mathscr{C}^{N}\)-diffeomorphism. Further, let us consider the mappings \(T_{\mathbf{0}},\tilde{T}_{\mathbf{0}}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\) defined by_ \[T_{\mathbf{0}}(f)(x)=f(x)\qquad\text{and}\qquad\tilde{T}_{\mathbf{0}}(f)(x)=f(\tau(x))\qquad\left(x\in\Omega,f\in\mathscr{C}^{N}(\Omega)\right),\] _respectively. Then the following statements are equivalent:_ (i) _the sequence of mappings_ \(T_{\alpha}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\)_,_ \(\alpha\in\mathbb{N}^{r}\)_,_ \(|\alpha|\leq N\) _is a moment sequence generated by_ \(T_{\mathbf{0}}\)_;_ (ii)
_the sequence of mappings_ \(\tilde{T}_{\alpha}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\)_,_ \(\alpha\in\mathbb{N}^{r}\)_,_ \(|\alpha|\leq N\) _is a moment sequence generated by_ \(\tilde{T}_{\mathbf{0}}\)_._ Proof.: Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a \(\mathscr{C}^{N}\)-manifold and \(\tau\colon\Omega\to\Omega\) be a \(\mathscr{C}^{N}\)-diffeomorphism. Further, let us consider the mappings \(T_{\mathbf{0}},\tilde{T}_{\mathbf{0}}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\) defined by \[T_{\mathbf{0}}(f)(x)=f(x)\qquad\text{and}\qquad\tilde{T}_{\mathbf{0}}(f)(x)=f(\tau(x))\qquad\left(x\in\Omega,f\in\mathscr{C}^{N}(\Omega)\right),\] respectively. To prove the direction (i)\(\Rightarrow\)(ii), assume that the sequence of mappings \(T_{\alpha}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\), \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\) is a moment sequence generated by \(T_{\mathbf{0}}\). This means that for all \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\) we have \[T_{\alpha}(f\cdot g)(x)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)(x)\cdot T_{\alpha-\beta}(g)(x)\] for all \(f,g\in\mathscr{C}^{N}(\Omega)\) and \(x\in\Omega\). Thus we also have \[T_{\alpha}(f\cdot g)(\tau(x))=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)(\tau(x))\cdot T_{\alpha-\beta}(g)(\tau(x))\qquad\left(f,g\in\mathscr{C}^{N}(\Omega),x\in\Omega\right).\] For all multi-indices \(\alpha\in\mathbb{N}^{r}\), \(|\alpha|\leq N\), define the mapping \(\tilde{T}_{\alpha}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\) by \[\tilde{T}_{\alpha}(f)(x)=T_{\alpha}(f)(\tau(x))\qquad\left(f\in\mathscr{C}^{N}(\Omega),x\in\Omega\right)\] to deduce that \[\tilde{T}_{\alpha}(f\cdot g)(x)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}\tilde{T}_{\beta}(f)(x)\cdot\tilde{T}_{\alpha-\beta}(g)(x)\] for all \(f,g\in\mathscr{C}^{N}(\Omega)\) and \(x\in\Omega\). Thus the sequence of mappings \((\tilde{T}_{\alpha})_{|\alpha|\leq N}\) is a moment sequence of rank \(r\) generated by \(\tilde{T}_{\mathbf{0}}\). The proof of the implication (ii)\(\Rightarrow\)(i) is analogous. It is enough to consider a point \(x=\tau(y)\) with arbitrary \(y\in\Omega\) and use the fact that \(\tau\) is a diffeomorphism. As we saw above, if \(r\) and \(N\) are positive integers, \(\Omega\subset\mathbb{R}^{r}\) is a \(\mathscr{C}^{N}\)-manifold and \(\tau\colon\Omega\to\Omega\) is a \(\mathscr{C}^{N}\)-diffeomorphism, then the mapping \(\tilde{T}_{\mathbf{0}}\colon\mathscr{C}^{N}(\Omega)\to\mathscr{C}(\Omega)\) defined by \[\tilde{T}_{\mathbf{0}}(f)(x)=f(\tau(x))\qquad\left(x\in\Omega,f\in\mathscr{C}^{N}(\Omega)\right),\] is a multiplicative mapping. Thus it can be an appropriate candidate to generate a moment sequence on \(\mathscr{C}^{N}(\Omega)\). Nevertheless, the previous lemma says that instead of multiplicative mappings of this form, it suffices to consider the identity mapping. Accordingly, below we will describe moment sequences generated by the identity mapping. Further, observe that while describing the solutions of equation (5), not only the generator, i.e., the operator \(T_{\mathbf{0}}\), but also the domain \(\mathscr{C}^{N}(\Omega)\) can play a crucial role. In the second part of this section, we focus on the largest possible domain, that is, we will work on \(\mathscr{C}(\Omega)\). During the proof of Theorem 2 we will use a corollary of [3, Theorem 3.5] and also [3, Theorem 7.1]; these are the following statements.
Before stating these results, however, we need two more notions from the theory of operator relations. **Definition 2**.: Let \(k\) be a nonnegative integer, \(r\) be a positive integer and \(\Omega\subset\mathbb{R}^{r}\) be an open set. An operator \(A\colon\mathscr{C}^{k}(\Omega)\to\mathscr{C}(\Omega)\) is _non-degenerate_ if for each nonvoid open subset \(U\subset\Omega\) and all \(x\in U\), there exist functions \(g_{1},g_{2}\in\mathscr{C}^{k}(\Omega)\) with supports in \(U\) such that the vectors \((g_{i}(x),Ag_{i}(x))\in\mathbb{R}^{2}\), \(i=1,2\) are linearly independent in \(\mathbb{R}^{2}\). **Definition 3**.: Let \(k\) and \(r\) be positive integers with \(k\geq 2\) and \(\Omega\subset\mathbb{R}^{r}\) be an open set. We say that the operator \(A\colon\mathscr{C}^{k}(\Omega)\to\mathscr{C}(\Omega)\)_depends non-trivially on the derivative_ if there exists \(x_{0}\in\Omega\) and there are functions \(f_{1},f_{2}\in\mathscr{C}^{k}(\Omega)\) such that \[f_{1}(x_{0})=f_{2}(x_{0})\quad\text{and}\quad Af_{1}(x_{0})\neq Af_{2}(x_{0})\] holds. **Proposition 1**.: _Let \(r\) be a positive integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Suppose that the operator \(T\colon\mathscr{C}(\Omega)\to\mathscr{C}(\Omega)\) satisfies the Leibniz rule, i.e.,_ \[T(f\cdot g)=f\cdot T(g)+T(f)\cdot g\qquad\left(f,g\in\mathscr{C}(\Omega) \right).\] _Then there exists a function \(c\in\mathscr{C}(\Omega)\) such that for all \(f\in\mathscr{C}(\Omega)\) and \(x\in\Omega\)_ \[T(f)(x)=c(x)\cdot f(x)\cdot\ln\left(\left|f(x)\right|\right).\] _Conversely, any such map \(T\) satisfies the Leibniz rule._ **Proposition 2**.: _Let \(r\) be a positive integer, \(k\) be a nonnegative integer and \(\Omega\subset\mathbb{R}^{r}\) be a domain. Assume that \(T,A\colon\mathscr{C}^{k}(\Omega)\to\mathscr{C}(\Omega)\) satisfy_ \[T(f\cdot g)=T(f)\cdot g+f\cdot T(g)+2A(f)\cdot A(g)\qquad\left(f,g\in\mathscr{C }^{k}(\Omega)\right)\] _and that in case \(k\geq 2\) the mapping \(A\) is non-degenerate and depends non-trivially on the derivative. Then there are continuous functions \(a\colon\Omega\to\mathbb{R}\) and \(b,c\colon\Omega\to\mathbb{R}^{r}\) such that we have_ \[T(f)(x) = \left\langle f^{\prime\prime}(x)c(x),c(x)\right\rangle+R(f)(x) \qquad\left(f\in\mathscr{C}^{k}(\Omega),x\in\Omega\right),\] \[A(f)(x) = \left\langle f^{\prime}(x),c(x)\right\rangle\] _where_ \[R(f)(x)=\left\langle f^{\prime}(x),b(x)\right\rangle+a(x)f(x)\ln\left(\left|f (x)\right|\right)\qquad\left(f\in\mathscr{C}^{k}(\Omega)\right).\] _If \(k=1\), then necessarily \(c\equiv 0\). Further, if \(k=0\), then necessarily \(b\equiv 0\) and \(c\equiv 0\)._ _Conversely, these operators satisfy the above second-order Leibniz rule._ Our main result for operators defined on \(\mathscr{C}(\Omega)\) is the following theorem. **Theorem 2**.: _Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a domain and assume that for all multi-indices \(\alpha\in\mathbb{N}^{r}\), with \(|\alpha|\leq N\) we are given a mapping \(T_{\alpha}\colon\mathscr{C}(\Omega)\to\mathscr{C}(\Omega)\) such that \(T_{\mathbf{0}}\) is the identity mapping and for all multi-indices \(\alpha\in\mathbb{N}^{r}\) with \(0\neq|\alpha|\leq N\) we have_ \[T_{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f) \cdot T_{\alpha-\beta}(g) \tag{6}\] _for all \(f,g\in\mathscr{C}(\Omega)\). 
Then there exists a family of functions \(\{c_{\alpha}\in\mathscr{C}(\Omega):0\neq|\alpha|\leq N\}\) such that_ \[\left[\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}c_{\beta}(x)\cdot c_{\alpha-\beta}(x)\right]=0\qquad(x\in\Omega) \tag{7}\] _and_ \[T_{\alpha}(f)(x)=c_{\alpha}(x)f(x)\ln(|f(x)|)\qquad(x\in\Omega,f\in\mathcal{C}(\Omega),0\neq|\alpha|\leq N)\,. \tag{8}\] _Conversely, if \(T_{\mathbf{0}}\) is the identity mapping on \(\mathcal{C}(\Omega)\), we are given a family of functions that satisfies (7), and we define the mappings \(T_{\alpha}\) on \(\mathcal{C}(\Omega)\) by formula (8), then they satisfy equation (6) for all multi-indices \(\alpha\) such that \(0\neq|\alpha|\leq N\)._ Proof.: Let \(r\) and \(N\) be positive integers, \(\Omega\subset\mathbb{R}^{r}\) be a domain and assume that for all multi-indices \(\alpha\in\mathbb{N}^{r}\), with \(|\alpha|\leq N\) we are given a mapping \(T_{\alpha}\colon\mathcal{C}(\Omega)\to\mathcal{C}(\Omega)\) such that \(T_{\mathbf{0}}\) is the identity mapping and for all multi-indices \(\alpha\in\mathbb{N}^{r}\) with \(0\neq|\alpha|\leq N\) we have \[T_{\alpha}(f\cdot g)=\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha-\beta}(g)\] for all \(f,g\in\mathcal{C}(\Omega)\). We prove the statement by induction on the multi-index \(\alpha\). Let \(\alpha\in\mathbb{N}^{r}\) be an arbitrary multi-index for which \(|\alpha|=1\) holds. Then \[T_{\alpha}(f\cdot g)=T_{\mathbf{0}}(f)T_{\alpha}(g)+T_{\alpha}(f)T_{\mathbf{0}}(g)=f\cdot T_{\alpha}(g)+T_{\alpha}(f)\cdot g\qquad(f,g\in\mathcal{C}(\Omega))\,,\] since \(T_{\mathbf{0}}=\operatorname{id}\) was assumed. Using Proposition 1, we obtain that there exists a continuous function \(c_{\alpha}\in\mathcal{C}(\Omega)\) such that \[T_{\alpha}(f)(x)=c_{\alpha}(x)f(x)\ln(|f(x)|)\qquad(x\in\Omega,f\in\mathcal{C}(\Omega))\,.\] Let now \(k\in\{1,\ldots,N-1\}\) be arbitrary and assume that the statement of the theorem holds for all multi-indices \(\beta\in\mathbb{N}^{r}\) for which we have \(|\beta|\leq k\). Let further \(\alpha\in\mathbb{N}^{r}\) be an arbitrary multi-index for which \(|\alpha|=k+1\). Then \[T_{\alpha}(f\cdot g) =\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha-\beta}(g)\] \[=T_{\mathbf{0}}(f)\cdot T_{\alpha}(g)+T_{\alpha}(f)\cdot T_{\mathbf{0}}(g)+\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha-\beta}(g)\] \[=f\cdot T_{\alpha}(g)+T_{\alpha}(f)\cdot g+\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}c_{\beta}f\ln(|f|)\cdot c_{\alpha-\beta}g\ln(|g|)\] \[=f\cdot T_{\alpha}(g)+T_{\alpha}(f)\cdot g+\left[\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}c_{\beta}\cdot c_{\alpha-\beta}\right]\cdot f\ln(|f|)\cdot g\ln(|g|)\] holds for all \(f,g\in\mathcal{C}(\Omega)\). Using Proposition 2, taking into account that \(k=0\), we obtain that there exists a continuous function \(c_{\alpha}\) such that \[T_{\alpha}(f)(x)=c_{\alpha}(x)\cdot f(x)\cdot\ln(|f(x)|)\] is fulfilled for all \(f\in\mathcal{C}(\Omega)\) and \(x\in\Omega\). Further, the family of functions \(\{c_{\alpha}\in\mathcal{C}(\Omega):0\neq|\alpha|\leq N\}\) necessarily satisfies (7). The converse implication is an easy computation. As we saw in the previous theorem, the moment sequences are quite poor on the \(\mathcal{C}(\Omega)\) space. We note that if \(N\geqslant 1\), then there are substantially more diverse moment sequences in the space \(\mathcal{C}^{N}(\Omega)\), see Remark 3. However, this will be dealt with in future work.
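To make the converse direction of Theorem 2 explicit, the following short computation (our illustration, not part of the original proof) verifies that mappings of the form (8) satisfying (7) indeed fulfill equation (6). For \(f,g\in\mathcal{C}(\Omega)\) and \(0\neq|\alpha|\leq N\), \[\sum_{\beta\leq\alpha}\binom{\alpha}{\beta}T_{\beta}(f)\cdot T_{\alpha-\beta}(g)=f\cdot c_{\alpha}g\ln(|g|)+c_{\alpha}f\ln(|f|)\cdot g+\left[\sum_{\mathbf{0}<\beta<\alpha}\binom{\alpha}{\beta}c_{\beta}c_{\alpha-\beta}\right]f\ln(|f|)\,g\ln(|g|),\] where the bracketed sum vanishes by (7), while \[T_{\alpha}(f\cdot g)=c_{\alpha}fg\ln(|fg|)=c_{\alpha}fg\ln(|f|)+c_{\alpha}fg\ln(|g|),\] so the two sides agree.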
_Acknowledgment_.: The research of Eszter Gselmann has been supported by project no. K134191 that has been implemented with the support provided by the National Research, Development and Innovation Fund of Hungary, financed under the K_20 funding scheme. The work of Aleksandra Swiatczak is implemented under the project "Curriculum for advanced doctoral education & training - CADET Academy of TUL" co-financed by the STER Programme - Internationalization of doctoral schools. This article has been completed while one of the authors (Aleksandra Swiatczak), was the Doctoral Candidate in the Interdisciplinary Doctoral School at the Lodz University of Technology, Poland.
2309.10240
DProvDB: Differentially Private Query Processing with Multi-Analyst Provenance
Recent years have witnessed the adoption of differential privacy (DP) in practical database systems like PINQ, FLEX, and PrivateSQL. Such systems allow data analysts to query sensitive data while providing a rigorous and provable privacy guarantee. However, the existing design of these systems does not distinguish data analysts of different privilege levels or trust levels. This design can have an unfair apportion of the privacy budget among the data analyst if treating them as a single entity, or waste the privacy budget if considering them as non-colluding parties and answering their queries independently. In this paper, we propose DProvDB, a fine-grained privacy provenance framework for the multi-analyst scenario that tracks the privacy loss to each single data analyst. Under this framework, when given a fixed privacy budget, we build algorithms that maximize the number of queries that could be answered accurately and apportion the privacy budget according to the privilege levels of the data analysts.
Shufan Zhang, Xi He
2023-09-19T01:42:39Z
http://arxiv.org/abs/2309.10240v1
# DProvDB: Differentially Private Query Processing with Multi-Analyst Provenance+ ###### Abstract. Recent years have witnessed the adoption of differential privacy (DP) in practical database systems like PINQ, FLEX, and PrivateSQL. Such systems allow data analysts to query sensitive data while providing a rigorous and provable privacy guarantee. However, the existing design of these systems does not distinguish data analysts of different privilege levels or trust levels. This design can have an unfair apportion of the privacy budget among the data analysts if treating them as a single entity, or waste the privacy budget if considering them as non-colluding parties and answering their queries independently. In this paper, we propose DProvDB, a fine-grained privacy provenance framework for the multi-analyst scenario that tracks the privacy loss to each single data analyst. Under this framework, when given a fixed privacy budget, we build algorithms that maximize the number of queries that could be answered accurately and apportion the privacy budget according to the privilege levels of the data analysts. ## 1. Introduction With growing attention to data privacy and the development of privacy regulations like GDPR (Zheng et al., 2017), companies with sensitive data must share their data without compromising the privacy of data contributors. Differential privacy (DP) (Krishnan et al., 2017) has been considered as a promising standard for this setting. Recent years have witnessed the adoption of DP in practical systems for data management and online query processing, such as PINQ (Zheng et al., 2017), FLEX (Zheng et al., 2017), PrivateSQL (Zheng et al., 2017), GoogleDP (Beng et al., 2017), and Chorus (Chorus, 2017). In systems of this kind, data curators or system providers set up a finite system-wide privacy budget to bound the overall extent of information disclosure. An incoming query consumes some privacy budget. The system stops processing new queries once the budget has been fully depleted. Thus, the privacy budget is a crucial resource to manage in such a query processing system. In practice, multiple data analysts can be interested in the same data, and they have different privilege/trust levels in accessing the data. For instance, tech companies need to query their users' data for internal applications like anomaly detection. They also consider inviting external researchers with low privilege/trust levels to access the same sensitive data for study. Existing query processing systems with DP guarantees would regard these data analysts as a unified entity and do not provide tools to distinguish them or track their respective privacy loss. This leads to a few potential problems. First, a low-privilege external data analyst who asks queries first can consume more privacy budget than an internal one, if the system does not interfere with the sequence of queries. Second, if the system naively tracks and answers each analyst's queries independently of the others, it can waste the privacy budget when two data analysts ask similar queries. The aforementioned challenges to private data management and analytics are mainly due to the fact that these systems are "_stateless_": none of the existing DP query-processing systems records the individual budget limits and the historical queries asked by the data analysts.
That is, they do not record the **metadata** about _where the query comes from, how the query is computed, and how many times each result is produced_, which is closely related to the **provenance information** in database research (Chorus et al., 2017; Krishnan et al., 2017). As one can see, without privacy provenance, the query answering process for the multi-analyst use case can be unfair or wasteful in budget allocation. To tackle these challenges, we propose a "stateful" DP query processing system, DProvDB, which enables a novel privacy provenance framework designed for the multi-analyst setting. Following existing work (Zheng et al., 2017), DProvDB answers queries based on private synopses (i.e., materialized results for views) of data. Instead of recording all the query metadata, we propose a more succinct data structure, a privacy provenance table, that performs only the necessary privacy tracking per data analyst and per view. The privacy provenance table is associated with privacy constraints so that constraint-violating queries will be rejected. Making use of this privacy provenance framework, DProvDB can maintain global (viz., per-view) and local (viz., per-analyst) DP synopses and update them dynamically according to data analysts' requests. DProvDB is supported by a new principled method, called the _additive Gaussian approach_, to manage DP synopses. The additive Gaussian approach leverages a DP mechanism that adds correlated Gaussian noise to mitigate unnecessary budget consumption across data analysts and over time. This approach first creates a global DP synopsis for a view query; then, from this global synopsis, it provides the necessary local DP synopses to data analysts who are interested in this view by adding more Gaussian noise. In this way, DProvDB is tuned to accurately answer as many queries from different data analysts as possible. Even when all the analysts collude, the privacy loss will be bounded by the budget used for the global synopsis. As an additional merit, the provenance tracking in DProvDB helps achieve a notion called proportional fairness. We believe most, if not all, existing DP query processing systems can benefit from integrating our multi-analyst privacy provenance framework. DProvDB can be regarded as a middle-ground approach between purely interactive DP systems and those based solely on synopses, and we show both provably and empirically that DProvDB can significantly improve system utility and fairness for multi-analyst DP query processing. The contributions of this paper are the following: * We propose a multi-analyst DP model where mechanisms satisfying this DP provide discrepant answers to analysts with different privilege levels. Under this setting, we ask research questions about tight privacy analysis, budget allocation, and fair query processing. (Section 3) * We propose a privacy provenance framework that compactly traces historical queries and privacy consumption per analyst and per view. With this framework, the administrator is able to enforce privacy constraints, enabling dynamic budget allocation and fair query processing. (Section 4) * We design new accuracy-aware DP mechanisms that leverage the provenance data to manage synopses and inject correlated noise to achieve tight collusion bounds over time in the multi-analyst setting. The proposed mechanisms can be seamlessly added to the algorithmic toolbox for DP systems.
(Section 5) * We implement DProvDB1 as a new multi-analyst query processing interface and integrate it into an existing DP query system. We empirically evaluate DProvDB, and the experimental results show that our system is efficient and effective compared to baseline systems. (Section 6) Footnote 1: The system code is available at [https://github.com/DProvDB/DProvDB](https://github.com/DProvDB/DProvDB). Paper Roadmap. The remainder of this paper is organized as follows. Section 2 introduces the necessary notation and background knowledge on databases and DP. Our multi-analyst DP query processing research problems are formulated in Section 3, and a high-level overview of our proposed system is given in Section 4. Section 5 describes the details of our design of the DP mechanisms and system modules. In Section 6, we present the implementation details and an empirical evaluation of our system against the baseline solutions. Section 7 discusses extensions to the compromisation model and other strawman designs. Section 8 reviews the related literature, and Section 9 concludes this work. ## 2. Preliminaries Let \(\mathcal{D}\) denote the domain of databases and \(D\) be a database instance. A relation \(R\in D\) consists of a set of attributes, \(attr(R)=\{a_{1},\ldots,a_{j}\}\). We denote the domain of an attribute \(a_{j}\) by \(Dom(a_{j})\), while \(|Dom(a_{j})|\) denotes the domain size of that attribute. We introduce and summarize the related definitions of differential privacy. Definition 1 (Differential Privacy (Henderson, 1993)).: _We say that a randomized algorithm \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{O}\) satisfies \((\epsilon,\delta)\)-differential privacy (DP), if for any two neighbouring databases \((D,D^{\prime})\) that differ in only one tuple, and \(O\subseteq\mathcal{O}\), we have_ \[\Pr[\mathcal{M}(D)\in O]\leq e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in O]+\delta.\] DP enjoys many useful properties, for example, post-processing and sequential composition. In the multi-analyst setting, a randomized mechanism \(\mathcal{M}:\mathcal{D}\rightarrow(\mathcal{O}_{1},\ldots,\mathcal{O}_{m})\) satisfies \([(A_{1},\epsilon_{1},\delta_{1}),\ldots,(A_{m},\epsilon_{m},\delta_{m})]\)-multi-analyst-DP if for any two neighbouring databases \((D,D^{\prime})\), all \(i\in[m]\), and all \(O_{i}\subseteq\mathcal{O}_{i}\), we have \[\Pr[\mathcal{M}(D)\in O_{i}]\leq e^{\epsilon_{i}}\Pr[\mathcal{M}(D^{\prime})\in O_{i}]+\delta_{i},\] _where \(O_{i}\) are the outputs released to the \(i\)-th analyst._ The multi-analyst DP variant supports the composition across different algorithms, as indicated by the following theorem. **Theorem 3.1** (Multi-Analyst DP Composition).: _Given two randomized mechanisms \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), where \(\mathcal{M}_{1}:\mathcal{D}\rightarrow(O_{1},\ldots,O_{m})\) satisfies \([(A_{1},\epsilon_{1},\delta_{1}),...,(A_{m},\epsilon_{m},\delta_{m})]\)-multi-analyst-DP, and \(\mathcal{M}_{2}:\mathcal{D}\rightarrow(O_{1}^{\prime},\ldots,O_{m}^{\prime})\) satisfies \([(A_{1},\epsilon_{1}^{\prime},\delta_{1}^{\prime}),...,(A_{m},\epsilon_{m}^{\prime},\delta_{m}^{\prime})]\)-multi-analyst-DP, then the mechanism \(g(\mathcal{M}_{1},\mathcal{M}_{2})\) gives the \([(A_{1},\epsilon_{1}+\epsilon_{1}^{\prime},\delta_{1}+\delta_{1}^{\prime}),...,(A_{m},\epsilon_{m}+\epsilon_{m}^{\prime},\delta_{m}+\delta_{m}^{\prime})]\)-multi-analyst-DP guarantee._ Unlike prior DP work for multiple data analysts, our setup considers data analysts who are obliged under laws/regulations not to share their privacy budgets or query responses with each other. We provide a detailed comparison with other work in Section 8.
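To make Theorem 3.1 concrete, the following minimal sketch (our own illustration; the class and method names are hypothetical, and it uses only basic sequential composition) keeps one cumulative \((\epsilon,\delta)\) ledger per analyst and composes the losses of successive mechanism releases:

```python
from collections import defaultdict

class MultiAnalystAccountant:
    """Tracks cumulative (epsilon, delta) per analyst under basic sequential
    composition, as in Theorem 3.1 (a simplified sketch, not the system's code)."""

    def __init__(self):
        # analyst id -> [cumulative epsilon, cumulative delta]
        self.ledger = defaultdict(lambda: [0.0, 0.0])

    def spend(self, analyst, epsilon, delta):
        """Record that `analyst` received the output of an (epsilon, delta)-DP release."""
        self.ledger[analyst][0] += epsilon
        self.ledger[analyst][1] += delta

    def loss(self, analyst):
        """Current multi-analyst-DP loss (epsilon_i, delta_i) for one analyst."""
        eps, delta = self.ledger[analyst]
        return eps, delta

    def collusion_upper_bound(self):
        """Trivial worst-case DP bound if all analysts collude: (sum eps, sum delta)."""
        return (sum(e for e, _ in self.ledger.values()),
                sum(d for _, d in self.ledger.values()))


# Example: two mechanisms released to two analysts.
acct = MultiAnalystAccountant()
acct.spend("A1", 0.5, 1e-6)   # M1's output to analyst A1
acct.spend("A2", 0.3, 1e-6)   # M1's output to analyst A2
acct.spend("A1", 0.2, 1e-6)   # M2's output to analyst A1
print(acct.loss("A1"))               # (0.7, 2e-06)
print(acct.collusion_upper_bound())  # (1.0, 3e-06)
```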
Under our new multi-analyst DP framework, several natural but less well-understood research questions (RQs) arise, and problem setups of potential interest are considered. **RQ 1: worst-case privacy analysis across analysts.** Under this multi-analyst DP framework, if all data analysts collude or are compromised by an adversary, how can we design algorithms to account for the privacy loss to the colluding analysts? When this happens, we can obtain the following trivial lower bound and upper bound for the standard DP measure. **Theorem 3.2** (Compromisation Lower/Upper Bound, Trivial).: _Given a mechanism \(\mathcal{M}\) that satisfies \([(A_{1},\epsilon_{1},\delta_{1}),...,(A_{m},\epsilon_{m},\delta_{m})]\)-multi-analyst-DP, when all data analysts collude, its DP loss is (i) lower bounded by \((\max\epsilon_{i},\max\delta_{i})\), where \((\epsilon_{i},\delta_{i})\) is the privacy loss to the \(i\)-th analyst, and (ii) trivially upper bounded by \((\sum\epsilon_{i},\sum\delta_{i})\)._ The lower bound indicates the least amount of information that has to be released (to the analysts), and the upper bound is simply derived from sequential composition. Obviously, the trivial upper bound does not match the lower bound, raising the question of how to design multi-analyst DP mechanisms that close the gap. Closing this gap means that _even if these data analysts break the law and collude, the overall privacy loss of the multi-analyst DP mechanism is still minimized._ In this paper, we design algorithms that achieve the lower bound, as shown in Section 5. **RQ 2: dynamic budget allocation across views.** The DP query processing system should impose constraints on the total privacy loss by all the analysts (in the worst case) and the privacy loss per analyst. When handling incoming queries, prior work either dynamically allocates the privacy budget based on the budget request per query (Sandhi, 2017, 2018) or query accuracy requirements (Sandhi, 2017), or predefines a set of static DP views that can handle the incoming queries (Sandhi, 2017). The dynamic budget-per-query approach can deplete the privacy budget quickly, as each query is handled with a fresh privacy budget. The static-view approach spends the budget on the views in advance to handle unlimited queries, but it may fail to meet the accuracy requirements of some future queries. Therefore, in our work, we consider the view approach but assign budgets dynamically to the views based on the incoming queries so that more queries can be answered within their accuracy requirements. Specifically, we would like to use the histogram view, which queries the number of tuples in a database for each possible value of a set of attributes. The answer to a view is called a synopsis. We consider a set of views that can answer all incoming queries. **Definition 6** (Query Answerability (Sandhi, 2017)).: _For a query \(q\) over the database \(D\), if there exists a query \(q^{\prime}\) over the histogram view \(V\) such that \(q(D)=q^{\prime}(V(D))\), we say that \(q\) is answerable over \(V\)._ **Example 1**.: Consider two queries \(q_{1}\) and \(q_{2}\) over a database for employees in Figure 1. They are answerable over \(V_{1}\), a 3-way marginal contingency table over the attributes (age, gender, education), via their respective transformed queries \(\hat{q}_{1}\) and \(\hat{q}_{2}\). \(\Box\)
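To illustrate query answerability over a histogram view (Definition 6), the following sketch uses an assumed toy employee schema with columns age, gender, and education (our own illustration, not code or data from the paper); it materializes the 3-way marginal view with pandas and answers a count query from the view instead of the base table:

```python
import pandas as pd

# Toy employee table (assumed schema, for illustration only).
df = pd.DataFrame({
    "age":       [25, 32, 40, 32, 25],
    "gender":    ["F", "M", "F", "F", "M"],
    "education": ["BSc", "MSc", "PhD", "BSc", "BSc"],
})

# Histogram view V1: a 3-way marginal over (age, gender, education).
V1 = (df.groupby(["age", "gender", "education"])
        .size()
        .reset_index(name="count"))

# Original query q1 over the database: how many employees are female with a BSc?
q1_on_db = len(df[(df["gender"] == "F") & (df["education"] == "BSc")])

# Transformed query q1_hat over the view: sum the matching histogram bins.
q1_on_view = V1.loc[(V1["gender"] == "F") & (V1["education"] == "BSc"), "count"].sum()

assert q1_on_db == q1_on_view  # q1 is answerable over V1
print(q1_on_db, q1_on_view)
```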
Given a set of views, we would like to design algorithms that can dynamically allocate privacy budgets to them and update their corresponding DP synopses over time. We show in Section 5 how these algorithms maximize the number of queries that can be handled accurately. Since we can dynamically allocate budget to views, our system can also add new views over time. We discuss this possibility in Section 5.3. **RQ 3: fair query answering among data analysts.** A fair system expects data analysts with higher privacy privileges to receive more accurate answers or larger privacy budgets than ones with lower privacy privileges. However, existing DP systems make no distinctions among data analysts. Hence, it is possible that a low-privilege external analyst who asks queries first consumes all the privacy budget and receives more accurate query answers, leaving no privacy budget for high-privilege internal data analysts. It is also impossible to force data analysts to ask queries in a certain order. In this context, we would like the system to set up the (available and consumed) privacy budgets for data analysts according to their privacy privilege levels. In particular, we define a privacy privilege level as an integer in the range of 1 to 10, where a higher number represents a higher privilege level. We also define a fairness notion inspired by the literature on resource allocation (Bahdan et al., 2017; Bahdan et al., 2017; Bahdan et al., 2017). **Definition 7** (Proportional Fairness).: _Consider a DP system handling a sequence of queries \(Q\) from multiple data analysts with a mechanism \(\mathcal{M}\), where each data analyst \(A_{i}\) is associated with a privilege level \(l_{i}\). We say the mechanism \(\mathcal{M}\) satisfies proportional fairness if, for all \(A_{i},A_{j}\ (i\neq j)\) with \(l_{i}\leq l_{j}\), we have_ \[\frac{Err_{i}(M,A_{i},Q)}{\mu(l_{i})}\leq\frac{Err_{j}(M,A_{j},Q)}{\mu(l_{j})},\] _where \(Err_{i}(M,A_{i},Q)\) denotes the analyst \(A_{i}\)'s privacy budget consumption and \(\mu(\cdot)\) is some linear function._ This fairness notion says that the quality of the query answers to a data analyst, denoted by \(Err_{i}(M,A_{i},Q)\), is proportional to their privilege level, via a linear function \(\mu(\cdot)\) of the privilege levels. We first consider the privacy budget per data analyst as the quality function, as a smaller error in the query answer is expected with a larger privacy budget. We show in Section 5.3 how to set up the system to achieve fairness when the analysts ask a sufficient number of queries, which means they finish consuming their assigned privacy budget. ## 4. System Overview In this section, we outline the key design principles of DProvDB and briefly describe the modules of the system. ### Key Design Principles To support the multi-analyst use case and to answer the aforementioned research questions, we identify four principles and propose a system, DProvDB, that follows them. **Principle 1: fine-grained privacy provenance.** The query processing system should be able to track the privacy budget allocated to each data analyst and each view in a fine-grained way. The system should additionally enable a mechanism to compose privacy loss across data analysts and the queries they ask. **Principle 2: view-based privacy management.** The queries are answered based on DP views or synopses.
Compared to directly answering a query from the database \(D\), view-based query answering can answer more private queries (Srivastava et al., 2016), but it assumes access to a pre-known query workload. In our system, the view is the minimum data object whose privacy loss we keep track of, and views can be updated dynamically if higher data utility is required. The privacy budgets spent on different views during the updating process depend on the incoming queries. **Principle 3: dual query submission mode.** Besides allowing data analysts to submit a budget with their query, the system enables an accuracy-aware mode. With this mode, data analysts can submit a query with a desired accuracy level in terms of the expected squared error. The dual-mode system supports data analysts ranging from domain experts, who can take full advantage of the budgets, to DP novices, who only care about the accuracy bounds of the query. **Principle 4: maximum query answering.** The system should be tuned to answer as many queries accurately as possible without violating the privacy constraints specified by the administrator per data analyst and per view based on their privilege levels. ### Privacy Provenance Table To meet the first two principles, we propose a privacy provenance table for DProvDB, inspired by the access matrix model in the access control literature (Krishna et al., 2017), to track the privacy loss per analyst and per view, and further bound the privacy loss. Particularly, in our model, the state of the overall privacy loss of the system is defined as a triplet \((\mathcal{A},\mathcal{V},\mathcal{P})\), where \(\mathcal{A}\) denotes the set of data analysts and \(\mathcal{V}\) represents the list of query-views maintained by the system. We denote by \(\mathcal{P}\) the privacy provenance table, defined as follows. **Definition 8** (Privacy Provenance Table).: The privacy provenance table \(\mathcal{P}\) consists of (i) a provenance matrix \(P\) that tracks the privacy loss of a view in \(\mathcal{V}\) to each data analyst in \(\mathcal{A}\), where each entry of the matrix \(P[A_{i},V_{j}]\) records the current cumulative privacy loss \(S^{A_{i}}_{V_{j}}\) on view \(V_{j}\) to analyst \(A_{i}\); (ii) a set of row/column/table constraints, \(\Psi\): a row constraint for the \(i\)-th row of \(P\), denoted by \(\psi_{A_{i}}\), refers to the allowed maximum privacy loss to a data analyst \(A_{i}\in\mathcal{A}\) (according to his/her privilege level); a column constraint for the \(j\)-th column, denoted by \(\psi_{V_{j}}\), refers to the allowed maximum privacy loss on a specific view \(V_{j}\); the table constraint over \(P\), denoted by \(\psi_{P}\), specifies the overall privacy loss allowed for the protected database. The privacy constraints and the provenance matrix are correlated. In particular, the row/column constraints cannot exceed the overall table constraint, and each entry of the matrix cannot exceed the row/column constraints. The correlations, such as the composition of the privacy constraints of all views or all analysts, depend on the DP mechanisms supported by the system. We provide the details of the DP mechanisms and the respective correlations in the privacy provenance table in Section 5. Figure 1 gives an example of the privacy provenance table for \(n\) views and \(m\) data analysts. **Example 2**.: When DProvDB receives query \(q_{1}\) from Bob, it plans to use view \(V_{1}\) to answer it.
DProvDB first retrieves the previous cumulative cost of \(V_{1}\) to Bob from the matrix, \(P[Bob,V_{1}]\), and then computes the new cumulative cost \(S^{Bob}_{V_{1}}\) for \(V_{1}\) to Bob as if it were to answer \(q_{1}\) using \(V_{1}\). If the new cost \(S^{Bob}_{V_{1}}\) is smaller than Bob's privacy constraint \(\psi_{Bob}\), the view constraint \(\psi_{V_{1}}\), and the table constraint \(\psi_{P}\), DProvDB will answer \(q_{1}\) and update \(P[Bob,V_{1}]\) to \(S^{Bob}_{V_{1}}\); otherwise, \(q_{1}\) will be rejected. \(\Box\) Due to the privacy constraints imposed by the privacy provenance table, queries can be rejected when the cumulative privacy cost exceeds the constraints. DProvDB therefore needs DP mechanisms that use the privacy budget efficiently to answer more queries. Hence, we formulate the _maximum query answering problem_ based on the privacy provenance table: given a privacy provenance table \((\mathcal{A},\mathcal{V},\mathcal{P})\), where at each time step a data analyst \(A_{i}\in\mathcal{A}\) submits a query with a utility requirement \((q_{i},v_{i})\) whose transformed query \(\hat{q}_{i}\) is over \(\mathcal{V}\), how can we design a system that answers as many queries as possible without violating the row/column/table privacy constraints in \(P\) while meeting the utility requirement of each query? Figure 1. Illustration of the Histogram View, the Query Transformation and the Privacy Provenance Table: (1) Histogram view \(V_{1}\) over age, gender, and education is built on the database snapshot; (2) Forthcoming queries \(q_{1}\) and \(q_{2}\) are transformed into linearly answerable queries \(\hat{q}_{1}\) and \(\hat{q}_{2}\) over \(V_{1}\); (3) Analysts (\(A_{1}\), \(A_{2}\) with low, \(A_{3}\) with high privilege) are recorded in the provenance table; the privacy loss for each view to each analyst is tracked for real-time queries. Hence we would like to build an efficient system that satisfies the aforementioned design principles and enables the privacy provenance table, and to develop algorithms that solve the maximum query answering problem in DProvDB. We next outline the system modules in DProvDB, and then provide the detailed algorithm design in Section 5. ### System Modules The DProvDB system works as middleware between data analysts and existing DP DBMS systems (such as PINQ, Chorus, and PrivateSQL) to provide add-on functionalities, including fine-grained privacy tracking, view/synopsis management, and privacy-accuracy translation. We briefly summarize the high-level ideas of the modules below. **Privacy Provenance Tracking.** DProvDB maintains the privacy provenance table for each registered analyst and each generated view, as introduced in Section 4.2. Constraint checking is enabled based on this provenance tracking to decide whether to reject an analyst's query or not. We further build DP mechanisms to maintain and update the DP synopses and the privacy provenance table. **Dual Query Submission Mode.** DProvDB provides two query submission modes to data analysts. _Privacy-oriented mode_ (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019): queries are submitted with a pre-apportioned privacy budget, i.e., \((A_{i},q_{i},\{\epsilon_{i},\delta_{i}\})\). _Accuracy-oriented mode_ (Han et al., 2019; Wang et al., 2019): analysts can submit a query with a desired accuracy bound, i.e., queries are of the form \((A_{i},q_{i},v_{i})\). We illustrate our algorithm with the accuracy-oriented mode.
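The following minimal sketch (our own simplification; the class and method names are hypothetical, and composition is basic sequential composition over epsilons) illustrates how the privacy provenance table and its row/column/table constraint checking described above can be maintained:

```python
class ProvenanceTable:
    """Minimal sketch of the privacy provenance table: one cumulative epsilon
    per (analyst, view), plus row/column/table caps (a simplification, not the
    system's implementation)."""

    def __init__(self, analyst_caps, view_caps, table_cap):
        self.analyst_caps = analyst_caps          # psi_{A_i}, per-analyst caps
        self.view_caps = view_caps                # psi_{V_j}, per-view caps
        self.table_cap = table_cap                # psi_P, overall cap
        self.P = {(a, v): 0.0 for a in analyst_caps for v in view_caps}

    def _row(self, analyst):
        return sum(self.P[(analyst, v)] for v in self.view_caps)

    def _col(self, view):
        return sum(self.P[(a, view)] for a in self.analyst_caps)

    def _total(self):
        return sum(self.P.values())

    def check_and_charge(self, analyst, view, eps):
        """Reject the request if any constraint would be violated; otherwise record it."""
        ok = (self._total() + eps <= self.table_cap and
              self._row(analyst) + eps <= self.analyst_caps[analyst] and
              self._col(view) + eps <= self.view_caps[view])
        if ok:
            self.P[(analyst, view)] += eps
        return ok


# Example: Bob (low privilege) and Alice (high privilege) querying view V1.
table = ProvenanceTable(analyst_caps={"Alice": 1.0, "Bob": 0.5},
                        view_caps={"V1": 1.2}, table_cap=1.5)
print(table.check_and_charge("Bob", "V1", 0.3))   # True, recorded
print(table.check_and_charge("Bob", "V1", 0.3))   # False, Bob's row cap (0.5) exceeded
```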
**Algorithm Overview.** Algorithm 1 summarizes how DProvDB uses the DP synopses to answer incoming queries. At the system setup phase (lines 1-3), the administrator initializes the privacy provenance table by setting the matrix entries to 0 and specifying the row/column/table constraints \(\Psi\). The system initializes empty synopses for each view. The data analyst specifies a query \(q_{i}\) with its desired utility requirement \(v_{i}\) (line 5). Once the system receives the request, it selects the suitable view and mechanism to answer this query (lines 6-7) and uses the function privacyTranslate() to find the minimum privacy budget \(\epsilon_{i}\) for \(V\) to meet the utility requirement of \(q_{i}\) (line 8). Then, DProvDB checks whether answering \(q_{i}\) with budget \(\epsilon_{i}\) would violate the privacy constraints \(\Psi\) (line 9). If this constraint check passes, we run the mechanism to obtain a noisy synopsis (line 10). DProvDB uses this synopsis to answer query \(q_{i}\) and returns the answer to the data analyst (line 11). If the constraint check fails, DProvDB rejects the query (line 13). We show concrete DP mechanisms with their corresponding interfaces in the next section. **Remark.** For simplicity, we drop \(\delta\) and focus on \(\epsilon\) as the privacy loss/budget in privacy composition, but the \(\delta\)'s of DP synopses are composed similarly, as stated in Theorem 3.1. For the accuracy-privacy translation, we consider a fixed, given small \(\delta\) and aim to find the smallest possible \(\epsilon\) that achieves the accuracy specification of the data analyst.

```
 1  Set δ in the system
 2  Function run(P, A_i, V, ε_i):
 3      Generate a synopsis V_{A_i}^{ε_i} from view V
 4      Update privacy provenance table P[A_i, V] ← P[A_i, V] + ε_i
 5      return r_i ← V_{A_i}^{ε_i}
 6  end function
 7  Function privacyTranslate(q_i, v_i, V, p):
 8      Set u = ψ_P, l = 0
 9      v ← calculateVariance(q_i, v_i, V)
10      ε ← binarySearch(l, u, testAccuracy(·, v, Δq_i, δ), p)
11      return ε
12  end function
13  Function constraintCheck(P, A_i, V_j, ε_i, Ψ):
14      if (P.composite(ε) + ε_i ≤ Ψ.ψ_P) ∧ (P.composite(ε, axis=Row) + ε_i ≤ Ψ.ψ_{A_i}) ∧ (P.composite(ε, axis=Column) + ε_i ≤ Ψ.ψ_{V_j}) then
15          return True / Pass
16  end
```

**Algorithm 2** Vanilla Approach

## 5. DP Algorithm Design In this section, we first describe a vanilla DP mechanism that can instantiate the system interface but cannot maximize the number of queries being answered. Then we propose an additive Gaussian mechanism that leverages correlated noise in query answering to improve the utility of the vanilla mechanism. Without loss of generality, we assume the data analysts do not resubmit the same query with a lower accuracy requirement (as they would only be interested in a more accurate answer). ### Vanilla Approach The vanilla approach is based on the Gaussian mechanism (it applies to both the basic Gaussian mechanism (Han et al., 2017) and the analytic Gaussian mechanism (Blei et al., 2017)). We describe how the system modules are instantiated with the vanilla approach. #### 5.1.1. Accuracy-Privacy Translation This module translates the user-specified utility requirement into the minimum privacy budget (Algorithm 2: lines 7-11).
Note that instead of perturbing the result of \(q_{i}\) directly, we generate a noisy DP synopsis and use it to answer the query (usually by adding up a number of noisy counts from the synopsis). Hence, we need to translate the accuracy bound \(v_{i}\) specified by the data analyst over the query to the corresponding accuracy bound \(v\) for the synopsis before searching for the minimal privacy budget (Algorithm 2: line 9). Here, \(v\) represents the variance of the noise added to each count of the histogram synopsis. Next, we search for the minimal privacy budget \(\epsilon\) that results in a noisy synopsis with noise variance not more than \(v\), based on the following analytic Gaussian translation. **Definition 9** (Analytic Gaussian Translation).: _Given a query \(q:\mathcal{D}\rightarrow\mathbb{R}^{d}\), to achieve an expected squared error bound \(v\) for this query, the minimum privacy budget for the analytic Gaussian mechanism should satisfy \(\Phi_{\mathcal{N}}\left(\frac{\Delta q}{2v}-\frac{\epsilon v}{\Delta q}\right)-e^{\epsilon}\Phi_{\mathcal{N}}\left(-\frac{\Delta q}{2v}-\frac{\epsilon v}{\Delta q}\right)\leq\delta\). That is, given \(\Delta q,\delta,v\), we solve the following optimization problem to find the minimal \(\epsilon\)._ \[\min_{\epsilon\in(0,\psi_{\mathcal{D}}]}\epsilon\text{ s.t. }\Phi_{\mathcal{N}}\left(\frac{\Delta q}{2v}-\frac{\epsilon v}{\Delta q}\right)-e^{\epsilon}\Phi_{\mathcal{N}}\left(-\frac{\Delta q}{2v}-\frac{\epsilon v}{\Delta q}\right)\leq\delta \tag{1}\] Finding a closed-form solution to the problem above is not easy. However, we observe that the LHS of the constraint in Equation (1) is a monotonic function of \(\epsilon\). Thus, we use binary search (Algorithm 2: line 10) to look for the smallest possible value of \(\epsilon\). For each tested value of \(\epsilon\), we compute its analytic Gaussian variance (Bartos et al., 2016), denoted by \(v^{\prime}\). If \(v^{\prime}>v\), then this value is invalid and we search for a bigger epsilon value; otherwise, we search for a smaller one. We stop at an epsilon value whose variance satisfies \(v^{\prime}\leq v\) and which is within distance \(p\) of the last tested invalid epsilon value. We have the following guarantee for this output. **Proposition 5.1** (Correctness of Translation).: _Given a query \((q_{i},v_{i})\) and the view \(V\) for answering \(q_{i}\), the translation function (Algorithm 2, privacyTranslation) outputs a privacy budget \(\epsilon\). The query \(q_{i}\) can then be answered with expected squared error \(v_{q}\) over the updated synopsis \(V_{A}^{\epsilon}\) such that: (i) it meets the accuracy requirement \(v_{q}\leq v_{i}\), and (ii) \(\epsilon-\epsilon^{*}\leq p\), where \(\epsilon^{*}\) is the minimal privacy budget to meet the accuracy requirement for Algorithm 2 (the run function)._ Proof Sketch.: First, line 9 derives the per-bin accuracy requirement based on the submitted per-query accuracy and plugs it into the search condition (Equation (1)). Note that our DP mechanism and the accuracy requirement are data-independent. As long as the search condition holds, the noise added to the query answer in run satisfies \(v_{q}\leq v_{i}\). Second, the stopping condition of the search algorithm guarantees that 1) a solution exists, and 2) the search range is reduced to at most \(p\). Thus we have \(\epsilon-\epsilon^{*}\leq p\).
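A minimal sketch of this accuracy-to-privacy translation is given below (our own illustration; for brevity it calibrates noise with the classical Gaussian mechanism, \(\sigma=\Delta q\sqrt{2\ln(1.25/\delta)}/\epsilon\), as a stand-in for the analytic Gaussian calibration, and the function names are hypothetical):

```python
import math

def gaussian_variance(eps, delta, sensitivity):
    """Noise variance of the classical Gaussian mechanism for (eps, delta)-DP.
    (Stand-in for the analytic Gaussian calibration used in Definition 9.)"""
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps
    return sigma ** 2

def privacy_translate(target_variance, delta, sensitivity, eps_cap, precision=1e-4):
    """Binary-search the smallest eps (up to `precision`) whose calibrated noise
    variance does not exceed `target_variance`; returns None if even eps_cap fails."""
    lo, hi = 0.0, eps_cap
    if gaussian_variance(eps_cap, delta, sensitivity) > target_variance:
        return None  # not achievable within the budget cap
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if mid > 0 and gaussian_variance(mid, delta, sensitivity) <= target_variance:
            hi = mid   # mid is feasible; try a smaller budget
        else:
            lo = mid   # mid is infeasible; need a larger budget
    return hi

# Example: per-bin variance at most 25, sensitivity 1, delta 1e-6, budget cap 2.0.
print(privacy_translate(target_variance=25.0, delta=1e-6, sensitivity=1.0, eps_cap=2.0))
```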
#### 5.1.2. Provenance Constraint Checking As mentioned, the administrator can specify privacy constraints in the privacy provenance table. DProvDB decides whether to _reject or answer a query_ using the provenance matrix \(P\) and the privacy constraints \(\Psi\) in the privacy provenance table, as indicated in Algorithm 2: lines 13-15 (the function constraintCheck). This function checks whether any of the three types of constraints would be violated if the current query were answered. The composite function in this constraint-checking algorithm can refer to the basic sequential composition or tighter privacy composition given by Renyi-DP (Renyi and Renyi, 2016) or zCDP (Renyi and Renyi, 2016; Renyi and Renyi, 2016). We suggest using advanced composition for accounting privacy loss over time, but not for checking constraints, because the size of the provenance table \(n\times m\) is too small for this composition to yield a tight bound. #### 5.1.3. Putting Components All Together The vanilla approach is aligned with existing DP query systems in the sense that it adds independent noise to the result of each query. Hence, it can be quickly integrated into these systems to provide privacy provenance and accuracy-aware features with little overhead. Algorithm 2: lines 2-5 (the function run) outline how the vanilla method runs. It first generates the DP synopsis \(V_{A_{i}}^{\epsilon_{i}}\) using the analytic Gaussian mechanism for the chosen view \(V\) and then updates the corresponding entry \(P[A_{i},V]\) in the privacy provenance table by composing the consumed privacy loss \((\epsilon_{i},\delta_{i})\) of the query (depending on the specific composition method used). We defer the analysis of the accuracy and privacy properties of the vanilla mechanism to Section 5.4. ### Additive Gaussian Approach While ideas of using correlated Gaussian noise have been exploited before (Zhu et al., 2017), we adopt similar statistical properties in an additive Gaussian DP mechanism, a primitive on which we build our additive Gaussian approach for synopsis maintenance. We then describe how DProvDB generates and updates the (local and global) DP synopses with this algorithm across analysts and over time. #### 5.2.1. Additive Gaussian Mechanism The additive Gaussian mechanism (additive GM or aGM) modifies the standard Gaussian mechanism, based on the statistical property of the Gaussian distribution that the sum of independent normal random variables is still normally distributed. We outline this primitive mechanism in Algorithm 3. This primitive takes as input a query \(q\), a database instance \(D\), and a set of privacy budgets \(\mathcal{B}\) corresponding to the set of data analysts \(\mathcal{A}\), and outputs a noisy query result to each data analyst \(A_{i}\), consuming the corresponding privacy budget \((\epsilon_{i},\delta)\). Its key idea is to execute the query (to get the true answer on the database) only once, and to cumulatively inject noise into previously computed noisy answers when multiple data analysts ask the same query. In particular, we sort the privacy budget set specified by the analysts. Starting from the largest budget, we add noise with the Gaussian variance \(\sigma_{i}^{2}\) calculated from the query sensitivity \(\Delta q\) and this budget \((\epsilon_{i},\delta)\). For the rest of the budgets in the set, we calculate the Gaussian variance \(\sigma_{j}^{2}\) in the same way but add noise with variance \(\sigma_{j}^{2}-\sigma_{i}^{2}\) to the previous noisy answer. The algorithm then returns the noisy query answer to each data analyst.
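A compact sketch of this additive noise calibration is given below (our own illustration, not the system's implementation; it uses the classical Gaussian calibration in place of the analytic one and assumes all analysts share the same \(\delta\)):

```python
import math
import random

def gaussian_sigma(eps, delta, sensitivity):
    # Classical Gaussian calibration (stand-in for the analytic Gaussian mechanism).
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

def additive_gm(true_answer, sensitivity, budgets, delta):
    """budgets: dict analyst -> epsilon. Returns dict analyst -> noisy answer.
    Every answer is derived from the most accurate (largest-epsilon) release by
    adding extra noise, so the worst-case collusion loss is max(epsilon)."""
    # Sort analysts by descending epsilon (the most trusted analyst first).
    order = sorted(budgets, key=budgets.get, reverse=True)
    base_sigma2 = gaussian_sigma(budgets[order[0]], delta, sensitivity) ** 2
    base_answer = true_answer + random.gauss(0.0, math.sqrt(base_sigma2))
    answers = {order[0]: base_answer}
    for analyst in order[1:]:
        sigma2 = gaussian_sigma(budgets[analyst], delta, sensitivity) ** 2
        # Add only the extra noise needed on top of the most accurate release.
        answers[analyst] = base_answer + random.gauss(0.0, math.sqrt(sigma2 - base_sigma2))
    return answers

print(additive_gm(true_answer=120.0, sensitivity=1.0,
                  budgets={"Alice": 0.8, "Bob": 0.3}, delta=1e-6))
```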
The privacy guarantee of this primitive is stated as follows. **Theorem 5.2**.: _Given a database \(D\), a set of privacy budgets \(\mathcal{B}\coloneqq(\epsilon_{1},\delta),(\epsilon_{2},\delta),\ldots,(\epsilon_{n},\delta)\) and a query \(q\), the additive Gaussian mechanism (Algorithm 3) that returns a set of noisy answers \(r_{1},r_{2},\ldots,r_{n}\) to each data analyst \(A_{i}\) satisfies \([(A_{1},\epsilon_{1},\delta),...,(A_{n},\epsilon_{n},\delta)]\)-multi-analyst-DP and \((\max\{\epsilon_{1},\ldots,\epsilon_{n}\},\delta)\)-DP._ Proof Sketch.: For each data analyst \(A_{i}\), the mechanism is equivalent to the standard Gaussian mechanism with a variance that satisfies \((\epsilon_{i},\delta)\)-DP. Since the data is accessed only once, \((\max\{\epsilon_{1},\ldots,\epsilon_{n}\},\delta)\)-DP is guaranteed by post-processing. **Discussion on \(\delta\)**. If \(\delta\) is not a fixed parameter in the system, it could happen for a privacy budget \((\epsilon_{i},\delta_{i})\) that \(\epsilon_{i}=\max\mathcal{E}\) but \(\delta_{i}=\min\mathcal{D}\). Algorithm 3 can easily be modified to handle this by sorting \(\mathcal{B}\) not by descending \(\epsilon\)'s (line 4) but by the ascending order of the calculated \(\sigma\)'s (line 6).

```
Input:  Analysts A = A_1, ..., A_n; a query q; database instance D;
        a set of privacy budgets B := {E, D} = (ε_1, δ), (ε_2, δ), ..., (ε_n, δ).
Output: A set of noisy answers r_1, r_2, ..., r_n.
 1  Function AdditiveGM(A, B, q, D):
 2      r ← queryExec(q, D)                  ▷ Obtain true query answer.
 3      Δq ← sensCalc(q)                     ▷ Sensitivity calculation.
 4      B' ← sort(B, ε_i)                    ▷ Sort B in descending order of ε's.
 5      (ε_i, δ) ← pop(B')                   ▷ Pop the first element.
 6      σ_i ← analyticGM(ε_i, δ, Δq)         ▷ [2]
 7      r_i ← r + η_i, η_i ~ N(0, σ_i²)      ▷ Add Gaussian noise.
 8      while B' ≠ ∅ do
 9          (ε_j, δ) ← pop(B')
10          σ_j ← analyticGM(ε_j, δ, Δq)     ▷ [2]
11          r_j ← r_i + η_j, η_j ~ N(0, σ_j² - σ_i²)
12      end while
13      return R := {r_i | i ∈ [n]}
```

**Algorithm 3** Additive Gaussian Noise Calibration

#### 5.2.2. Synopses Management We introduce the concept of global and local DP synopses and then discuss the updating process in our additive GM. A DP synopsis (or synopsis for short) is a noisy answer to a (histogram) view over a database instance. We first use the privacy-oriented mode to explain the synopses management for clarity, and then elaborate on the accuracy-oriented mode in the accuracy-privacy translation module (Section 5.2.3). **Global and Local DP Synopses.** To solve the maximum query answering problem, for each view \(V\in\mathcal{V}\), DProvDB maintains a _global DP synopsis_ with a cost of \((\epsilon,\delta)\), denoted by \(V^{\epsilon,\delta}(D)\) or \(V^{\epsilon}\), where \(D\) is the database instance. For simplicity, we drop \(\delta\) and \(D\) from the notation, considering the same value of \(\delta\) throughout and a fixed database instance \(D\).
For this view, DProvDB also maintains a _local DP synopsis_ for each analyst \(A_{i}\in\mathcal{A}\), denoted by \(V^{\epsilon^{\prime}}_{A_{i}}\), where the local synopsis is always generated from the global synopsis \(V^{\epsilon}\) of the view \(V\) by adding more noise. Hence, we would like to ensure \(\epsilon\geq\epsilon^{\prime}\). This local DP synopsis \(V^{\epsilon^{\prime}}_{A_{i}}\) will be used to answer the queries asked by the data analyst \(A_{i}\). The process of updating synopses consists of two parts. The first part is to update the local synopses based on the global synopses. The second part is to update the global synopses by relaxing the privacy guarantee, in order to answer a query with a higher accuracy requirement. We discuss the details below. **Generating Local Synopses from Global Synopses.** We leverage our additive GM primitive to release a local DP synopsis \(V^{\epsilon^{\prime}}_{A_{i}}\) from a given global synopsis \(V^{\epsilon}\), where \(V^{\epsilon}\) is generated by a Gaussian mechanism. Given the privacy guarantee \(\epsilon\) (and \(\delta\)) and the sensitivity of the view, the Gaussian mechanism can calculate a proper variance \(\sigma^{2}\) for adding noise and ensuring DP. The additive GM calculates \(\sigma^{2}\) and \(\sigma^{\prime 2}\) based on \(\epsilon\) and \(\epsilon^{\prime}\) respectively, and then generates the local synopsis \(V^{\epsilon^{\prime}}_{A_{i}}\) by injecting independent noise drawn from \(\mathcal{N}(0,\sigma^{\prime 2}-\sigma^{2})\) into the global synopsis \(V^{\epsilon}\). As the global synopsis is hidden from all the analysts, the privacy loss to the analyst \(A_{i}\) is \(\epsilon^{\prime}\). Even if all the analysts collude, the maximum privacy loss is bounded by the budget spent on the global synopsis. **Example 3**.: Alice and Bob are asking queries to DProvDB. Alice asks the first query \(q_{1}\) (which is answerable on \(V_{1}\)) with _budget requirement_ \(\epsilon_{V_{1},Alice}=0.5\). DProvDB generates a global synopsis \(V^{0.5}\) for \(V_{1}\) with budget \(0.5\) and then generates a local synopsis \(V^{0.5}_{Alice}\) from the global synopsis \(V^{0.5}\) for Alice. Bob next asks query \(q_{2}\) (which is also answerable on \(V_{1}\)) with budget \(\epsilon_{V_{1},Bob}=0.3\). Since the budget \(0.3<0.5\), we use the additive GM to generate a local synopsis \(V^{0.3}_{Bob}\) from the global synopsis \(V^{0.5}\) for Bob and return the query answer based on the local synopsis \(V^{0.3}_{Bob}\). This example follows "Case " in Fig. 2. **Updating Global Synopses by Combining Views.** When the global DP synopsis \(V^{\epsilon}\) is not sufficiently accurate to handle a local synopsis request at privacy budget \(\epsilon_{t}\), DProvDB spends additional privacy budget \(\Delta\epsilon\) to update the global DP synopsis to \(V^{\epsilon+\Delta\epsilon}\), where \(\Delta\epsilon=\epsilon_{t}-\epsilon\). We still consider the Gaussian mechanism, which generates an intermediate DP synopsis \(V^{\Delta\epsilon}\) with budget \(\Delta\epsilon\). Then we combine the previous synopsis with this intermediate synopsis into an updated one. The key insight of the combination is to properly incorporate the fresh noisy synopsis by assigning each synopsis a weight proportional to the inverse of its noise variance, which gives the smallest expected squared error based on the UMVUE (Sutskever et al., 2017; Wang et al., 2018).
That is, for the \(t\)-th release, we combine these two synopses:
\[V^{\epsilon_{t}}=(1-w_{t})V^{\epsilon_{t-1}}+w_{t}V^{\Delta\epsilon}. \tag{2}\]
The resulting expected square error for \(V^{\epsilon_{t}}\) is \(v_{t}=(1-w_{t})^{2}v_{t-1}+w_{t}^{2}v_{\Delta}\), where \(v_{t-1}\) is the noise variance of the view \(V^{\epsilon_{t-1}}\), and \(v_{\Delta}\) is that of \(V^{\Delta\epsilon}\). To minimize the resulting error, we set \(w_{t}=\frac{v_{t-1}}{v_{t-1}+v_{\Delta}}\).

**Example 4**.: At the next time stamp, Bob asks a query \(q_{1}\) with budget \(\epsilon_{V_{1},Bob}=0.7\). Clearly the current global synopsis \(V^{0.5}\) is not sufficient to answer this query because \(0.7>0.5\). Then the system needs to update \(V^{0.5}\), which is done by: 1) first generating a fresh global synopsis \(V^{0.2}\) using analytic GM from \(V(D)\); 2) then combining it with \(V^{0.5}\) to form \(V^{0.7}\) using Equation (2). This example is illustrated as steps 2a and 2b of "Case 2" in Fig. 2.

**Lemma 5.3** (Correctness of View Combination).: _Releasing the combined DP synopsis in the \(t\)-th update is \((\epsilon_{t-1}+\Delta\epsilon,\delta_{t-1}+\Delta\delta)\)-DP._

Proof Sketch.: The \(t\)-th update combines an \((\epsilon_{t-1},\delta_{t-1})\)-DP synopsis and a fresh synopsis that is \((\Delta\epsilon,\Delta\delta)\)-DP. By sequential composition, the combined synopsis is \((\epsilon_{t-1}+\Delta\epsilon,\delta_{t-1}+\Delta\delta)\)-DP.

The view combination is not _frictionless_. Although the combined synopsis \(V^{\epsilon+\Delta\epsilon}\) achieves \((\epsilon+\Delta\epsilon,\delta+\Delta\delta)\)-DP, if we had spent the whole privacy budget on generating a synopsis all at once, this one-time synopsis would have the same privacy guarantee but a smaller expected error than the combined \(V^{\epsilon+\Delta\epsilon}\). We can show that sequentially combining and releasing synopses over time is optimal among all possible linear combinations of synopses; however, designing a frictionless updating algorithm for Gaussian mechanisms is non-trivial in its own right and remains future work.

**Theorem 5.4** (Optimality of Linear View Combination).: _Given a sequence of views \(V^{\epsilon_{1}},V^{\Delta\epsilon_{2}},\ldots,V^{\Delta\epsilon_{t}}\), the expected squared error of our \(t\)-th release is less than or equal to that of releasing \(w_{1}V^{\epsilon_{1}}+\sum_{i=2}^{t}w_{i}V^{\Delta\epsilon_{i}}\) for all \(\{w_{i}\mid i=1,\ldots,t\}\) s.t. \(\sum_{i}w_{i}=1\)._

Intuitively, this theorem is proved by a reduction to the best unbiased estimator and re-normalizing weights based on induction.

**Updating Local Synopses and Accounting Privacy.** When a local DP synopsis is not sufficiently accurate to handle a query, but the budget request for this query \(\epsilon_{i}\) is still smaller than or equal to the budget \(\epsilon_{t}\) for the global synopsis of \(V\), \(\mathtt{DProvDB}\) generates a new local synopsis \(V^{\epsilon_{i}}_{A_{i}}\) from \(V^{\epsilon_{t}}\) using additive GM. The analyst \(A_{i}\) is able to combine query answers into a more accurate one, so the privacy cost for this analyst on view \(V\) is bounded by \(\min(\epsilon_{t},P[A_{i},V]+\epsilon_{i})\), and \(P[A_{i},V]\) is updated to this value.
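As an illustration of Equation (2), the following minimal Python sketch (not DProvDB's actual code; the array shapes and names are our own) combines an existing noisy histogram synopsis with a fresh delta synopsis using the inverse-variance weight \(w_{t}=v_{t-1}/(v_{t-1}+v_{\Delta})\) and returns the combined synopsis together with its tracked variance.

```python
import numpy as np

def combine_synopses(syn_old, var_old, syn_delta, var_delta):
    """Combine two unbiased noisy synopses of the same view (Equation (2)).

    syn_old, syn_delta : np.ndarray   histogram synopses of the same shape
    var_old, var_delta : float        their per-cell noise variances
    Returns the combined synopsis and its resulting noise variance.
    """
    # Inverse-variance weighting minimizes (1 - w)^2 * var_old + w^2 * var_delta.
    w = var_old / (var_old + var_delta)
    syn_new = (1.0 - w) * syn_old + w * syn_delta
    var_new = (1.0 - w) ** 2 * var_old + w ** 2 * var_delta
    # var_new equals the harmonic form var_old * var_delta / (var_old + var_delta).
    return syn_new, var_new

# Toy usage: a 4-bin histogram view, an old synopsis with variance 2.0
# and a fresher, noisier delta synopsis with variance 6.0.
rng = np.random.default_rng(0)
true_view = np.array([10.0, 25.0, 40.0, 5.0])
syn_old = true_view + rng.normal(0, np.sqrt(2.0), size=4)
syn_delta = true_view + rng.normal(0, np.sqrt(6.0), size=4)
combined, var = combine_synopses(syn_old, 2.0, syn_delta, 6.0)
print(combined, var)   # var == 1.5, smaller than either input variance
```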
**Example 5**.: Analysts' queries are always answered on local synopses. To answer Bob's query \((q_{1},\epsilon_{V_{1},Bob}=0.7)\), \(\mathtt{DProvDB}\) uses additive GM to generate a fresh local synopsis \(V^{0.7}_{Bob}\) from \(V^{0.7}\) and returns the answer to Bob. Alice asks another query \((q_{1},\epsilon_{V_{1},Alice}=0.6)\). \(\mathtt{DProvDB}\) updates \(V^{0.5}_{Alice}\) by generating \(V^{0.6}_{Alice}\) from \(V^{0.7}\). Both analysts' privacy loss on \(V\) will be accounted as \(0.7\). This example complements the illustration of step 2c of "Case 2" in Fig. 2.

#### 5.2.3. Accuracy-Privacy Translation

The accuracy translation algorithm should account for the friction in the combination of global synopses. We propose an accuracy-privacy translation paradigm (Algorithm 4: 12) that incorporates this consideration. This translator module takes as input the query \(q_{i}\), the utility requirement \(v_{i}\), the view \(V\) for answering the query, and additionally the current global synopsis \(V^{\epsilon}\) (we simplify the interface in Algorithm 1), and outputs the corresponding budget \(\epsilon_{i}\) for the run function (omitting the same value \(\delta\)). As the first query release for the view does not involve the frictional updating issue, we separate the translation into two phases, where the first query release directly follows the analytic Gaussian translation in our vanilla approach. For the second phase, given a global DP synopsis \(V^{\epsilon_{t}}\) at hand (with _tracked_ expected error \(v^{\prime}\)) for a specific query-view and a new query submitted by a data analyst with expected error \(v_{i}<v^{\prime}\), we solve an optimization problem to find the Gaussian variance of the fresh new DP synopsis. We first calculate the Gaussian variance of the current DP synopsis \(v^{\prime}\) (line 13) and solve the following optimization problem (line 14).
\[\operatorname*{arg\,max}_{\sigma}\;v_{i}=w^{2}v^{\prime}+(1-w)^{2}v_{t}\;\;\text{s.t.}\;w\in[0,1] \tag{3}\]
The solution gives us the largest admissible noise variance \(v_{t}=\sigma^{2}\) (line 15). By translating \(\sigma^{2}\) into the privacy budget using the standard analytic Gaussian translation technique (Def. 9), we get the minimum privacy budget that achieves the required accuracy guarantee (line 16). Note that when the requested accuracy \(v_{i}>v^{\prime}\), the solution to this optimization problem is \(w=0\), which automatically degrades to the vanilla translation.

**Theorem 5.5**.: _Given a query \((q_{i},v_{i})\) and the view \(V\) for answering \(q_{i}\), the translation function (Algorithm 4, privacyTranslation) outputs a privacy budget \(\epsilon\). The query \(q_{i}\) can then be answered with expected square error \(v_{q}\) over the updated synopsis \(V^{\epsilon}_{A}\) such that: i) it meets the accuracy requirement \(v_{q}\leq v_{i}\), and ii) \(\epsilon-\epsilon^{*}\leq p\), where \(\epsilon^{*}\) is the minimal privacy budget to meet the accuracy requirement for Algorithm 4 (run function)._

Proof Sketch.: The privacyTranslation in the additive Gaussian approach calls the translation function of the vanilla approach as a subroutine (Algorithm 4: 15). The correctness of the additive Gaussian privacyTranslation therefore depends on feeding the correct expected square error into the subroutine, which is calculated from the accuracy requirement \(v_{i}\) while accounting for the frictions. The calculation of the expected squared error with an optimization solver has been discussed and analyzed above.
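To make the second translation phase concrete, here is a minimal Python sketch (our own illustration, not the DProvDB code). It assumes the closed form implied by inverse-variance weighting, namely that the optimally combined error of variances \(v^{\prime}\) and \(v_{t}\) is \(v^{\prime}v_{t}/(v^{\prime}+v_{t})\), and, as before, substitutes the classical Gaussian calibration for the analytic one.

```python
import math

def classical_gm_epsilon(sigma, delta, sensitivity):
    # Inverse of the classical calibration sigma = s * sqrt(2 ln(1.25/delta)) / eps.
    # Stand-in for the analytic Gaussian translation (Def. 9).
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / sigma

def translate_accuracy(v_prime, v_i, delta, sensitivity):
    """Phase-two accuracy-privacy translation with friction.

    v_prime : tracked error (variance) of the current global synopsis
    v_i     : requested error (variance) of the new query
    Returns (v_t, delta_eps): variance of the fresh delta synopsis and the
    extra privacy budget needed to generate it.
    """
    if v_i >= v_prime:
        # The cached synopsis is already accurate enough; no fresh budget needed.
        return float("inf"), 0.0
    # Setting the optimally combined error v_prime*v_t/(v_prime+v_t) equal to v_i
    # and solving for v_t gives the largest admissible fresh noise variance.
    v_t = v_i * v_prime / (v_prime - v_i)
    delta_eps = classical_gm_epsilon(math.sqrt(v_t), delta, sensitivity)
    return v_t, delta_eps

# Example: cached synopsis has error 9.0, analyst requests error 4.0.
print(translate_accuracy(9.0, 4.0, 1e-9, 1.0))
```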
#### 5.2.4. Provenance Constraint Checking

The provenance constraint checking for the additive Gaussian approach (line 18) is similar to the corresponding module in the vanilla approach. We highlight 3 differences. 1) Due to the use of the additive Gaussian mechanism, the composition across analysts on the same view is bounded as tightly as \(\max P[A_{i},V],\forall A_{i}\in\mathcal{A}\). Therefore, we check the view constraint by taking the max over the column retrieved by the index of view \(V_{j}\). 2) To check the table constraint, we compose a vector where each element is the maximum recorded privacy budget over each column/view, and compare the composition of these per-view maxima against the table constraint. 3) The new cumulative cost is no longer \(\epsilon_{i}\), but \(\epsilon^{\prime}=\min(\epsilon,P[A_{i},V]+\epsilon_{i})-P[A_{i},V]\).

Figure 2. Illustration of the Additive Gaussian Approach

**Theorem 5.6**.: _Given the same sequence of queries \(Q\), at least 2 data analysts in the system, and the same system setup, the total number of queries answered by the additive Gaussian approach is always greater than or equal to that answered by the vanilla approach._

Proof Sketch.: \(\mathsf{DProvDB}\) processes incoming queries with DP synopses (vanilla approach) or local synopses (additive Gaussian approach). For each query \(q\) in \(Q\), if it can be answered (w.r.t. the accuracy requirement) with a cached synopsis, both approaches process it in the same manner; otherwise, \(\mathsf{DProvDB}\) needs to update the synopses. Comparing the cost of the synopsis update in both methods, \(\min(\epsilon,P[A_{i},V]+\epsilon_{i})-P[A_{i},V]\leq\epsilon_{i}\) always holds. Therefore, given the same privacy constraints, if a synopsis can be generated to answer query \(q\) with the vanilla approach, the additive Gaussian approach must be able to update a global synopsis and generate a local synopsis to answer this query, which proves the theorem.

We note that the vanilla approach generates independent synopses for different data analysts, while in the additive Gaussian approach we only update global synopses, which saves privacy budget when different analysts ask similar queries. We empirically show the benefit in terms of answering more queries with the additive Gaussian approach in Section 6.

#### 5.2.5. Putting Components All Together

The additive Gaussian approach is presented in Algorithm 4: 2-10 (the function run). At each time stamp, the system receives the query \(q\) from the analyst \(A_{i}\) and selects the view on which this query can be answered. If the translated budget \(\epsilon_{i}\) is greater than the budget allocated to the global synopsis of that view (line 3), we generate a delta synopsis (lines 4-5) and update the global synopsis (line 6). Otherwise, we can use additive GM to generate the local synopsis based on the (updated) global synopsis (lines 7-9). We update the provenance table with the consumed budget \(\epsilon_{i}\) (line 10) and answer the query based on the local synopsis to the analyst (line 11).
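The following minimal Python sketch illustrates the bookkeeping described in Sections 5.2.4-5.2.5 (our own data structures, not DProvDB's implementation). It keeps the privacy provenance table as an analyst-by-view matrix, performs the analyst, view, and table constraint checks, and charges only the incremental cost \(\epsilon^{\prime}=\min(\epsilon,P[A_{i},V]+\epsilon_{i})-P[A_{i},V]\). How the per-analyst totals and the table total compose over views (here plain summation) is a simplification on our part.

```python
import numpy as np

class ProvenanceTable:
    def __init__(self, analysts, views, psi_analyst, psi_view, psi_table):
        self.analysts = {a: i for i, a in enumerate(analysts)}
        self.views = {v: j for j, v in enumerate(views)}
        self.P = np.zeros((len(analysts), len(views)))   # recorded budgets P[A_i, V_j]
        self.psi_analyst = psi_analyst    # analyst constraints psi_{A_i}
        self.psi_view = psi_view          # view constraints psi_{V_j}
        self.psi_table = psi_table        # table constraint psi_P

    def incremental_cost(self, analyst, view, eps_i, eps_global):
        i, j = self.analysts[analyst], self.views[view]
        # Additive GM: an analyst's cost on a view never exceeds the
        # budget spent on that view's global synopsis.
        return min(eps_global, self.P[i, j] + eps_i) - self.P[i, j]

    def check_and_charge(self, analyst, view, eps_i, eps_global):
        i, j = self.analysts[analyst], self.views[view]
        delta_cost = self.incremental_cost(analyst, view, eps_i, eps_global)
        P_new = self.P.copy()
        P_new[i, j] += delta_cost
        ok = (
            P_new[i].sum() <= self.psi_analyst[analyst]      # analyst constraint
            and P_new[:, j].max() <= self.psi_view[view]     # view constraint (max over analysts)
            and P_new.max(axis=0).sum() <= self.psi_table    # table constraint (assumed: sum of per-view maxima)
        )
        if ok:
            self.P = P_new
        return ok
```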
#### 5.2.6. Discussion on Combining Local Synopses

We may also update a local synopsis \(V^{\epsilon^{\prime}}_{A_{i}}\) (upon request \(\epsilon_{i}\)) by first generating an intermediate local synopsis \(V^{\Delta\epsilon}_{A_{i}}\) from \(V^{\epsilon_{t}}\) using additive GM, where \(\Delta\epsilon=\epsilon_{i}-\epsilon^{\prime}\), and then combining \(V^{\Delta\epsilon}_{A_{i}}\) with the previous local synopsis in a similar way as for the global synopses, which leads to a new local synopsis \(V^{\epsilon^{\prime}+\Delta\epsilon}_{A_{i}}\).

**Example 6**.: To answer Bob's query \((q_{1},\epsilon_{V,Bob}=0.7)\), we can use additive GM to generate a fresh local synopsis \(V^{0.4}_{Bob}\) from \(V^{0.7}\), and combine \(V^{0.4}_{Bob}\) with the existing \(V^{0.3}_{Bob}\) to get \(V^{0.7}_{Bob}\).

However, unlike combining global synopses, these local synopses share correlated noise. We must solve a different optimization problem to find an unbiased estimator with minimum error. For example, given the last combined global synopsis (and its weights) \(V^{\prime}=w_{t-1}V^{\epsilon_{t-1}}+w_{t}V^{\epsilon_{t}-\epsilon_{t-1}}\), if we know \(V^{\epsilon_{t-1}}_{A}\) is a fresh local synopsis generated from \(V^{\epsilon_{t-1}}\), we consider using the weights \(k_{t-1},k_{t}\) for local synopsis combination:
\[V^{\prime}_{A} =k_{t-1}V^{\epsilon_{t-1}}_{A}+k_{t}V^{\Delta\epsilon}_{A}\]
\[=k_{t-1}(V^{\epsilon_{t-1}}+\eta^{t-1}_{A})+k_{t}(w_{t-1}V^{\epsilon_{t-1}}+w_{t}V^{\epsilon_{t}-\epsilon_{t-1}}+\eta^{t}_{A})\]
\[=(k_{t-1}+k_{t}w_{t-1})V^{\epsilon_{t-1}}+k_{t}w_{t}V^{\epsilon_{t}-\epsilon_{t-1}}+k_{t-1}\eta^{t-1}_{A}+k_{t}\eta^{t}_{A},\]
where \(\eta^{t-1}_{A}\) and \(\eta^{t}_{A}\) are the noise terms added to the local synopses in additive GM, with variances \(\sigma^{2}_{t-1}\) and \(\sigma^{2}_{t}\). We can then find the adjusted weights that minimize the expected error for \(V^{\prime}_{A}\), namely \(v_{A,t}=(k_{t-1}+k_{t}w_{t-1})^{2}v_{t-1}+k_{t}^{2}w_{t}^{2}v_{\Delta}+k_{t-1}^{2}\sigma_{t-1}^{2}+k_{t}^{2}\sigma_{t}^{2}\), subject to \(k_{t-1}+k_{t}w_{t-1}+k_{t}w_{t}=1\), using an optimization solver. Allowing this optimal combination of local synopses tightens the cumulative privacy cost of \(A_{i}\) on \(V\), i.e., the accounted cost can be lower than \(\min(\epsilon_{t},P[A_{i},V]+\epsilon_{i})\). However, if the existing local synopsis \(V^{\epsilon_{t-1}}_{A}\) is itself a combined synopsis from a previous time stamp, the correct variance calculation requires a nested analysis of _from where the local synopses are generated and with what weights the global/local synopses are combined_. This requires too many parameters for \(\mathsf{DProvDB}\) to keep track of and poses additional challenges for accuracy translation, since solving an optimization problem with many weights is hard, if not intractable. For practical reasons, \(\mathsf{DProvDB}\) adopts the approach described in Algorithm 4.
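As a concrete (and simplified) illustration of the constrained minimization above, the following Python sketch uses scipy to find the weights \(k_{t-1},k_{t}\). The variable names mirror the derivation, the specific numbers are arbitrary, and this is our own illustration of the optimization rather than DProvDB's code.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_local_weights(w_prev, w_delta, v_prev, v_delta, s2_prev, s2_cur):
    """Minimize the expected error of the combined local synopsis
    v_A = (k1 + k2*w_prev)^2 * v_prev + k2^2 * w_delta^2 * v_delta
          + k1^2 * s2_prev + k2^2 * s2_cur
    subject to unbiasedness: k1 + k2*w_prev + k2*w_delta = 1."""
    def err(k):
        k1, k2 = k
        return ((k1 + k2 * w_prev) ** 2 * v_prev
                + k2 ** 2 * w_delta ** 2 * v_delta
                + k1 ** 2 * s2_prev
                + k2 ** 2 * s2_cur)
    cons = {"type": "eq", "fun": lambda k: k[0] + k[1] * (w_prev + w_delta) - 1.0}
    res = minimize(err, x0=np.array([0.5, 0.5]), constraints=[cons])
    return res.x, err(res.x)

# Global combination used weights w_prev=0.6, w_delta=0.4 with variances 2.0 and 5.0;
# the two local synopses carried additive-GM noise with variances 3.0 and 1.0.
k, v = optimal_local_weights(0.6, 0.4, 2.0, 5.0, 3.0, 1.0)
print(k, v)
```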
### Configuring Provenance Table

In existing systems, the administrator is only responsible for setting a single parameter specifying the overall privacy budget. DProvDB, however, requires the administrator to configure more parameters.

#### 5.3.1. Setting Analyst Constraints for Proportional Fairness

We first discuss guidelines to establish per-analyst constraints in DProvDB, by proposing two specifications that achieve proportional fairness.

Definition 10 (Constraints for Vanilla Approach).: _For the vanilla approach, we propose to specify each analyst's constraint by proportional normalization of privileges. That is, for the table-level constraint \(\psi_{P}\) and analyst \(A_{i}\) with privilege \(l_{i}\), \(\psi_{A_{i}}=\frac{l_{i}}{\sum_{j\in[n]}l_{j}}\psi_{P}\)._

Note that this proposed specification is not optimal for the additive Gaussian approach, since the maximum utilized budget will then be constrained by \(\max\psi_{A_{i}}<\psi_{P}\) when more than one analyst is using the system. Instead, we propose the following specification.

Definition 11 (Constraints for Additive Gaussian).: _For the additive Gaussian approach, each analyst \(A_{i}\)'s constraint can be set to \(\frac{l_{i}}{l_{max}}\psi_{P}\), where \(l_{max}\) denotes the maximum privilege in the system._

**Comparing Two Specifications.** We compare the two analyst-constraint specifications (Def. 10 and 11) with experiments in Section 6.2. Besides their empirical performance, the vanilla constraint specification (Def. 10) requires all data analysts to be registered in the system before the provenance table is set up. The additive Gaussian approach (with the specification in Def. 11), however, allows for the inclusion of new data analysts at a later time.

#### 5.3.2. Setting View Constraints for Dynamic Budget Allocation

Existing privacy budget allocators (Srivastava et al., 2017) adopt a static splitting strategy such that the per-view privacy budget is equal or proportional to the view's sensitivity. DProvDB subsumes theirs by, similarly, setting the view constraints to be equal or proportional to the sensitivity of each view, i.e., \(\{\psi_{V_{j}}|V_{j}\in\mathcal{V}\}=\{\lambda_{V_{j}}\cdot\epsilon/\hat{\Delta}_{V_{j}}\}\ \forall V_{j}\in\mathcal{V}\), where \(\hat{\Delta}_{V_{j}}\) is the upper bound of the sensitivity of the view \(V_{j}\). We therefore propose the following water-filling view constraint specification for a better view budget allocation.

Definition 12 (Water-filling View Constraint Setting).: _The table constraint \(\psi_{P}\) has been set up as a constant (i.e., the overall privacy budget) in the system. The administrator simply sets all view constraints equal to the table constraint, \(\psi_{V_{j}}\coloneqq\psi_{P},\forall V_{j}\in\mathcal{V}\)._

With the water-filling constraint specification, the provenance constraint checking depends solely on the table constraint and the analyst constraints. The overall privacy budget is then dynamically allocated to the views based on analysts' queries. Compared to existing budget allocation methods on views, our water-filling specification based on the privacy provenance table reflects the actual accuracy demands on different views. It thus avoids wasting privacy budget and provides better utility: i) DProvDB spends less budget on unpopular views, whose consumed budget is less than \(\lambda_{V_{j}}\cdot\epsilon/\hat{\Delta}_{V_{j}}\); ii) DProvDB can answer queries whose translated privacy budget \(\epsilon>\lambda_{V_{j}}\cdot\epsilon/\hat{\Delta}_{V_{j}}\). In addition, the water-filling specification allows DProvDB to add views over time.

### Privacy and Fairness Guarantees

Theorem 5.7 (System Privacy Guarantee).: _Given the privacy provenance table and its constraint specifications, \(\Psi=\{\psi_{A_{i}}|A_{i}\in\mathcal{A}\}\cup\{\psi_{V_{j}}|V_{j}\in\mathcal{V}\}\cup\{\psi_{P}\}\), both mechanisms for DProvDB ensure \([\ldots,(A_{i},\psi_{A_{i}},\delta),\ldots]\)-multi-analyst-DP; they also ensure \(\min(\psi_{V_{j}},\psi_{P})\)-DP for view \(V_{j}\in\mathcal{V}\) and overall \(\psi_{P}\)-DP if all the data analysts collude._

With the provenance table, DProvDB can achieve proportional fairness when analysts submit a sufficient number of queries, as stated in the following theorem. Both proofs are deferred to the appendices.
Theorem 5.8 (System Fairness Guarantee).: _Given the privacy provenance table \(P\) and the described approaches to setting analyst constraints, both mechanisms achieve proportional fairness when the data analysts finish consuming their assigned privacy budgets._

## 6. Experimental Evaluation

In this section, we compare DProvDB with baseline systems and conduct an ablation study for a better understanding of the different components of DProvDB. Our goal is to show that DProvDB can improve existing query answering systems in the multi-analyst setting in terms of the number of queries answered and fairness.

### Experiment Setup

We implement DProvDB in Scala with PostgreSQL as the database system, and deploy it as middleware between Chorus (Kalouts et al., 2018) and multiple analysts. Our implementation follows existing practice (Kalouts et al., 2018) and sets the \(\delta\) parameter to the same small value (e.g., 1e-9) for all queries. The \(\delta\) parameters in the privacy constraints (column/row/table) in the privacy provenance table are set to be capped by the inverse of the dataset size.

#### 6.1.1. Baselines

Since we build on top of Chorus (Kalouts et al., 2018), we develop a number of baseline approaches for the multi-analyst framework using Chorus as well, for fair comparisons.

* _Chorus_ (Kalouts et al., 2018): This baseline is the plain Chorus, which uses GM, makes no distinction between data analysts, and uses no views. The system only sets the overall privacy budget.
* DProvDB _minus Cached Views (ChorusP):_ This baseline enables privacy provenance tracking for each data analyst but does not store any synopses. We use Def. 10 to set up analyst constraints and Def. 12 for view constraints.
* DProvDB _minus Additive GM (Vanilla):_ We equip Chorus with the privacy provenance table and the cached views, but with our vanilla approach to update and manage the provenance table and the views. The privacy provenance table is configured the same as in ChorusP.
* _Simulating PrivateSQL (Srivastava et al., 2017):_ We simulate PrivateSQL by generating the static DP synopses up front. The budget allocated to each view is proportional to the view sensitivities (Srivastava et al., 2017). Incoming queries that cannot be answered accurately with these synopses are rejected.

#### 6.1.2. Datasets and Use Cases

We use the Adult dataset (Kalouts et al., 2018) (a demographic dataset with 15 attributes and 45,224 rows) and the TPC-H dataset (a synthetic dataset of 1GB) (Kalouts et al., 2018) for the experiments. We consider the following use cases.

* _Randomized range queries (RRQ)_: We randomly generate 4,000 range queries _per analyst_, each with one attribute randomly selected with bias (see the sketch following this list). Each query has the range specification \([s,s+o]\), where \(s\) and the offset \(o\) are drawn from a normal distribution. We design two types of query sequences from the data analysts: a) **round-robin**, where the analysts take turns asking queries; b) **random**, where a data analyst is randomly selected each time.
* _Breadth-first search (BFS) tasks_: Each data analyst explores a dataset by traversing a decomposition tree of the cross product over the selected attributes, aiming to find the (sub-)regions with underrepresented records. That is, the data analyst traverses the domain and terminates only if the returned noisy count is within a specified threshold range. To answer the queries, we generate one histogram view on each attribute.
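As referenced in the RRQ item above, the following is a minimal sketch of how such a randomized range-query workload could be generated (our own illustration; the exact distributions, attribute-bias weights, and clipping used in the experiments are not specified in the paper, so the parameters below are placeholders).

```python
import numpy as np

def generate_rrq_workload(attributes, domains, n_queries=4000, seed=0):
    """attributes: list of attribute names; domains: dict name -> (lo, hi).
    Returns a list of (attribute, range_start, range_end) triples."""
    rng = np.random.default_rng(seed)
    # Biased attribute selection: earlier attributes are picked more often
    # (placeholder bias; the actual bias in the experiments is unspecified).
    weights = np.arange(len(attributes), 0, -1, dtype=float)
    weights /= weights.sum()
    workload = []
    for _ in range(n_queries):
        attr = rng.choice(attributes, p=weights)
        lo, hi = domains[attr]
        # Range [s, s + o] with normally drawn start and offset, clipped to the domain.
        s = float(np.clip(rng.normal((lo + hi) / 2, (hi - lo) / 6), lo, hi))
        o = abs(rng.normal(0, (hi - lo) / 10))
        workload.append((attr, s, min(s + o, hi)))
    return workload

# Example on two Adult-like attributes.
print(generate_rrq_workload(["age", "hours_per_week"],
                            {"age": (17, 90), "hours_per_week": (1, 99)},
                            n_queries=3))
```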
We use two analysts with privileges 1 and 4 in the default setting and also study the effect of involving more analysts.

#### 6.1.3. Evaluation Metrics

We use four metrics for evaluation.

* _Number of queries being answered_: We report the number of queries that can be answered by the system, up to the point where no more queries can be answered, as the utility metric of the system.
* _Cumulative privacy budget_: For BFS tasks that have fixed workloads, it is possible that the budget is not used up when the tasks are complete. Therefore, we report the total cumulative budget consumed by all data analysts when the tasks end.
* _Normalized discounted cumulative fairness gain (nDCFG)_: We coin an empirical fairness measure here. First, we introduce the DCFG measure for a mechanism \(\mathcal{M}\) as \(\mathrm{DCFG}_{\mathcal{M}}=\sum_{i=1}^{n}\frac{|Q_{A_{i}}|}{\log_{2}(\frac{1}{l_{i}}+1)}\), where \(l_{i}\) is the privilege level of analyst \(A_{i}\) and \(|Q_{A_{i}}|\) is the total number of queries of \(A_{i}\) being answered. Then nDCFG is DCFG normalized by the total number of queries answered (see the sketch after this list).
* _Runtime_: We measure the run time in milliseconds.

We repeat each experiment 4 times using different random seeds.
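A minimal sketch of the DCFG/nDCFG computation defined above (our own code; it takes the per-analyst weight \(1/\log_{2}(1/l_{i}+1)\) exactly as written, and the example numbers are arbitrary):

```python
import math

def dcfg(answered, privileges):
    """answered: dict analyst -> number of answered queries |Q_{A_i}|;
    privileges: dict analyst -> privilege level l_i."""
    return sum(q / math.log2(1.0 / privileges[a] + 1.0)
               for a, q in answered.items())

def ndcfg(answered, privileges):
    total = sum(answered.values())
    return dcfg(answered, privileges) / total if total > 0 else 0.0

# Two analysts with privileges 1 and 4, as in the default experimental setting.
print(ndcfg({"A1": 30, "A2": 120}, {"A1": 1, "A2": 4}))
```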
### Empirical Results

We initiate experiments on the default settings of DProvDB and the baseline systems for comparison, and then study the effect of modifying the components of DProvDB.

#### 6.2.1. End-to-end Comparison

This comparison sets the analyst constraints in line with Def. 11 for DProvDB, and with Def. 10 for the vanilla approach. We brief our findings below.

**Results of RRQ task.** We present Fig. 3 for this experiment. We fix the entire query workload and run the query processing on the five mechanisms. DProvDB outperforms all competing systems, in both the round-robin and random use cases. Chorus and ChorusP can answer very few queries because their privacy budgets are depleted quickly. Interestingly, our simulated PrivateSQL can answer a number of queries comparable to the vanilla approach when \(\epsilon=6.4\), but answers a limited number of queries under higher privacy regimes. This is intuitive: if one statically and fairly pre-allocates the privacy budget to each view when the overall budget is limited, then each synopsis can hardly answer any query accurately. One can also notice that the fairness score of ChorusP is significantly higher than that of Chorus, meaning the enforcement of the privacy provenance table can help achieve fairness.

Figure 3. End-to-end Comparison (_RRQ task_, over _Adult_ dataset), from left to right: a) utility v.s. overall budget, round-robin; b) utility v.s. overall budget, randomized; c) fairness against baselines, round-robin; d) fairness against baselines, randomized.

**Results of BFS task.** An end-to-end comparison of the systems on the BFS tasks is depicted in Fig. 4. Both DProvDB and Vanilla can complete the query workload with near-constant budget consumption, while Chorus(P) spends privacy budget linearly in the workload size. DProvDB saves even more budget compared to the vanilla approach, based on the results on the TPC-H dataset.

Figure 4. End-to-end Comparison (BFS task): Cumulative privacy budget consumption v.s. workload indices. Left: over _Adult_ dataset; Right: over _TPC-H_ dataset.

**Runtime performance.** Table 1 summarizes the runtime of DProvDB and all baselines. DProvDB and sPrivateSQL use a rather large amount of time to set up views; however, answering a large number of queries based on views saves more time on average compared to Chorus-based mechanisms. Recall that aGM requires solving optimization problems, and the empirical results show that the incurred time overhead is less than 2 ms per query. We draw the same conclusion for our RRQ experiments and the runtime performance test on the other dataset. We defer these results to the appendices.

#### 6.2.2. Component Comparison

We evaluate three components of DProvDB separately.

**Cached Synopses.** Given the same overall budget, mechanisms leveraging cached synopses outperform those without caches in terms of utility when the size of the query workload increases. This phenomenon can be observed for all budget settings, as shown in Fig. 5. Due to the use of cached synopses, DProvDB can potentially answer more queries if the incoming queries hit the caches. Thus, the number of queries being answered increases with the increasing size of the workload, given the fixed overall budget.

**Additive GM v.s. Vanilla.** Given the same overall budget, additive GM outperforms the vanilla mechanism on utility. The utility gap increases with the increasing number of data analysts present in the system. The empirical results are presented in Fig. 6. When there are 2 analysts in the system, additive GM gains only a marginal advantage over the vanilla approach; when the number of data analysts increases to 6, additive GM can answer around \(\sim\)2-4x more queries than Vanilla. We also compare different settings of analyst constraints for different numbers of analysts and for different overall system budgets (with 2 analysts). It turns out that additive GM with the setting in Def. 11 (DProvDB-l_max) is the best one, outperforming the setting from Def. 10 (DProvDB-l_sum, Vanilla-l_sum) for all epsilons, and answering \(\sim\)4x more queries when #analysts=6.

**Constraint Configuration.** If we allow an expansion parameter \(\tau\geq 1\) on setting the analyst constraints (i.e., overselling privacy budgets beyond the constraint defined in Def. 11), we can obtain higher system utility by sacrificing fairness, as shown in Fig. 7. With 2 data analysts, the total number of queries being answered by additive GM is slightly increased when we gradually set a larger analyst constraint expansion rate. Under a low privacy regime (viz., \(\epsilon=3.2\)), the utility is increased by more than 15% when comparing \(\tau=1.9\) with \(\tau=1.3\); on the other hand, the fairness score decreases by around 10%. This result can be interpreted from a basic economic view: we argue that, as a system-wide public resource, the privacy budget can become an _idle resource_ when some of the data analysts stop asking queries. Thus, the _hard privacy constraints_ enforced in the system can make some portion of the privacy budget unusable. With the constraint expansion, we allow DProvDB to navigate the fairness-utility trade-off, while the overall privacy is still guaranteed by the table constraint.

Figure 7. Component Comparison (_RRQ_ task, over _Adult_ dataset): Constraint Configuration. First row: utility v.s. constraint settings. Second row: fairness v.s. constraint settings. Left: round-robin. Right: randomized.

**Varying \(\delta\) parameter.** In this experiment, we control the overall privacy constraint \(\epsilon=6.4\) and vary the per-query \(\delta\) parameter. We use the BFS workload as in our end-to-end experiment and the results are shown in Fig. 8. While varying a small \(\delta\) parameter does not much affect the number of queries being answered, we observe that with increasing \(\delta\), DProvDB can answer slightly more queries.
This is because, to achieve the same accuracy requirement, the translation module will output a smaller \(\epsilon\) when \(\delta\) is bigger, which consumes the privacy budget more slowly. Note that the overall privacy constraint on \(\delta\) should be set no larger than the inverse of the dataset size. Setting an unreasonably large per-query \(\delta\) can limit the total number of queries being answered.

Figure 8. #queries being answered vs. varying delta parameter (BFS task, _Adult_). Left: round-robin, Right: randomized.

Table 1. Runtime Performance Comparison over Different Mechanisms on TPC-H Dataset (running on a Linux server with 64 GB DDR4 RAM and AMD Ryzen 5 3600 CPU, measured in milliseconds)

| Systems | Setup Time | Running Time | No. of Queries | Per Query Perf |
| --- | --- | --- | --- | --- |
| DProvDB | 20386.65 ms | 297.30 ms | 86.0 | 3.46 ms |
| Vanilla | 20388.65 ms | 118.04 ms | 86.0 | 1.37 ms |
| sPrivateSQL | 20388.65 ms | 166.51 ms | 86.0 | 1.94 ms |
| Chorus | N/A | 7380.47 ms | 62.0 | 119.04 ms |
| ChorusP | N/A | 7478.69 ms | 62.0 | 120.62 ms |

Figure 5. Component Comparison (_RRQ_ task, over _Adult_ dataset): Enabling Cached Synopses. Utility v.s. the size of query workload (round-robin). From left to right: \(\epsilon=\{0.4,0.8,1.6,3.2,6.4\}\).

Figure 6. Component Comparison (_RRQ_ task, over _Adult_ dataset): Additive GM v.s. Vanilla. Left: utility v.s. #analysts, round-robin; Right: utility v.s. overall budgets, round-robin.

**Other experiments.** We also run experiments to evaluate DProvDB on a data-dependent utility metric, i.e., relative error (Kumar et al., 2017), and to empirically validate the correctness of the accuracy-privacy translation module. In particular, we performed experiments to show that the noise variance \(v_{q}\) of the query answer (derived from the variance of the noisy synopsis, according to the translated privacy budget) is always no more than the accuracy requirement \(v_{i}\) submitted by the data analyst. As shown in Fig. 9 (a), the difference between the two values, \(v_{q}-v_{i}\), is always less than 0, and very small for a given BFS query workload (where the accuracy requirement is set to be above \(v_{i}>10000\)). Furthermore, we consider the following data-dependent utility metric, namely relative error (Kumar et al., 2017), which is defined as
\[\text{Relative Error}=\frac{|\text{True Answer}-\text{Noisy Answer}|}{\max\{\text{True Answer},c\}},\]
where \(c\) is a specified constant to avoid undefined values when the true answer is 0. Note that DProvDB does not specifically support analysts submitting queries with _data-dependent_ accuracy requirements. The translated query answer can have a large relative error if the true answer is small or close to zero. We thereby merely use this utility metric to empirically evaluate the answers of a BFS query workload, as a complementary result [Fig. 9 (b)] to the paper. DProvDB and the Vanilla approach have a larger relative error than Chorus and ChorusP because they can answer more queries, many of which have a comparatively small true answer, incurring a large relative error.

Figure 9. (a) The cumulative average of \(v_{q}-v_{i}\) for a BFS query workload (on _Adult_ dataset), where \(v_{i}\) represents the submitted accuracy requirement and \(v_{q}\) denotes the noise variance of the query answer. (b) Relative error of processing the BFS query workload (on _Adult_ dataset) among different mechanisms.

## 7. Discussion

In this section, we discuss a weaker threat model for the multi-analyst corruption assumption, with which additional utility gains are possible. We also discuss other strawman solutions toward a multi-analyst DP system.

### Relaxation of Multi-analyst Threat Model

So far we have assumed that all data analysts can collude.
A more practical setting is, perhaps, to consider a subset of data analysts that are compromised by the adversary. This setting is commonly seen in the multi-party computation research community (Kumar et al., 2017) (_a.k.a._ active security (Kumar et al., 2017; Kumar et al., 2017)), where \(t\) out of \(n\) participating parties are assumed to be corrupted.

Definition 13 ((\(t,n\))-compromised Multi-analyst Setting).: _We say a multi-analyst setting is \((t,n)\)-compromised if there exist \(n\) data analysts where at most \(t\) of them are **malicious** (meaning they can collude in submitting queries and sharing the answers)._

The \((t,n)\)-compromised multi-analyst setting makes weaker assumptions on the attackers. Under this setting, the privacy loss is upper bounded by \((\sum_{t}\epsilon_{i},\sum_{t}\delta_{i})\), the summation over the \(t\) largest privacy budgets. However, we cannot do better than the current DProvDB algorithms with this weaker setting under a worst-case privacy assumption.

Theorem 7.1 (Hardness on Worst-case Privacy).: _Given a mechanism \(\mathcal{M}\) which is \([\ldots,(A_{i},\epsilon_{i},\delta_{i}),\ldots]\)-multi-analyst-DP, the worst-case privacy loss under \((t,n)\)-compromised multi-analyst DP is lower bounded by \((\max\epsilon_{i},\max\delta_{i})\), which is the same as under the all-compromisation setting._

Proof Sketch.: Under the worst-case assumption, the analyst with the largest privacy budget \((\max\epsilon_{i},\max\delta_{i})\) is among the \(t\) compromised data analysts. Then it is natural to see that the lower bound of the privacy loss is \((\max\epsilon_{i},\max\delta_{i})\).

At first glance, the relaxation to \((t,n)\)-compromisation does not provide us with better bounds. A second look, however, suggests that the additional trust we put in the data analysts averages the privacy loss across compromisation cases. Therefore, with this relaxed privacy assumption, it is possible to design mechanisms that achieve better utility using policy-driven privacy.

**Policies for multi-analyst**. Blowfish privacy (Kendal, 2017) specifies different levels of protection over sensitive information in the curated database. In the spirit of Blowfish, we can use policies to specify different levels of trust in data analysts using DProvDB.
Definition 14 ((\(t,n\))-Analysts Corruption Graph).: _Given \(n\) data analysts and assuming the \((t,n)\)-compromised setting, we say an undirected graph \(G=(V,E)\) is a \((t,n)\)-analysts corruption graph if:_

* _Each node in the vertex set_ \(v_{i}\in V\) _represents an analyst_ \(A_{i}\)_;_
* _An edge_ \(\mathsf{e}(v_{i},v_{j})\in E\) _is present in the edge set if data analysts_ \(A_{i}\) _and_ \(A_{j}\) _can collude;_
* _Every connected component in_ \(G\) _has fewer than_ \(t\) _nodes._

The corruption graph models the prior belief of the policy designer (or DB administrator) about the system users. Groups of analysts are believed not to be compromised together if they are in disjoint components of the corruption graph. Based on this corruption graph, we can specify the analysts' constraints by assigning the overall privacy budget \(\psi_{P}\) to each connected component (a checker for the graph condition is sketched below).

Theorem 7.2.: _There exist mechanisms with \((t,n)\)-multi-analyst DP that perform at least as well as with multi-analyst DP._

Proof.: Given a \((t,n)\)-analysts corruption graph \(G\), we show a naive budget/constraint assignment under which \((t,n)\)-multi-analyst DP degrades to multi-analyst DP. Ignoring the graph structure, we split the overall privacy budget \(\psi_{P}\) and assign a portion to each node proportional to the privilege weight on each node. Let \(k\) be the number of disjoint connected components in \(G\). Then we have at most \(k\cdot\psi_{P}\) privacy budget to assign to this graph. Clearly, the mechanisms with \((t,n)\)-multi-analyst DP achieve \((k\cdot\psi_{P})\)-DP. When \(n>t\), we have more than one connected component, meaning the overall privacy budget we could spend is more than that in the all-compromisation scenario.
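A minimal sketch of checking the third condition of Definition 14 (every connected component has fewer than \(t\) nodes), using a simple union-find; this is our own illustration and not part of DProvDB.

```python
def is_corruption_graph(n, edges, t):
    """n: number of analysts (nodes 0..n-1); edges: list of (i, j) collusion pairs.
    Returns True iff every connected component has fewer than t nodes."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return all(s < t for s in sizes.values())

# Five analysts, two collusion cliques {0,1} and {2,3}, analyst 4 isolated.
print(is_corruption_graph(5, [(0, 1), (2, 3)], t=3))   # True
```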
### Comparison with Strawman Solutions

One may argue for various alternative solutions to the multi-analyst query processing problem, as opposed to our proposed system. We examine these possible alternatives and compare them with DProvDB.

**Strawman #1: Sharing Synthetic Data.** Recall that DProvDB generates global and local synopses, to which different levels of noise are added with additive GM, and answers analysts' queries based on the local synopses. We show our proposed algorithm is optimal in using privacy budgets when all data analysts collude. One possible alternative may be to just release the global synopses (or generate synthetic data using all privacy budgets) to every data analyst, which also achieves optimality in the all-compromisation setting. We note that this solution is \(\min\) (\(\forall p_{i},(\max\epsilon_{i},\max\delta_{i})\))-DP (the same as the overall DP guarantee provided by DProvDB); however, it does not achieve the notion of _multi-analyst DP_ (all data analysts would get the same output).

**Strawman #2: Pre-calculating Seeded Caches.** To avoid the cost of running the algorithm in an online manner, one may consider equally splitting the privacy budget into small portions and using additive GM to pre-compute all the global and local synopses. This solution saves all synopses, and future queries can be answered directly by finding the appropriate synopses. If the space cost is too large, one may alternatively store only the seeds used to generate all the synopses, from which a synopsis can be quickly re-created. This scheme arguably achieves the desired properties of DProvDB, if one ignores the storage overhead incurred by the pre-computed synopses or seeds.

However, for an online query processing system, it is usually unpredictable what queries and accuracy requirements the data analysts will submit to the system. This solution focuses on doing most of the calculation offline, which may, first, lose precision in translating accuracy to privacy, leading to a trade-off between precision and processing time for privacy translation. We show in experiments that the translation only happens when the query does not hit the existing cache (which is not too often), and that the per-query translation processing time is negligible. Second, to pre-compute all the synopses, one needs to pre-allocate privacy budgets to the synopses. We have also shown with empirical results that this approach achieves less utility than DProvDB.

## 8. Related Work

Enabling DP in query processing is an important line of research in database systems (Kendal, 2017). Starting from the first end-to-end interactive DP query system (Kendal, 2017), existing work has focused on generalizing to a larger class of database operators (Bauer et al., 2016; Bauer et al., 2016; Bauer et al., 2016; Bauer et al., 2017; Bauer et al., 2017), building a programmable toolbox for experts (Kendal, 2017; Bauer et al., 2017), or providing a user-friendly accuracy-aware interface for data exploration (Kendal, 2017; Bauer et al., 2017). Another line of research investigated means of saving privacy budgets in online query answering, including approaches based on cached views (Bauer et al., 2016; Bauer et al., 2017) and others based on injecting correlated noise into query results (Bauer et al., 2016; Bauer et al., 2017; Bauer et al., 2017). Privacy provenance (on protected data) is studied in the personalized DP framework (Kendal, 2017) as a means to enforce varying protection on database records, which is dual to our system. The multi-analyst scenario is also studied in the DP literature (Bauer et al., 2016; Bauer et al., 2017; Bauer et al., 2017; Bauer et al., 2017). Our multi-analyst DP work focuses on the online setting, which differs from the offline setting, where the entire query workload is known in advance (Bauer et al., 2016; Bauer et al., 2017; Bauer et al., 2017). The closest related work (Bauer et al., 2017) considers an online setup for multi-analyst DP, but the problem setting differs. The multi-analyst DP in (Bauer et al., 2017) assumes the data analysts have an incentive to share their budgets to improve the utility of their query answers, whereas our multi-analyst DP considers data analysts who, obliged by laws/regulations, should not share their budgets/query responses with each other (e.g., internal data analysts should not share their results with an external third party). Our mechanism ensures that (i) even if these data analysts break the law and collude, the overall privacy loss is still minimized; (ii) if they do not collude, each analyst \(A_{i}\) has a privacy loss bounded by \(\epsilon_{i}\) (c.f. our multi-analyst DP, Definition 5). However, (Friedman et al., 2017) releases more information than the specified \(\epsilon_{i}\) for analyst \(A_{i}\) (as (Friedman et al., 2017) guarantees DP). Some other DP systems, e.g., Sage (Sage, 2018) and its follow-ups (Sage et al., 2018; Sage et al., 2019; Sage et al., 2020), involve multiple applications or end-users, and they care about budget consumption (Sage et al., 2018; Sage et al., 2019) or fairness constraints (Sage et al., 2019) in such scenarios.
Their design objective is orthogonal to ours -- they would like to avoid running out of privacy budget and to maximize utility through batched execution over a growing database. The idea of adding correlated Gaussian noise has been exploited in existing work (Bao et al., 2021; Sage et al., 2020). However, they all solve a simpler version of our problem. Li et al. (Li et al., 2022) propose algorithms to release the perturbed results to different users _once_, and Bao et al. (Bao et al., 2021) study the sequential data collection of _one user_. When putting the two dimensions together, _understudied_ questions arise that are not considered by existing work, such as how to properly answer a query for one analyst when the answer to the same query, with a lower accuracy requirement, has already been released to another analyst. Therefore, we propose the provenance-based additive Gaussian approach (Section 5.2) to solve these challenges, which is not merely injecting correlated noise into a sequence of queries.

## 9. Conclusion and Future Work

We study how query meta-data, or provenance information, can assist query processing in multi-analyst DP settings. We developed a privacy provenance framework for answering online queries, which tracks the privacy loss to each data analyst in the system using a provenance table. Based on the privacy provenance table, we proposed DP mechanisms that leverage the provenance information for noise calibration and built DProvDB to maximize the number of queries that can be answered. DProvDB can serve as middleware to provide a multi-analyst interface for existing DP query answering systems. We implemented DProvDB, and our evaluation shows that DProvDB can significantly improve the number of queries that can be answered and the fairness across analysts, compared to baseline systems.

While, as an initial work, this paper considers a relatively restricted but popular setting in the DP literature, we believe our work may open a new research direction of using provenance information for multi-analyst DP query processing. We thereby discuss our ongoing work on DProvDB and envision some potential thrusts in this research area.

* **Tight privacy analysis.** In the future, we would like to tighten the privacy analysis when composing the privacy loss of the local synopses generated from correlated global synopses.
* **Optimal processing for highly-sensitive queries.** While currently DProvDB can be extended to answer these queries naively (via a truncated and fixed sensitivity bound), instance-optimal processing of these queries (Han et al., 2019) requires data-dependent algorithms, which are not supported by our current approaches. Our ongoing work includes enabling DProvDB with the ability to optimally answer these queries.
* **System utility optimization.** We can also optimize the system utility further by considering a more careful design of the structure of the cached synopses (Bao et al., 2021; Sage et al., 2020), e.g., cumulative histogram views, making use of the sparsity in the data itself (Sage et al., 2020), or using data-dependent views (Sage et al., 2020).
* **Other DP settings.** DProvDB considers minimizing the collusion among analysts over time in an online system. The current design enforces approximate DP due to the nature of the Gaussian mechanism. Our ongoing work extends the system to use Renyi DP or zCDP for privacy composition. Future work can also consider other noise distributions, e.g.
Skellam (Skelam, 2018), to support different DP variants; other utility metrics, e.g., confidence intervals (Sage et al., 2019) or relative errors, for accuracy-privacy translation; or other application domains, e.g., location privacy (Sage et al., 2019; Sage et al., 2020).
* **Analyst delegation.** For example, the privacy budget consumed by a lower-privileged analyst during delegation could be accounted to the analyst who grants this delegation.

###### Acknowledgements.

This work was supported by NSERC through a Discovery Grant. We would like to thank the anonymous reviewers for their detailed comments, which helped to improve the paper during the revision process. We also thank Runchao Jiang, Semih Salihooglu, Florian Kerschbaum, Jiayi Chen, and Shuran Zheng for helpful conversations and feedback at the early stage of this project.
2309.00053
The first comprehensive study of a giant nebula around a radio-quiet quasar in the $z < 1$ Universe
We present the first comprehensive study of a giant, $\approx \! \! 70$ kpc-scale nebula around a radio-quiet quasar at $z<1$. The analysis is based on deep integral field spectroscopy with MUSE of the field of HE$\,$0238$-$1904, a luminous quasar at $z=0.6282$. The nebula emits strongly in $\mathrm{[O \, II]}$, $\rm H \beta$, and $\mathrm{[O \, III]}$, and the quasar resides in an unusually overdense environment for a radio-quiet system. The environment likely consists of two groups which may be merging, and in total have an estimated dynamical mass of $M_{\rm dyn}\approx 4\times 10^{13}$ to $10^{14}\ {\rm M_\odot}$. The nebula exhibits largely quiescent kinematics and irregular morphology. The nebula may arise primarily through interaction-related stripping of circumgalactic and interstellar medium (CGM/ISM) of group members, with some potential contributions from quasar outflows. The simultaneous presence of the giant nebula and a radio-quiet quasar in a rich environment suggests a correlation between such circum-quasar nebulae and environmental effects. This possibility can be tested with larger samples. The upper limits on the electron number density implied by the $\mathrm{[O \, II]}$ doublet ratio range from $\log(n_{\rm e, \, [O \, II]} / \mathrm{cm^{-3}}) < 1.2$ to $2.8$. However, assuming a constant quasar luminosity and negligible projection effects, the densities implied from the measured line ratios between different ions (e.g., $\mathrm{[O\,II]}$, $\mathrm{[O\,III]}$, and $\mathrm{[Ne\,V]}$) and photoionization simulations are often $10{-}400$ times larger. This large discrepancy can be explained by quasar variability on a timescale of $\approx 10^4{-}10^5$ years.
Zhuoqi Will Liu, Sean D. Johnson, Jennifer I-Hsiu Li, Gwen C. Rudie, Joop Schaye, Hsiao-Wen Chen, Jarle Brinchmann, Sebastiano Cantalupo, Mandy C. Chen, Wolfram Kollatschny, Michael V. Maseda, Nishant Mishra, Sowgat Muzahid
2023-08-31T18:00:23Z
http://arxiv.org/abs/2309.00053v3
# The first comprehensive study of a giant nebula around a radio-quiet quasar in the \(z<1\) Universe ###### Abstract We present the first comprehensive study of a giant, \(\approx\)70 kpc-scale nebula around a radio-quiet quasar at \(z<1\). The analysis is based on deep integral field spectroscopy with MUSE of the field of HE 0238\(-\)1904, a luminous quasar at \(z=0.6282\). The nebula emits strongly in [O II], H\(\beta\), and [O III], and the quasar resides in an unusually overdense environment for a radio-quiet system. The environment likely consists of two groups which may be merging, and in total have an estimated dynamical mass of \(M_{\rm dyn}\approx 4\times 10^{13}\) to \(10^{14}\) M\({}_{\odot}\). The nebula exhibits largely quiescent kinematics and irregular morphology. The nebula may arise primarily through interaction-related stripping of circumgalactic and interstellar medium (CGM/ISM) of group members, with some potential contributions from quasar outflows. The simultaneous presence of the giant nebula and a radio-quiet quasar in a rich environment suggests a correlation between such circum-quasar nebulae and environmental effects. This possibility can be tested with larger samples. The upper limits on the electron number density implied by the [O II] doublet ratio range from \(\log(n_{e,\rm[O\,II]}/\rm cm^{-3})<1.2\) to 2.8. However, assuming a constant quasar luminosity and negligible projection effects, the densities implied from the measured line ratios between different ions (e.g., [O II], [O III], and [Ne V]) and photoionization simulations are often 10\(-\)400 times larger. This large discrepancy can be explained by quasar variability on a timescale of \(\approx 10^{4}-10^{5}\) years. keywords: quasars: supermassive black holes - galaxies: groups - intergalactic medium ## 1 Introduction Galaxy evolution is a complex process that involves gas inflows and outflows thought to control star formation and black hole growth (for a review, see Naab & Ostriker, 2017). Observations of interstellar medium (ISM) gas masses and star formation rates suggest that massive star-forming galaxies have an ISM depletion timescale much smaller than the age of the Universe at \(z<3\)(Kennicutt & Evans, 2012; Tacconi et al., 2013). This can be explained if galaxies accrete gas from external sources to maintain their star-forming activity and black hole growth (though see Leitner & Kravtsov, 2011). At the same time, the ISM of galaxies can lose gas through various processes including stellar (for a review, see Zhang, 2018) and AGN feedback (for a review, see Fabian, 2012), ram pressure stripping (e.g., Hester, 2006), and tidal interactions with neighboring galaxies (e.g., Marasco et al., 2016). Therefore, observations of the physical conditions, kinematics, and distribution of gas around galaxies can provide insights into the mechanisms governing galaxy formation and evolution. For these reasons, observations of the gaseous cosmic ecosystems of galaxies were highlighted as a key long-term priority by the 2020 Decadal Survey for Astronomy and Astrophysics (National Academies of Sciences, 2021). The properties of gas flows around galaxies, including their morphology and kinematics, can be directly traced by observations of giant gas nebulae with state-of-the-art wide-field integral field spectrographs (IFSs) such as the Multi-Unit Spectroscopic Explorer (MUSE; Bacon et al., 2010) and the Keck Cosmic Web Imager (KCWI; Martin et al., 2010). 
At \(z>2\), systematic IFS surveys around radio-quiet quasars discovered ubiquitous giant H I Ly\(\alpha\) nebulae (e.g., Cantalupo et al., 2014; Borisova et al., 2016; Cai et al., 2019; O'Sullivan et al., 2020; Fossati et al., 2021; Mackenzie et al., 2021). More recently, a study of the ionization states of one of these nebulae found that the gas has a surprisingly large density for halo-scale emission or a very broad density distribution (Cantalupo et al., 2019). However, due to redshifting of optical emission lines into the infrared, surface brightness dimming, and the faintness of galaxies at high redshift, more fully characterizing these \(z>2\) nebulae is time-consuming even with large space- or ground-based telescopes (though see Langen et al., 2023). At low redshift, on the other hand, non-resonant emission lines such as [O II], H\(\beta\), and [O III] are available at optical wavelengths, and collecting galaxy spectra is less expensive. The power of IFSs enabled the discoveries of giant nebulae around starburst galaxies, galaxy groups, and quasars (e.g., Epinat et al., 2018; Boselli et al., 2019; Chen et al., 2019; Rupke et al., 2019; Zabl et al., 2021; Burchett et al., 2021; Leclercq et al., 2022; Dutta et al., 2023), arising from outflows, interactions, and filamentary accretion. These low redshift nebulae provide an opportunity to study the physical conditions and the processes that may produce giant nebulae at higher redshift. Most published studies of giant nebulae around \(z<1\) quasars have focused on radio-loud systems (Johnson et al., 2018; Helton et al., 2021; Johnson et al., 2022), which represent a small fraction of the general quasar population (e.g., Kellermann et al., 1989). Furthermore, clustering measurements indicate that radio-loud quasars typically reside in massive galaxy groups with halo masses of \(M\sim 10^{13}\) M\({}_{\odot}\) while the halo masses of more common radio-quiet systems are approximately five times lower on average (e.g., Shen et al., 2009). This mass miss-match and the possibility of radio jet feedback make the comparison between low-redshift giant nebulae around radio-loud quasars and high-redshift radio-quiet ones difficult. Recently, Chen et al. (2023) demonstrated the existence of giant nebulae around two radio-quiet quasars as part of a study focused on turbulence using the observed velocity structure function. In this paper, we present the first comprehensive characterization of a giant nebula and associated galaxy environment around a radio-quiet quasar at \(z<1\), HE 0238\(-\)1904. Recently, this nebula was independently discovered and reported by Zhao & Wang (2023). However, our interpretation of the system differs substantially from the one presented by Zhao & Wang (2023) due to adoption of a significantly different quasar systemic redshift. In particular, Zhao & Wang (2023) adopted a Mg II emission-based redshift of \(z=0.631\) from the Hamburg/ESO Survey of bright Quasars (Wisotzki et al., 2000). On the other hand, we adopt a redshift estimate of \(z=0.6282\) based on the [O II] emission-line centroid measured in the spectrum of the quasar extracted from the same MUSE dataset used to measure the kinematics of the giant nebula. The paper is organized as follows: In Section 2, we discuss the observations, data reduction, and processing. In Section 3, we describe our measurements and investigate the group environment and giant nebula properties. 
In Section 4, we investigate the origin of the nebula and the physical conditions of the gas. In Section 5, we summarize our findings and discuss their implications. Throughout the paper, we adopt a flat \(\Lambda\) cosmology with \(\Omega_{\rm m}=0.3\), \(\Omega_{\Lambda}=0.7\), and \(H_{0}=70\) km s\({}^{-1}\)Mpc\({}^{-1}\). All magnitudes are given in the AB system unless otherwise stated. ## 2 Observations and Data The \(z\approx 0.63\) quasar HE 0238\(-\)1904 has high-quality archival UV _HST_ absorption spectra used to study the CGM of the Milky Way (Zheng et al., 2019; Bish et al., 2021) and distant galaxies (Muzahid et al., 2018; Lehner et al., 2018) in addition to a highly ionized, fast outflow from the quasar itself (Muzahid et al., 2012; Arav et al., 2013). To identify faint foreground galaxies in the quasar field, we observed it with MUSE as part of the Quasar-field Blind Emitter Survey (MUSE-QuBES; Muzahid et al., 2020; Dutta et al., 2023) on the Very Large Telescope (VLT; PI: J. Schaye, PID: 094.A-0131(B) & 096.A-0222(A)). MUSE is an integral-field spectrograph on the UT4 VLT with a field of view (FoV) of \(1^{\prime}\times 1^{\prime}\) and a spaxel size of \(0.2^{\prime\prime}\) in wide-field mode (WFM). MUSE covers the spectral range between 4750 A to 9350 A and a resolution of \(R\sim 3000\). The MUSE observations are centered near the quasar sightline, and we obtained eleven exposures collected between November 18th, 2014 and February 2nd, 2016 with a total exposure time of 8.75 hr with median seeing full-width-at-half-maximum (FWHM) conditions of \(0.7^{\prime\prime}\). At the redshift of HE 0238\(-\)1904, the MUSE FoV corresponds to a projected size of \(\approx 400\) proper kpc (pkpc) on a side, and the spectral coverage includes emission lines such as [O II], H\(\beta\), and [O III]. These emission lines enable sensitive studies of any ionized nebulae and galaxies in the quasar's environment. To ensure robustness of results, we analyzed the MUSE data reduced through three independent pipelines including CubEx (Cantalupo et al., 2019), the MUSE GTO team pipeline (Weilbacher et al., 2014), and the ESO reduction pipeline (Weilbacher et al., 2012) and found consistent results with all three. All three pipelines include bias subtraction, flat fielding, wavelength calibration, geometric calibration, sky subtraction, flux calibration, and stacking of exposures. For the ESO reductions, we obtained the final, stacked datacube from the ESO Science Archive and performed additional post-processed sky subtraction with the Zurich Atmosphere Purge package (ZAP; Soto et al., 2016). For simplicity, we converted the air wavelengths delivered by the three pipelines to vacuum. To enable more sensitive and higher angular resolution photometric measurements of galaxies in the quasar field, we also obtained an image from the Advanced Camera for Surveys (ACS) on the _Hubble Space Telescope (HST)_ with the F814W filter (PI: L. Straka, PID: 14660) with a total exposure time of 2182 seconds split between four dithered exposures. We obtained the reduced, stacked image from the Barbara A. Mikulski Archive for Space Telescopes (MAST). In addition, to measure the UV luminosity of the quasar, we obtained the archival UV spectrum from the Cosmic Origins Spectrograph (COS; Green et al., 2012) from MAST. The spectrum consists of a total exposure time of 14400 seconds and 7496 seconds in the G130M and G160M gratings, respectively (PI: J. Green and S. Pentton, PID: 11541 and 12505). 
We reduced and coadded the COS spectrum following procedures outlined in Johnson et al. (2015); Chen et al. (2020).

### Quasar Light Subtraction

HE 0238\(-\)1904 has a Gaia (Gaia Collaboration et al., 2018) \(G\)-band magnitude of \(m_{G}=15.2\), and this brightness combined with the broad wings of the MUSE point spread function (PSF) causes contamination of nearby galaxy spectra with quasar light. This contamination includes both continuum and line emission due to the unresolved narrow-line region in the nucleus. To study faint extended emission, we removed the contamination by performing quasar light subtraction as described in Helton et al. (2021). In summary, our method of quasar light subtraction does not rely on PSF measurements. Instead, it uses spectral information and the fact that quasars and galaxies have different spectral energy distributions (see also Rupke et al., 2017; Chen et al., 2023). In ground-based observations, the Earth's atmosphere scatters bluer photons more than redder ones so that the PSF is wider at bluer wavelengths. The differential scattering makes the spectral slope observed in a spaxel depend on the angular separation from the quasar, with steeper (shallower) slopes further from (closer to) the quasar centroid. To account for this, we used a two-component non-negative matrix factorization (NMF; Blanton & Roweis, 2007; Ren et al., 2018) of the quasar light, with one component having a shallow slope and a second having a steep slope. Adding a third or fourth NMF component did not noticeably improve the results. In general, the spectrum for each spaxel near the quasar has some light from the quasar and potentially nearby galaxies as well. To subtract quasar light while avoiding subtraction of galaxy light, we fit each spaxel with a linear combination of the two non-negative quasar components and the first two Sloan Digital Sky Survey-Baryon Oscillation Spectroscopic Survey (SDSS-BOSS) galaxy eigenspectra (Bolton et al., 2012) and then subtracted the quasar component of the model. Unlike with some other systems (e.g., Johnson et al., 2018), the host of HE 0238\(-\)1904 does not exhibit bright, extended starlight, so the contribution inferred by the galaxy model was not significant.

## 3 Measurements and Environment

### Quasar Properties

HE 0238\(-\)1904 is a luminous, radio-quiet quasar (Veron-Cetty & Veron, 2006; Arav et al., 2013). To ensure self-consistent measurements of the quasar properties, we estimated its redshift, luminosity, and black hole mass using the MUSE spectrum extracted via MPDAF (Bacon et al., 2016) with an \(r=3\arcsec\) aperture. To measure the systemic redshift of the quasar, we fit the [O II]\(\lambda\lambda 3727,3729\) doublet with a Gaussian profile following Hewett & Wild (2010) and found \(z=0.6282\pm 0.0002\), where the uncertainty represents the scatter between the [O II] centroid and stellar absorption lines of SDSS quasars at similar redshift. This redshift is \(\approx 500\) km s\({}^{-1}\) from a previously reported Mg II-based estimate from Wisotzki et al. (2000). Even so, a more recent Mg II-based redshift of \(z=0.628\) from Monroe et al. (2016) confirms our [O II]-based redshift estimate. In general, quasar redshifts measured from the [O II] doublet are more accurate than those measured from broad lines like Mg II, as we argue in Section 4.1.
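The quoted offset between the literature Mg II redshift and the [O II]-based systemic redshift follows directly from the two redshift values; a minimal check in plain Python, using only the redshifts given above, is:

```python
# Velocity offset between the Mg II-based redshift of Wisotzki et al. (2000)
# and the [O II]-based systemic redshift adopted here.
c_kms = 299792.458             # speed of light in km/s

z_mgii = 0.631                 # literature Mg II emission-based redshift
z_oii = 0.6282                 # [O II]-based systemic redshift (this work)

# Offset of the Mg II redshift relative to the [O II] systemic frame
dv = c_kms * (z_mgii - z_oii) / (1.0 + z_oii)
print(f"dv = {dv:+.0f} km/s")  # ~ +515 km/s, i.e. approximately +500 km/s
```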
In addition, we estimated the bolometric luminosity and the black hole mass of HE 0238\(-\)1904 by fitting the extracted MUSE spectrum with the Python QSO fitting code (PyQSOFit; Guo et al., 2019). PyQSOFit fits a quasar's spectrum with a combination of a power-law continuum, Fe II template, and sets of Gaussian line profiles for both the broad- and narrow-lines. We modelled the H\(\beta\) and [O III] spectral region with the continuum components, three Gaussian profiles for the broad H\(\beta\), and two for the narrow H\(\beta\) and [O III]. From the fit, we computed a monochromatic luminosity at 5100A \(\lambda L_{5100}\approx 1.6\times 10^{46}\) erg s\({}^{-1}\) and a bolometric luminosity of \(L_{\rm bol}\approx 1.7\times 10^{47}\) erg s\({}^{-1}\) using the bolometric correction factor from Richards et al. (2006). Finally, we inferred a black hole mass of \(M_{\rm BH}\approx 10^{9.8}\) M\({}_{\odot}\) using the single-epoch virial theorem-based approach from Vestergaard & Peterson (2006). Following Kormendy & Ho (2013), this black hole mass corresponds to a stellar mass of \(M_{\star}\approx 10^{12.0}\) M\({}_{\odot}\) for the host galaxy, but we caution this stellar mass may be significantly overestimated due to uncertainty in single-epoch virial theorem-based black hole masses and observed scatter in the black hole mass-stellar mass relation. For example, if the true black hole mass is \(1\sigma\) below the mean single-epoch virial theorem estimate, and the stellar mass is \(1\sigma\) below the estimate from the black hole mass-stellar mass relation, the inferred stellar mass would be \(M_{\star}\approx 10^{11.4}\) M\({}_{\odot}\). Furthermore, the single-epoch virial theorem-based relation used here is not calibrated for quasars as luminous as HE 0238\(-\)1904, which may drive disk wind, erroneously inflating the black hole mass estimate. The fitted quasar spectrum is shown in Figure 1. ### Galaxy Measurements and Properties To study the environment of HE 0238\(-\)1904, we conducted a galaxy survey by first identifying all continuum sources in MUSE and the ACS+F814W image. We identified continuum sources by running Source Extractor (SE; Bertin & Arnouts, 1996) on a median MUSE white light image and the _HST_ image separately. To ensure completeness, we also added sources based on visual inspection. Figure 1: MUSE spectrum of HE 0238\(-\)1904 overplotted with best-fit models. The MUSE spectrum is shown as a solid black line, the power-law continuum model is shown as a dashed purple line, and the iron template model is shown using a solid blue line. The bottom left inset panel shows the [O II] line emission with the best-fit continuum+line model shown in red. The top right inset panel shows the H\(\beta\) and [O III] emission with the best-fit shown in red. We measured the systemic redshift of the quasar from the [O II] doublet, and inferred the black hole mass from the H\(\beta\) broad component and the continuum luminosity at 5100Å as described in detail in Section 3.1. Typically, sources are missing from MUSE due to biased background estimation caused by bright objects in the field or due to blending. Based on the background sky standard deviation and source counts in the ACS+F814W image, the imaging catalog is complete for objects brighter than \(m_{\rm F814W}\approx 26-27\), depending on angular size. For each identified object, we extracted a MUSE spectrum with MPDAF with a circular aperture of \(r=0.7^{\prime\prime}\), which is roughly the size of the MUSE seeing FWHM. 
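For illustration, the aperture extraction just described amounts to summing all spaxels within a fixed radius of the source centroid at each wavelength. A minimal numpy sketch is given below; the cube dimensions and source position are stand-ins, and the actual extraction in this work uses MPDAF.

```python
import numpy as np

# Sum spaxels within r = 0.7" of a source position at every wavelength.
# `cube` (nwave, ny, nx), `x0`, and `y0` are placeholders for the reduced
# MUSE datacube and a galaxy centroid.
pixscale = 0.2                      # MUSE WFM spaxel size in arcsec
radius_pix = 0.7 / pixscale         # 0.7" aperture radius in pixels

nwave, ny, nx = 3681, 300, 300      # illustrative MUSE-like dimensions
cube = np.zeros((nwave, ny, nx))    # stand-in for the reduced datacube
x0, y0 = 150.0, 150.0               # stand-in source centroid (pixels)

yy, xx = np.mgrid[0:ny, 0:nx]
in_aperture = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius_pix ** 2

# Aperture spectrum: sum of all spaxels inside the aperture per wavelength
spectrum = cube[:, in_aperture].sum(axis=1)
```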
The choice of this modest aperture may result in some wavelength-dependent aperture losses but helps increase S/N for redshift estimation. We then fit each spectrum as a linear combination of SDSS galaxy eigenspectra as described in Helton et al. (2021) to measure the source redshift. In summary, we computed the best-fit linear combination on a grid from \(z=0\) to \(z=1\) with a step size of \(\Delta z=0.0001\) and recorded the goodness-of-fit statistic (\(\chi^{2}\)) over the entire grid. We adopted the redshift with the minimum global \(\chi^{2}\) as our initial solution. We then visually inspected each best-fit model to ensure robustness and assigned the redshift quality. For galaxies with both emission and absorption lines, we masked out strong emission lines and measured the redshift based on stellar absorption features when possible to avoid a potential bias in redshift from large-scale nebulae in the field (which may not be closely associated with the galaxies in question).

Figure 2: _HST_ ACS+F814W image of the field of HE 0238\(-\)1904. The full image has a FoV of \(1.5^{\prime}\times 1.5^{\prime}\). The larger dashed box shows the \(1^{\prime}\times 1^{\prime}\) MUSE FoV. The smaller dashed box marks the \(30^{\prime\prime}\times 30^{\prime\prime}\) region displayed in Figure 4. The LOS velocities of galaxies relative to the quasar are denoted with outlining colors and the corresponding colorbar is shown on the bottom left. The histogram in the bottom right inset panel shows the velocity distribution of galaxies where galaxies in both orange and purple outlined regions are plotted separately. We note that the orange and purple regions and corresponding histograms are only for visualization. The two-Gaussian fitting of the velocity distribution does not rely on any spatial information. Galaxies in the quasar host environment are marked with black circles and labeled by their IDs. The approximate stellar mass weighted group center is marked with a white asterisk while the weighted centers of the richer, redshifted group and less rich, blueshifted group are marked with red and blue asterisks, respectively. Based on spatial distribution and kinematics, HE 0238\(-\)1904 resides in a massive, rich environment potentially consisting of two galaxy groups which may be merging.

Finally, we classified our confidence in the redshift measurements based on the number of the detected spectral features. All of the galaxies in the quasar environment have two or more spectral features except for G11 and G18. According to Helton et al. (2021), the uncertainty in galaxy redshifts measured in MUSE spectra with these techniques is \(\sigma\approx 20\,\mathrm{km\,s^{-1}}\). Comparing the continuum source catalog and the corresponding redshift measurements, the redshift survey is approximately 100% complete for sources brighter than \(m_{\mathrm{F814W}}\approx 24\) and approximately 95% complete for those brighter than \(m_{\mathrm{F814W}}\approx 25\). For comparison, an \(L_{*}\) galaxy at \(z\approx 0.6\) has \(m_{\mathrm{F814W}}\approx 20.6\) assuming the luminosity function from Faber et al. (2007). The high completeness of the galaxy survey at faint magnitudes enables us to study the origins of nebulae, even if they arise from interactions involving relatively faint dwarf galaxies. To examine properties of the quasar host environment, we identified candidate group members based on their LOS velocities relative to the quasar (\(\Delta v=v-v_{\mathrm{QSO}}\)).
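A minimal numpy sketch of the grid-based redshift search described above is given below; `wave`, `flux`, `ivar`, `rest_wave`, and `eigenspectra` are placeholders for an observed spectrum, its inverse variance, and rest-frame SDSS eigenspectra.

```python
import numpy as np

# Grid-based redshift search: at each trial redshift, model the spectrum as a
# linear combination of redshifted eigenspectra and record the chi-square.
def chi2_at_z(z, wave, flux, ivar, rest_wave, eigenspectra):
    # shift the templates to the trial redshift and resample onto `wave`
    templates = np.array([np.interp(wave, rest_wave * (1 + z), t)
                          for t in eigenspectra])
    # weighted linear least squares for the template coefficients
    w = np.sqrt(ivar)
    coeffs, *_ = np.linalg.lstsq((templates * w).T, flux * w, rcond=None)
    model = coeffs @ templates
    return np.sum((flux - model) ** 2 * ivar)

def best_redshift(wave, flux, ivar, rest_wave, eigenspectra):
    zgrid = np.arange(0.0, 1.0, 1e-4)        # dz = 0.0001, as in the text
    chi2 = np.array([chi2_at_z(z, wave, flux, ivar, rest_wave, eigenspectra)
                     for z in zgrid])
    return zgrid[np.argmin(chi2)], chi2      # initial solution + full grid
```

The group-membership selection described next then reduces to a simple velocity cut on the resulting redshifts.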
In particular, we selected galaxies with \(|\Delta v|<2000\)\(\mathrm{km\,s^{-1}}\). We inferred the physical properties of the selected galaxies with Bagpipes (Carnall et al., 2018, 2019). Bagpipes performs stellar population synthesis (SPS) with a stellar evolution model from Bruzual & Charlot (2003), an initial mass function from Kroupa (2001), and the Bayesian inference package Multinest (Buchner et al., 2014; Feroz et al., 2009, 2019). We fit both spectroscopic and photometric data simultaneously with Bagpipes. Many of the galaxies in our sample only have one photometric datapoint available, necessitating the use of the spectra to further inform the stellar population synthesis. In our fitting procedure, we assumed an exponential star formation history with an e-folding time scale of \(0.01<\tau/\mathrm{Gyr}<8.00\), solar stellar metallicity, and a dust attenuation model from Calzetti et al. (2000) with \(0<A_{V}/\mathrm{mag}<2\). The choice of exponentially declining star formation histories enables more direct comparison with surveys such as MUSE-Wide (Urrutia et al., 2019) and the MUSE Ultra DEEP Field (Fossati et al., 2019). We introduced a second-order multiplicative polynomial to reconcile potential artificial differences between the SED measured in photometry and in the spectra. This polynomial accounts for systematic uncertainty in the MUSE flux due to wavelength-dependent aperture losses and uncertainty in the flux calibration (Weilbacher et al., 2020). We also used Bagpipes spectrum noise scaling to allow the relative weighting of the photometry and spectrum to be a nuisance parameter. We note that the results are not sensitive to this scaling in our case (see Carnall et al., 2019). In addition to the ACS+F814W photometry, we also included \(grizY\) photometric data from the Dark Energy Survey (DES; Abbott et al., 2021) available for 16 galaxies. The resulting stellar mass estimates and dust attenuation \(A_{V}\) values are reported in Table 1. The stellar masses have associated systematic uncertainties of \(\approx 0.2\) dex. Galaxies close to the quasar (G1-G7) are contaminated by the quasar light, and we used the quasar-light-subtracted spectra for Bagpipes fitting when possible. Galaxies G1, G3, G11, G13, G18, G20, and G31 do not have a stellar mass estimate because their continua are too faint or are too badly contaminated by the quasar continuum.

Figure 3: MUSE galaxy spectra with the best-fit spectral models. The MUSE spectrum is shown by a solid black line. The uncertainty is shown by a solid grey line. The best-fit model used for redshift measurement is shown by a solid red line.

To further characterize these galaxies, we also report the 4000 Å break strength (D4000; Gallazzi et al., 2005) and rest-frame \(B\)-band absolute magnitude with \(K\)-corrections calculated using templates from Coleman et al. (1980) chosen based on the strength of the 4000 Å break. The IDs, galaxy coordinates (R.A., Decl.), redshifts, ACS+F814W apparent magnitudes, absolute \(B\)-band magnitudes, adopted K-correction templates (S0, Scd, or Irregular), and D4000 measurements are reported in Table 1, along with the angular distances, projected distances, and LOS velocity differences from the quasar sightline. The locations of these galaxies are shown in Figure 2 and several example MUSE spectra are overplotted with their best-fit PCA spectral models in Figure 3. An interactive view of the galaxy environment and spectra is available online1.
Footnote 1: [http://zhuoqiliu.com/HE0238-1904.html](http://zhuoqiliu.com/HE0238-1904.html) ### The Galactic Environment In the MUSE field of HE 0238\(-\)1904 we identified 35 galaxies, including the quasar host, with LOS velocities \(|\Delta v|<2000\) km s\({}^{-1}\) of the quasar systemic velocity, which is sufficient to encompass most members of even massive galaxy clusters. Figure 2 shows a \(1.5\arcmin\times 1.5\arcmin\) FoV image from the ACS+F814W observations of the field where \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline ID & R.Aa & Decl.b & \(z\)c & \(m_{\rm F814W}\)d & \(M_{B}\)e & \(K\)-correction & D4000 & \(A_{V}\) & \(\log{(M_{*}/{\rm M}_{\odot})}\)f & \(\Delta\rho\)f & \(d\)h & \(\Delta\nu\)f \\ & (J2000) & (J2000) & & (AB) & (AB) & template & (mag) & & (′′) & (pkpc) & (km s\({}^{-1}\)) \\ \hline Host & 02:40:32.58 & \(-\)18:51:54. & 0.6282 &... &... &... &... &... &... & 0.0 & 0.0 & 0 \\ G1 & 02:40:32.63 & \(-\)18:51:55. & 0.6278 & 24.3 & \(-\)17.5 & S0 & 1.26 \(\pm\) 0.57 &... & 9.3 & 4.4 & 30.4 & -76 \\ G2 & 02:40:32.73 & \(-\)18:51:47. & 0.6270 & 23.3 & \(-\)18.5 & S0 & 1.56 \(\pm\) 0.08 & 0.1 & 9.5 & 4.8 & 32.7 & -224 \\ G3b & 02:40:32.74 & \(-\)18:51:55. & 0.6280 & 23.8 & \(-\)18.3 & Irr &... &... &... & 9.6 & 5.0 & 34.3 & -40 \\ G4 & 02:40:32.57 & \(-\)18:51:56. & 0.6284 & 24.9 & \(-\)17.3 & Irr & 1.05 \(\pm\) 0.07 & 0.2 & 8.3 & 5.4 & 36.7 & +34 \\ G5 & 02:40:32.71 & \(-\)18:51:57.0 & 0.6280 & 25.2 & \(-\)17.0 & Irr & 0.64 \(\pm\) 0.08 & 0.1 & 7.4 & 5.9 & 40.1 & -40 \\ G6 & 02:40:32.96 & \(-\)18:51:54. & 0.6295 & 22.4 & \(-\)19.4 & S0 & 1.35 \(\pm\) 0.02 & 0.1 & 10.1 & 6.1 & 41.5 & +237 \\ G7 & 02:40:33.04 & \(-\)18:51:53. & 0.6275 & 23.8 & \(-\)18.0 & S0 & 1.30 \(\pm\) 0.04 & 0.0 & 9.3 & 6.9 & 46.9 & -132 \\ G8 & 02:40:32.21 & \(-\)18:51:58. & 0.6284 & 21.8 & \(-\)20.0 & S0 & 1.62 \(\pm\) 0.02 & 0.2 & 10.4 & 9.1 & 61.9 & +34 \\ G9 & 02:40:33.44 & \(-\)18:51:50.7 & 0.6330 & 23.8 & \(-\)18.1 & S0 & 1.49 \(\pm\) 0.05 & 0.2 & 9.7 & 12.2 & 82.2 & +882 \\ G10 & 02:40:33.53 & \(-\)18:51:48. & 0.6323 & 20.0 & \(-\)21.9 & S0 & 1.71 \(\pm\) 0.01 & 0.8 & 11.5 & 13.8 & 94.3 & +753 \\ G11 & 02:40:32.37 & \(-\)18:51:37.6 & 0.6302 &... &... &... &... &... &... &... & 14.1 & 96.3 & +360 \\ G12 & 02:40:32.00 & \(-\)18:51:39. & 0.6297 & 21.4 & \(-\)20.4 & S0 & 1.64 \(\pm\) 0.02 & 0.2 & 10.6 & 14.1 & 96.5 & +274 \\ G13 & 02:40:32.28 & \(-\)18:52:04.9 & 0.6272 &... &... &... &... &... &... & 14.2 & 97.0 & -187 \\ G14 & 02:40:33.17 & \(-\)18:51:37.9 & 0.6310 & 22.6 & \(-\)19.2 & S0 & 1.37 \(\pm\) 0.03 & 0.7 & 10.0 & 15.8 & 108.0 & +513 \\ G15 & 02:40:33.62 & \(-\)18:51:43.2 & 0.6253 & 24.8 & \(-\)17.0 & S0 & 1.99 \(\pm\) 0.22 & 0.4 & 9.0 & 16.8 & 115.0 & -537 \\ G16 & 02:40:31.85 & \(-\)18:52:05. & 0.6279 & 23.8 & \(-\)18.0 & S0 & 1.98 \(\pm\) 0.16 & 1.1 & 9.5 & 17.5 & 119.8 & -58 \\ G17 & 02:40:33.75 & \(-\)18:51:45. & 0.6332 & 22.7 & \(-\)19.1 & S0 & 1.57 \(\pm\) 0.03 & 0.6 & 10.1 & 17.6 & 120.3 & +919 \\ G18 & 02:40:33.53 & \(-\)18:51:39.6 & 0.6332 &... &... &... &... &... &... & 17.9 & 121.9 & +922 \\ G19 & 02:40:33.69 & \(-\)18:52:00.1 & 0.6358 & 22.2 & \(-\)19.7 & S0 & 1.60 \(\pm\) 0.02 & 0.4 & 10.3 & 18.0 & 122.9 & +1398 \\ G20 & 02:40:31.97 & \(-\)18:52:07.9 & 0.6271 &... &... &... &... &... &... 
& 18.8 & 128.1 & -205 \\ G21 & 02:40:33.48 & \(-\)18:51:36.9 & 0.6341 & 22.1 & \(-\)19.7 & S0 & 1.26 \(\pm\) 0.02 & 1.4 & 10.3 & 19.3 & 131.8 & +1084 \\ G22 & 02:40:31.34 & \(-\)18:52:02.5 & 0.6268 & 23.0 & \(-\)18.9 & S0 & 1.66 \(\pm\) 0.05 & 0.5 & 10.1 & 20.9 & 142.8 & -261 \\ G23 & 02:40:33.76 & \(- we marked the quasar with a grey star and labelled galaxies with circles as well as their ID. The color of the circle represents the LOS velocity of each galaxy relative to the quasar. Additionally, we display the \(1^{\prime}\times 1^{\prime}\) MUSE FoV, and a smaller \(30^{\prime\prime}\times 30^{\prime\prime}\) region which is the focus of later figures in this work. Among the 35 galaxies in the environment of HE 0238\(-\)1904, four (two) exhibit stellar masses of \(\log(M_{*}/\mathrm{M}_{\odot})>10.5\) (\(>11\)) (excluding the quasar), indicating a significant overdensity and likely a massive group. To further characterize the environment, we show the distribution of galaxies' LOS velocities relative to the quasar (\(\Delta v=v-v_{\mathrm{QSO}}\)) in the bottom right panel of Figure 2. The LOS velocity distribution peaks around \(-100\,\mathrm{km\,s^{-1}}\) but exhibits a non-Gaussian tail toward higher velocity of \(+100\,\mathrm{km\,s^{-1}}\) to \(+1400\,\mathrm{km\,s^{-1}}\). There is a clear trend between LOS velocity and location on the sky visible in Figure 2 with galaxies with \(\Delta v>0\,\mathrm{km\,s^{-1}}\) largely falling North East of the quasar and those with \(\Delta v<0\,\mathrm{km\,s^{-1}}\) falling near the quasar or South West of it. To better visualize the location\(-\)velocity trend, we divided the field into two regions, one NE of the quasar and one SW of it. The NE (SW) one is marked by an orange (purple) trapezoid in Figure 2. We also show the LOS velocity distribution of the galaxies in each trapezoidal region by the corresponding histograms in the inset panel in Figure 2. The peak and the tail in the histogram correspond closely to these two regions respectively. The non-Gaussian LOS velocity distribution and correlation with spatial location suggests that the overdensity near the quasar host may consist of two distinct, but possibly interacting, galaxy groups. To quantify the velocity dispersions of these two potential groups, we fit two Gaussians to the entire LOS velocity distribution. This results in one narrow, blueshifted Gaussian and one broader, redshifted one. The blueshifted Gaussian has a mean LOS velocity of \(\Delta v_{\mathrm{group}}=-99\pm 25\,\mathrm{km\,s^{-1}}\) and a 1D velocity dispersion of \(\sigma_{\mathrm{group}}=92\pm 50\,\mathrm{km\,s^{-1}}\) and includes \(\approx 35\%\) of the galaxies near HE 0238\(-\)1904. The redshifted Gaussian has \(\Delta v_{\mathrm{group}}=629\pm 140\,\mathrm{km\,s^{-1}}\) and \(\sigma_{\mathrm{group}}=506\pm 90\,\mathrm{km\,s^{-1}}\) and includes \(\approx 65\%\) of the galaxies. In both cases, the uncertainty estimates are based on bootstrap resampling. While the Gaussian fitting did not include any spatial information, the two Gaussians closely match the purple and orange velocity histograms formed from a spatial separation (see Figure 2). These fitting results suggest that the environment around the quasar includes one massive group at \(\Delta v_{\mathrm{group}}\approx 600\,\mathrm{km\,s^{-1}}\) and one less massive group closer to the quasar velocity. 
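The two-Gaussian decomposition just described, and the dispersion-based mass estimate used in the following paragraph, can be sketched as below. The text does not specify the fitting tool, so a Gaussian mixture model and synthetic velocities are used purely for illustration, and the \(\sigma_{\rm 1D}\)–\(M_{200}\) calibration (\(A_{\rm 1D}\approx 1177\) km s\({}^{-1}\), \(\alpha\approx 0.364\)) is the one given by Munari et al. (2013).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two-component decomposition of the galaxy LOS velocity distribution.
# `dv` stands in for the measured velocities (km/s) relative to the quasar;
# here it is drawn from the two Gaussians quoted in the text for illustration.
rng = np.random.default_rng(0)
dv = np.concatenate([rng.normal(-99, 92, 12),     # blueshifted group
                     rng.normal(629, 506, 22)])   # redshifted group

gm = GaussianMixture(n_components=2, random_state=0).fit(dv.reshape(-1, 1))
means = gm.means_.ravel()                   # ~ -100 and ~ +600 km/s
sigmas = np.sqrt(gm.covariances_).ravel()   # ~ 90 and ~ 500 km/s

# Velocity dispersion -> M200 using the sigma_1D calibration of
# Munari et al. (2013): sigma_1D = 1177 km/s * [h(z) M200 / 1e15 Msun]**0.364
def m200_from_sigma(sigma, z=0.6282, h0=0.7, om=0.3):
    hz = h0 * np.sqrt(om * (1 + z) ** 3 + (1 - om))
    return (sigma / 1177.0) ** (1 / 0.364) * 1e15 / hz

print(f"{m200_from_sigma(506.0):.1e}")      # ~1e14 Msun for sigma = 506 km/s
```

For \(\sigma\approx 506\) km s\({}^{-1}\) this returns \(\sim 10^{14}\,\mathrm{M}_{\odot}\), in line with the dynamical mass quoted below for the richer, redshifted group.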
Assuming each group is virialized, we estimate dynamical masses of \(M_{\mathrm{dyn}}\sim 9.8\times 10^{13}\,\mathrm{M}_{\odot}\) and \(M_{\mathrm{dyn}}\sim 5.7\times 10^{11}\,\mathrm{M}_{\odot}\) (Munari et al., 2013) for the richer, redshifted group and less rich, blueshifted group, respectively. To place a lower limit on the mass estimate, we fit a single Gaussian to galaxies with \(\Delta v>200\,\mathrm{km\,s^{-1}}\). We found a velocity dispersion of \(\approx 400\,\mathrm{km\,s^{-1}}\), corresponding to a mass of \(M_{\mathrm{dyn}}\sim 3.8\times 10^{13}\,\mathrm{M}_{\odot}\). The mass range of \(M_{\mathrm{dyn}}\approx 4\times 10^{13}-10^{14}\,\mathrm{M}_{\odot}\) is consistent with a massive group or a modest-mass cluster. However, we caution that the assumption that the groups are virialized introduces additional uncertainty given the complex environment. Finally, in Figure 2, we show the stellar mass weighted group center as a white asterisk, and membership-weighted (\(\frac{P_{\mathrm{blue/red}}}{P_{\mathrm{blue}}+P_{\mathrm{red}}}\)) centers as red and blue asterisks for the richer, redshifted group and less rich, blueshifted group, respectively.

To test the expectation that dynamically more massive groups will contain more massive galaxies, we investigate the most massive galaxies in each group. G8 and G22 are the most massive galaxies in the less rich, blueshifted group, with stellar masses of \(\log(M_{*}/\mathrm{M}_{\odot})=10.4\) and 10.1, respectively. On the other hand, the richer, redshifted group includes two massive elliptical galaxies, G10 and G34, with \(\log(M_{*}/\mathrm{M}_{\odot})=11.5\) and 11.2, respectively. Furthermore, the richer, redshifted group contains a massive disc galaxy, G33, with \(\log(M_{*}/\mathrm{M}_{\odot})=10.8\). This is consistent with HE 0238\(-\)1904 residing in an overdense region likely made of two groups with the redshifted one being richer and more massive. However, the quasar redshift falls between the centroids of the two groups, indicating that it could arise in either group or truly be located between them. Despite the large uncertainty in the stellar mass of the quasar host galaxy (see Section 3.1), the large black hole mass suggests it is a massive galaxy, possibly the largest in the overdensity around HE 0238\(-\)1904. It is therefore more probable that HE 0238\(-\)1904 resides in the richer, redshifted group. Nonetheless, we cannot completely rule out the possibility that HE 0238\(-\)1904 originates from the less rich, blueshifted group. In either case, the dynamically rich and likely unrelaxed environment could result in galaxy interactions that can produce giant nebulae via ram pressure and tidal stripping.

### Nebular Environment

Due to ionizing radiation from the accretion disk, wide-field IFS observations of quasar fields often find large nebulae (Johnson et al., in prep). To search for nebulae around HE 0238\(-\)1904, we conducted continuum subtraction of the datacube locally for the [O II], H\(\beta\), and [O III] emission lines around the quasar. For continuum fitting near each of the three lines, we masked the spectral region within \(\pm 500\)\(-\)1000 km s\({}^{-1}\) of the expected observed wavelength at the quasar's redshift. We fine-tuned the masked region individually for each of the three lines to avoid skyline contamination and to account for the larger width of the [O II] doublet.
For each spaxel in the masked datacube, we then fit a third-order polynomial to the continuum regions around each line and subtracted the best-fit model to complete the continuum subtraction. This continuum-subtracted MUSE datacube enabled the discovery of a giant ionized nebula in [O II], H\(\beta\), and [O III] around HE 0238\(-\)1904 with a total area of \(\approx 5000\) kpc\({}^{2}\), which is visualized in Figure 4. This nebula surrounds the quasar with projected radii of \(d\approx 30\) to 50 kpc and with LOS velocities of \(\Delta v\approx-250\) to \(+250\) km s\({}^{-1}\) from the quasar. The nebula is more extended to the South East and the South West of the quasar. The South East extension of the nebula is spatially coincident with galaxies G1, G3, G4, and G5. Additionally, the tail extending South West of the quasar is distinct from but approximately in the direction of G8. To examine the nebula and any relationship with galaxies in the quasar environment, we show [O II] and [O III] emission contours over the HST image in panel (a) of Figure 4. We also display a nebular LOS velocity map in panel (b) and a [O III]\(/\)[O II] line ratio map in panel (c). We constructed these two maps by jointly fitting Gaussian line profiles to the continuum-subtracted [O II], H\(\beta\), and [O III] datacubes. Instead of fitting the spectrum of each individual spaxel, we averaged over circular apertures of \(r=1^{\prime\prime}\) to enhance S/N. We chose this aperture radius based on experimentation to visualize even faint parts of the nebula. These two maps provide an opportunity to study the spatial dependence of the kinematics and the ionization state of the gas. In addition, we show three panels of narrowband images generated from the continuum-subtracted datacubes for each of [O II] and [O III] in velocity ranges of \(-300\) to \(-100\) km s\({}^{-1}\), \(-100\) to \(+100\) km s\({}^{-1}\), and \(+100\) to \(+300\) km s\({}^{-1}\) in panels (d)-(f) and (g)-(i), respectively.

Figure 4: Visualizations of the nebula discovered around HE 0238\(-\)1904. Panel (a): HST ACS+F814W image of the field. Galaxies are circled in black and labelled with their IDs. Panel (b): map of the nebular LOS velocity relative to the quasar systemic velocity. Galaxies are circled in black and colored with their velocities. Panel (c): map of nebular photoionization shown as the line ratio [O III] \(\lambda 5008/\)[O II] \(\lambda\lambda 3727+3729\). Panels (d)-(f) and (g)-(i): narrow-band [O II] and [O III] surface brightness maps extracted from the MUSE datacube over the velocity intervals labelled in each panel. The inset panel in Panel (h) shows a zoomed, unsmoothed map around G3 and G5 to emphasize the possible existence of a tidal tail. These maps are overlaid with [O II] and [O III] surface brightness contours at levels of 0.08 and \(0.3\times 10^{-17}\) erg cm\({}^{-2}\) s\({}^{-1}\) arcsec\({}^{-2}\). The contours shown in panel (e) and panel (h) are overlaid on the HST image in blue and red respectively. We note that surface brightness maps and contours are smoothed with 3 pixel kernels. A version of this figure with the region circles marked in every velocity panel is available online1.

The nebula exhibits an irregular morphology but with a spatial trend in kinematics. In particular, the region North of the quasar is redshifted relative to the quasar and has a LOS velocity of \(\Delta v=0-250\,\mathrm{km\,s^{-1}}\).
The region South of the quasar including the tail to the West is mainly blueshifted relative to the quasar but with a small redshifted region in the most Southern points. This southern region is spatially and potentially kinematically coincident with G1, G3, G4, and G5. However, the continua of these galaxies are too faint to measure stellar absorption-based redshifts. This raises the possibility that their nebular spectra may be contaminated by the surrounding nebulae, resulting in a biased redshift measurement. In the case of G3 and G4, the line width of the nebular emission near the galaxies is significantly narrower than the more extended emission from nearby parts of the giant nebula, indicating that the galaxy line emission likely arises in the ISM of the two dwarfs. The nebula also shows a spatial trend in the ionization-state-sensitive [O III]\(/\)[O II] line ratio. The majority of the nebula is [O II] dominated, but the region South East of the quasar has greater [O III] emission, particularly at a few [O III] knots near G1, G3, and G5. The knots near G3 and G5 have the highest surface brightness in the nebula. Furthermore, the bright region extending to the South of the brightest knot near G3 is reminiscent of a tidal tail.

To better explore the properties of the nebula, we selected several representative regions in it and extracted their full spectra to infer physical conditions from both strong ([O II], H\(\beta\), and [O III]) and weak lines ([Ne V]\(\lambda 3427\), H\(\delta\), H\(\gamma\), [O III]\(\lambda 4364\), and He II\(\lambda 4687\)2). We picked the locations of these regions to cover a wide range in line ratios, surface brightness, and projected locations relative to the quasar. These regions are shown in panel (g) of Figure 4 and labelled with letters and numbers, where S# refers to regions with higher surface brightness, for which we used an extraction radius of \(0.7\arcsec\), while B# labels low surface brightness regions, which required a larger extraction radius (\(>1\arcsec\)) to achieve sufficient S/N.

Footnote 2: Other weak lines such as [Ne III]\(\lambda 3869\), He I\(\lambda 3889\) & H\(\epsilon\) are covered by MUSE but we do not use them in this work because of contaminating sky lines or blending with other lines.

To measure the emission properties for each region, we jointly fit the strong and weak emission lines described above with Gaussian profiles using LMFIT (Newville et al., 2014). For each region, all fitted lines share the same redshift and velocity width, but line fluxes are free parameters except for cases with line ratios set by atomic physics (e.g., [O III]\(\lambda 4960\) and [O III]\(\lambda 5008\)). In most cases, a single set of Gaussians is enough to describe the emission line profiles, except for S3, S4, and B4, which require a second set of Gaussians to account for broader (\(\sigma\approx 100\)-\(170\,\mathrm{km\,s^{-1}}\)) emission wings. Such emission wings are often seen around luminous quasars due to quasar-driven outflows (Heckman et al., 1981; Liu et al., 2013), but the wings on S3, S4, and B4 may also be due to projection effects. We summarize the measurements for these regions, including their distances from the quasar, extraction radii, line fluxes, LOS velocities, and 1-D velocity dispersions, in Table 2. We display strong and weak line spectra as well as their best-fit models in Figures 5 and 6, respectively, for a representative subset of the regions.
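As an illustration of the joint line fitting described above, a minimal LMFIT sketch for the [O III] doublet alone is given below. The shared kinematics are enforced through parameter expressions and the \(\lambda 5008/\lambda 4960\) flux ratio is fixed to its atomic value of \(\approx 2.98\); `wave` and `flux` stand in for an extracted region spectrum, and the actual fits also include [O II], H\(\beta\), the weak lines, and a second kinematic component for S3, S4, and B4.

```python
from lmfit.models import GaussianModel

# Joint Gaussian fit of [O III] 4960, 5008 with shared redshift and velocity
# width; the doublet amplitude (i.e. flux) ratio is fixed by atomic physics.
z_guess = 0.628
o3a = GaussianModel(prefix='o3a_')    # [O III] 4960
o3b = GaussianModel(prefix='o3b_')    # [O III] 5008
model = o3a + o3b

params = model.make_params()
params['o3b_center'].set(value=5008.24 * (1 + z_guess))
params['o3b_sigma'].set(value=2.0, min=0.5)       # width in Angstrom
params['o3b_amplitude'].set(value=1.0, min=0)
# Tie the 4960 component to the 5008 component: same redshift, same velocity
# width (sigma scales with wavelength), amplitude ratio fixed to ~1/2.98.
params['o3a_center'].set(expr='o3b_center * 4960.30 / 5008.24')
params['o3a_sigma'].set(expr='o3b_sigma * 4960.30 / 5008.24')
params['o3a_amplitude'].set(expr='o3b_amplitude / 2.98')

# result = model.fit(flux, params, x=wave)   # flux, wave: extracted spectrum
```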
## 4 Discussion As discussed in Section 3.3, the environment of HE 0238\(-\)1904 is overdense and includes a massive galaxy group or cluster. Based on clustering studies, this environment is richer than those of most radio-quiet systems, but consistent with expectation for radio-loud ones. This demonstrates that radio-quiet systems like HE 0238\(-\)1904 are diverse in terms of their host environment. Nevertheless, the lack of detected radio emission and amorphous morphology of the nebula suggests that it is not jet related. Considering that most published giant nebulae at \(z<1\) are in a rich environments, the presence of giant nebulae might be correlated with group properties. A larger sample size of quasars with wide IFS observations is required to investigate this possibility. Alternatively, such a rich environment can be explained by variable radio quasars. Quasars are capable of changing from radio-quiet to radio-loud or vice versa. Nyland et al. (2020) found 26 sources showing radio variability over timescales of decades from the SDSS DR14 \begin{table} \begin{tabular}{l c c c c c c c c c} \hline ID & Distancea & Extraction & [O II] & H\(\beta\) & [O III] & [Ne V] & [O III] & He II & \(\Delta v\)b & \(\sigma\)c \\ & (kpc) & radius & \(\lambda\lambda 3727+3729\) & \(\lambda 5008\) & \(\lambda 3346\) & \(\lambda 4364\) & \(\lambda 4687\) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) \\ & & (\(\arcsec\)) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & (\(10^{-17}\,\mathrm{erg}\) & \\ & & s\({}^{-1}\)\({}^{-2})\) & s\({}^{-1}\)\({}^{-2}\)) & s\({}^{-1}\)\({}^{-2}\)) & s\({}^{-1}\)\({}^{-2}\)) & s\({}^{-1}\)\({}^{-2}\)) & \\ \hline S1 & 45 & 0.7 & \(1.73\pm 0.05\) & \(0.69\pm 0.06\) & \(9.17\pm 0.05\) & \(0.15\pm 0.03\) & \(0.21\pm 0.02\) & \(<0.21\) & \(-11\pm 3\) & \(62\pm 4\) \\ S2 & 36 & 0.7 & \(3.55\pm 0.08\) & \(1.14\pm 0.14\) & \(23.48\pm 0.10\) & \(0.37\pm 0.05\) & \(0.40\pm 0.04\) & \(0.35\pm 0.11\) & \(-55\pm 3\) & \(43\pm 4\) \\ S3 & 25 & 0.7 & \(<0.30\) & \(<0.27\) & \(6.27\pm 0.22\) & \(<0.15\) & \(<0.09\) & \(<0.18\) & \(-107\pm 3\) & \(61\pm 4\) \\ S3\({}_{\rm uing}\) & 25 & 0.7 & \(2.90\pm 0.10\) & \(0.73\pm 0.09\) & \(2.44\pm 0.22\) & \(<0.18\) & \(<0.12\) & \(<0.21\) & \(-14\pm 9\) & \(104\pm 5\) \\ S4 & 17 & 0.7 & \(1.34\pm 0.18\) & \(0.28\pm 0.08\) & \(3.39\pm 0.10\) & \(<0.09\) & \(<0.15\) & \(-114\pm 3\) & \(45\pm 4\) \\ S4\({}_{\rm uing}\) & 17 & 0.7 & \(4.17\pm 0.20\) & \(0.52\pm 0.09\) & \(3.14\pm 0.12\) & \(<0.27\) & \(<0.15\) & \(<0.27\) & \(+12\pm 8\) & \(169\pm 6\) \\ S5 & 9 & 0.7 & \(5.96\pm 0.28\) & \(0.77\pm 0.26\) & \(2.51\pm 0.22\) & \(<0.84\) & \(<0.51\) & \(<0.78\) & \(+8\pm 11\) & \(140\pm 11\) \\ S6 & 20 & 0.7 & \(5.04\pm 0.07\) & \(1.47\pm 0.12\) & \(14.03\pm 0.07\) & \(0.15\pm 0.05\) & \(0.22\pm 0.04\) & \(0.34\pm 0.09\) & \(-62\pm 3\) & \(68\pm 4\) \\ S7 & 29 & 0.7 & \(0.99\pm 0.04\) & \(0.18\pm 0.06\) & \(0.63\pm 0.04\) & \(<0.09\) & \(<0.06\) & \(<0.18\) & \(-72\pm 8\) & \(111\pm 8\) \\ S8 & 18 & 0.7 & \(2.33\pm 0.04\) & \(0.52\pm 0.06\) & \(1.98\pm 0.04\) & \(<0.09\) & \(<0.06\) & \(<0.15\) & \(-119\pm 4\) & \(89\pm 4\) \\ S9 & 11 & 0.7 & \(3.71\pm 0.16\) & \(1.10\pm 0.15\) & \(2.56\pm 0.13\) & \(<0.45\) & \(<0.27\) & \(<0.39\) & \(+173\pm 7\) & \(110\pm 7\) \\ S10 & 15 & 0.7 & \(1.96\pm 0.05\) & \(0.47\pm 0.05\) & \(1.58\pm 0.04\) & \(<0.12\) & \(<0.09\) & \(<0.15\) & \(+58\pm 4\) & \(79\pm 5\) \\ B1 & 49 & 1.4 & \(1.14\pm 0.08\) & \(0.89\pm 
0.12\) & \(2.21\pm 0.0 Figure 5: Examples of nebular spectra (stronger lines) and best-fit spectral models for multiple regions. The locations of these regions are shown as circles and labelled by their IDs in Figure 4. The extracted spectrum is shown as solid black lines and the error array is shown as grey lines. The best-fit models are shown as red solid lines. In most nebular regions, we detected strong emission lines such as [O II], H\(\beta\), and [O III]. Figure 6: Examples of nebular spectra (fainter lines) and best-fit spectral models for multiple regions. The locations of these regions are shown as circles and labelled by their IDs in Figure 4. The plotting style is as described in Figure 5. Only in the most luminous nebular regions, we detected weak emission lines such as [Ne V]\(\lambda\)3427, H\(\delta\), H\(\gamma\), [O III]\(\lambda\)4364, and He II\(\lambda\)4687. quasar catalog (Paris et al., 2018) and the Wide-field Infrared Survey Explorer (WISE; Wright et al., 2010) R90 quasar catalog (Assef et al., 2018). These sources, once considered radio-quiet quasars, now meet the criteria for radio-loud ones. It implies that the probability that any particular radio-quiet quasar becomes radio-loud on the light-crossing timescale of the nebula is approximately 1%. However, the presence of a massive group and nebula mean that HE 0238\(-\)1904 is not a representative quasar and so may be more likely to transition to radio-loud relatively soon. On the other hand, the possibility that HE 0238\(-\)1904 was previously radio-loud and is now radio-quiet is harder to address since such transitions are not well studied. In the following subsections, we discuss insights into the physical origins and state of the giant nebula which includes analyses of density and ionization-state sensitive diagnostic emission lines. Several of these analyses require priors on the dust content and density of the gas. To investigate dust content, we estimate Balmer line ratios, and find H\(\delta\)/H\(\gamma\) ratios of \(\approx 0.55\). These ratios are consistent with Case B recombination (Osterbrock and Ferland, 2006) in the absence of dust. To obtain density estimates, we infer emission measure of the nebula from the surface brightness of H\(\beta\) following Chen et al. (2019). Assuming H\(\alpha\)/H\(\beta\approx 3\), a clumping factor of 1, and length-scale 30 pkpc, we found an electron density of \(\log(n_{\rm e}/{\rm cm}^{-3})\approx-1\). However, this density estimate has a large uncertainty and is effectively a lower limit due to the assumption of a unity clumping factor. ### Origin of the Nebular Gas Giant nebulae can be produced via ram pressure and tidal stripping, AGN and stellar feedback, or filamentary accretion. The nebula around HE 0238\(-\)1904 is unlikely to arise from a jet-driven outflow given the fact that the quasar is radio-quiet and exhibits no detectable radio jet. While S3 and S4 exhibit broad emission wings, most regions are well characterized by a single Gaussian profile with narrow velocity dispersion (\(\sigma<120\) km s\({}^{-1}\); see Table 2). These quiescent kinematics are inconsistent with the broad velocity dispersion expected from radio-quiet AGN and stellar feedback (Liu et al., 2013; Rupke et al., 2019). In addition, the morphology is inconsistent with expectations for filamentary accretion (Johnson et al., 2022). 
On the other hand, the nebula is spatially and kinematically coincident with likely interacting galaxies in the field of HE 0238\(-\)1904, suggesting that stripping from interactions is likely responsible for most of the nebula with possible subdominant contributions from outflows. The nebula spatially surrounds the Host, G1, G3, G4, and G5, and extends to the South West of the quasar to a projected distance of \(d\sim 70\) pkpc. This spatial coincidence suggests that the nebula likely arises from interaction-related stripping. The dwarf galaxies G3 and G5 show a possible tidal-tail-like structure as shown in panels (e) and (h) of Figure 4, suggesting that this part of the nebula might be created from tidal stripping. In addition to this, the emission maps on larger scales resemble a head-tail morphology with the head around the quasar and with the tail extending to the South West of the quasar. Head-tail morphologies are commonly seen in nebulae originating from ram-pressure-stripped ISM (e.g., Poggianti et al., 2016; Boselli et al., 2019; Chen et al., 2019). Interestingly, while the nebula exhibits a head-tail morphology, it does not exhibit multiple filaments like some "jellyfish" galaxies observed in the optical line emission. Instead, it resembles the smoother emission profile sometimes seen in ram-pressure debris observed in H I 21-cm (Hess et al., 2017). There are two plausible explanations for ram pressure stripping in the environment of HE 0238\(-\)1904. First, the nebula may arise from stripping of the quasar host's ISM and CGM if it is falling into the richer, redshifted group and passing through the associated hot halo. Second, dwarf galaxies may have travelled through the hot halo of the massive group from West to East, leaving their ram-pressure-stripped ISM and CGM behind along their path.

The discovery of a giant nebula requires both the presence of gas and its positioning within the quasar's ionization cone. However, due to projection effects, the relative position between the quasar and the nebula remains uncertain. The two previously mentioned hypotheses provide potential frameworks. (1) If the gas results from stripping of the quasar host's ISM, the nebula is likely to surround the quasar. In this case, it will naturally be illuminated by the quasar. Alternatively, (2) if the nebula arises from the stripped CGM/ISM of other galaxies in the overdensity, the gas will be widely distributed throughout the groups and more distant from the quasar. Only a fraction of this gas might coincidentally fall within the quasar's ionization cone, consistent with the large opening angle suggested by Trainor & Steidel (2013); Borisova et al. (2016); Schmidt et al. (2018); den Brok et al. (2020). To distinguish between these scenarios, we show the surface brightness profiles of [O II] and [O III] made with Photutils (Bradley, 2023) in Figure 7. The profile of [O II] declines smoothly as a function of radius, and plateaus at \(\approx 50\) pkpc. In contrast, the [O III] profile exhibits a shallower drop due to the bright knots seen in the narrow-band images. The plateau in the [O II] profile corresponds to the head-tail morphology of the nebula, and the bright knots hint at a dwarf-related origin for part of the nebula. Collectively, the [O II] and [O III] profiles suggest a complex scenario. The centroids of the narrow-band [O II] and [O III] surface brightness maps are 10 and 19 pkpc away from the quasar, respectively, an alignment to within 15% of the size of the nebula.
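The circularly averaged profiles in Figure 7 can be built with Photutils annular photometry. A minimal sketch is given below, with a stand-in narrow-band map and quasar centroid; the annulus widths used here are illustrative rather than the exact ones adopted for the figure.

```python
import numpy as np
from photutils.aperture import CircularAnnulus, aperture_photometry

# Circularly averaged surface brightness profile around a fixed centroid.
# `image` is a placeholder for a continuum-subtracted narrow-band map
# (flux per pixel) and (x0, y0) for the quasar centroid in pixel units.
pixscale = 0.2                                   # arcsec per pixel
image = np.zeros((300, 300))                     # stand-in narrow-band map
x0, y0 = 150.0, 150.0

radii_arcsec = np.arange(1.0, 8.0, 1.0)          # annulus edges in arcsec
profile = []
for r_in, r_out in zip(radii_arcsec[:-1], radii_arcsec[1:]):
    ann = CircularAnnulus((x0, y0), r_in / pixscale, r_out / pixscale)
    total = aperture_photometry(image, ann)['aperture_sum'][0]
    area_arcsec2 = np.pi * (r_out**2 - r_in**2)  # annulus area in arcsec^2
    profile.append(total / area_arcsec2)         # mean surface brightness
```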
This coincidence could be explained if the gas surrounds the quasar or if the quasar's ionization cone is fairly well centered on our LOS. However, the significant contributions of individual dwarf galaxies to the [O III] surface brightness profile underscore the challenge in precisely determining the nebula's position relative to the quasar. Consequently, it is plausible that both scenarios (1) and (2) contribute to the nebula.

Figure 7: Emission line surface brightness profile for the nebula around HE 0238\(-\)1904. The [O II] and [O III] profiles are extracted over a velocity interval of \(-600\) to \(600\) km s\({}^{-1}\), and are circularly averaged at different distances from the quasar centroid. The profile of [O II] declines smoothly as a function of radius, while the [O III] profile exhibits a shallower drop due to the bright knots seen in the narrow-band images.

The giant nebula around HE 0238\(-\)1904 was independently discovered and reported by Zhao & Wang (2023). They attributed the gas to a superbubble driven by the quasar based on an apparent large velocity shift between the nebula and the quasar redshift, as well as broad line widths reported near the quasar. However, the large velocity shift is due to the reliance on an older, Mg II-based redshift of \(z=0.631\), which is \(\approx+500\) km s\({}^{-1}\) from our [O II]-based redshift of \(z=0.6282\). Rather than relying on a redshift estimate from the literature, we measured the quasar redshift and kinematics of the giant nebula from the same MUSE dataset to avoid any systematic uncertainty due to wavelength calibration errors. Moreover, quasar redshifts based on [O II] are generally more accurate than those measured from Mg II due to the narrowness of the line and lack of blueshifted wings on [O II]. In particular, quasar redshifts measured from [O II] trace the underlying quasar host redshifts measured in stellar absorption to within \(\approx\pm 20\) km s\({}^{-1}\) (Hewett & Wild, 2010). Finally, our redshift estimate of \(z=0.6282\) is more consistent with the centroid of the broad H\(\beta\) line, aligns with the peak of the quasar's [O III] emission line, and matches a more recent Mg II-based redshift of \(z=0.628\) from the UV-bright Quasar Survey (Monroe et al., 2016). Furthermore, we measured significantly narrower line widths near the quasar. This is likely due to our removal of [O III] and [O II] emission from the unresolved narrow-line emission region of the quasar, while Zhao & Wang (2023) only removed emission from the broad-line region. In summary, the modest velocity shifts and largely narrow emission line widths are consistent with much of the gas originating from interactions with more minor possible contributions from an outflow. When using the updated quasar redshift and quasar-light-subtracted datacube, we find no evidence for a fast, quasar-driven superbubble in the system.

### Physical Conditions of the Emitting Gas

Previous studies of giant nebulae have attributed the ionization of the gas to ionizing photons from AGN, shocks, and young stellar populations (e.g., Johnson et al., 2018; Rupke et al., 2019; Chen et al., 2019; Helton et al., 2021; Zhang et al., 2023). The presence of the quasar suggests the source of ionization is AGN-related.
To study the physical conditions of the gas, we measured the density- and temperature-sensitive [O II]\(\lambda 3729/\)[O II]\(\lambda 3727\) and [O III]\(\lambda 4364/\)[O III]\(\lambda 5008\) line ratios as well as ionization-state-sensitive strong and weak line ratios in each region. These line ratio measurements are reported in Table 2, and the [O III]/[O II] map is shown in panel (c) of Figure 4. We discuss these measurements and their implications in the following three subsections.

#### 4.2.1 Direct Density and Temperature Estimates

With spectral coverage of [O II]\(\lambda 3727\), [O II]\(\lambda 3729\), [O III]\(\lambda 4364\), and [O III]\(\lambda 5008\), we can directly measure electron density (\(n_{\rm e}\)) and temperature (\(T_{\rm e}\)), as discussed in Osterbrock & Ferland (2006). The [O II] doublet is a good density estimator because the difference in excitation energy between these two upper states is small, so that the relative population in the two states is determined by electron density and is insensitive to temperature. In contrast, the [O III] doublet upper states have a larger excitation energy difference, making the populations of these states mainly sensitive to electron temperature and insensitive to electron density. Electron number densities from the [O II] doublet are reasonable proxies for the overall densities of ionized nebulae because H and O share similar ionization energies of \(13.6\) eV. To translate line ratios into physical conditions, we used Pyneb (Luridiana et al., 2015), which predicts the [O II] and [O III] line ratios at a given density and temperature by solving the detailed balance equation for an \(n\)-level atom. We fit the measured line ratios with Pyneb models by performing Markov chain Monte Carlo (MCMC) analysis with emcee (Foreman-Mackey et al., 2013), and inferred physical conditions from the resulting posteriors. We report the densities in Table 3, though we omit measurements in cases where the S/N or broad line width results in poorly constrained conditions. For all regions where the [O II] doublet is resolved, the line ratio is in the low density limit except for S6. We therefore report 95% upper limits in density for all but S6. The inferred electron number density upper limits span \(1.2<\log(n_{\rm e,[O\,{\rm II}]}/{\rm cm}^{-3})<2.8\), with a median of \(\log(n_{\rm e,[O\,{\rm II}]}/{\rm cm}^{-3})<1.6\). These density upper limits are consistent with gas arising from ionized ISM (Draine, 2011) or CGM. We detected [O III]\(\lambda 4364\) in only three luminous regions, S1, S2, and S6. The inferred temperatures for S1, S2, and S6 are \(\log(T/{\rm K})\approx 4.2\), 4.2, and 4.1, respectively.

#### 4.2.2 Indirect Density Estimates from Photoionization Simulations

Under the assumption that the nebula is ionized by the quasar, its ionization states are set by the luminosity of the quasar, density of the gas, and distance from the quasar, with secondary effects from metallicity and ionizing spectral shape. With an estimate of the quasar's luminosity and assuming projection effects are negligible, the density structure of the gas can be inferred from measured line ratios (see Cantalupo et al., 2019). Studies of high-redshift quasar nebulae found that the ionization states can only be explained by a density of \(\log(n_{\rm H}/{\rm cm}^{-3})\approx 1.9\), significantly higher than expected CGM/IGM densities, or alternatively by a broad density distribution (see Cantalupo et al., 2019).
At low redshift, this kind of scenario can be further explored with insight from rest-optical lines to compare ionization-based densities with more direct density estimates from the [O II] doublet. To infer the physical conditions from the line ratios in Table 2, we ran photoionization simulations for each region with Cloudy version C17.03 (Ferland et al., 2017). We modelled the quasar's radiation field using a power law (\(I\propto\nu^{\alpha}\)) between 0.37 and 73.5 Ryd, with \(\alpha\) between \(-1.8<\alpha<0\) following Groves et al. (2004) but extending to a higher \(\alpha\). We set the modeled quasar luminosity at 1 Ryd using direct measurement of the monochromatic UV luminosity from COS. For the gas, we adopted single density and single metallicity models, with density of \(-2<\log(n_{\rm H}/{\rm cm}^{-3})<4.6\) and metallicity of \(-1.5<\log(Z/Z_{\odot})<0.5\). We chose this metallicity range to cover the characteristic metallicities of the cool CGM around massive elliptical galaxies (Zahedy et al., 2019) but extended it to higher metallicity in case some gas has ISM origins. Due to limited ion coverage, metallicity and \(\alpha\) are degenerate in some cases, so we treated them as nuisance parameters and focused on inferred densities. We note that there is relatively little degeneracy between density and metallicity except at high metallicities of \(\log(Z/Z_{\odot})>0.2\) when increased cooling from metal lines begins to substantially change the equilibrium temperature. For each region, we conducted these models in grids with a step of 0.2 dex in density and metallicity, and 0.2 in \(\alpha\). We then interpolated these models with the RegularGridInterpolator function from scipy.interpolate(Virtanen et al., 2020) within these ranges after checking for convergence. Finally, we ran emcee to estimate posteriors given the measured line ratios and uncertainties. We verified the quality of the fits by comparing the posteriors of the model line ratios with the measured line ratios using violin plots shown in Figure 9. The violin plots verify that the ionization-state-sensitive line ratios (shown in the middle panels) are consistent with the measured line ratios. The best-fit \(\alpha\) values for most regions are within \(-1.0<\alpha<-0.6\), somewhat greater than ones given in Groves et al. (2004). Inferred metallicities for S1, S2, and S6, with He II and [Ne V] detections, are well-constrained to be \(-0.2<\log(Z/Z_{\odot})<0.2\). The densities inferred from these photoionization simulations range from \(\log(n_{\rm H,Cloudy}/{\rm cm}^{-3})=1.6\) to 4.2 and are reported in the right column of Table 3, though we stress that these densities neglect potential quasar variability and projection effects. #### 4.2.3 Comparison of the Density Estimates Previous photoionization-based estimates of the density of quasar nebulae at high-redshift found unexpectedly high densities, close to or exceeding typical densities for the ISM, despite being measured on CGM/IGM scale (Cantalupo et al., 2019). The ionization sensitive line ratios of the nebula around HE 0238\(-\)1904 also imply high photoionization-based densities of \(1.6<\log(n_{\rm H,\ Cloudsy}/{\rm cm}^{-3})<4.2\). However, the more direct [O II]-based densities are inconsistent with and significantly smaller than the photoionization-based densities for most regions as shown in Table 3. 
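The grid-interpolation and MCMC machinery described above can be sketched as follows; the Cloudy grid values, the measured line ratios, and their uncertainties are all placeholders here, and the flat priors simply mirror the parameter ranges quoted in the text.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
import emcee

# Interpolate a grid of Cloudy-predicted line ratios and sample the posterior
# on (log n_H, log Z, alpha) for one region with emcee.
grid_logn = np.arange(-2.0, 4.8, 0.2)      # log(n_H / cm^-3)
grid_logZ = np.arange(-1.5, 0.7, 0.2)      # log(Z / Z_sun)
grid_alpha = np.arange(-1.8, 0.2, 0.2)     # ionizing power-law slope
n_ratios = 4                               # number of line ratios used
grid_ratios = np.zeros((grid_logn.size, grid_logZ.size,
                        grid_alpha.size, n_ratios))   # stand-in Cloudy grid

interps = [RegularGridInterpolator((grid_logn, grid_logZ, grid_alpha),
                                   grid_ratios[..., i])
           for i in range(n_ratios)]
obs = np.zeros(n_ratios)                   # measured log line ratios
err = np.ones(n_ratios)                    # measurement uncertainties

def log_prob(theta):
    logn, logZ, alpha = theta
    if not (-2 < logn < 4.6 and -1.5 < logZ < 0.5 and -1.8 < alpha < 0):
        return -np.inf                     # flat priors over the model grid
    model = np.array([f([[logn, logZ, alpha]])[0] for f in interps])
    return -0.5 * np.sum(((obs - model) / err) ** 2)

nwalkers, ndim = 32, 3
p0 = np.column_stack([np.random.uniform(-1, 3, nwalkers),
                      np.random.uniform(-1, 0.4, nwalkers),
                      np.random.uniform(-1.7, -0.1, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
# sampler.run_mcmc(p0, 5000)   # -> posteriors on density, metallicity, alpha
```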
To better demonstrate this inconsistency, Figure 9 shows both the measured line ratios and the posteriors inferred from the photoionization models for S2, S6, and S9. The ionization-state-sensitive line ratios are consistent with the model posteriors for all three regions, while the [O II] line ratios are highly discrepant for S6 and S9. The right panel of each subfigure shows the density posteriors from both direct and indirect density estimates. As shown in Table 3, we found that all regions with photoionization-based density estimates except S1, S2, B1, and B3 have a large (1\(-\)2 dex) discrepancy when compared to the [O II] doublet-based densities. In the most extreme case, S5, the two density estimates are off by 2.6 dex or a factor of 400.

In principle, the inferred density mismatch could be explained by a non-uniform density distribution if the [O II] arises from less dense gas than the other emission lines. To test whether a more complicated density structure could explain the density mismatch, we modeled the emitting gas as a multi-phase system consisting of one low-density component and one high-density component, with the relative contribution of each treated as an additional free parameter. This model successfully reproduces the observed emission-line ratios, and the density inferred for the high-density component matches the single-phase model results. Furthermore, the posteriors of the two-component model indicate that the high-density component dominates the [O II] emission. Therefore, a two-phase model cannot explain the density discrepancy between the direct [O II]-based density measurements and the ionization-state-based density estimates.

To test if a broad, continuous density distribution can explain the discrepancy, we modelled the emitting gas with a log-normal density distribution (see Cantalupo et al., 2019). A log-normal distribution is defined as

\[{\rm PDF}(n)\,{\rm d}n=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\Big[-\frac{[\ln(n)-\ln(\mu)]^{2}}{2\sigma^{2}}\Big]\,{\rm d}\ln(n) \tag{1}\]

where \(\sigma\) is the dispersion and \(\mu\) is the mean density. We started by calculating emission-line emissivities in an extended Cloudy model grid, similar to the ones discussed in Section 4.2.2. We then computed the predicted line ratios for a log-normal density distribution by interpolating Cloudy models and integrating over the PDF. Our results show that a log-normal distribution with a large \(\sigma\) can reproduce the ionization-sensitive line ratios, but the log-normal models predict that the [O II] emission arises from dense gas, resulting in [O II] line ratios of \(\log(\frac{\lambda 3729}{\lambda 3727})=-0.4\) to \(-0.1\), inconsistent with the observed ratios of \(\log(\frac{\lambda 3729}{\lambda 3727})>0.1\). Therefore, a broad density distribution is unlikely to reconcile the density discrepancy.
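A minimal sketch of this log-normal test is given below, with a stand-in for the gridded Cloudy emissivities: the predicted ratio of two lines is simply the ratio of their PDF-weighted emissivities integrated over \({\rm d}\ln(n)\), following Equation (1).

```python
import numpy as np

# Predicted line ratio for a log-normal density distribution: weight the
# Cloudy emissivities with the PDF of Equation (1) and integrate over ln(n).
def lognormal_pdf(ln_n, ln_mu, sigma):
    return (np.exp(-(ln_n - ln_mu) ** 2 / (2 * sigma ** 2))
            / (np.sqrt(2 * np.pi) * sigma))

def emissivity(line, log_n):
    # stand-in for an interpolation of the gridded Cloudy emissivities j(n_H)
    return np.ones_like(log_n)

def predicted_ratio(line1, line2, ln_mu, sigma):
    ln_n = np.linspace(ln_mu - 6 * sigma, ln_mu + 6 * sigma, 2000)
    w = lognormal_pdf(ln_n, ln_mu, sigma)
    log_n = ln_n / np.log(10.0)
    # PDF-weighted emissivities, integrated over d ln(n)
    j1 = np.trapz(emissivity(line1, log_n) * w, ln_n)
    j2 = np.trapz(emissivity(line2, log_n) * w, ln_n)
    return j1 / j2

# e.g. predicted_ratio('OII3729', 'OII3727', ln_mu=np.log(10.0), sigma=2.0)
```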
Alternatively, projection effects can also result in disagreement \begin{table} \begin{tabular}{l c c c} \hline ID & \(\log(n_{\rm e,[O\,II]}/{\rm cm}^{-3})\)a & \(\log(m_{\rm H,Cloudy}/{\rm cm}^{-3})\)b & \(\log(U_{\rm Cloudsy})\)c \\ \hline S1 & \(<1.6\) & \(1.6_{-0.1}^{+0.1}\) & \(-2.2_{-0.1}^{+0.1}\) \\ S2 & \(<1.7\) & \(1.7_{-0.1}^{+0.1}\) & \(-2.1_{-0.1}^{+0.1}\) \\ S3 & — & — & — \\ S4 & — & — & — \\ S5 & \(<1.6\) & \(4.2_{-0.3}^{+0.2}\) & \(-3.0_{-0.3}^{+0.2}\) \\ S6 & \(1.8_{-0.1}^{+0.1}\) & \(2.7_{-0.1}^{+0.1}\) & \(-2.5_{-0.1}^{+0.1}\) \\ S7 & \(<1.9\) & \(3.0_{-0.3}^{+0.3}\) & \(-3.2_{-0.3}^{+0.3}\) \\ S8 & \(<1.3\) & \(3.5_{-0.2}^{+0.2}\) & \(-3.3_{-0.2}^{+0.2}\) \\ S9 & \(<2.3\) & \(4.1_{-0.3}^{+0.2}\) & \(-3.5_{+0.3}^{+0.2}\) \\ S10 & \(<1.4\) & \(3.6_{-0.2}^{+0.2}\) & \(-3.3_{-0.2}^{+0.2}\) \\ B1 & \(<2.8\) & \(2.1_{-0.2}^{+0.1}\) & \(-2.7_{+0.2}^{+0.1}\) \\ B2 & \(<1.2\) & \(2.9_{-0.3}^{+0.1}\) & \(-3.4_{+0.3}^{+0.1}\) \\ B3 & \(<2.5\) & \(1.9_{-0.2}^{+0.1}\) & \(-2.8_{+0.2}^{+0.1}\) \\ B4 & — & — & — \\ \hline \end{tabular} 1 \end{table} Table 3: Summary of nebula regions in the Field of HE 0238\(-\)1904. between the two density estimates. However, assuming that the gas is randomly and approximately spherically distributed around the quasar, the projected distance is unlikely to be much smaller than the radial distance between the quasar and the nebula. For example, producing a factor of 400 mismatch in density requires the radial distance to be 20 times larger than the projected distance. While such projection effects are possible in principle, the required contrived geometry is unlikely. In principle, the discrepancy in density could be explained if the nebula is not directly ionized by the quasar due to obscuring dust or translucent clouds blocking its light from reaching this gas. Filtering the quasar's radiation through dust would soften the incident ionizing radiation field. However, the best-fit \(\alpha\) values from our photoionization analysis suggests a hard ionizing spectrum for almost all regions. The hard inferred ionizing spectrum is inconsistent with expectations from a quasar SED filtered through dust clouds. Alternatively, translucent clouds of moderate optical thickness to ionizing photons can also filter the quasar's radiation. Depending on the density and the physical size, these clouds could produce distinct line ratios as a function of depth into the cloud (Liu et al., 2013). Typically, the outer parts of the cloud produce no significant [O II] or [O III] emission because oxygen is highly ionized. However, H\(\beta\) is a recombination line and so a non-negligible fraction of the H\(\beta\) emission arises from outer parts of the cloud that do not emit in [O II] or [O III]. As a result, translucent regions are expected to have stronger H\(\beta\) emission than [O II] and [O III]. Yet, none of the nebular regions have such \(\rm[\,O\,III]/H\beta\) ratio. If these translucent clouds exist around HE 0238\(-\)1904, they therefore must be blended with optically thick clouds due to seeing conditions and projection effects. The presence of unresolved translucent clouds could be investigated by observing the nebula with higher spatial resolution instruments such as NIRSpec on the JWST or with adaptive optics from the ground. Nevertheless, while translucent clouds may help reconcile the density discrepancy in some cases, moderate optical depth clouds can only absorb a modest portion of the quasar's radiation. 
Therefore, it is unlikely to explain the largest density discrepancies. On the other hand, the ionization of the nebulae could be due to young stellar populations (Morisset et al., 2015) or fast shocks (Allen et al., 2008). However, there is no evidence of extended star-formation in rest-frame \(u\)-band images of the system formed from the MUSE datacube. To investigate the possibility of fast shocks, we show two emission line diagnostic diagrams overlaid with shock models in a grid of shock velocity and magnetic field strength in Figure 8. Producing the observed [O III]/[O II] and [Ne V]/[O II]3 ratios requires shock velocities of \(v_{\rm shock}>250\rm\,km\,s^{-1}\)(Allen et al., 2008). These shock velocities are greater than the LOS velocity and velocity dispersion of the nebula in nearly all locations, even after accounting for projection effects. For example, some regions (S1 and S2) would require shock velocities exceeding \(1000\rm\,km\,s^{-1}\) and most regions (S3, S4, S6, S8, S10, B1, B2, B3, and B4) would require \(>300\rm-400\,km\,s^{-1}\), making them unlikely to be ionized by shocks. On the other hand, while the observed line ratios of S5, S7, and S9 favor AGN photoionization, large uncertainties in their H\(\beta\) flux can accommodate shocks with velocities as low as \(200\rm\,km\,s^{-1}\). This would alleviate the density discrepancy in these three regions. However, for most regions, the shock velocity required to reproduce the observed line ratios exceeds velocities observed in the system. Shocks are therefore unlikely to explain the density discrepancy in most cases. Footnote 3: We note that [Ne V]/[Ne III] as a better shock tracer cannot be used due to [Ne III]\(\lambda\)3869 is severely contaminated by skylines. Perhaps more likely, the difference in the density estimates could be due to quasar variability (Richstone & Oke, 1977). Quasar variability is directly observed on timescales of decades (Stone et al., 2022). Observations of "changing-look" AGN, light echoes, and quasar proximity zones suggest the average episodic lifetime of quasars may range from \(10^{4}\) to \(10^{7}\) years and AGN episodes may be highly clustered (e.g., Schirber et al., 2004; Goncalves et al., 2008; Kirkman & Tytler, 2008; Trainor & Steidel, 2013; Syhers & Shull, 2014; Schawinski et al., 2015; Comerford et al., 2017; Schmidt et al., 2018; Shen, 2021). Therefore, each region of the nebula around HE 0238\(-\)1904 may experience a drastically different radiation field from the quasar, depending on the light travel time. For example, S5 and S6 are at a projected distance of 10 to 20 kpc from the quasar, respectively, and their line ratios can be explained if the quasar was 400 and 10 times less luminous than currently observed. In contrast, S1 and S2 are at a projected distance of \(\approx 40\) kpc from the quasar, and their properties can be explained if they received ionizing radiation consistent with the current luminosity of the quasar. We confirmed that quasar variability could explain the ionization state and [O II] ratio by re-running Cloudy models and MCMC analysis after significantly decreasing the quasar luminosity. ## 5 Summary and Conclusions In this paper, we presented the first comprehensive analysis of a giant nebula around a radio-quiet quasar at \(z<1\) based on MUSE observations of the field of HE 0238\(-\)1904. 
The wide FoV, high spatial sampling, and wide wavelength coverage enabled us to investigate the origin and the physical condition of the group and gaseous environment with a spatially resolved analysis of the morphologies, kinematics, and nebular photoionization properties. Our finding can be summarized as follows. 1. We found that HE 0238\(-\)1904 resides in an overdense environment containing two potentially merging galaxy groups based on spatial distribution and kinematics. This includes a less rich, blueshifted group with 12 galaxies and a richer, redshifted group with 22 galaxies. Assuming the more massive group is virialized, its dynamical mass is \(M_{\rm dyn}\sim 4\times 10^{13}\)-\(10^{14}\)\(\rm\,M_{\odot}\). Such a massive, rich environment is unusual for a radio-quiet quasar, which typically resides in a halo with a mass of \(\sim 3\times 10^{12}\)\(\rm\,M_{\odot}\)(Shen et al., 2009). 2. We identified a giant nebula covering a projected area of \(\approx 5000\) kpc\({}^{2}\) around HE 0238\(-\)1904 emitting strongly in [O II], H\(\beta\), and [O III]. The nebula has an irregular morphology with a spatial trend in kinematics where the region North of the quasar is redshifted and the region South of the quasar is mainly blueshifted relative to the quasar. The southern region is spatially coincident with four dwarf galaxies. 3. The coincidence with nearby galaxies suggests that it arises from stripping of ISM or CGM, which is consistent with its morphology and largely narrow LOS velocity dispersion. In addition, the nebula shows a head-tail morphology with the head near the quasar and with the tail extending toward South West of the quasar. The head-tail structure may originate from ram pressure if the quasar and the surrounding nebula are infalling toward the massive galaxy group to the North East. However, we note there are some small regions at \(d\approx 20\) kpc from the quasar that have broader emission wings, perhaps suggesting an outflow origin. 4. To better characterize the physical conditions of the nebula, we measured the fluxes of strong and weak emission line fluxes. The inferred electron number density upper limits from the [O II] doublet range from \(\log(n_{\rm e,[O\,II]}/\rm cm^{-3})<1.2\) to 2.8, with a median of \(\log(n_{\rm e,[O\,II]}/\rm cm^{-3})<1.6\). These density upper limits are consistent with ISM or CGM origin. However, densities inferred from photoionization models are often inconsistent with the [O II]-based density upper limits, reaching values of up to 400 times higher. * The disagreement in density estimates is unlikely to be due to density inhomogeneities, but can be explained by quasar variability, if the quasar varied significantly on timescales of \(10^{4}\) to \(10^{5}\) years. This finding suggest that long-term quasar variability should be included when considering ionization-based inferences into the physical conditions of giant nebulae around quasars. The possibility of significant quasar variability on timescales of \(10^{4}\) to \(10^{5}\) years has implications far beyond accretion disk physics in the central engine. In particular, significant fluctuations on these timescales can result in out-of-equilibrium conditions in the low density circumgalactic medium due to the long recombination time of low density gas (Oppenheimer and Schaye, 2013; Segers et al., 2017). Indeed, such AGN "flickering" may be responsible for strong O VI absorption observed around Milky Way-like galaxies at low redshift (Oppenheimer et al., 2018). 
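The relevant timescales behind this argument follow from simple order-of-magnitude estimates (the numbers below are standard approximations rather than new measurements). The light-travel time from the quasar to gas at radius \(r\) sets the delay with which each region samples the quasar's past luminosity, \[t_{\rm lt}=\frac{r}{c}\approx 3.3\times 10^{4}\,{\rm yr}\,\Big(\frac{r}{10\,{\rm kpc}}\Big),\] so distances of \(\approx 10\)\(-\)\(40\) kpc correspond to delays of a few \(\times 10^{4}\) to \(\sim 10^{5}\) yr. Meanwhile, the hydrogen recombination time for case-B recombination at \(T\sim 10^{4}\) K (\(\alpha_{B}\approx 2.6\times 10^{-13}\,{\rm cm^{3}\,s^{-1}}\)) is \[t_{\rm rec}=\frac{1}{n_{e}\,\alpha_{B}}\approx 1.2\times 10^{5}\,{\rm yr}\,\Big(\frac{n_{e}}{1\,{\rm cm^{-3}}}\Big)^{-1},\] which for diffuse circumgalactic densities of \(n_{e}\sim 10^{-4}\)\(-\)\(10^{-2}\,{\rm cm^{-3}}\) greatly exceeds the inferred variability timescales, hence the out-of-equilibrium conditions noted above.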
The recent and upcoming commissioning of new IFSs on large telescopes, such as LLAMAS (Furesz et al., 2020), IFUM (Mateo et al., 2022), Blue MUSE (Richard, 2019), and MIRMOS (Konidaris et al., 2020), will continue to drive further discoveries of giant nebulae, which could be followed up with IFSs like HARMONI (Thatte et al., 2022) on future, 30-meter class telescopes, extending similar insights to higher redshifts and fainter systems. ## Acknowledgements SDJ and ZQL acknowledge partial support from HST-GO-15280.009-A, HST-GO-15298.007-A, HST-GO-15655.018-A, and HST-GO-15935.021-A. JIL is supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program. SC gratefully acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation programme grant agreement No 864361. This paper is based on observations from the European Organization for Astronomical Research in the Southern Hemisphere under ESO (PI: J. Schaye, PID: 094.A-0131(B) & 096.A-0222(A)), and the NASA/ESA Hubble Space Telescope (PI: L. Straka, PID: 14660; PI: J. Green, 11541; PI: S. Penton, PID: 12505). Additionally, this paper made use of the NASA/IPAC Extragalactic Database, the NASA Astrophysics Data System, Astropy (Astropy Collaboration et al., 2022), Aplpy (Robitaille and Bressert, 2012), and Photutils (Bradley, 2023). ## Data availability The data used in this paper are available from the ESO and HST data archives.
2302.14354
Deep Learning for Identifying Iran's Cultural Heritage Buildings in Need of Conservation Using Image Classification and Grad-CAM
The cultural heritage buildings (CHB), which are part of mankind's history and identity, are in constant danger of damage or in extreme situations total destruction. That being said, it's of utmost importance to preserve them by identifying the existent, or presumptive, defects using novel methods so that renovation processes can be done in a timely manner and with higher accuracy. The main goal of this research is to use new deep learning (DL) methods in the process of preserving CHBs (situated in Iran); a goal that has been neglected especially in developing countries such as Iran, as these countries still preserve their CHBs using manual, and even archaic, methods that need direct human supervision. Having proven their effectiveness and performance when it comes to processing images, the convolutional neural networks (CNN) are a staple in computer vision (CV) literacy and this paper is not exempt. When lacking enough CHB images, training a CNN from scratch would be very difficult and prone to overfitting; that's why we opted to use a technique called transfer learning (TL) in which we used pre-trained ResNet, MobileNet, and Inception networks, for classification. Even more, the Grad-CAM was utilized to localize the defects to some extent. The final results were very favorable based on those of similar research. The final proposed model can pave the way for moving from manual to unmanned CHB conservation, hence an increase in accuracy and a decrease in human-induced errors.
Mahdi Bahrami, Amir Albadvi
2023-02-28T07:14:15Z
http://arxiv.org/abs/2302.14354v1
Deep Learning for Identifying Iran's Cultural Heritage Buildings in Need of Conservation Using Image Classification and Grad-CAM+ ###### Abstract The cultural heritage buildings (CHB), which are part of mankind's history and identity, are in constant danger of damage or in extreme situations total destruction. That being said, it's of utmost importance to preserve them by identifying the existent, or presumptive, defects using novel methods so that renovation processes can be done in a timely manner and with higher accuracy. The main goal of this research is to use new deep learning (DL) methods in the process of preserving CHBs (situated in Iran); a goal that has been neglected especially in developing countries such as Iran, as these countries still preserve their CHBs using manual, and even archaic, methods that need direct human supervision. Having proven their effectiveness and performance when it comes to processing images, the convolutional neural networks (CNN) are a staple in computer vision (CV) literacy and this paper is not exempt. When lacking enough CHB images, training a CNN from scratch would be very difficult and prone to overfitting; that's why we opted to use a technique called transfer learning (TL) in which we used pre-trained ResNet, MobileNet, and Inception networks, for classification. Even more, the Grad-CAM was utilized to localize the defects to some extent. The final results were very favorable based on those of similar research. The final proposed model can pave the way for moving from manual to unmanned CHB conservation, hence an increase in accuracy and a decrease in human-induced errors. built cultural heritage conservation deep learning image processing convolutional neural networks (CNN) gradient weighted class activation mapping (Grad-CAM) Structural health monitoring transfer learning ## 1 Introduction Two main categories of Cultural Heritage (CH) are tangible and intangible heritages, and the CHBs fall under the former category. The tangible CHs have universal values which must be physically preserved for future generations as an irreplaceable legacy [1, 2]. CHBs are indubitably an integral part of the history and culture of human beings. Throughout the years many of these precious buildings have been in danger of damage due to several reasons, namely material deterioration, natural disasters, presence of visitors, vandalism, etc. [3, 4, 5]. Currently, the topic of CH has attracted increasing global attention from scientists and researchers alike, and the scope of its concept is constantly expanding. Most social scientists emphasize on its utility in supporting ethnic and national interests, while many others point to its creative and counter-hegemonic aspects [5, 6]. ### Importance Endowed with rich CHBs, Iran is ranked 10th in 2022, among all other countries, with 26 UNESCO world heritage sites [7]. Although only 26 of the CHBs in Iran have been registered in UNESCO and not all of them are buildings, the number of CHBs in Iran is of the order of thousands and according to archaeological findings, Iranian architecture dates back to 6,000-8,000 B.C. [8]. One of the reasons why Iran has been unsuccessful in registering more CHBs is the fact that most of these CHBs have not been preserved correctly, if not at all. Even some CHBs are beyond restoration. The CHBs, which fall under the category of immovable tangible CHs, demand more sophisticated methods for conservation since we cannot move them to museums to preserve. 
Lack of resources in terms of skilled practitioners, budget, and new technologies are just some of the shortcomings that introduce many problems in the conservation process. As regards the usage of state-of-the-art technologies, Iran as a developing country still uses archacion, and sometimes obsolete, manned methods to preserve these precious treasures of humanity. From a broader perspective, many CHBs around the world suffer from such problems as well, so the use of artificial intelligence (AI) techniques such as ML and DL is not a luxury anymore but a necessity. Using ML and DL, we can move toward unmanned conservation of CHB, hence an increase in accuracy and a decrease in human-induced error. ### Research Aim The aim of this paper was to develop a highly generalized, yet simple, deep learning pipeline for the identification of CHBs in need of preservation, which can be used even in poor countries. We achieved this by making our model as lightweight as possible using a wealth of novel methods, as not all countries have access to expensive resources. This mindset allows for having fewer data and processing power but still reaping satisfying results (Table 3). ### Contribution **Unprecedented in Iran:** To the best of our knowledge, and to our surprise, not even a single scientific research had been conducted using ML or DL in the conservation of Iran's CHBs. The body of research outside Iran is not so much either. according to Fiorucci et al. [9] the use of ML in CH literacy has been quite limited in contrast to other fields. We believe that more research in the intersection of AI and CH can change this situation and can pave the way for the prevalence of such techniques in the process of CHB conservation around the world and accrue many benefits to CHB literacy as well. **First-hand Complex Data:** We used first-hand data, which had been collected from different sources, as discussed in subsection 3.1. Using first-hand data is important in the sense that not only our experiment would be unprecedented in Iran but globally as well; since no known CHB dataset to date [9] can cover the diversity of types of buildings, types of defects, and color nuances of both Persian and Islamic architecture, like ours. **New combination of Methods:** This paper proposes an automated deep learning pipeline for identifying surface damage of CHBs. Having developing countries in mind, we used a combination of state-of-the-art methods to cater to their conservation needs with as little budget as possible. That said, the final deep learning pipeline, using a pre-trained MobileNet, can be run on low-cost devices, for instance a budget mobile phone, to make inference. * Image classification: define whether a CHB needs preservation or not. * MobileNet: a very lightweight CNN architecture, but with approximately the same performance as a lot of havier CNNs (e.g., ResNet and/or Inception). * Grad-CAM: to approximately localize the defects. * Transfer learning: to reap great results without the need for expensive servers or manpower to take copious images. * A valid data augmentation pipeline: allows the model to learn more features from the same data. * Compound regularization method: a combination of four regularization methods together, namely augmentation, dropout, L2 regularization, and batch normalization. ## 2 Related works Globally many attempts have been made to use deep learning for damage detection in CHB images. Wang et al. 
[10] used object detection (OD) with the aid of FasterR-CNN based on a ResNet101 CNN to detect damage in images of masonry buildings with bounding boxes. In another research, Wang et al. [11] used instance segmentation (IS), by the means of a Mask R-CNN model, for damage detection, using a masked colored layer, in glazed tiled CHBs. An interesting work by Pathak et al. [12] used Faster-RCNN to detect damage in CHBs, but with one major difference to other works. They used point clouds data, instead of images, as the input to their proposed model, and instead rendered point clouds as images which increased the versatility of their model, since capturing photogrammetry doesn't have the same limitations of manually taking photos. Expectedly, damage detection using deep learning is not limited to CHB literacy; for instance, Perez and Tah [13] used OD to detect defects on the images of modern buildings. As highly revered as OD and IS are, they have some downsides, namely (1) a time-consuming data labeling process with bounding boxes (for OD) or color annotation (for IS); (2) the need for a huge amount of accurately labeled data; (3) detecting only pre-specified types of defects; and (4) much higher computational complexity, in comparison with image classification. This is especially important in the case of developing countries (e.g., Iran), where budgets and resources are limited. That's why despite the prevalence of OD and IS in computer vision, many researchers opted to use the simpler image classification, where each image will be given a label as a whole, and the position of damage is not delineated. As an example, Perez et al. [14] used image classification and CAM layers to classify and localize defects. The downside of their work was not the use of image classification, but using cropped images, which would have been more suitable for object detection rather than image classification. The usage of image classification and deep learning has not been just for damage detection, but aspects of CHB can benefit from them, as was the case with Llamas et al. [15] who attempted to classify different architectural elements in historical buildings. In terms of methodology, we followed the footsteps of Llamas et al. [15] and Perez et al. [14] by using image classification over OD and/or IS. Although our work is different in terms of the details of methodology and data. Unlike them, we used data augmentation and a combination of four regularization methods together, which in our case resulted in a 4-5% improvement in metrics (Table 3 and 4). **Research Gap:** To the best of our knowledge, most of the works regarding deep learning and CHB use either simplistic data or use the data belonging to a single CHB. As a result, the final trained model lacks the generalization needed to be used for a wide range of buildings in the country of origin. We believe that the data must reflect the variety of real-world data with no editing or cropping. This way the research can come as close as possible to the practical application of using deep learning in the conservation of CHBs. Despite being known as de facto in CV, OD and/or IS need substantial computational resources to process images and detect damage, therefore making these methods infeasible for developing and/or poor countries with so many CHBs (e.g., Iran). Using more lightweight and sophisticated techniques, we can achieve reasonable results but with low-budget and simple devices (e.g., Mobile Phones). 
## 3 Materials and Methods ### Data For this experiment, we curated a labeled dataset of approximately 10,500 CHB images. In the following, the data curation process is discussed. #### 3.1.1 Data Collection The data were gathered from four different sources; (i) The archives of Iran's cultural heritage ministry; (ii) The author's (M.B) personal archives; (iii) images captured on site by the authors (M.B) during the research process and (iv) pictures crawled from the Internet but kept it to a minimum as their distribution differed due to heavy edits and effects. The images that didn't meet the desired quality were removed, to avoid introducing noise to our dataset. Our collected images proved to be very challenging, in the terms of complexity, peculiarity, level of detail, and variation in size, color, characteristics, etc (Figure 1). Regarding the population of data, as it was infeasible to have access to all the CHBs of Iran, or manually take pictures of them, we tried a random but fair approach to increase the richness of data by taking samples from a wide variety of buildings in terms of architectural style, color theme, quality, time of building, etc. In the process of collecting data different types of criteria were foremost in our minds: * **Locations**: Semnan, Hamedan, Tehran, Ghazvin, etc. * **Types**: Mosques, Shrines, Churches, Palaces, etc.; * **Style**: Islamic, Roman, Persian, etc.; * **Types**: cracks, deterioration, mold, etc.; * **Color nuances**: we have images from different times of the day and in different seasons.; #### 3.1.2 Data cleaning and preprocessing A number of preprocessing steps were taken before creating our final datasets: 1. Cleaning low-quality images, in terms of relevance, corruption, aspect ratio, grayscale, lighting condition, etc. (Figure A.1). 2. Fixing the auto-rotation EXIF metadata. 3. Finding a good enough resolution and resizing all images to it (i.e., 224x224). 4. Normalizing pixel values to a range of \([-1,1]\). #### 3.1.3 Data labeling Not to exacerbate the existent data imbalance, we chose binary classification over multi-class classification. The negative class (label 0) was used for images that didn't include physical defects and the positive class (label 1) for the ones that did. Not to become biased in the labeling phase we had three different highly qualified CHB practitioners label the images individually. This way the final label of a single image was determined by the majority vote of these three labelers. When it comes to labeling, especially image data, we almost always have to deal with some degree of inconsistency, as different practitioners have different experiences, expertise, criteria, etc. To mitigate this effect we defined some criteria by which each labeler had a more consistent and clear guideline to label the images. Figure A.2 shows why it was so crucial to have some criteria that distinctly determine what should be considered a defect (e.g., in terms of length or depth). As regards what types of physical defects were considered in the labeling process, we can enumerate the crack, mold, stain, and deterioration as the most important ones with enough samples in our dataset. #### 3.1.4 Creating the datasets After cleaning and preprocessing our data, it was divided into three mutually exclusive and jointly exhaustive sets, namely train, validation (aka dev), and test (Figure 1). 
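As a rough illustration of the preprocessing and splitting steps above, the sketch below uses TensorFlow and scikit-learn; the file-handling details, the 70/15/15 proportions, and all function names are our assumptions rather than the exact pipeline used here.

```python
import tensorflow as tf
from sklearn.model_selection import train_test_split

IMG_SIZE = (224, 224)

def load_and_preprocess(path):
    """Decode an image file, resize it to 224x224, and scale pixels to [-1, 1]."""
    raw = tf.io.read_file(path)
    img = tf.image.decode_jpeg(raw, channels=3)
    img = tf.image.resize(img, IMG_SIZE)          # outputs float32
    return img / 127.5 - 1.0

def stratified_split(paths, labels, seed=42):
    """70/15/15 train/val/test split, stratified on the binary label."""
    p_train, p_rest, y_train, y_rest = train_test_split(
        paths, labels, test_size=0.30, stratify=labels, random_state=seed)
    p_val, p_test, y_val, y_test = train_test_split(
        p_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=seed)
    return (p_train, y_train), (p_val, y_val), (p_test, y_test)
```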
To ensure a random but fair division we used stratifying shuffle that's why we have approximately the same ratio between the number images for each label (Table 1). Figure 1: A few sample images which show the complexity, diversity and variation of our data. As it's evident in Table 1, the notorious yet prevalent problem of data imbalance could be identified. A will be discussed in subsection 4.2 we used a weighted loss function to mitigate this problem by a large margin. ### Convolutional Neural Networks (CNNs) Synonymous with unassailable performance when it comes to processing image data, the CNNs were a staple in the field of CV since their introduction in 1989 by LeCun et al. [16, 17]. Therefore it was somewhat indubitable that we needed to process our CHB images with this type of NNs to benefit from all the advantages that could accrue to our models by using CNNs. Goodfellow et al. [18] believe CNNs to have three main benefits: translation equivariance, sparse connections, and parameter sharing. A CNN network has less number of learnable parameters in comparison with its conventional fully connected (FC) counterpart. This reduction in the number of parameters is the product of having sparse connections and parameter sharing which enables CNNs to; (i) train faster; (ii) be less prone to overfitting and as results demand fewer train data; and (iii) be able to work with high dimensional data (e.g., images), that their FC counterparts are incapable of. The CNN does the onerous work of feature extraction automatically; the task that without CNNs used to be done by hand engineering the features [19]. In this experiment, we used three of the most prestigious CNN architectures which have shown compelling results and interesting loss convergence, namely ResNet [20], Inception [21], and MobileNet [22]. ### Transfer Learning Dealing with several restraints such as lack of enough data and powerful computers, a methodology called transfer learning was employed to drastically mitigate these impediments. TL tries to transfer the knowledge, a pre-trained model has already learned from a large amount of data, to another model [23]. Generally, TL consists of two main parts. The first part is responsible for customizing the output layer to our problem. The second part fine-tunes the pre-trained model to adapt more to our specific data. ### Class Activation Mapping (CAM) In spite of the merits of image classification, there is a notorious drawback that lies within, and that is the black-box nature of artificial neural networks (NN). That being said, we don't know whether the model considers pertinent features in an image to decide its class or not. That's why researchers came up with a solution named class activation mapping (CAM) [24]. In this experiment we used gradient-weighted class activation maps (Grad-CAM) [25] which is a CAM method that merges the gradients (aka derivatives) of the final classification, that is the output layer deciding the label of the image, and the output of the final Conv layer of the model to generate a heatmap. The heatmap then is applied to the original image to localize the places that were taken into account when deciding its class/label. ### Regularization As one of the salient reasons for the occurrence of overfitting is the lack of enough data, which is ubiquitous in CV, we are always in need of more data. Unfortunately getting more brand-new data is not always possible. 
A workaround is to use the data we already have to increase the number of valid labeled train data, hence a decrease in overfitting as the model is now less capable of naively memorizing the train set [26]. As data augmentation is a staple in CV [26], we almost always opt for using it and this paper is not exempt. Finally, in Figure 2 the result of our proposed data augmentation pipeline after nine runs on the same image can be seen. The data augmentation methods used in this paper can be found in Table 2. Briefly, to decrease overfitting, which is commonplace in DL models, due to their high capacity in terms of the number of parameters, a combination of four famous methods were used, namely L2 regularization [27], dropout [28], batch normalization layer [29], and data augmentation [26]. The results of this combining approach, as discussed in section 5, \begin{table} \begin{tabular}{c c c c c} \hline \hline **class/label** & **Total images** & **Train set** & **Validation set** & **Test set** \\ \hline **negative/0** & 1432 & 1018 (13.8\%) & 207 (12.99\%) & 207 (13.28\%) \\ **positive/1** & 9096 & 6358 (86.2\%) & 1386 (87.01\%) & 1352 (86.72\%) \\ \hline **Total** & 10528 & 7376 (70.06\%) & 1593 (15.13\%) & 1559 (14.80\%) \\ \hline \hline \end{tabular} \end{table} Table 1: The distribution of data; both aggregated and for each dataset separately. were quite satisfiable in terms of overfitting and resulted in a very small amount of overfitting (i.e., \(<1\%\)) for all of our models. ## 4 Implementation ### Network Architecture In the Figure 3 the holistic architecture of our proposed method is represented. Not to process new input images through a data preprocessing pipeline every time, we embedded both the resizing and the normalization preprocessing functions into our network (i.e., pink box). This way, there would be no need to process the unknown images before executing the prediction on them, after the model had been trained. It was alluded to before that in this experiment we made use of several pre-eminent CNN architectures to tackle the problem at hand and not to be biased toward a certain architecture. As a result, four different networks were implemented, namely ResNet50-v2, ResNet152-v2, InceptionResNet-v2, and MobileNet-v2. One main difference between the ResNet50-v2 and other models is that we trained the ResNet50-v2 from scratch and with randomly initialized weights; while the other three were pre-trained models which were accompanied by TL. \begin{table} \begin{tabular}{c c|c c} \hline **method** & **value** & **method** & **value** \\ \hline random flip & Horizontal & random brightness & 0.05 \\ random rotation & 0.005 & random saturation & 0.6 - 1.2 \\ random crop & 5\% & random contrast & 0.75 - 1.1 \\ random quality & 80 - 100 & random hue & 0.03 \\ \hline \end{tabular} \end{table} Table 2: The data augmentation methods used in this paper and their corresponding values. Figure 2: An example of applying the proposed data augmentation methods on a train image (i.e., nine times). Notice how random, realistic, and valid the augmented versions are. The responsibility of the Global Average Pooling layer (i.e., purple box) was to flatten the output of the last Conv layer into a matrix, which is the desired shape of the input of a fully connected (FC) layer. 
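The augmentation settings of Table 2, applied only to the training set, could be realized for example with TensorFlow image ops; the sketch below is an illustrative approximation (the choice of ops, the crop-and-resize implementation of the 5% crop, and the assumption that inputs are float images in [0, 1] are ours, not the exact pipeline used here).

```python
import tensorflow as tf

def augment(img):
    """Random augmentations roughly matching Table 2.

    `img` is assumed to be a float32 image in [0, 1]; the [-1, 1] scaling is
    applied afterwards (in this paper it is embedded in the network itself).
    """
    img = tf.image.random_flip_left_right(img)            # horizontal flip
    img = tf.image.random_jpeg_quality(img, 80, 100)      # random quality 80-100
    img = tf.image.random_crop(img, size=[212, 212, 3])   # ~5% random crop
    img = tf.image.resize(img, [224, 224])
    img = tf.image.random_brightness(img, max_delta=0.05)
    img = tf.image.random_saturation(img, 0.6, 1.2)
    img = tf.image.random_contrast(img, 0.75, 1.1)
    img = tf.image.random_hue(img, max_delta=0.03)
    # A small random rotation (factor 0.005) could be added with
    # tf.keras.layers.RandomRotation(0.005).
    return tf.clip_by_value(img, 0.0, 1.0)

# train_ds = train_ds.map(lambda x, y: (augment(x), y),
#                         num_parallel_calls=tf.data.AUTOTUNE)
```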
Before replacing the output of the pre-trained model with a layer of our own, an FC layer (i.e., light blue box) was added to decrease underfitting; the bigger our network becomes the less underfitting we experience, but it also increasing overfitting, that's why a single FC layer proved to provide a desired trade-off, and thus reduced underfitting by a large margin without increasing overfitting too much. As shown in Figure 3, our model has two outputs. The first (i.e., green box) is responsible for the task of classification, by which each image will be given a label (i.e., negative or positive). The second output on the other hand does the task of localizing the parts by which the model has decided on a certain label for a specific image; this task is done by the Grad-CAM method (i.e., orange box). ### Evaluation To evaluate the implemented networks several metrics have been used in an endeavor to meticulously monitor the behavior of the networks at different stages of training. All these metrics will be scrutinized in the following subsections. #### 4.2.1 Cost function As mentioned in subsubsection 3.1.4 our two classes were imbalanced and since it would nudge our model to be biased toward the class with more examples (i.e., the positive class), we had to tackle this problem somehow. Having decided in favor of using the class weight method due to its numerous merits the Equation 1 was used to calculate the weight of each class, but it's worth noting that there is a myriad of ways to calculate the weights but as we would fine-tune the calculated weights later on in hyperparameter tuning phase we chose the most widely used: \[w_{c}=\frac{n_{t}}{n_{l}*n_{c}} \tag{1}\] Where \(w_{c}\), \(n_{t}\), \(n_{l}\), and \(n_{c}\) indicate the calculated weight of class \(c\), the total number of images in the dataset, the number of classes, and the number of images in class \(c\) respectively. These weights then will be used in the cost function of our networks so that the importance of images belonging to the inferior class outweighs that of the superior class, in a way that network will be rewarded or penalized more when it comes to the images of the class with fewer examples in it. The binary cross-entropy cost function was used, and the way it calculates cost before and after applying class weights can be seen in Equation 2 and 3 respectively. To make it more concrete the first one is used in validation, test, and prediction while the latter is employed in training time; that is we only care about data imbalance during training which is common sense as the network only updates its internal parameters (e.g., weights) in training time and backpropagation. \[L(\hat{y},y)=-\bigg{(}ylog(\hat{y})+(1-y)log(1-\hat{y})\bigg{)} \tag{2}\] Figure 3: The overall architecture of our proposed model/network. Where the values shown in parenthesis below each layer represent the layer’s output shape. The \(N\), \(n_{t}^{[L]}\), \(n_{W}^{[L]}\), and \(n_{C}^{[L]}\) refer to the batch size, height, width, and channels of the last layer (\(L\)) of the embedded CNN model respectively. \[L(\hat{y},y)=-\bigg{(}(w_{1})(y)log(\hat{y})+(w_{0})(1-y)log(1-\hat{y})\bigg{)} \tag{3}\] Where \(y\) refers to the true label and the \(\hat{y}\) to the predicted label of the given record. Note that as we did binary classification and sigmoid activation function for the output layer then \(\hat{y}\) is actually the probability ([0, 1]) of the record belonging to the positive class. 
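A minimal sketch of this head and of the class-weighted training, assuming TensorFlow/Keras, is given below; the hidden-layer width, regularization strengths, and optimizer settings are illustrative placeholders rather than the authors' tuned hyperparameters.

```python
import tensorflow as tf

def build_model(num_train, counts):
    """MobileNetV2 backbone + GAP + FC + sigmoid head, trained with the
    class-weighted binary cross-entropy of Eqs. (1) and (3).
    `counts` maps each class label to its number of training images."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # phase 1: train only the new head; fine-tune later

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = base(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(128, activation="relu",
                              kernel_regularizer=tf.keras.regularizers.l2(1e-4))(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall(), tf.keras.metrics.AUC()])

    # Eq. (1): w_c = n_t / (n_l * n_c)
    class_weight = {c: num_train / (len(counts) * n) for c, n in counts.items()}
    return model, class_weight

# model, class_weight = build_model(7376, {0: 1018, 1: 6358})
# model.fit(train_ds, validation_data=val_ds, epochs=..., class_weight=class_weight)
```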
#### 4.2.2 Performance measures and metrics When it comes to the evaluation of our model, several metrics were incorporated to ensure the rigor of our results. As we suffer from imbalanced data the Accuracy can be quite misleading if the model gets biased toward the superior class, so to address this issue four more performance measures were used, namely Precision, Recall, F-Score, and AUC. If anything, the F-Score is the harmonic mean of the Precision and Recall, thus it takes into account both of them to give us a balanced score of the two. Mathematically, Accuracy, Precision and Recall, and F-Score are defined as: \[Accuracy=\frac{TP+TN}{TP+FP+TN+FN} \tag{4}\] \[Precision=\frac{TP}{TP+FP} \tag{5}\] \[Recall=\frac{TP}{TP+FN} \tag{6}\] \[F\text{-}Score=\frac{2*Precision*Recall}{Precision+Recall} \tag{7}\] Where TP, TN, FP, and FN are True Positive, True Negative, False Positive, and False Negative respectively. In this paper the FN takes precedence over FP, thus the Recall is more important than precision as the FN is in the denominator of the Recall's Equation 6, however, we tried to balance them as much as possible. The reason is that if an image is falsely labeled as positive then in the worst-case scenario we lose time, but in the case of an image being falsely labeled as negative, then a building in dire need of conservation can be overlooked which might lead to irredeemable destruction. The area under the ROC curve, abbreviated as AUC, was employed in an endeavor to refrain from creating a model biased toward a certain class. AUC demonstrates the power of the model in distinguishing different classes. ## 5 Results After slogging through the onerous task of training and fine-tuning the hyperparameters several times, we achieved highly satisfactory results (Table 3). Note that the training process of the ResNet50-v2 doesn't have the fine-tuning step as we trained it from the ground up and with random initial weights. Considering the lack of enough data and computational power, which were alluded to before, it was of no surprise that the networks trained with TL fared the best. Among the networks that used TL, there is no definite winner, but the MobileNet-v2 had the best performance considering both the performance measures and the computational complexity for both the training and making an inference. That said, MobileNet's lightweight architecture is conducive to training and predicting faster which is especially important for devices with low computational power such as mobile phones, edge devices, etc. which are considered de facto pieces of equipment to monitor CHBs [30]. ### Evaluation of MobileNet-v2's Performance As mentioned before and according to Table 3 the fine-tuned model made with pre-trained MobileNet-v2 was the winner among the other three networks. and its lightweight architecture which is conducive to training and predicting faster is especially important for devices with low computational power such as mobile phones, edge devices, etc. That being said, as the winner among all four network architectures let's scrutinize MobileNet-v2's performance even more. The results of other networks in detail can be found in Figure A.3-A.5. The Table A.1 displays the most important hyperparameters used during the training and fine-tuning of our multiple networks. The fine-tuned MobileNet-v2 doesn't suffer from underfitting nor overfitting (Figure 4). 
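For the localization output, the Grad-CAM heatmap is obtained from the gradients of the class score with respect to the feature maps of the last Conv layer; the sketch below is a generic Keras-style implementation (the layer name and the upsampling/overlay step are assumptions, not the exact implementation used here).

```python
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name):
    """Grad-CAM heatmap for a binary-classification Keras model.

    `image` is a single preprocessed image of shape (H, W, 3); the name of
    the last convolutional layer is model-specific and assumed known.
    """
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[tf.newaxis, ...])
        score = preds[:, 0]                       # sigmoid output: P(positive)

    grads = tape.gradient(score, conv_out)        # d score / d feature maps
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pooled grads
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam)                         # keep positive evidence only
    cam = cam / (tf.reduce_max(cam) + 1e-8)       # normalize to [0, 1]
    return cam.numpy()                            # upsample and overlay as heatmap
```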
As regards the second output of the fine-tuned MobileNet-v2, the localizations seemed completely relevant and attest to the fact that the model had learned the correct features in the train data (Figure 5). The output of several Conv layers, aka feature maps, from our fine-tuned MobileNet-v2 network, are visualized in Figure 6; we purposefully chose one layer from the beginning, one from the middle, and another from the end of the network to demonstrate that the more we go deep into the network the more holistic and abstract the detected features will be and vice versa. ## 6 Discussion This work demonstrates the facilities of DL in the conservation of CHB by the means of damage detection. As we have collected a diverse set of intricate CHB images, the trained model is very robust and achieved a minimum of 90% for all the metrics we used on the test set. More than our diverse data, using TL, data augmentation, and three different regularization methods in combination, was conducive to reducing overfitting and increasing the generalization power of our model. The salient reasons that attest to why our results are considered to be good enough are (i) Bayes error rate and (ii) the value of performance measures. Although measuring Bayes error rate is a hard and time-consuming task, which was not in the scope of this experiment, we can argue that its value is high, as for instance even a highly skilled Figure 4: The changes in performance measures reported after each epoch for both the train and validation sets during the training and fine-tuning phase; belonging to the MobileNet-v2 network. the green line indicates the point, in terms of epoch number, where we started to fine-tune some late layers in the pre-trained model. \begin{table} \begin{tabular}{c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Measure**} & \multicolumn{3}{c}{**ResNet50V2 1**} & \multicolumn{3}{c}{**ResNet152V2 22 **} & \multicolumn{3}{c}{**MobileNetV2 23**} & \multicolumn{3}{c}{**InceptionResNetV2 2 **} \\ \cline{2-13} & **train** & **val** & **test** & **train** & **val** & **test** & **train** & **val** & **test** & **train** & **val** & **test** \\ \hline **Loss** & 0.48 & 0.47 & 0.48 & 0.38 & 0.38 & 0.38 & 0.31 & 0.32 & 0.33 & 0.36 & 0.36 & 0.37 \\ **Accuracy** & 0.83 & 0.84 & 0.83 & 0.88 & 0.89 & 0.89 & 0.90 & 0.90 & 0.90 & 0.88 & 0.88 & 0.88 \\ **Precision** & 0.87 & 0.87 & 0.87 & 0.92 & 0.92 & 0.92 & 0.95 & 0.94 & 0.94 & 0.91 & 0.91 & 0.91 \\ **Recall** & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.96 & 0.94 & 0.94 & 0.94 & 0.96 & 0.95 & 0.96 \\ **F-Score** & 0.91 & 0.91 & 0.91 & 0.93 & 0.94 & 0.94 & 0.94 & 0.94 & 0.94 & 0.93 & 0.93 & 0.93 \\ **AUC** & 0.54 & 0.54 & 0.89 & 0.88 & 0.88 & 0.93 & 0.92 & 0.90 & 0.87 & 0.86 & 0.85 \\ **TP** & 6040 & 1310 & 1287 & 6056 & 1319 & 1296 & 5961 & 1311 & 1274 & 6082 & 1319 & 1295 \\ **FP** & 923 & 189 & 192 & 551 & 107 & 114 & 328 & 78 & 76 & 623 & 123 & 135 \\ **TN** & 95 & 21 & 22 & 467 & 103 & 100 & 690 & 127 & 139 & 395 & 87 & 79 \\ **FN** & 318 & 73 & 67 & 302 & 64 & 302 & 397 & 77 & 79 & 276 & 64 & 59 \\ \hline \hline \end{tabular} \end{table} Table 3: Final results, after hyperparameter tuning. CHB practitioner from the south of Iran, would have had a hard time detecting the defects in CHBs from north of the country, considering the peculiarity and idiosyncrasies of each building in our dataset. 
According to Mandrekar [31], in the field of CV, values larger than 90% are considered excellent, so it's safe to assume that the MobileNet-v2 had excellent performance, recording values above 90% for all of our metrics. Other than reaching the best performance among other models, the MobileNet-v2 is particularly interesting as it is a faster NN which is particularly important in doing real-time damage detection in devices with low computational resources, such as mobile phones or edge devices. Using our proposed model based on MobileNet-v2 can pave the way for the wide usage of such models in CH sites in Iran and/or around the world with the fewest possible resources. Figure 5: Some samples of the output of Grad-CAM layer of fine-tuned MobileNet-v2 network. The localized defects are shown by a heatmap (from Blue to Red). Figure 6: A few samples (i.e., 8) of feature maps from the beginning (top), mid-section (middle), and end (bottom) of our fine-tuned MobileNet-v2 network. The input image was the same as that of the Of subfigure c in Figure 5. To compare our results with those of similar researchers, the papers of Llamas et al. [15] and Perez et al. [14] were used, as these were the ones that used image classification, CNN, and TL, just like this experiment. As both of these papers used multiclass classification whereas we used binary classification, we took the average of each metric (e.g., Recall) for all classes, Llamas et al. had ten/10 classes and Perez et al. had four/4 classes; this way we could make their results comparable to those of ours. The comparison of the results on the test set is shown in Table 4. The most important challenges and limitations that we faced during this experiment were: (i) needing more data, which is a perennial problem in CV; (ii) lack of suitable computational power; and (iii) inconsistency in labeling due to personal preference and difference in the level of labelers' expertise. ## 7 Conclusion This experiment is concerned with applying novel yet matured methods such as DL and CNNs to make the process of conservation of CHBs less prone to errors and more efficient than doing it manually by direct human supervision. By getting Iran's CHB practitioners, the main beneficiaries of this experiment, to use our proposed models besides their old methods, a higher rate of success in detecting physical defects of such buildings can be achieved. We irrevocably believe that CHB practitioners using DL models, such as our proposed one, can identify physical defects more often than either does alone and hopefully as a result, a lower prospect of CHBs deteriorating in structural health. In an endeavor to practically demonstrate the utilities of DL in CH literature, We developed a fully fledged DL model that classifies the images in need of conservation and even more approximately localizes the defects to help the CH practitioners identify defects in a timely manner, and as a result speed of the process of CHB conservation as well as increasing its accuracy. In spite of all the limitations, we achieved very good results with a score of at least 94% for Precision, Recall, and F1-Score, which were about 4-5% more than similar works (Table 4). As regards future works, addressing the limitations we faced can open up a plethora of opportunities in terms of methods and outputs. 
For instance, if we had access to a large amount of labeled data and powerful servers, physical or in the cloud, then object detection or instance segmentation would be more useful and could elicit more accurate and user-friendly results from our data. Having gained traction in the past few years, generative adversarial networks (GANs) could also be utilized in our network architecture to propose restorations based on the labels and localizations our proposed model offers.
2305.20056
Rare Life Event Detection via Mobile Sensing Using Multi-Task Learning
Rare life events significantly impact mental health, and their detection in behavioral studies is a crucial step towards health-based interventions. We envision that mobile sensing data can be used to detect these anomalies. However, the human-centered nature of the problem, combined with the infrequency and uniqueness of these events makes it challenging for unsupervised machine learning methods. In this paper, we first investigate granger-causality between life events and human behavior using sensing data. Next, we propose a multi-task framework with an unsupervised autoencoder to capture irregular behavior, and an auxiliary sequence predictor that identifies transitions in workplace performance to contextualize events. We perform experiments using data from a mobile sensing study comprising N=126 information workers from multiple industries, spanning 10106 days with 198 rare events (<2%). Through personalized inference, we detect the exact day of a rare event with an F1 of 0.34, demonstrating that our method outperforms several baselines. Finally, we discuss the implications of our work from the context of real-world deployment.
Arvind Pillai, Subigya Nepal, Andrew Campbell
2023-05-31T17:29:24Z
http://arxiv.org/abs/2305.20056v1
# Rare Life Event Detection via Mobile Sensing Using Multi-Task Learning ###### Abstract Rare life events significantly impact mental health, and their detection in behavioral studies is a crucial step towards health-based interventions. We envision that mobile sensing data can be used to detect these anomalies. However, the human-centered nature of the problem, combined with the infrequency and uniqueness of these events makes it challenging for unsupervised machine learning methods. In this paper, we first investigate granger-causality between life events and human behavior using sensing data. Next, we propose a multi-task framework with an unsupervised autoencoder to capture irregular behavior, and an auxiliary sequence predictor that identifies transitions in workplace performance to contextualize events. We perform experiments using data from a mobile sensing study comprising N=126 information workers from multiple industries, spanning 10106 days with 198 rare events (\(<2\%\)). Through personalized inference, we detect the exact day of a rare event with an F1 of 0.34, demonstrating that our method outperforms several baselines. Finally, we discuss the implications of our work from the context of real-world deployment. Data and Code Availability.Tesserae study data (Mattingly et al., 2019) can be obtained through a data usage agreement ([https://tesserae.nd.edu/](https://tesserae.nd.edu/)). Code is not publicly available, but provided on request. Institutional Review Board (IRB).The study protocol is fully approved by the Institutional Review Boards at Dartmouth College, University of Notre Dame, University of California-Irvine, Georgia Tech, Carnegie Mellon University, University of Colorado-Boulder, University of Washington, University of Texas-Austin, and Ohio State University. This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2017-17042800007. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. ## 1 Introduction Life events (LE) are significant changes in an individual's circumstances that affect interpersonal relationships, work, and leisure (Hill, 2002). The nature of these events inevitably affect mental well-being and general health (Goodyer, 2001). The detrimental effects of adverse LEs (e.g., death of a loved one, losing a job, terminal illness) have been widely studied and known to be associated with a higher incidence of depression and altered brain network structure (Falkingham et al., 2020; Gupta et al., 2017). In contrast, positive LEs (e.g., taking a vacation, job promotion, childbirth) are associated with increased life satisfaction, and higher cognitive function (Castanho et al., 2021). Moreover, LEs affect cardiovascular vascular disease risk factors, such as increased central adiposity, heightened exposure to inflammation, and elevated resting blood pressure (Steptoe and Kivimaki, 2013). A study by Steptoe and Kivimaki (2012) suggests that even minor LEs can trigger cardiac events like myocardial ischemia. Sensing data is a multivariate time series, and traditional ML approaches for anomaly detection include the One-Class SVM (OCSVM) (Ma and Perkins, 2003), Isolation Forest (IF) (Liu et al., 2008), and logistic regression (Hilbe, 2009). 
However, these approaches do not capture temporal dependencies. Additionally, creating user-specific thresholds is critical in human-centered tasks. Thus, methods which directly predict anomalies (IF) or require threshold parameter tuning (OCSVM) are not ideal. Recently, timeseries based autoencoders have received significant attention (Rumelhart et al., 1985; Zhou and Paffenroth, 2017; Audibert et al., 2020; Su et al., 2019), and many methods use RNN variants such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014) to capture temporal dependencies. In an autoencoder, the reconstruction error is used to detect anomalies. However, the complexity of human behavior and class imbalance creates biased learned representations, making it challenging to distinguish normal and rare events (Pang et al., 2021). An intuitive solution to this problem involves increasing the error of rare events without affecting normal events. Multi-task learning achieves this by incorporating additional information from a related task(s) to learn a shared representation, and thus compensate for the limitations of a purely unsupervised approach (Sadhu et al., 2019; Wu et al., 2021). In this paper, we first use statistical analysis to examine whether LEs result in behavioral shifts observable through mobile sensing. Next, we propose Multi-Task Anomaly Detection (MTAD) to detect "in-the-wild" rare LEs using behavioral mobile sensing data. We hypothesize that a standalone unsupervised autoencoder cannot effectively capture differences between normal and rare events using the reconstruction error because of the heterogeneity in human data and the significant class imbalance (\(<2\%\) rare events). Thus, MTAD trains an auxiliary sequence predictor to contextualize events by capturing changes in workplace performance due to the event. For example, a participant reported that getting promoted positively impacted their work performance, while another person mentioned that visiting their sick parents had a negative effect. We aim to identify such changes at inference and compute a scaling factor that magnifies the reconstruction error of rare events. ### Contributions Toward the vision of detecting life events from mobile sensing data collected in in-the-wild settings, our main contributions are as follows. First, we perform granger-causality testing to detect if the days before an event can be used to predict days after the event. Thus, establishing a relationship between LEs and behavior (section 4). Second, we propose a multi-task learning architecture consisting of two components: (1) an LSTM based encoder-decoder to calculate an anomaly score, and (2) a sequence predictor to contextualize the anomaly score by inferring a transition in workplace performance (section 5). Third, we perform empirical analysis on a real-world dataset to compare MTAD with five state-of-the-art baselines to analyze performance and robustness (section 5.4). Finally, we rigorously evaluate parameters that affect MTAD (section 5.4) and discuss implications of our research (section 6). ## 2 Related Work ### Change Point Detection Change point detection (CPD) refers to the change in the state of a time series. CPD has been applied to various physiological signals (Shvetsov et al., 2020; Fotoohinasab et al., 2020; Stival et al., 2022). For example, Shvetsov et al. (2020) propose an unsupervised approach based on point clouds and Wasserstein distances to detect six different types of arrhythmias. 
Designing appropriate metrics that can identify change is vital in CPD. Consequently, Chen et al. (2019) design a metric for EEG CPD by modifying several similarity metrics from other domains. In fitness, Stival et al. (2022) propose a method combining CPD and Gaussian state space models to detect actionable information such as physical discomfort and de-training. ### Anomaly detection methods Approaches for anomaly detection in multivariate time series are varying, and several challenges exist based on the method, and applied area (Pang et al., 2021). In deep learning, the LSTM encoder-decoder (or LSTM autoencoder) has received a lot of attention. Malhotra et al. (2016) demonstrate the robustness of using an LSTM in an autoencoder framework. Similarly, Park et al. (2018) propose the LSTM-VAE, which combines the temporal modeling strength of LSTM with the variational inference capability of a VAE. The resulting model obtains better generalization for multimodal sensory signals. The Deep Autoencoding Gaussian Mixture Model (DAGMM) jointly optimizes two components to enable optimal anomaly detection, an autoencoder computes a low-dimensional representation, and a gaussian mixture model that predicts sample membership from the compressed data (Zong et al., 2018). Audibert et al. (2020) propose an adversarially trained autoencoder framework to detect anomalies. For anomalous driving detection, Sadhu et al. (2019) introduce a multi-task architecture that magnifies rare maneuvers using domain knowledge regarding the frequency of driving actions. ### Life event detection To the best of our knowledge, there are two works similar to ours. First, Faust et al. (2021) use OCSVM to assess the response to an adverse life event using physiological and behavioral signals from wrist-worn wearables. They focus on examining the responses to the adverse event and the coping strategies employed by the participant. Their findings suggest the existence of behavioral deviations after a negative event, motivating us to focus on prediction. Second, Burghardt et al. (2021) detect abnormal LEs using wrist-worn wearables from hospital and aerospace workers. Their method works by first creating a time series embedding using a hidden markov model variant and then uses a random forest or logistic regression for classification. Our work differs from the previous studies in several ways: (1) we use smartphone behavioral data instead of wearable physiological data, (2) we consider postive, negative, and multiple LEs (differing from Faust et al. (2021)), (3) we focus on deep models instead of traditional ML, (4) our data has an extremely low anomaly ratio (\(<2\%\)) compared to Burghardt et al. (2021) (\(11.7\%\) and \(14.9\%\)). Thus, we view our problem to be significantly challenging. Moreover, we provide crucial statistical motivation to pursue LE detection. ## 3 Study The Tesserae study (Mattingly et al., 2019) recruited 757 information workers across different companies in the USA for one year where participants respond to several surveys. They were instrumented with a Garmin vivoSmart 3 wearable and a continuous sensing app is installed on their phone. Participants are instructed to maintain data compliance level of 80% to warrant eligibility for monetary remuneration. The sub-cohort's age ranged from 21 to 64, with an average of 34. Of the 126 individuals, the dataset is fairly balanced with 67 and 59 identified as male and female, respectively. 
The top 3 areas of occupation were Computer and Mathematical Sciences, Business and Finance, and Engineering. Roughly 98% of the participants had at least a college degree. In terms of mobile platform, the cohort had 66 Android users and 60 iOS users. Please refer to the link in the data availability statement to learn about the Tesserae study. Additional demographic information is listed in Appendix A.3. ### Features In contrast to studies using wearable physiology data (Faust et al., 2021; Burghardt et al., 2021), we use daily summaries of behavioral mobile sensing features in our analyses. Overall, we used walking duration, sedentary duration, running duration, distance traveled, phone unlock duration, number of phone unlocks, number of locations visited, and number of unique locations visited. Further, to better understand user behavior, we divide the features (except number of unique locations visited) into 4 "epochs" for modelling: epoch 1 (12am - 6am), epoch 2 (6am - 12pm), epoch 3 (12pm - 6pm), and epoch 4 (6pm - 12am). Ultimately, 29 features were used for analyses. Previous studies elucidate the importance of these features to understand human behavior from a workplace context (Nepal et al., 2020; Mirjafari et al., 2019). ### Ground Truth The definition of a **significant life event** is subjective and depends on the individual. We adopt a widely accepted definition from job stress literature, which describes these events as situations where psychological demand exceeds available resources (Karasek et al., 1981). After study completion, participants were asked to describe significant LEs using their diaries, calendars, and other documents. Participants provided free-text descriptions for every event, start and end dates, date confidence, significance, valence (positive/negative), type of event, and workplace performance impact. Valence, date confidence, and workplace performance are reported on a 1-7 likert scale as follows: (1) Valence: "1" indicated "Extremely Positive" and "7" indicated "Extremely Negative", and (2) Date confidence: "1" indicated "Lowest confidence" and "7" indicated "Highest confidence". Workplace performance impact is assigned to one of the following - "Large Negative Effect", "Medium Negative Effect", "Small Negative Effect", "No Effect", "Small Positive Effect", "Medium Positive Effect", "Large Positive Effect". Our selection criteria is as follows: (1) Valence must be Extremely Positive, Very Positive, Very Negative, or Extremely Negative, and (2) Date confidence must be "Moderately High", "High", or "Highest". Next, we set a 30-day date limit before and after an event for analysis. For overlapping events, the limit is set 30 days before and after the first and last events, respectively. These choices are based on a study by Faust et al. (2021) examining the impact of LEs within a 60-day period. Finally, the missingness and uneven spacing (discontinuous days) within this time frame must be \(<25\%\). Every day was labelled as "1" indicating a rare event or "0" indicating a normal event. For workplace performance, the label is forward filled, and an "Unknown" label is assigned to days before the rare event. Our final dataset consists of 10106 days from 126 participants with 198 rare LEs (\(<2\%\)). ## 4 Statistical Testing Initially, we ask the question: "Does the behavior of an individual change after an LE?". If so, what features are significant in most of the time series? To this end, we applied the granger-causality test as follows. 
First, we split the 159 multivariate time series (29 features) into two parts, one before and including the rare event (\(T^{pre}\)) and the other after the rare event (\(T^{post}\)). Next, for each feature, a Granger causality test is applied to investigate whether \(T^{pre}\) Granger-causes \(T^{post}\). A \(p<0.05\) implies that the time series for that feature is significant. Finally, we count the number of significant time series for each feature. For example, in Figure 1, loc_visit_num_ep_1 has a value of 42, which implies that 42 out of 159 time series were Granger-cause significant. We used the statsmodels package for Python to apply the Granger causality test with lags of up to 6. The SSR-based F-test is used to evaluate statistical significance at \(p=0.05\). Significance at any lag is considered Granger causality, and the total number of significant time series (out of 159) for each feature is displayed in Figure 1.

From Figure 1, we observe that the number of locations visited and the location distance between 12am-6am and 12pm-6pm are significantly impacted by LEs in several cases, suggesting that location behaviors are crucial to LE detection. In addition, we observe that walking and running have approximately the same number of Granger-cause time series across epochs, suggesting an overall change throughout the day rather than a shift in the timing of these activities. In contrast, sedentary action varies across epochs, suggesting that LEs might affect the sedentary state at different times of the day. Also, unlock duration and count vary due to a life event. While the number of unlocks between 12am-6am has the largest number of significant time series, the unlock duration is comparatively more significant between 6am-12pm.

Figure 1: Heatmap indicating the number of time series (out of 159) that are Granger-cause significant at \(p<0.05\). Larger values imply that the corresponding feature changes after an LE in many participants and could be important for detection. The x-axis shows the count of significant time series (out of 159), and the y-axis lists the mobile sensing features. “act” is an activity, which can be still, walking, or running; “loc” specifies location; “dist” is distance; “num” is number.

## 5 Multi-task anomaly detection

### 5.1 Problem formulation

Given a set of \(I\) participants, we define a multivariate time series for each user \(u\in\{1,\ldots,I\}\) with \(T\) days as \(\mathcal{T}^{u}=\{\mathbf{x}_{1}^{u},\ldots,\mathbf{x}_{T}^{u}\}\), where \(\mathbf{x}\in\mathbb{R}^{m}\), \(m\) is the number of mobile sensing features, and \(t\in\{1,\ldots,T\}\) is a specific day. To model temporal dependencies, we apply a rolling window approach to the time series. A window at day \(t\) with a predefined length \(l\) is given by:

\[W_{t}=\{\mathbf{x}_{t-l+1},\ldots,\mathbf{x}_{t-1},\mathbf{x}_{t}\} \tag{1}\]

Using equation (1), the user's multivariate time series \(\mathcal{T}^{u}\) is transformed into a collection of windows \(\mathcal{W}=\{W_{1},\ldots,W_{T}\}\), where \(W\in\mathbb{R}^{l\times m}\). Next, a window is assigned a binary label \(y^{R}\in\{0,1\}\), where \(y^{R}=1\) indicates a rare life event at time \(t\) (i.e., \(y^{R}_{t}=1\)) and \(y^{R}=0\) indicates a normal event in all other cases. Observe that we only consider the exact day of the rare event to be a rare window.
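A minimal Python sketch of this windowing and labelling step is shown below; the feature matrix, label array, and participant are hypothetical placeholders, and the window length of \(l=10\) matches the experimental setup described later.

```python
import numpy as np

def make_windows(X, y_rare, l=10):
    """Transform a participant's (T, m) feature matrix into rolling windows.

    Following Eq. (1), the window at day t collects days t-l+1 ... t, so the
    first usable window ends on day l-1 (0-indexed). A window is labelled
    rare only if its most recent day t is a rare life-event day.
    """
    T, m = X.shape
    windows, labels = [], []
    for t in range(l - 1, T):
        windows.append(X[t - l + 1 : t + 1])   # shape (l, m)
        labels.append(int(y_rare[t]))          # label of the most recent day
    return np.stack(windows), np.array(labels)

# Hypothetical example: 120 days of 29 mobile-sensing features for one user.
rng = np.random.default_rng(0)
X_user = rng.normal(size=(120, 29))
y_user = np.zeros(120, dtype=int)
y_user[87] = 1                                 # one rare life-event day

W, y_window = make_windows(X_user, y_user, l=10)
print(W.shape, y_window.sum())                 # (111, 10, 29), 1 rare window
```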
Given that each participant's windows are transformed separately, we generalize the entire collection of normal and rare event windows across participants as \(\mathcal{W}_{normal}=\{W_{1},\ldots,W_{N}\}\) and \(\mathcal{W}_{rare}=\{W_{1},\ldots,W_{R}\}\), respectively. In our context, we define a multi-task model with two related tasks trained using \(\mathcal{W}_{normal}\). First, an unsupervised learning task is trained to reconstruct the input \(\mathcal{W}_{normal}\); it produces higher errors, or anomaly scores, when reconstructing \(\mathcal{W}_{rare}\), thus facilitating rare life event detection. Second, a supervised learning task contextualizes and scales the anomaly score. Here, the model is trained on \(\mathcal{W}_{normal}\) to predict a workplace performance vector \(\mathbf{y}\in\mathbb{R}^{l}\), where each day \(t\in\{1,\ldots,l\}\) in \(W\), represented by \(y_{t}\), belongs to one of the workplace performance labels described in Section 3.2.

_Problem Statement._ Given a participant's multivariate time series window \(W_{t}\) and the corresponding workplace performance vector \(\mathbf{y}\), the objective of our problem is to train a multi-task framework capable of detecting a rare life event at time \(t\).

### 5.2 Multi-task Architecture

Our multi-task framework (Figure 2) consists of three components: an encoder \(E\), which maps a window \(W\) to a low-dimensional representation (latent space) \(Z\); a decoder \(D\), which reconstructs \(W\) from \(Z\) (5.2.1); and a sequence predictor \(P\), which predicts the workplace performance vector \(\mathbf{y}\) (5.2.2).

#### 5.2.1 Unsupervised Autoencoder (Task A)

We capture temporal dependencies from the multivariate time series using LSTMs (Hochreiter and Schmidhuber, 1997) to build the encoder-decoder architecture. An LSTM encoder learns from an input window \(W\) by running through the day-wise input sequence and computes a fixed-size latent representation \(Z\). Next, \(Z\) is copied multiple times to match the length of the window. Finally, the LSTM decoder \(D\) uses \(Z\) to reconstruct the input sequence, and the reconstructed sequence is represented as \(\overline{W}\). We train the LSTM encoder-decoder (LSTM-ED) by minimizing the reconstruction error between \(W\) and \(\overline{W}\) using the mean squared error defined as:

\[\mathcal{L}_{A}=\frac{1}{l\times m}\|W-\overline{W}\|_{F}^{2} \tag{2}\]

where \(\overline{W}=D(Z)\), \(Z=E(W)\), and \(\|\cdot\|_{F}\) is the Frobenius norm. Recall that we only use \(\mathcal{W}_{normal}\) to train the LSTM-ED to learn normal event representations. Therefore, by using the reconstruction error as an anomaly score \(\alpha\), we can detect rare events based on their higher \(\alpha\) values. However, it is possible that some participants or events do not exhibit significant behavior changes that can be captured by our LSTM-ED through \(\alpha\). To address this challenge, we attempt to identify anomalies through a supervised learning setup in the next section. Srivastava et al. (2015) describe LSTM encoder-decoder architectures in detail.

```
Input: \(\mathcal{D}_{train}\) with \(\mathcal{W}_{normal}=\{W_{1},\ldots,W_{N}\}\), \(\{Y_{1},\ldots,Y_{N}\}\), class weight vector \(\mathbf{w}\), and number of epochs \(E\).
Output: Trained \(E\), \(D\), \(P\)
\(E\), \(D\), \(P\) \(\leftarrow\) initialize weights; \(e\gets 1\);
repeat
  for \(n\gets 1\) to \(N\) do
    \(Z_{n}\gets E(W_{n})\); \(\overline{W_{n}}\gets D(Z_{n})\); \(\widehat{Y_{n}}\gets P(Z_{n})\);
    \(\mathcal{L}_{A}\leftarrow\frac{1}{l\times m}\|W_{n}-\overline{W_{n}}\|_{F}^{2}\);
    \(\mathcal{L}_{B}\leftarrow-\sum_{i=1}^{l}\sum_{j=1}^{c}Y_{nij}\times\ln(\widehat{Y_{nij}})\times w_{j}\);
    \(\mathcal{L}\leftarrow\mathcal{L}_{A}+\mathcal{L}_{B}\);
    \(E,D,P\leftarrow\) update weights using \(\mathcal{L}\);
  end for
  \(e\gets e+1\);
until \(e=E\);
```
**Algorithm 1** Training

#### 5.2.2 Sequence Prediction (Task B)

To scale the anomaly score \(\alpha\), we train a supervised sequence predictor \(P\) to detect day-wise workplace performance. The window \(W\) and a true workplace performance label vector \(\mathbf{y}\in\mathbb{R}^{l}\) are used as training inputs, where the label for day \(t\in\{1,\ldots,l\}\) in \(W\), represented by \(y_{t}\), has one of the performance labels described in Section 3.2. Moreover, \(Y\in\mathbb{R}^{l\times c}\) represents the one-hot vectors from \(\mathbf{y}\) with \(c\) classes (\(c=8\)). Observe that \(W\) is the same for Tasks A and B. Hence, our architecture shares the LSTM encoder network \(E\) described in Section 5.2.1. For Task B, \(P\) is composed of an LSTM network that further extracts temporal features from the latent representation \(Z\), followed by a fully connected layer \(FC\) with softmax activation to predict day-wise class probabilities \(\widehat{Y}\). At inference, \(\widehat{Y}\) is mapped to the predicted workplace performance label vector \(\widehat{\mathbf{y}}\). The model is optimized using the weighted categorical cross-entropy loss function defined by:

\[\mathcal{L}_{B}=-\sum_{i=1}^{l}\sum_{j=1}^{c}Y_{ij}\times\ln(\widehat{Y_{ij}})\times w_{j} \tag{3}\]

where \(\widehat{Y}=\text{softmax}(FC)\) and \(w_{j}\) is the weight for class \(j\). From equations (2) and (3), we can represent the final loss function as \(\mathcal{L}=\mathcal{L}_{A}+\mathcal{L}_{B}\). The proposed multi-task architecture is trained in an end-to-end fashion, where Tasks A and B are jointly optimized by minimizing \(\mathcal{L}\) (Algorithm 1).

#### 5.2.3 Inference

The detection phase workflow (Algorithm 2) computes the anomaly score \(\alpha\) from Task A and a scaling factor \(s\) from Task B for each test window.

**Anomaly Score.** Recall that our goal is to identify a rare event on the exact day. Thus, we unroll \(W\) and \(\overline{W}\) to compute the score for the most recent day \(t\) as follows:

\[\alpha=\frac{1}{m}\sum_{j=1}^{m}(x_{j}-\overline{x_{j}})^{2} \tag{4}\]

where \(\mathbf{x}\) and \(\overline{\mathbf{x}}\) are the true and reconstructed multivariate time series, respectively, and \(m\) is the number of mobile sensing features.

**Scaling factor.** Stressful life events affect the work-life balance of US workers and reduce productivity (Hobson et al., 2001). Thus, there is reason to believe that workplace performance shifts after an LE. To capture this change, we first transform the predicted workplace performance vector \(\widehat{\mathbf{y}}\) into a binary vector \(\mathbf{r}\in\mathbb{R}^{l-1}\) defined as:

\[r_{t-1}=\begin{cases}1&\widehat{y}_{t}\neq\widehat{y}_{t-1}\\ 0&\text{otherwise}\end{cases}\]

where \(t\in\{2,\ldots,l\}\). In essence, we identify the day of workplace performance change and use it as a proxy for rare event detection.
For example, consider \(\widehat{\mathbf{y}}=\) ["Unknown", "Unknown", "Large Negative Effect"] with \(l=3\); the corresponding transition vector is \(\mathbf{r}=[0,1]\). Initially, we assumed that a value of "1" on the most recent day could be directly used to detect the rare event. However, this idea had two major limitations. First, it is possible that larger window sizes might have multiple transitions (\(r_{t}=1\)). Second, erroneous predictions might hinder detection. Thus, we exponentially weight our transition vector \(\mathbf{r}\). Intuitively, more recent workplace performance shifts have a larger impact on behavioral changes owing to an LE. The scaling factor \(s\) aims to capture this effect as follows:

\[s=\frac{1}{l-1}\sum_{t=1}^{l-1}e^{-\lambda tr_{t}} \tag{5}\]

where \(\lambda\) is a constant decay factor.

Figure 2: The proposed multi-task learning architecture illustrating training information flow for a window \(W\) of length \(l\).

**Detection.** The final scaled anomaly score \(\delta\) is computed from equations (4) and (5) in the following way: \(\delta=\frac{\alpha}{s}\). Observe that \(\delta=\alpha\) when \(\mathbf{r}\) is a zero vector, i.e., a vector with no workplace performance changes. Ultimately, a window \(W_{t}\) with a scaled anomaly score \(\delta\) has a rare life event at \(t\) (\(y_{t}^{R}=1\)) if \(\delta\) is greater than a threshold \(\gamma\). However, the scarcity of rare events hindered threshold tuning based on performance metrics. Thus, \(\gamma\) is set to the 95th percentile of the anomaly scores from the validation data set.

```
Input: \(\mathcal{D}_{test}\) with \(\mathcal{W}=\{W_{1},\dots,W_{N+R}\}\), \(\gamma\) from \(\mathcal{D}_{val}\), \(l\), \(\lambda\).
Output: \(\mathbf{y}^{R}=\{y_{1}^{R},\dots,y_{N+R}^{R}\}\)
for \(n\gets 1\) to \(N+R\) do
  \(Z_{n}\gets E(W_{n})\); \(\overline{W_{n}}\gets D(Z_{n})\); \(\widehat{Y_{n}}\gets P(Z_{n})\);
  \(\mathbf{x},\overline{\mathbf{x}}\leftarrow\text{unroll}(W_{n})\), \(\text{unroll}(\overline{W_{n}})\);
  \(\alpha\leftarrow\frac{1}{m}\sum_{j=1}^{m}(x_{j}-\overline{x_{j}})^{2}\);
  \(\widehat{\mathbf{y}}\leftarrow\) get class labels from probabilities \(\widehat{Y_{n}}\);
  \(\mathbf{r}\leftarrow\) compute binary transition vector from \(\widehat{\mathbf{y}}\);
  \(s\leftarrow\frac{1}{l-1}\sum_{t=1}^{l-1}e^{-\lambda tr_{t}}\);
  \(\delta\leftarrow\frac{\alpha}{s}\);
  if \(\delta>\gamma\) then \(y_{n}^{R}\gets 1\); else \(y_{n}^{R}\gets 0\); end if
end for
```
**Algorithm 2** Inference

### Experimental Setup

We performed several pre-processing steps for optimal training and inference. First, we impute missing data and unevenly sampled time series using the mean. Second, we forward-fill missing rare event and workplace performance labels. Third, within-subject feature normalization is applied as a precursor for personalization. Finally, each participant's time series is transformed into windows of length \(l=10\) using equation (1). In our analysis, we divide \(\mathcal{W}_{normal}\) and the corresponding \(y\) into training (\(\mathcal{D}_{train}\)), validation (\(\mathcal{D}_{val}\)), and test (\(\mathcal{D}_{test}\)) sets with a ratio of 80:10:10, respectively. Next, we append \(\mathcal{W}_{rare}\) to \(\mathcal{D}_{test}\). Note that rare events are held out for testing and are used neither for training nor for validation. Moreover, we generate ten different user-dependent splits to ensure that the training, validation, and test data sets consist of time series from all participants.
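Putting the architecture of Section 5.2 and the scoring of Algorithm 2 together, the following is a minimal PyTorch sketch; the layer sizes, single-layer LSTMs, linear reconstruction head, and example threshold are illustrative assumptions rather than the exact configuration used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTAD(nn.Module):
    """Sketch of the shared-encoder multi-task model (Tasks A and B)."""
    def __init__(self, n_features=29, latent_dim=16, n_classes=8):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.recon_head = nn.Linear(latent_dim, n_features)     # Task A output
        self.pred_lstm = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.fc = nn.Linear(latent_dim, n_classes)               # Task B output

    def forward(self, w):                       # w: (batch, l, m)
        _, (h, _) = self.encoder(w)
        z = h[-1]                               # latent representation Z
        z_seq = z.unsqueeze(1).repeat(1, w.size(1), 1)   # copy Z l times
        w_hat = self.recon_head(self.decoder(z_seq)[0])  # reconstruction
        y_logits = self.fc(self.pred_lstm(z_seq)[0])     # day-wise class logits
        return w_hat, y_logits

def joint_loss(w, w_hat, y, y_logits, class_weights):
    """L = L_A + L_B: MSE reconstruction plus weighted cross-entropy."""
    loss_a = F.mse_loss(w_hat, w)
    loss_b = F.cross_entropy(y_logits.reshape(-1, y_logits.size(-1)),
                             y.reshape(-1), weight=class_weights)
    return loss_a + loss_b

def scaled_score(w, w_hat, y_pred, lam=2.0):
    """Scaled anomaly score delta = alpha / s for one window (Eqs. 4 and 5)."""
    alpha = torch.mean((w[-1] - w_hat[-1]) ** 2)          # most recent day only
    r = (y_pred[1:] != y_pred[:-1]).float()               # binary transition vector
    t_idx = torch.arange(1, r.numel() + 1, dtype=torch.float32)
    s = torch.mean(torch.exp(-lam * t_idx * r))
    return (alpha / s).item()

# Hypothetical batch: 32 windows of 10 days x 29 features, 8 performance classes.
model = MTAD()
w_batch = torch.randn(32, 10, 29)
y_batch = torch.randint(0, 8, (32, 10))
w_hat, y_logits = model(w_batch)
loss = joint_loss(w_batch, w_hat, y_batch, y_logits, class_weights=torch.ones(8))
loss.backward()

# Scoring a single window: a rare event is flagged when delta exceeds gamma,
# where gamma is the 95th percentile of validation anomaly scores.
delta = scaled_score(w_batch[0], w_hat[0].detach(), y_logits[0].argmax(dim=-1))
is_rare = delta > 1.5
```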
To assess positive-class performance in imbalanced data, we use precision (P), recall (R), and the F1-score, and report the mean and standard deviation across the splits. The windows are unrolled at inference to detect a rare event on the exact day. Thus, merely identifying that a rare event is present somewhere within a window is not considered an accurate detection.

### Results

In this section, we examine the properties of the proposed framework by comparing it with other baselines (5.4.1), analyzing the strengths of personalization (5.4.2), and estimating the changes in performance at different window sizes and decay constants for the scaling factor (5.4.3). Next, we perform an ablation study to assess the necessity of the sequence predictor (5.4.4). Finally, we assess the types of events identified by MTAD (5.4.5).

#### 5.4.1 Performance

We evaluate the performance of our algorithm by comparing it with five state-of-the-art baselines for anomaly detection, namely OCSVM, IF, LSTM-VAE, DAGMM, and LSTM-ED. As shown in Table 1, MTAD performs significantly better than all traditional machine learning and deep learning methods in terms of P, R, and F1. In particular, MTAD's 0.29 F1 score is 2.6 times that of a standard LSTM autoencoder (LSTM-ED). Unlike the other methods, DAGMM does not compute a normal-event decision boundary, i.e., it is not trained only on normal event data. Consequently, we observe that DAGMM has a higher recall than methods like LSTM-ED, LSTM-VAE, and OCSVM, but it has poor precision. Interestingly, IF performs better than deep models like LSTM-ED and LSTM-VAE. IF directly predicts a rare event without considering temporal information, whereas the LSTM approaches used windows that might contain unknown rare events or behavioral discrepancies, thus resulting in poor performance. Moreover, we observe that unsupervised LSTM autoencoder approaches are sensitive to variance in human behavior (Table 1), suggesting that the latent representation computed might be biased toward a specific user "persona". To address this, we attempt to personalize our approach to improve performance.

#### 5.4.2 Personalization

Towards personalization, we applied within-subject normalization to capture user-specific behavior changes and computed user-specific thresholds \(\gamma^{u}\) from each individual's validation data. Detecting rare events using these personalized thresholds (PT) yields performance improvements for all threshold-based methods, as shown in Table 2. Overall, MTAD-PT is the best-performing method with an F1 of 0.34, a 0.05 increase over the general model. From Tables 1 and 2, we see that unsupervised methods like LSTM-VAE-PT and LSTM-ED-PT show the largest F1 score improvements, of 0.14 and 0.15, respectively. Interestingly, by personalizing MTAD, we observe a trade-off between precision and recall. Additionally, methods like IF directly predict an anomaly without thresholds and cannot be personalized without training. Our experiments show that methods like MTAD can achieve performance improvements simply by personalized thresholding without additional training, demonstrating its advantage in human-centered problems.
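At inference time, the personalization described above reduces to replacing the global 95th-percentile threshold with a per-user one; a small sketch, using hypothetical per-user validation scores and a hypothetical fallback to the global threshold, is shown below.

```python
import numpy as np

def personalized_thresholds(val_scores_by_user, q=95):
    """Compute a user-specific threshold gamma_u from validation anomaly scores."""
    return {user: np.percentile(scores, q)
            for user, scores in val_scores_by_user.items()}

def detect(delta, user, gamma_u, gamma_global):
    """Flag a rare life event if the scaled score exceeds the user's threshold.

    Falls back to the global threshold for users without validation scores
    (an assumption; the paper evaluates users with available validation data).
    """
    return delta > gamma_u.get(user, gamma_global)

# Hypothetical validation anomaly scores for three participants.
rng = np.random.default_rng(2)
val_scores = {u: rng.gamma(2.0, 1.0, size=200) for u in ("u01", "u02", "u03")}
gamma_u = personalized_thresholds(val_scores)
print(detect(9.0, "u02", gamma_u, gamma_global=6.0))
```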
\begin{table}
\begin{tabular}{l l l l} \hline \hline Algorithm & P (std) & R (std) & F1 (std) \\ \hline OCSVM & 0.32 (0.04) & 0.07 (0.00) & 0.12 (0.06) \\ IF & 0.22 (0.01) & 0.18 (0.01) & 0.20 (0.00) \\ LSTM-VAE & 0.28 (0.04) & 0.07 (0.01) & 0.12 (0.01) \\ DAGMM & 0.04 (0.01) & 0.11 (0.02) & 0.06 (0.01) \\ LSTM-ED & 0.25 (0.03) & 0.07 (0.01) & 0.11 (0.01) \\ MTAD & **0.47 (0.05)** & **0.21 (0.03)** & **0.29 (0.04)** \\ \hline \hline \end{tabular}
\end{table} Table 1: Comparison of MTAD with baselines using precision (P), recall (R), and F1-score (F1).

\begin{table}
\begin{tabular}{l l l l} \hline \hline Algorithm & P (std) & R (std) & F1 (std) \\ \hline LSTM-VAE-PT & 0.25 (0.02) & 0.27 (0.02) & 0.26 (0.02) \\ LSTM-ED-PT & 0.26 (0.02) & 0.26 (0.02) & 0.26 (0.01) \\ MTAD-PT & **0.33 (0.02)** & **0.35 (0.03)** & **0.34 (0.03)** \\ \hline \hline \end{tabular}
\end{table} Table 2: Comparison of personalized threshold models using precision (P), recall (R), and F1-score (F1).

#### 5.4.3 Effect of parameters

**Window size.** Optimal window size is critical to sufficiently discerning differences between normal and rare events. Smaller window sizes allow each day to have more impact, whereas larger ones spread importance across many days. Thus, we evaluate the performance of MTAD, LSTM-ED, MTAD-PT, and LSTM-ED-PT using four different window sizes \(l\in\{6,8,10,12\}\). We observe several interesting trends from Figures 3 and 4. First, the performance of MTAD and MTAD-PT increases with larger window sizes from 6 to 10. However, the difference in the F1 score of MTAD-PT at \(l=10\) and \(l=12\) (0.34 vs 0.34) is insignificant. Second, the performance of LSTM-ED deteriorates gradually with increasing window sizes. Conversely, the F1 score of LSTM-ED-PT increases, demonstrating the robustness of using user-specific thresholds. Third, by personalizing, we observe a trade-off between P and R for MTAD at all window sizes. For our problem, the higher recall is acceptable, and we use a window size of \(l=10\).

Figure 3: Comparison between general LSTM-ED and MTAD at different window sizes.

Figure 4: Comparison between LSTM-ED-PT and MTAD-PT at different window sizes.

**Decay constant.** The constant \(\lambda\) used in exponential weighting determines the intensity of decay: higher values drastically reduce the weight of days farther from the current day compared with lower values. We evaluated the sensitivity of MTAD and MTAD-PT at different decay constants \(\lambda\in\{0.5,2,5,10\}\) and identified no significant changes (Appendix A.4). Intuitively, we expect this behavior as the proposed method only magnifies the anomaly score of windows with at least one rare event, leaving windows with normal events unchanged.

#### 5.4.4 Utility of the sequence predictor

We evaluate the necessity of having both tasks by performing an ablation study at inference. After training, we treat the sequence predictor as a supervised classifier where a window \(W\) has a rare event at time \(t\) if the transition vector value \(r_{t-1}=1\), i.e., there is a workplace performance change between the two most recent days. The model obtained a P, R, and F1 of 0.52, 0.04, and 0.07, respectively. A 0.07 F1 score for the sequence predictor alone is poor compared to the baselines in Table 1. Additionally, this experiment shows that a combined multi-task model has superior performance compared to standalone methods.
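For reference, the ablation rule just described can be written in a few lines; the predicted label vector below is a hypothetical example.

```python
def rare_event_from_transition(y_pred):
    """Ablation: Task B alone flags a rare event at day t when the predicted
    workplace performance changes between the two most recent days."""
    return int(y_pred[-1] != y_pred[-2])

# Hypothetical predicted labels for a 10-day window.
y_pred = ["Unknown"] * 9 + ["Medium Negative Effect"]
print(rare_event_from_transition(y_pred))   # 1 -> flagged as a rare event
```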
#### 5.4.5 Analyzing event type and valence

We analyze the events identified by our model and observe that it is capable of identifying personal, work, health, and financial events (Table 3). These types of events directly affect the participant or their relatives. Our method is unable to identify societal and miscellaneous events. These events could be related to politics, sports, or other local activities that could indirectly affect mood. From Table 3, we observe that MTAD-PT is fairly balanced at detecting positive and negative events, with similar recall (0.40 vs 0.39).

\begin{table}
\begin{tabular}{l l l} \hline \hline Event Type & Total Events & Events Detected (R) \\ \hline \multicolumn{3}{l}{_Type_} \\ Personal & 92 & 41 (0.44) \\ Work & 69 & 21 (0.30) \\ Health & 14 & 7 (0.50) \\ Financial & 13 & 9 (0.69) \\ Societal & 8 & 0 (0.00) \\ Other & 2 & 0 (0.00) \\ \multicolumn{3}{l}{_Valence_} \\ Positive & 136 & 54 (0.40) \\ Negative & 62 & 24 (0.39) \\ \hline \hline \end{tabular}
\end{table} Table 3: Events detected and recall (R) using MTAD-PT, distributed by event type and valence.

## 6 Discussion

### Summary of results

Initially, Granger causality testing suggested that location features are crucial in detecting behavioral changes after an event. Perhaps both negative and positive events, such as the loss of a loved one or a vacation, result in changes in location dynamics. We observed that 46 time series were significant for location distance between 12am-6am. This is a considerable number from an extreme anomaly detection perspective, motivating us to build a multi-task learning model to detect rare LEs.

The results of rare life event detection presented in Section 5.4 highlight the advantages of our deep learning approach. A multi-task setup can be used to overcome the deficiencies of a purely unsupervised deep learning model. Specifically, the presence of a severe class imbalance (\(<2\%\)) can be addressed using our method. In comparison to Burghardt et al. (2021), we achieve a comparable F1 of 0.34 on a more challenging problem because of the aforementioned class imbalance (11.7% & 14.9% vs. 1.9%). Moreover, our method can detect both positive and negative LEs in addition to different event types such as personal, work, health, and financial (Burghardt et al., 2021). Our approach can be extended to other applications by appropriately identifying auxiliary tasks to positively transfer knowledge to the main task.

With the future vision of deploying models on mobile devices, it is imperative that models are not retrained on the phone. Higher computational costs of training models result in slower application performance. Consequently, human interaction with the application might be reduced. Personalizing the thresholds for each individual without re-training addresses this issue while simultaneously improving the performance of MTAD and other unsupervised models.

### Implications

Two major directions could greatly benefit from our work. First, as LEs are difficult to identify without explicit questions, detecting them using mobile phones is valuable. Second, interventions can alleviate the stressful impact of LEs.

**Detection.** Generally, LEs are identified only through self-reports. Automated detection in-the-wild is difficult because of the subtleties of human behavior. Our analysis of data from Android and iOS devices illustrates the effectiveness of using passive sensing data instead of conducting monthly interviews or similar alternative methods with employees.
Thus, mobile phones are a low-burden option offering continuous real-time sensing to detect rare LEs. Ultimately, we envision detection leading to helpful and empathetic interventions.

**Ubiquitous health interventions.** Individuals can use our detection algorithm for teletherapy and self-monitoring. First, in teletherapy, smartphone sensing applications can connect people to counselors, mentors, or therapists offering event-specific services. For example, a death in the family requires the expertise of a grief counselor, whereas a mentor can help tackle the stressors of a new job promotion. Applications like Talkspace and Betterhelp offer several services in this sector, and our methods can be integrated with similar applications. Second, our algorithm can be extended for self-monitoring, where an app tracks anomalous behaviors longitudinally, ultimately suggesting intervention strategies for significant behavioral deviations.

Organizations should be proactive in improving the mental health and wellness of their employees. Here, we describe three intervention scenarios using smartphones. First, having helpful co-workers reduces negative triggers. Our method may be used to detect a life event passively and provide incentives to an information worker to help colleagues. Second, Spector and Fox (2002) suggest that emotions about control over a stressful event affect health. Thus, emotion regulation strategies such as cognitive re-appraisal (re-interpreting a situation) can be prompted to the user through mobile apps (Katana et al., 2019). Third, organized leisure crafting has been shown to positively affect mental health and can be used as an intervention tool (Ugwu, 2017).

### Limitations

Some limitations of this work should be noted. First, we do not address the "cold-start" problem of evaluating performance for an unseen user. Thus, our model with personalization requires user behavioral data to construct specific thresholds and latent spaces for detecting LEs in the future. Second, it would be useful to understand which mobile features contribute most to detection. The latent features constructed by autoencoders are unobserved feature spaces compressed into a low-dimensional representation for reconstruction and prediction. Therefore, interpretation of these features is not straightforward, and the additional scaling from the auxiliary task further hinders this ability. Third, some rare events cannot be detected because of their similarity to normal events. In essence, there are several confounding factors that may or may not elicit a behavioral change in an individual. For example, if a person's responsibilities are similar after a job promotion, their routine and actions might not be significantly different. Conversely, it is also possible that normal days are anomalies and not related to the events themselves.

### Ethical considerations

While monitoring worker behavior has benefits for health, it also highlights the need for ethical data usage. For instance, organizations analyzing mobile phone data should use informed consent to protect private data. The primary intention of life event detection must be to offer help and support. Nevertheless, sensing data can be used adversarially to monitor employee productivity to maximize benefits for the organization. This sacrifices trust between employee and organization while damaging the interaction between people and technology. We do not advocate the use of mobile sensing methods like event detection in these scenarios.
Future studies must collect data only through transparent and ethical processes to protect employees' data. Moreover, extreme anomaly detection has higher error rates owing to its challenging nature. Therefore, having a human-in-the-loop is a necessity. We discuss two scenarios of good and bad interventions.

**Scenario 1.** A recently promoted information worker is overwhelmed and unable to meet product goals. A **good** intervention offers resources for mentorship and stress management. A **bad** intervention gives ultimatums that affect job security.

**Scenario 2.** An employee recovering from a physical injury struggles to keep up with their old workload. A **good** intervention connects them to an accessibility expert or counselor to help them with their specific issues. A **bad** intervention monitors the employee's performance and puts them on a performance review.

## 7 Conclusion

In this paper, we showed that mobile sensing can address the challenging task of detecting rare LEs in-the-wild. We proposed MTAD, a multi-task framework for life event detection using behavioral sensing data from mobile phones. MTAD's use of an auxiliary sequence predictor addresses several challenges, such as extreme class imbalance (\(<2\%\)) and biased reconstruction scores. We demonstrated the superior performance of our approach on a real-world longitudinal dataset by comparing it with state-of-the-art baselines. From a human-centered perspective, MTAD's effectiveness in personalization without additional training, robustness to the decay constant, and balanced prediction of positive and negative events are desirable qualities. Ultimately, we envision that our work will motivate ubiquitous health-based intervention strategies through smartphones.

## Acknowledgments

This work is supported in part by the Army Research Laboratory (ARL) under Award W911NF202011. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO or the U.S. Government.
2309.12457
Time-Reversal Symmetry Breaking Superconductivity in CaSb$_2$
CaSb$_2$ is a bulk superconductor and a topological semimetal, making it a great platform for realizing topological superconductivity. In this work, we investigate the superconducting upper and lower critical field anisotropy using magnetic susceptibility, and study the superconducting state using muon spin-relaxation. The temperature dependence of transverse-field relaxation rate can be fitted with a single-gap model or two-gap model. Zero-field relaxation shows little temperature dependence when the muon-spin is parallel to the $c*$-axis, while an increase in relaxation appears below 1 K when the muon-spin is parallel to the $ab$-plane. We conclude an $s+is$ order parameter considering the breaking of time-reversal symmetry (TRS), which originates from competing interband interactions between the three bands of CaSb$_2$. To explain the direction-dependent breaking of TRS we suggest loop currents developing in the plane of distorted square-net of Sb atoms.
M. Oudah, Y. Cai, M. V. De Toro Sanchez, J. Bannies, M. C. Aronson, K. M. Kojima, D. A. Bonn
2023-09-21T20:00:23Z
http://arxiv.org/abs/2309.12457v3
Critical Field Anisotropy and Muon Spin Relaxation Study of Superconducting Dirac-Semimetal CaSb\({}_{2}\) ###### Abstract CaSb\({}_{2}\) has been identified as a bulk superconductor and a topological semimetal, which makes it a great platform for realizing topological superconductivity. In this work, we investigate the superconducting upper and lower critical field anisotropy using magnetic susceptibility, and study the superconducting state using muon spin-relaxation. The temperature dependence of transverse-field relaxation can be fitted with a single-gap model or two-gap model, consistent with previous tunnel-diode oscillator measurements. We highlight that the normal state of CaSb\({}_{2}\) shows a large diamagnetic signal, which is likely related to its Dirac semimetal nature. Zero-field relaxation shows little temperature dependence when the muon-spin is parallel to the \(c\)-axis, while an increase in relaxation appears below 1 K when the muon-spin is parallel to the \(ab\)-plane. This may be related to a second superconducting phase appearing at low temperature below the bulk \(T_{c}\). However, we find no discernible anomaly in \(\mu_{0}H_{\rm c1}(0)\) around this temperature as has been seen in other superconductors with secondary superconducting states that appear at lower temperatures. ## I Introduction When a material enters the superconducting state it breaks U(1) gauge symmetry, and breaking of any additional symmetries is typically an indication of unconventional superconductivity [1]. In some unconventional superconductors, time-reversal symmetry (TRS) is broken as the material enters the superconducting state as proven by detection of spontaneous magnetic fields below the onset of superconductivity. This spontaneous magnetic field has been detected in zero-field muon relaxation measurements in unconventional superconductors such as UPt\({}_{3}\)[2] and Sr\({}_{2}\)RuO\({}_{4}\)[3; 4]. Spontaneous magnetic fields can emerge in non-centrosymmetric superconductors, where the lack of inversion symmetry results in a spin-split Fermi surface due to antisymmetric spin-orbit coupling (SOC) and a singlet-triplet mixing in the superconducting state [5], such as LaNiC\({}_{2}\)[6], Re\({}_{2}\)Zr [7], and La\({}_{7}\)Ir\({}_{3}\)[8]. This breaking of TRS can even appear in centrosymmetric multi-gap superconductors, for specific compositions, such as the case in FeSe [9], Fe(Se,Te) [10] and Ba\({}_{1-x}\)K\({}_{x}\)Fe\({}_{2}\)As\({}_{2}\)[11; 12] and single \(s\)-wave gap locally non-centrosymmetric SrPtAs and CaPtAs with strong SOC [13; 14]. In PrOs\({}_{4}\)Sb\({}_{12}\) TRS breaking appears below \(T_{\rm c}\) and is discussed in relation to nonmagnetic quadrupolar fluctuations in the normal state [15]. In the Sr-doped topological insulator Bi\({}_{2}\)Se\({}_{3}\), TRS breaking has been discussed in relation to the anisotropic Dirac cone dispersion in the normal state band structure of doped Bi\({}_{2}\)Se\({}_{3}\) allowing for triplet pairing [16]. In all of the above materials, the onset of TRS breaking coincides with \(T_{\rm c}\) under ambient conditions and none are reported to break TRS in manner such that it can only be detected when muon spins are in a specific direction. CaSb\({}_{2}\) belongs to the family of non-symmorphic antimonides \(M\)Sb\({}_{2}\) (\(M\)= Alkaline-Earth, Rare-Earth) containing screw rotation symmetry. 
CaSb\({}_{2}\) is a compensated semimetal [17; 18] and the calculated Fermi-surface demonstrating compensation supports CaSb\({}_{2}\) being a topological nodal-line semimetal [17; 19] due to the non-symmorphic space group 11, \(P2_{1}/m\)[20]. The compensated semimetal state is related to three bands crossing the Fermi level, two electron-like bands dominated by contributions from the Sb site forming a distorted square-net and a hole-like band dominated by contributions from the other Sb site forming a zig-zag chain along the \(b\)-direction [17]. Superconductivity was discovered recently in polycrystalline samples [21], and further confirmed in single crystal samples [17]. Recently, the anisotropy of the upper critical field of CaSb\({}_{2}\) based on resistivity measurements and the lower critical field estimate based on magnetization at 0.55 K have been reported [22]. The specific heat transition in single crystal samples suggests deviation from a single \(s\)-wave gap, with the possibility of multiple gaps in the superconduct ing state [17]. A coherence peak was observed near \(T_{\rm c}\) in Sb-nuclear quadrupole resonance (NQR) suggesting \(s\)-wave superconductivity [23], at least near \(T_{\rm c}\). Using a tunnel diode oscillator (TDO), the temperature dependence of the penetration depth in CaSb\({}_{2}\) reveals the presence of multiple gaps and exponential behaviour at low temperature that indicates nodeless superconductivity [24]. These reports warrant further investigations of the anisotropy of the superconducting state of CaSb\({}_{2}\) and studying the superconductivity using a local probe such as muon spin rotation/relaxation (\(\mu\)SR). Here we report the critical field anisotropy from magnetic susceptibility, estimate the penetration depth and the temperature dependence of the superconducting gap from transverse-field (TF) muon data, and find evidence for TRS breaking below \(T_{\rm c}\) in zero-field (ZF) \(\mu\)SR only when muons spin lies in the \(ab\)-plane, suggesting a spontaneous field perpendicular to the \(ab\)-plane emerges in CaSb\({}_{2}\). The breaking of TRS appears below \(\sim 1\) K, which is well below the \(T_{\rm c}\) of 1.6 K. The observation of TRS breaking despite the lack of magnetism or strong spin-orbit-coupling in CaSb\({}_{2}\) is intriguing, and may be related to the topological nature of the superconducting state reflecting the topologically non-trivial normal state band structure. ## II Methods Single crystals of CaSb\({}_{2}\) were grown using an Sb self flux method as described in Ref. [17]. This yields shiny, plate-like crystals with dimensions up to \(3\times 3\times 0.5\,\rm{mm}^{3}\) that are stable in air for several weeks. Phase purity and orientation of the crystals were checked by X-ray diffraction (XRD) using a Bruker D8 with Cu K\(\alpha_{1}\) radiation (1.54056 A). We confirmed bulk superconductivity in our samples using magnetization and specific heat measurements, which exhibited superconducting transitions at \(1.6\pm 0.02\) K. Measurements of the magnetic susceptibility were done using a Magnetic Property Measurements System 3 (MPMS3) from Quantum Design, equipped with a \({}^{3}\)He insert. Muon spectroscopy measurements were performed at the M15 beamline at TRIUMF's Centre for Molecular and Material Science, which is equipped with a dilution refrigerator. 
Multiple crystals were mounted on a silver plate with GE varnish, then the silver plate was attached to a cold finger utilizing copper filled grease for good thermal conductivity in the dilution refrigerator. We further secured the samples with a thin silver foil over the samples before mounting them into the dilution refrigerator. We achieved true-zero field for zero-field (ZF) \(\mu\)SR using the method previously described by Morris and Heffner [25] to accurately detect any spontaneous fields. We used non-spin-rotated mode for the ZF-\(\mu\)SR measurements and spin-rotated mode for the TF-\(\mu\)SR measurements. In spin-rotated mode the muon spins are rotated perpendicular to the beam velocity, spin lying in the \(ab\)-plane, before landing in the sample, and the field is applied to the sample along the beam direction, perpendicular to the \(ab\)-plane. The \(\mu\)SR data were analyzed with musrfit software [26] to obtain physical parameters. ## III Results and Discussion ### Critical Field Anisotropy Measurements of the anisotropy of the field-dependent dc magnetic susceptibility around the superconducting transition were performed down to 0.4 K, as shown in Fig. 1(a) and (b). The temperature dependent magnetic susceptibility was measured for fields applied parallel or perpendicular to the plate of the crystal, where the \(ab\)-plane lies in the plane of plate-like crystals. We define \(c*\) as the direction perpendicular to the \(ab\)-plane for this monoclinic crystal structure. In both cases, the transition temperature was defined as the 5% volume drop in the volume susceptibility where the 100% volume was defined as the signal in the lowest field measured (1 mT). Figure 1: Temperature dependence of the dc susceptibility measured with different applied fields using a zero-field-cooling procedure measured with \(H//c*\) and \(H//ab\) shown in (a) and (b), respectively. Data represented as volume susceptibility, \(\chi_{\rm V}\), and data in (b) are from Ref [17]. (c) The temperature dependence of the upper critical field \(\mu_{0}H_{c2}\) determined from \(\chi_{\rm V}\) with \(H//c*\) and \(H//ab\), from resistivity [17], and from TF-\(\mu\)SR with \(H//c*\). Solid lines are WHH fits described in the text to the data used to estimate \(\mu_{0}H_{c2}(0)\). The demagnetization correction based on the Brandt formula [27] yields a 100% volume fraction for the case of \(H//ab\), but the same correction results in a 30% volume fraction in the case of \(H//c*\). This is due to our plate-like crystals not being perfect rectangular slabs, as assumed in the calculation of the demagnetization factor. For the \(H//c*\) direction, we take the 5% volume drop relative the signal measured in the lowest field being the 100% volume fraction, as shown in Fig. 1(a). The upper critical field was estimated using the Werthamer-Helfand-Hohenberg (WHH) relation [28] for the data as shown with the lines in Fig. 1(c). The temperature dependence of \(H_{c2}^{ab}\) is typical of a type-II superconductor, as the transition moves to lower and lower field values upon increasing the temperature. Similar behavior is observed for measurements with the other field orientation \(H_{c2}^{c}\). The estimated upper critical fields are \(H_{c2}^{c}=8.1\) mT and \(H_{c2}^{ab}=24.7\) mT, thus yielding an anisotropy ratio \(\gamma_{\rm anisotropy}=H_{c2}^{ab}/H_{c2}^{c}\) in CaSb\({}_{2}\) of about 3.1. 
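For readers reproducing this type of analysis, a common shortcut to the full WHH fit is the single-band dirty-limit estimate \(\mu_{0}H_{c2}(0)\approx-0.693\,T_{\rm c}\,(d\mu_{0}H_{c2}/dT)|_{T_{\rm c}}\); the sketch below applies it to hypothetical \((T_{\rm c},\mu_{0}H_{c2})\) pairs and is not the full WHH expression used for the solid lines in Fig. 1(c).

```python
import numpy as np

def whh_hc2_zero(tc_vals, hc2_vals):
    """Dirty-limit WHH estimate: mu0*Hc2(0) ~ -0.693 * Tc * dHc2/dT at Tc.

    tc_vals  : transition temperatures (K) measured in different fields
    hc2_vals : corresponding applied fields (mT)
    """
    slope, intercept = np.polyfit(tc_vals, hc2_vals, 1)   # dHc2/dT near Tc
    tc0 = -intercept / slope                               # Tc at zero field
    return -0.693 * tc0 * slope, tc0

# Hypothetical points near Tc (values for illustration only).
tc = np.array([1.55, 1.40, 1.25, 1.10])
hc2 = np.array([2.0, 6.0, 10.0, 14.0])
hc2_zero, tc0 = whh_hc2_zero(tc, hc2)
print(f"mu0*Hc2(0) ~ {hc2_zero:.1f} mT, Tc ~ {tc0:.2f} K")
```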
We estimate the coherence length based on the upper critical field, and obtain 202 nm and 66 nm for \(\xi_{\rm GL,ab}\) and \(\xi_{\rm GL,c}\), respectively. We note that deviation from WHH relation near \(T_{\rm c}\), where \(\mu_{0}H_{\rm c2}(0)\) increases slowly with decreasing temperature, is consistent with nodal-line \(s\)-wave superconductivity [29]. This warrants further investigation in future studies. To estimate the lower critical field \(\mu_{0}H_{\rm c1}(0)\) we measured the field-dependent magnetization (\(M\)) at different temperatures below the critical temperature in both directions, as shown in Fig. 2(a) and (d). We use a linear fit to the low-field data measured at 0.4 K in each direction, and subtract this fit from all the measured curves for measurements in each direction, as demonstrated in the inset of Fig. 2(a). The value of the uncorrected \(\mu_{0}H_{\rm c1}^{*}(0)\) at each temperature was determined based on the intersect of a linear fit to the upturn of this subtracted data, \(M-M_{fit}\), and the horizontal line where \(M-M_{fit}=0\) marked in the inset of Fig. 2(a) for 0.4 K and 1.5 K. The \(\mu_{0}H_{\rm c1}^{*}(0)\) plotted against the corresponding temperature for the \(H//c*\) and \(H//ab\) direction are shown in Fig. 2(b) and (e), and the same data plotted against temperature squared are shown in Fig. 2(c) and (f). We note that the temperature dependence of \(\mu_{0}H_{\rm c1}^{*}(0)\) is fitted well by the equation [30] \[\mu_{0}H_{c1}^{*}(T)=\mu_{0}H_{c1}^{*}(0)\left[1-\left(\frac{T}{T_{c}}\right) ^{2}\right] \tag{1}\] where \(\mu_{0}H_{\rm c1}^{*}(0)\) is the lower critical field at 0 K, where fits are shown with green line in Fig. 2(c) and (f). Although the data fit the equation above well, we attempt two independent fits to the temperature regions above and below 1 K in search of any anomaly that can be reconciled with our zero-field \(\mu\)SR. Enhancement of the lower critical field was observed in UPt\({}_{3}\)[31] and PrOs\({}_{4}\)Sb\({}_{12}\)[32], and this was related to the emergence of a second super Figure 2: The magnetization (\(M\)) as a function of applied field \(\mu_{0}H\) measured at different temperatures \(T\) below \(T_{c}\) with field applied parallel to \(c*\) direction and \(ab\)-plane shown in (a) and (d), respectively. A degaussing procedure was carried out between measurements, and a linear fit was applied to the low field region of the 0.4 K data. Lower critical field \(\mu_{0}H_{\rm c1}^{*}(0)\) as a function of temperature (\(T\)) for \(H//c*\) and \(H//ab\), shown in (b) and (e), estimated using the magnetization data in (a) and (d) by subtracting the linear fit to 0.4 K data from all the curves, shown in inset of (a). Linear fit is applied to the upturn data and the intersect is defined as \(\mu_{0}H_{\rm c1}^{*}(0)\). (c) and (f) show \(\mu_{0}H_{\rm c1}^{*}(0)\) as a function of \(T^{2}\) and fits to the data \(<1\) K (dashed blue line), \(>1\) K (dashed red line), and over the entire temperature range are shown (solid green line). conducting phase at low temperature. We discuss this further below in Sec. III.3. The typical equations used for estimating the penetration depth (\(\lambda\)) from \(\mu_{0}H_{\mathrm{c1}}(0)\) do not have a solution for the values measured in our experiment. 
Instead, we estimate the \(\mu_{0}H_{\mathrm{c}}(0)\) from \(\mu_{0}H_{\mathrm{c1}}(0)\) and \(\mu_{0}H_{\mathrm{c2}}(0)\) with the following equation [33]: \[\mu_{0}H_{c}=\sqrt{\mu_{0}H_{\mathrm{c1}}\times\mu_{0}H_{\mathrm{c2}}} \tag{2}\] Here we assume the \(\mu_{0}H_{\mathrm{c1}}(0)\) values for \(H//c*\) and \(H//ab\) are equivalent to the \(\mu_{0}H_{\mathrm{c1}}^{*}(0)\) measured, without applying any demagnetization correction. This is due to the difficulty of calculating the demagnetization correction for the \(H//c*\). The value of thermodynamic critical field \(\mu_{0}H_{\mathrm{c}}(0)\) is expected to be equivalent for both directions, but does not match for the current measurements on our samples. We take the average in both directions and estimate \(\mu_{0}H_{\mathrm{c}}(0)=6.0\pm 0.5\) mT, as shown in Table 1, which is consistent with the value previously reported using the integral of \(M(H)\) curve [22]. Despite this consistency of \(\mu_{0}H_{\mathrm{c}}(0)\), we suspect that the currently measured lower critical field values are inaccurate and should be the subject of future studies. We estimate the Ginzburg-Landau parameter for each direction with the following equation: \[\mu_{0}H_{c2}=\sqrt{2}\kappa_{GL}\mu_{0}H_{c} \tag{3}\] where \(\kappa_{GL}\) is the Ginzburg-Landauer parameter. We summarize our characterization of the superconducting state based on magnetic susceptibility measurements in Table 1. We estimate the upper critical field \(H//c*\) extracted from the 50% drop in resistivity measurement, previously published by some of the current authors [17], to be about \(11.5\pm 0.8\) mT in Fig. 1. This value is different from that reported recently by another group [22], which may be due to different sample quality or due to different current densities in the resistivity measurements. Nevertheless, the anisotropy of the upper critical field of \(\sim 3\) from the resistivity measurements reported [22] is consistent with our anisotropy estimates based on magnetic susceptibility measurements. Here we note that for \(H//c*\), it seems that the superconducting state of CaSb\({}_{2}\) is at the border of type-I/type-II regime. This poses challenges for our muon measurement as highlighted below. ### Transverse-Field Muon Study We perform transverse field \(\mu\)SR measurements on CaSb\({}_{2}\) with applied magnetic field perpendicular to the \(ab\)-plane, which can be used to determine the temperature dependence of the penetration depth. With the temperature dependence of the penetration depth we can infer properties of the energy gap of the superconducting state of CaSb\({}_{2}\). For these measurements we extract the field from the precession of muons inside the background silver, which is the most precise way of measuring the applied field. We performed the measurement with applied fields ranging from \(1.3-11.5\) mT, and all measurements were performed on samples cooled in the applied field. We employed a beam of muons polarized such that the spin is perpendicular to their momentum, while the applied field is parallel to the momentum. The spin of the muon precesses with a frequency proportional to the local magnetic field, and, upon decay, emits a positron preferentially in the muon spin direction. Typically with a field that is above the lower critical field, we expect a well ordered flux line lattice when using a field cooling procedure. Typical time evolution of the asymmetry for CaSb\({}_{2}\) is shown in Fig. 
3(a), measured in 5.8 mT at 2.00 K and 0.03 K, above and below \(T_{\mathrm{c}}\) respectively. In the mixed state, we have an inhomogenous field distribution due to the presence of a flux line lattice (FLL), which results in a decay of the precession signal as a function of time. We fit the asymmetry spectra using a two term sinusoidal decaying function \[G_{\mathrm{TF}}(t) =A\left[F\exp\left(\frac{-\sigma^{2}t^{2}}{2}\right)\cos\left( \omega_{1}t+\phi\right)\right.\] \[\left.+(1-F)\exp(-\psi t)\cos\left(\omega_{2}t+\phi\right)\right]\] where the first term captures the signal from muons stopping in the sample and the second term captures the signal from muons stopping in the silver sample holder. \(F\) is the fraction of the signal coming from the sample, while \(\omega_{1}\) and \(\omega_{2}\) are the muon precession frequencies in the sample and the background, respectively. The \(A\) term is the total asymmetry and the \(\phi\) is the initial phase of the muons. \(\sigma\) and \(\psi\) are the depolarization rates for the sample and the background signals, respectively. The \(\sigma\) term contains a contribution from the field distribution caused by the vortex lattice in the superconducting state (\(\sigma_{sc}\)) and the smaller, temperature independent, contribution from randomly oriented nuclear dipole moments (\(\sigma_{N}\)). These two signals are added in quadrature, such that the contribution from the FLL can be obtained as \(\sigma_{sc}=\sqrt{\sigma^{2}-\sigma_{\mathrm{N}}^{2}}\). The superconducting relaxation rate (\(\sigma_{sc}\) \begin{table} \begin{tabular}{|c c c|} \hline **Direction** & \(ab\) & \(c*\) \\ \hline \(\mu_{0}H_{\mathrm{c1}}(0)\) & \(2.1\pm 0.4\) mT & \(2.8\pm 0.4\) \\ \(\mu_{0}H_{\mathrm{c2}}(0)\) & \(24.7\pm 0.8\) mT & \(8.1\pm 0.8\) mT \\ \(\mu_{0}H_{\mathrm{c}}(0)\) & \(6.0\pm 0.5\) mT \\ \(\xi_{\mathrm{GL}}\) & 202 nm & 66 nm \\ \(\kappa_{\mathrm{GL}}\) & \(2.90\pm 0.26\) & \(0.95\pm 0.12\) \\ \hline \end{tabular} \end{table} Table 1: Superconducting parameters derived from our measurements of CaSb\({}_{2}\). \(\mu_{0}H_{\mathrm{c1}}(0)\) is the average of that calculated for each direction based on Eq. 2. indicates the mean square inhomogeniety in the field experiend by muons, \(\left\langle\left(\Delta B\right)^{2}\right\rangle,\) due to the FLL [34], where \(\left\langle\left(\Delta B\right)^{2}\right\rangle=\left\langle\left(B-\left\langle B \right\rangle\right)^{2}\right\rangle\), which results in the relaxation rate for the FLL \[\sigma_{sc}^{2}=\gamma_{\mu}^{2}\left\langle\left(\Delta B\right)^{2}\right\rangle\] where \(\gamma_{\mu}(=2\pi\times 135.5\)MHz/T) is the muon gyromagnetic ratio. The Fourier power against internal magnetic field, shown in Fig. 3(b), shows a large peak corresponding to \(\omega_{2}\) of the silver sample holder. The relaxation rate \(\sigma\) as a function of temperature extracted from TF-\(\mu\)SR in various fields for CaSb\({}_{2}\) is plotted in Fig. 3(c). We extract \(T_{\text{c}}\) from the TF-\(\mu\)SR in various fields, where we define \(T_{\text{c}}\) as the intersection of the line of best fit with sharpest slope in the transition seen in \(\sigma\) and the normal state nuclear contribution \(\sigma_{\text{N}}\sim 0.210\). We calculate the expected nuclear magnetic moment by first estimating the muon stopping site based on the Hartree potential, where we find the preferred muon stopping site is (0.65,0.75,0.15). 
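Outside of musrfit, the two-term fitting function above can be reproduced with a generic least-squares fit; the sketch below uses a hypothetical synthetic spectrum and illustrative parameter values, with \(\sigma_{sc}\) obtained by subtracting the nuclear contribution in quadrature as described in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def tf_asymmetry(t, A, F, sigma, omega1, psi, omega2, phi):
    """Two-component TF model: Gaussian-damped sample term plus
    exponentially damped background (silver) term."""
    sample = F * np.exp(-0.5 * sigma**2 * t**2) * np.cos(omega1 * t + phi)
    background = (1.0 - F) * np.exp(-psi * t) * np.cos(omega2 * t + phi)
    return A * (sample + background)

# Hypothetical spectrum: 5.8 mT applied field, gamma_mu/2pi = 135.5 MHz/T.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 2000)                       # microseconds
omega = 2 * np.pi * 135.5 * 5.8e-3                    # rad/us in the applied field
truth = tf_asymmetry(t, A=0.22, F=0.6, sigma=0.6, omega1=0.97 * omega,
                     psi=0.02, omega2=omega, phi=0.3)
data = truth + rng.normal(scale=0.005, size=t.size)

p0 = [0.2, 0.5, 0.5, 0.95 * omega, 0.05, omega, 0.2]
popt, _ = curve_fit(tf_asymmetry, t, data, p0=p0, maxfev=20000)
sigma_fit = abs(popt[2])

# Superconducting contribution: subtract the nuclear term in quadrature.
sigma_n = 0.210                                       # us^-1, normal-state value
sigma_sc = np.sqrt(max(sigma_fit**2 - sigma_n**2, 0.0))
print(f"sigma = {sigma_fit:.3f} us^-1, sigma_sc = {sigma_sc:.3f} us^-1")
```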
Based on the magnetic active nuclei, only Sb in our case, we find an expected nuclear dipolar field of 3.5323 \(\mu\)N. This corresponds to a \(\sigma_{\text{N,calc}}\sim 0.210\)\(\mu\)s, in agreement with the value measured experimentally, as shown in Fig. 3(c). The applied magnetic fields, as extracted from the precession of muons in the Ag sample holder \(\omega_{2}\), are plotted against \(T_{\text{c}}\) in Fig. 1(c), and we fit the WHH relation to obtain the \(\mu_{\text{\tiny{H}}}\)= 5.8 mT upper critical field from TF-\(\mu\)SR as \(10.5\pm 0.4\) mT. This \(\mu_{0}H_{\text{c2}}(0)\) value is consistent with estimates based on 50% drop in resistivity and those extracted from magnetization measurement with field applied perpendicular to the \(ab\)-plane. From the field dependence of \(\sigma\) measured well below \(T_{\text{c}}\) at \(25\pm 5\) m, shown in Fig. 4(b), we find a peak at low fields. The FLL state is only realized at fields well above this peak region, where ideally in strong type-II superconductors we expect a relatively weak field dependence above this peak and below the upper critical field. Since CaSb\({}_{2}\) is barely type-II, we do not have a wide range of weak field dependence, but nevertheless choose 5.8 mT, which is well above the peak position, as a field representing the highest likelihood of realizing a homogeneous FLL state. For fields approaching the upper critical field, \(\left[H/H_{c2}=0.5\right]\), the penetration depth can be calculated from the relaxation rate using the Brandt formula [35] for a triangular Abrikosov vortex lattice: \[\sigma_{\text{sc}}(T)=\frac{0.0609\times\gamma_{\mu}\phi_{0}}{\lambda^{2}(T)}\] where \(\sigma_{\text{sc}}(T)\) is in \(\mu\)s\({}^{-1}\) and \(\lambda(T)\) is in nm. \(\phi_{0}\) is the magnetic flux quantum, \(\left(2.067\times 10^{-15}\text{ Wb}\right)\). We can relate the temperature dependence of the relaxation rate to the penetration depth with \[\frac{\sigma_{sc}(T)}{\sigma_{sc}(0)}=\frac{\lambda^{-2}(T)}{\lambda^{-2}(0)}.\] The temperature dependence of the energy gap \(\Delta(T,\hat{k})\) within BCS theory [36] is given by: \[\Delta(T,\hat{k})=\Delta(0)\tanh\left\{1.82\left(1.018\left(\frac{T_{c}}{T}-1 \right)\right)^{0.51}\right\}g_{\hat{k}}\] where \(\Delta(0)\) is the gap magnitude at zero temperature, and the \(g_{\hat{k}}\) term accounts for the orientation (\(\hat{k}\)) dependence of the gap function, which can, for example, be substituted with 1 for an \(s\)-wave model and \(|\cos(2\phi)|\) for a \(d\)-wave model, where \(\phi\) is the azimuthal angle. CaSb\({}_{2}\) has a coherence length over normal state mean free path of about 1.78 [17], which places it at the border between clean and dirty limit. The temperature dependence of the superconducting gap can be obtained from the temperature dependence of the penetration depth in the clean limit with the relation Figure 3: Representative TF-\(\mu\)SR signals collected above and below \(T_{\text{c}}\) in CaSb\({}_{2}\) under an applied magnetic field of 5.8 mT. The solid lines are fits using the sinusoidal decaying function described in the text. (b) The Fourier transform of the M15 \(\mu\)SR Asymmetry for measurements in different applied magnetic fields at \(\sim\) 30 mK, representing the field distribution of the local field probed by muons. The sharp peaks in the data indicate the applied field experienced by muons stopping in the silver cold finger. 
The broad features at lower/higher internal fields represents the muons stopping in the CaSb\({}_{2}\) sample. (c) The muon Gaussian relaxation rate \(\sigma\) as a function of temperature in different applied magnetic fields. \[\frac{\lambda^{-2}(T)}{\lambda^{-2}(0)}=1+2\left\langle\int_{|\Delta(T,\hat{k})|}^ {\infty}\left(\frac{\delta f}{\delta E}\right)\frac{EdE}{\sqrt{E^{2}-\Delta^{2}( T,\hat{k})}}\right\rangle\] while in the dirty limit with the relation \[\frac{\lambda^{-2}(T,\hat{k})}{\lambda^{-2}(0)}=\left\langle\frac{\Delta(T, \hat{k})}{\Delta(0)}\tanh\left[\frac{\Delta(T,\hat{k})}{2k_{B}T}\right]\right\rangle\] where \(f=\left[1+\exp\left(E/k_{B}T\right)\right]^{-1}\) is the Fermi function, and quantities in brackets are the average over the Fermi surface. Considering previous specific heat measurements showing deviation from single-gap BCS [17] and that tunnel diode oscillator measurements on CaSb\({}_{2}\) are better fitted with a two-gap model, we utilized a two-gap model fit for our muon data where the total depolarization is expressed as the sum of two components: \[\frac{\sigma_{FLL}^{-2}(T)}{\sigma_{FLL}^{-2}(0)}=x\frac{\sigma_{FLL}^{-2} \left(T,\Delta_{0,1}\right)}{\sigma_{FLL}^{-2}\left(0,\Delta_{0,1}\right)}+(1 -x)\frac{\sigma_{FLL}^{-2}\left(T,\Delta_{0,2}\right)}{\sigma_{FLL}^{-2}\left( 0,\Delta_{0,2}\right)}\] where \(\Delta_{0,1}\) and \(\Delta_{0,2}\) are the gap values at zero temperature and \(x\) is the fraction of gap-1 over the sum of two gaps. We fit the gap using an \(s+s\) wave model as shown in Fig. 4(a). Assuming the zero-field tunnel diode oscillator measurement performed on CaSb\({}_{2}\) is representative of our samples, supported by the similarity of the specific heat data reported in the same paper [24] with our specific heat measurement on our samples [17], we accept the presence of two gaps in zero field in CaSb\({}_{2}\). Two gaps with the same value as reported (\(\Delta_{1}(0)/k_{B}T_{\rm c}\)= 1.8 and \(\Delta_{2}(0)/k_{B}T_{\rm c}\)= 0.81) [24] and single gap (\(\Delta(0)/k_{B}T_{\rm c}\)= 1.59) fit are shown in Fig. 4(a) in red and black, respectively. We note that for data measured in 5.8 mT a single \(s\)-wave gap is sufficient to fit the data, and the two-gap model does not significantly improve the fit. We note that the evolution of two gaps with applied magnetic fields has been demonstrated in measurements on NbSe\({}_{2}\)[37], and a similar evolution of the two gaps with applied magnetic field may appear in other multi-gap superconductors, such as CaSb\({}_{2}\). However, such a conclusion cannot be drawn from the current \(\mu\)SR data due to the large error bars. The strong magnetic field dependence of (\(\sigma_{sc}\)) in CaSb\({}_{2}\), shown in Fig. 4(b), is associated with the low Figure 4: (a) The temperature dependence of superconducting contribution to the relaxation rate \(\sigma_{\rm SC}\) (green symbols), and the fits with a single \(s\)-wave gap and two \(s\)-wave gaps in black and red, respectively. (b) The muon Gaussian relaxation rate \(\sigma\) as a function of applied magnetic field at base temperature (\(\sim 25\pm 5\) mK) and the average at low temperature, below \(0.25T_{\rm c}\) for each curve in Fig. 3(c). Figure 5: Uemura plot showing the superconducting transition temperature \(T_{\rm c}\) vs the Fermi temperature \(T_{\rm F}\), where CaSb\({}_{2}\) is shown as blue triangle assuming a 2D and 3D nature of charge carriers. 
The Bose-Einstein condensation temperature is shown as the \(T_{\rm B}\) line and the Fermi temperature as the \(T_{\rm F}\) line. Data for materials from the literature are also plotted [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. We highlight the region of unconventional superconductors in grey at the top left and conventional superconductors at the bottom right. The strong magnetic field dependence of \(\sigma_{\rm sc}\) in CaSb\({}_{2}\), shown in Fig. 4(b), is associated with the low \(\mu_{0}H_{c2}(0)\) compared with the applied fields, and may be related to the faster suppression of the smaller gap's contribution to the superfluid density. Such behavior has been discussed in other two-gap superconductors such as NbSe\({}_{2}\)[50], MgB\({}_{2}\)[51], and SmFeAsO\({}_{0.85}\)[52]. We notice a peak in the low-temperature field dependence around 3.3 mT; such a peak typically appears around \(\mu_{0}H_{c1}(0)\). The likely presence of multiple gaps, along with the possibility of gap anisotropy, can affect the temperature dependence of the FLL relaxation, and the high field compared with the upper critical field used in our TF-\(\mu\)SR experiments makes it difficult to make a definitive statement on the gap symmetry based on our relaxation rate data. The relatively high field used in our TF-\(\mu\)SR experiment also makes the corrections needed to extract the penetration depth from the relaxation rate data difficult, due to the likely distortions of the FLL state in our case. Nevertheless, we give an estimate of the penetration depth \(\lambda_{ab}=426\) nm. We compare the superconducting state in CaSb\({}_{2}\) with other known superconductors using the well-known Uemura plot. We plot the superconducting transition temperature against the Fermi temperature for CaSb\({}_{2}\) along with various other superconductors in Fig. 5, where we highlight the region of unconventional superconductors in grey at the top left and conventional superconductors at the bottom right. Considering the quasi-2D nature of CaSb\({}_{2}\), we estimate the Fermi temperature assuming a 2D system via the relation \(T_{F}=\hbar^{2}\pi n_{2D}/(k_{B}m^{*})\) and for a 3D system via the relation \(T_{F}=(\hbar^{2}/2)(3\pi^{2})^{2/3}n^{2/3}/k_{B}m^{*}\)[38]. We use the previously reported carrier concentration and effective mass \(m^{*}\)[17]. Based on our estimates, CaSb\({}_{2}\) appears in a region in between conventional and unconventional superconductors, where the estimate assuming a 2D system falls closer to the unconventional region. ### Zero-Field Muon Spin Relaxation and Time Reversal Symmetry Breaking We utilize muon spin relaxation measurements in zero field (ZF) to search for any sign of spontaneous magnetic fields associated with breaking of TRS in the superconducting state. ZF spectra for CaSb\({}_{2}\) collected above \(T_{\rm c}\) and at the lowest temperature \(\sim 30\) mK are shown in Fig. 6(b) and (c) for muon spins perpendicular to the \(ab\)-plane and parallel to the \(ab\)-plane, respectively. In the absence of any static electronic moments, the muon polarization decay is due to the randomly oriented nuclear magnetic moments, which is generally described by the Gaussian Kubo-Toyabe function G\({}_{KT}\)(t) \[G_{\rm KT}(t)=\frac{1}{3}+\frac{2}{3}\left(1-\sigma^{2}t^{2}\right)\exp\left(-\frac{\sigma^{2}t^{2}}{2}\right)\] where \(\sigma\) reflects the width of the field distribution experienced by muons due to nuclear dipoles. We fit the ZF spectra with the following relaxation function \[A(t)=A_{1}G_{\rm KT}(t)\exp(-\Lambda t)+A_{\rm BG}\] where \(A_{1}\) is the sample asymmetry and \(A_{\rm BG}\) is the background asymmetry.
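As a minimal numerical sketch of how such ZF spectra can be fitted, the snippet below implements the Gaussian Kubo-Toyabe relaxation multiplied by the exponential term plus a flat background; the data generated here are synthetic placeholders, not measured asymmetries, and the fitted values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_kubo_toyabe(t, sigma):
    """Static Gaussian Kubo-Toyabe function G_KT(t)."""
    s2t2 = (sigma * t) ** 2
    return 1.0 / 3.0 + 2.0 / 3.0 * (1.0 - s2t2) * np.exp(-s2t2 / 2.0)

def zf_asymmetry(t, a1, sigma, lam, a_bg):
    """A(t) = A1 * G_KT(t) * exp(-Lambda t) + A_BG."""
    return a1 * gaussian_kubo_toyabe(t, sigma) * np.exp(-lam * t) + a_bg

# Synthetic example: t in microseconds, sigma and Lambda in 1/microseconds.
t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
asym = zf_asymmetry(t, 0.20, 0.25, 0.02, 0.05) + 0.002 * rng.standard_normal(t.size)
popt, _ = curve_fit(zf_asymmetry, t, asym, p0=[0.2, 0.2, 0.0, 0.05])
print("sigma = %.3f 1/us, Lambda = %.4f 1/us" % (popt[1], popt[2]))
```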
The additional \(\exp(-\Lambda t)\) represents any additional relaxation in the sample, such as that from broken TRS. The zero-field muon spin relaxation rates of CaSb\({}_{2}\) with the muon spin perpendicular and parallel to the \(ab\)-plane show a consistent contribution from nuclear dipole moments, as expected. However, the additional term shows a change in \(\Lambda\) at low temperature only when the muon spin is parallel to the \(ab\)-plane. Interestingly, this increase in \(\Lambda\) shows a linear dependence at low temperature and seems to onset at 1.0 K, which is well below \(T_{\rm c}\)\(\sim\) 1.6 K. In the case of muon spin perpendicular to the \(ab\)-plane we find no significant change at \(T_{\rm c}\), nor do we find any change at lower temperatures. We explore possible interpretations for this spontaneous TRS breaking, taking into account its dependence on the muon spin direction and its onset temperature being well below \(T_{\rm c}\). One possible explanation is triplet pairing in the superconducting state, but we exclude this based on Knight shift measurements revealing a decrease below \(T_{\rm c}\)[53]. Breaking of TRS was reported below \(T_{\rm c}\) in SrPtAs, a multigap \(s\)-wave superconductor, and this is discussed in relation to a multicomponent order parameter belonging to a multidimensional representation, and to grain boundaries in the polycrystalline samples. This is unlikely in CaSb\({}_{2}\) due to the single-crystal nature of the sample in our experiment. Another distinction is the appearance of the TRS breaking only when the muon spins are parallel to the \(ab\)-plane, which implies that the TRS breaking in CaSb\({}_{2}\) is such that only muon spins parallel to the plane are sensitive to it. We consider the possibility of another superconducting phase breaking TRS that appears at 1 K, analogous to the TRS breaking that appears below \(T_{\rm c}\) in UPt\({}_{3}\) in \(\mu\)SR [2] and Kerr effect [54] measurements. A second superconducting phase emerging below \(T_{\rm c}\) has been discussed for PrOs\({}_{4}\)Sb\({}_{12}\), where an enhancement of \(\mu_{0}H_{c1}(0)\) associated with the secondary phase is reported [32]. We considered the possibility of a secondary phase emerging at \(\sim 1\) K in CaSb\({}_{2}\), so we analyzed the \(\mu_{0}H_{c1}(0)\) estimates in Fig. 2 by fitting the data above and below 1 K. We see a slight increase of the estimated \(\mu_{0}H_{c1}(0)\) based on data below 1 K compared with that above 1 K when \(H//c*\), while a smaller difference appears between the data above and below 1 K for \(H//ab\). The possible anomaly in \(\mu_{0}H_{c1}(0)\) in CaSb\({}_{2}\) is similar to that observed in PrOs\({}_{4}\)Sb\({}_{12}\)[32], although if present it is much weaker in CaSb\({}_{2}\) and is only observed with the field applied \(H//c*\). The appearance of TRS breaking with a spontaneous field in the \(c*\) direction may be related to the change in the \(T^{2}\) dependence of \(\mu_{0}H_{c1}(0)\) measured in the same direction. However, for both directions a single fit to \(\mu_{0}H_{c1}(0)\) over all temperatures falls within the error bars, which makes the above analysis mere speculation in an attempt to understand the observed TRS breaking. We consider the topology of CaSb\({}_{2}\) in the normal state to explain the observation of TRS breaking.
The topologically non-trivial Dirac nodal lines in CaSb\({}_{2}\) have been shown theoretically to support topological superconductivity with \(B_{g}\) pairing symmetry, which has been termed nodal-line superconductivity [55]. The topologically trivial \(A_{g}\) symmetry is also supported in CaSb\({}_{2}\)[55], and it is more likely considering the nodeless gap behaviour observed in our TF-\(\mu\)SR measurements and previous works [23; 24]. This leads us to conclude that even if a second superconducting phase emerges in CaSb\({}_{2}\) at low temperatures it must have no extended nodes in its gap. ### Large Diamagnetism in the Normal State Large diamagnetism in the normal state has been observed in Dirac semimetals [56; 57; 58; 59; 60], and discussed in relation to a linear dispersion near the Fermi level [56]. In non-magnetic compounds like CaSb\({}_{2}\), the normal-state magnetic response can be dominated by Pauli paramagnetism originating from the spin of free electrons. In addition, diamagnetic contributions can arise from Landau diamagnetism, originating from the orbital motion, with a minor contribution from Larmor diamagnetism, originating from the core ions. The Larmor diamagnetism in the case of CaSb\({}_{2}\) is expected to be around \(\sim 1.6\times 10^{-4}\) emu mol\({}^{-1}\) Oe\({}^{-1}\)[61], while the experimentally observed diamagnetic response is larger than \(\sim 2.5\times 10^{-4}\) emu mol\({}^{-1}\) Oe\({}^{-1}\). The weak temperature dependence of the magnetic susceptibility down to 50 K, shown in Fig. 7(b), is consistent with a strong diamagnetic contribution related to the Dirac nodal lines [62]. This strong diamagnetic signal is observed despite the paramagnetic contribution expected in the normal state, and it indicates a strong contribution from orbital diamagnetism that is likely related to the Dirac electrons in semimetallic CaSb\({}_{2}\). Further investigations of this diamagnetic signal in future studies will help clarify its origin. The possible contribution of Dirac electrons in the normal state of CaSb\({}_{2}\) to the magnetic properties, and the evidence for the validity of band structure calculations showing Dirac nodal-line states [17; 19; 22], deepen the interest in the superconducting state realized in CaSb\({}_{2}\). The carrier concentration is reported to be highly influenced by synthesis conditions [19], where changes seem to have little effect on the three-dimensional hole pocket and the superconducting \(T_{\rm c}\). This suggests that topology may be tuned in CaSb\({}_{2}\) without affecting the observed superconductivity, such that topology and superconductivity can be realized in the same sample. Figure 6: (a) Temperature dependence of the electronic relaxation rate \(\Lambda\) with muon spins parallel to the \(ab\)-plane and perpendicular to the \(ab\)-plane (parallel to the \(c\)-axis). A clear increase in the extracted rate can be seen below 1 K with muon spin parallel to the \(ab\)-plane, indicating the appearance of spontaneous fields inside the superconducting state. (b) ZF \(\mu\)SR spectra collected at 3.5 K and 34 mK for spin perpendicular to the \(ab\)-plane (parallel to the \(c\)-axis), with fits using the Kubo-Toyabe function. (c) ZF \(\mu\)SR spectra collected at 2.00 K and 21 mK for spin parallel to the \(ab\)-plane, with fits using the Kubo-Toyabe function. A clear difference between the two temperatures can be seen in the asymmetry shown in (c), indicating the presence of spontaneous fields inside the superconducting state.
(d) Muon stopping site inside the CaSb\({}_{2}\) crystal structure and the spin direction for the experiment in (b), with \(S_{\mu}//c\)-axis. (e) Same muon stopping site as in (d), but with \(S_{\mu}//ab\)-plane as in the experimental result of (c). In summary, we demonstrated upper and lower critical field anisotropy in the superconducting state using magnetic susceptibility, and studied the superconducting state using muon spin relaxation. The temperature dependence of the transverse-field relaxation can be fitted equally well with a single-gap or a two-gap model. A two-gap scenario is more likely considering previously reported tunnel-diode oscillator measurements on CaSb\({}_{2}\). The normal state of CaSb\({}_{2}\) shows a large diamagnetic signal, which is likely related to its Dirac semimetal nature. Zero-field relaxation shows little temperature dependence when the muon spin is parallel to the \(c\)\(\ast\)-axis, while an increase in relaxation appears below 1 K when the muon spin is parallel to the \(ab\)-plane. However, we find no discernible anomaly around this temperature in other measurements on CaSb\({}_{2}\). In various materials, such as UPt\({}_{3}\), PrOs\({}_{4}\)Sb\({}_{12}\), and Sr\({}_{2}\)RuO\({}_{4}\), the onset of TRS breaking coincides with \(T_{\rm c}\) under ambient conditions and its detection is independent of the direction of the muon spins. Considering the anisotropy and two-gap nature of the superconducting state and the emergence of direction-dependent TRS breaking at 1 K, \(\sim 2/3T_{\rm c}\), further investigation of CaSb\({}_{2}\) to clarify the possible emergence of a secondary phase is needed. Interactions between band structure topology, normal-state diamagnetism, and superconductivity should be the subject of future studies. Finally, considering that the band structure of CaSb\({}_{2}\) is dominated by the Sb atoms sitting at two distinct sites in the material, investigation of other \(M\)Sb\({}_{2}\) antimonides with similar structures is of great interest to contrast with our current findings. ## IV Acknowledgements We thank TRIUMF staff for their technical support during the muon experiments. We thank M. Sigrist, G. Luke, Y. Uemura, J. Sonier, and A. Ramires for discussions on superconductivity. We also thank H. Matsuura and M. Ogata for discussions on diamagnetism in Dirac semimetals. MO acknowledges the support by the Stewart Blusson Quantum Matter Institute and the Max Planck-UBC-UTokyo Center for Quantum Materials. JB, DAB, and MCA acknowledge the support by the Natural Sciences and Engineering Research Council of Canada (NSERC). Figure 7: (a) Field-dependent isothermal magnetization of CaSb\({}_{2}\) (\(H//c\ast\)) measured in the range of 0 to 7 T at 3 K and 10 K. Analysis of dHvA oscillations was shown in an earlier report [17]. (b) Temperature-dependent magnetization \(M\) of CaSb\({}_{2}\) measured in an applied field of 1.0 T, where \(H//c\ast\).
2309.13991
Magnetism on the thermal dynamics of 2D antiferromagnetic membranes
We developed a theoretical scheme of incorporating the magnetoelastic contribution into the thermal elastic dynamics for thin membranes of 2D antiferromagnetic material with restricted geometry. We extended the elastic Gr\"uneisen relation into an effective version which includes the magnetic counterpart of the volume change of the internal energy. Based on the specific heat and thermal conductivity of elastic and magnetic origin, we predicted the dependency of observables, such as the effective Gr\"uneisen parameter, the thermal expansion coefficient, and the damping factor, over a wide range of temperature across the phase transition. Our model of analysis has been validated by applying it to the case of a FePS3 flake resonator, and the theoretical predictions fit well with the reported experimental data.
Xiang Zhang, Makars Siskins, Yaroslav Blanter
2023-09-25T09:43:45Z
http://arxiv.org/abs/2309.13991v2
# Magnetism on the thermal dynamics of 2D antiferromagnetic membranes ###### Abstract We developed a theoretical scheme of incorporating the magnetoelastic contribution into the thermal elastic dynamics for thin membranes of 2D antiferromagnetic material with restricted geometry. We extended the elastic Gruneisen relation into an effective version which includes the magnetic counterpart of the volume change of the internal energy. Based on the specific heat and thermal conductivity of elastic and magnetic origin, we predicted the dependency of observables, such as the effective Gruneisen parameter, the thermal expansion coefficient, and the damping factor, over a wide range of temperature across the phase transition. Our model of analysis has been validated by applying it to the case of a FePS\({}_{3}\) flake resonator, and the theoretical predictions fit well with the reported experimental data. ## I Introduction In recent decades 2D magnetic (van der Waals) layered materials have consistently attracted research interest from both the theoretical and experimental sides [1; 2]. Compared to their three-dimensional counterparts, 2D magnetic membranes constitute an ideal platform to explore the fundamental physics of magnetism and also its coupling to other degrees of freedom in the low-dimensional regime [3]. Heterostructures built upon 2D magnetism are susceptible to external stimuli, leading to emergent interfacial phenomena and novel spintronic devices [1; 4]. Within these materials, the FePS\({}_{3}\) compound is of particular interest because it is measured to be a 2D Ising system with zigzag antiferromagnetic (AFM) order in which the magnetic Fe atoms constitute a honeycomb lattice [5; 6]. Although the magnetic and electronic structure of this material has been studied intensively, there is limited understanding of its thermal properties, and especially of the magnetic contribution to the specific heat and thermal flux in restricted geometries such as thin membranes of several nanometers in thickness and micrometers in the planar dimension [5; 7; 8]. Knowledge of its thermal properties is important for further applications in spin-caloritronics [4] and also provides another tool for investigating the magnetic phase transition apart from Raman spectroscopy [5; 9]. In this Chapter, we extend the analysis of magnetoelastic coupling to a wide range of temperature beyond the phase transition, aiming at providing a theoretical explanation for the observed anomaly [9] in the thermal transport of a FePS\({}_{3}\) flake resonator. As shown in Fig. 1, the membrane suspended over a cavity undergoes a drum-like vibration whose eigenfrequency is related to the planar strain, which can be tuned by the gate voltage and also by the environment temperature through thermal expansion. At a fixed gate voltage the membrane is pushed down, and the increase of temperature leads to a drop of the strain; around the Neel temperature (\(T_{N}\approx 114\,\)K) the breaking of the magnetic stiffness softens the material, and a sudden drop of the resonance frequency has been observed [9]. Moreover, the vanishing of magnons as additional thermal carriers for \(T>T_{N}\) leads to a drop of the overall thermal conductivity, which has been measured through the damping factor \(Q^{-1}\) as a function of temperature.
In order to quantitatively explain the experimental findings for the thermal phenomena of the hybrid system, we develop a scheme of merging the magnetic contribution into the thermoelastic dynamics and predict the temperature dependence of observables including the heat capacity, the linear expansion coefficient, and the damping factor for clamped FePS\({}_{3}\) membranes. Starting from the non-magnetic thermoelastic free energy, we first derive the expression for the damping factor \(Q^{-1}\) of a thin membrane/plate, which turns out to be a function of the overall thermal expansion coefficient, the specific heat, and the thermal conductivity (See section II). Then we derive the total specific heat \(C_{V}\), which has contributions from the phonon and magnon excitations and also from the energy required to break the Ising spin coherence around the phase transition. We calculate the thermal conductivity \(\kappa\) as a sum of the phonon and magnon contributions, both acting as heat carriers, and show that its magnitude is much smaller than in the bulk compound because of the limited particle lifetime in the restricted geometry. Most importantly, by including the magnetoelastic Hamiltonian into the thermoelastic free energy we show that the total thermal expansion coefficient \(\tilde{\alpha}\) retains the usual form of the Gruneisen relation, but with an incorporated effective Gruneisen parameter \(\tilde{\gamma}\). It essentially describes the variation of the internal energy, including all of its components, ascribed to the volume change (See section III). Using real material parameters we fitted the experimental measurements with our model of analysis. Good agreement with recent experimental data [9; 10] supports the validity of our results (See section IV). The strong magnetic _weight_ in the internal energy of these geometry-restricted membranes makes them an ideal platform to study optomechanics integrated with magnetism tuning. It is also expected that the model developed in this work can be useful for further analysis of 2D spin-caloritronic devices. Figure 1: (a) Schematic figure of the FePS\({}_{3}\) resonator setup. The device is placed in a nearly vacuum environment so that thermal transfer through air damping can be ignored. The thermal expansion coefficient of the SiO\({}_{2}\) substrate is tiny and that of the silicon base is also small compared to FePS\({}_{3}\). The flake thickness is \(h=45\,\)nm and the diameter \(d=10\,\mu\)m. (b) A fixed gate voltage pushes down the membrane, and as the temperature increases the flake expands, leading to a decrease of the planar tension. Figure quoted from publication [9]. ## II Bending of thin plate with the temperature gradient In order to calculate the damping coefficient \(Q^{-1}\), we first have to solve the coupled dynamics including the degrees of freedom of elasticity, magnetism, and the temperature field. In the following section III.3, one shall see that the contribution of the magnetoelastic coupling can be incorporated into an effective thermoelastic coupling, so that the governing equations of motion can be reduced to include only the dynamics of the elastic vibration and of the temperature gradient. In this section, we deal with a round plate with its undeformed surface lying on the \(X-Y\) plane and study its out-of-plane (\(\hat{z}\)) vibration. We use the cylinder coordinates \((r,\varphi,z)\) and assume its thickness \(h\) is much smaller than the plate diameter \(d\), i.e. \(h\ll d\).
The displacement \(u_{z}\) and the deformation \(\epsilon_{ij}\) of the plate are also considered to be small, such that \(u_{i}\ll h\) and \(\epsilon_{ij}\ll 1\). The displacement fields along the \((\hat{r},\hat{\varphi})\) directions are represented by \(u_{r}\), meaning the radial extension, and \(u_{\varphi}\), meaning the circumferential distortion. One should note that \(u_{\varphi}\) represents the displaced distance along the \(\hat{\varphi}\) direction, not \(\varphi\) itself, \(u_{\varphi}=r\,d\varphi\). The strain tensor in cylinder coordinates is expressed in the form [11] \[\begin{split}\epsilon_{rr}&=\frac{\partial u_{r}}{\partial r},\;\epsilon_{\varphi\varphi}=\frac{1}{r}\frac{\partial u_{\varphi}}{\partial\varphi}+\frac{u_{r}}{r},\;\epsilon_{zz}=\frac{\partial u_{z}}{\partial z},\\ \epsilon_{\varphi z}&=\frac{1}{2}\left(\frac{1}{r}\frac{\partial u_{z}}{\partial\varphi}+\frac{\partial u_{\varphi}}{\partial z}\right),\;\epsilon_{rz}=\frac{1}{2}\left(\frac{\partial u_{r}}{\partial z}+\frac{\partial u_{z}}{\partial r}\right),\\ \epsilon_{r\varphi}&=\frac{1}{2}\left(\frac{\partial u_{\varphi}}{\partial r}-\frac{u_{\varphi}}{r}+\frac{1}{r}\frac{\partial u_{r}}{\partial\varphi}\right).\end{split} \tag{1}\] It is easy to show from the coordinate transformation that the relation \(\epsilon_{rr}+\epsilon_{\varphi\varphi}+\epsilon_{zz}=\epsilon_{xx}+\epsilon_{yy}+\epsilon_{zz}\) holds, meaning that the volume change, as it should, does not depend on the choice of coordinates. Beyond this, the thermoelastic free energy [11] \[\begin{split}F(T)&=F_{0}(T)+\frac{1}{2}K_{T}\left(\epsilon_{i}^{i}\right)^{2}+\mu\sum_{ij}\left(\epsilon_{ij}-\frac{1}{3}\epsilon_{i}^{i}\delta_{ij}\right)^{2}\\ &-K_{T}\alpha\left(T-T_{0}\right)\epsilon_{i}^{i},\end{split} \tag{2}\] and the elastic tensor relation for an isotropic material \[\sigma_{ij}=K_{T}\epsilon_{i}^{i}\delta_{ij}+2\mu(\epsilon_{ij}-\frac{1}{3}\epsilon_{i}^{i}\delta_{ij}),\quad\epsilon_{i}^{i}=\sum_{i}\epsilon_{ii}, \tag{3}\] also hold in the same form for any orthogonal coordinates [11]. In order to effectively describe the characteristic deformation of the 3D elastic body we introduce the concept of a neutral surface. Regarding the bending of a thin plate, one side is compressed (the concave side) while the opposite one is extended (the convex side). Between these two sides there is a surface which has neither extension nor compression, i.e. \(\epsilon_{i}^{i}=0\), and it is referred to as the _neutral surface_. Mounting the undeformed neutral surface onto the \(z=0\) plane and based on the small-deformation assumption, the displacement on the neutral surface is \(u_{r}^{0}=0,\,u_{\varphi}^{0}=0,\,u_{z}^{0}=\zeta(r,\varphi,t)\) with \(\zeta\ll h\). Due to the small deformation, the internal stress on the \(z\) surfaces should be much smaller than the stress along the longitudinal direction, \(\sigma_{iz}=0\), which leads to the hypotheses inside the bulk volume [12; 13] \[\epsilon_{rz}=0,\quad\epsilon_{\varphi z}=0,\quad\sigma_{zz}=0. \tag{4}\] With the assumed neutral-surface hypotheses, the displacement inside the plate can be expressed as a function of \(\zeta\), \[u_{r}=-z\frac{\partial\zeta}{\partial r},\quad u_{\varphi}=-\frac{z}{r}\frac{\partial\zeta}{\partial\varphi},\quad u_{z}=\zeta.
\tag{5}\] and the remaining strain components are given by \[\begin{split}\epsilon_{rr}&=-z\frac{\partial^{2}\zeta}{\partial r^{2}},\quad\quad\quad\quad\quad\epsilon_{\varphi\varphi}=-z\left(\frac{1}{r}\frac{\partial\zeta}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}\zeta}{\partial\varphi^{2}}\right),\\ \epsilon_{r\varphi}&=-z\frac{\partial}{\partial r}\left(\frac{1}{r}\frac{\partial\zeta}{\partial\varphi}\right),\quad\epsilon_{zz}=\frac{z\sigma}{1-\sigma}\left(\frac{\partial^{2}\zeta}{\partial r^{2}}+\frac{1}{r}\frac{\partial\zeta}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}\zeta}{\partial\varphi^{2}}\right).\end{split} \tag{6}\] Define the Laplace operator on the plane \[\Delta=\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\varphi^{2}}, \tag{7}\] then \(\epsilon_{rr}+\epsilon_{\varphi\varphi}=-z\Delta\zeta\) and \(\epsilon_{zz}=\frac{z\sigma}{1-\sigma}\Delta\zeta\). For the case of an axially symmetric plate it is reasonable to assume \(\zeta=\zeta(r,t)\), which does not depend on the polar angle \(\varphi\); the strain can then be further simplified into \[\begin{split}\epsilon_{rr}&=-z\frac{\partial^{2}\zeta}{\partial r^{2}},\quad\epsilon_{\varphi\varphi}=-\frac{z}{r}\frac{\partial\zeta}{\partial r},\\ \epsilon_{zz}&=\frac{z\sigma}{1-\sigma}\Delta\zeta,\quad\Delta=\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r},\end{split} \tag{8}\] with the other components equal to zero. Substituting the strain tensor into the thermoelastic free energy (Eq. 2), one can derive its expression as a function of \(\zeta\), \[F_{\text{el}}=\int_{0}^{2\pi}d\varphi\int_{0}^{R}rdr\frac{Yh^{3}}{12(1-\sigma^{2})}\left[(1+\sigma)\frac{\alpha}{3}\,I_{T}\Delta\zeta+\frac{1}{2}\left({\zeta^{\prime\prime}}^{2}+\frac{1}{r^{2}}{\zeta^{\prime}}^{2}+\frac{2\sigma}{r}{\zeta^{\prime}}{\zeta^{\prime\prime}}\right)\right], \tag{9}\] where the thermal inertia \[I_{T}(r)=\frac{12}{h^{3}}\int_{-h/2}^{h/2}z\,\theta(r,z)\,dz, \tag{10}\] in which \(\theta=T-T_{0}\) is the small difference between the temperature within the plate \(T\) and the environment temperature \(T_{0}\). The internal force exerted on a volume element of unit surface is \(f_{\zeta}=-\delta F_{\text{el}}/\delta\zeta\), and the equation of motion for the vibration of the circular plate is \[\rho h\frac{\partial^{2}\zeta}{\partial t^{2}}+\frac{Yh^{3}}{12(1-\sigma^{2})}\left[\Delta\Delta\zeta+(1+\sigma)\alpha/3\,\Delta I_{T}\right]=0. \tag{11}\] As for the dynamics of the temperature field, the heat diffusion equation is a rephrasing of energy conservation, namely that the heat absorbed equals the energy flowing in, \(T\frac{\partial S}{\partial t}=-\nabla\cdot\mathbf{q}=\kappa\Delta T\), where \(\mathbf{q}=-\kappa\nabla T\) is the thermal flux and \(\kappa\) is the heat conduction coefficient [14]. From the thermoelastic coupling we understand that the heat absorption leads not only to an increase of particle motion but also to volume expansion, \(dS=dS_{0}(T)+K_{T}\alpha\epsilon_{i}^{i}\). Applying the relation \(\partial S_{0}/\partial T=\rho C_{V}/T\), we have \(\rho C_{V}\partial T\big{/}\partial t=\kappa\Delta T-K_{T}\alpha T_{0}\partial\epsilon_{i}^{i}/\partial t\). The equation of motion describing the dynamics of small temperature differences within the plate has the general form \[\kappa\Delta\theta+\kappa\frac{\partial^{2}\theta}{\partial z^{2}}=\rho C_{V}\frac{\partial\theta}{\partial t}+K_{T}\alpha T_{0}\frac{\partial\epsilon_{i}^{i}}{\partial t}.
\tag{12}\] Following Ref. [15], we make the approximation that the in-plane temperature gradient is small compared to the gradient across the thickness, \(\Delta\theta\ll\partial^{2}\theta/\partial z^{2}\). Combining the strain components from Eq. 8, the governing equation for the dynamics of the temperature field in the thin plate then becomes \[\kappa\frac{\partial^{2}\theta}{\partial z^{2}}=\rho C_{V}\frac{\partial\theta}{\partial t}-zK_{T}\alpha T_{0}\frac{1-2\sigma}{1-\sigma}\frac{\partial\Delta\zeta}{\partial t}. \tag{13}\] Inserting the ansatz \(\zeta=\zeta_{0}e^{i\omega t}\) and \(\theta=\theta_{0}e^{i\omega t}\) into Eq. 13, we obtain the equation for the temperature field, which can be solved with the boundary condition that there is no thermal flux through the top and bottom surfaces, \[\frac{\partial\theta_{0}}{\partial z}=0\quad\text{at}\;z=\pm\frac{h}{2}. \tag{14}\] The resulting temperature profile across the plate is given by \[\theta_{0}(r,z)=\frac{K_{T}\alpha T_{0}}{\rho C_{V}}\frac{1-2\sigma}{1-\sigma}\left[z-\frac{\sin{(mz)}}{m\cos{(mh/2)}}\right]\Delta\zeta_{0}, \tag{15}\] with the wave vector \[m=\sqrt{-\frac{i\omega\rho C_{V}}{\kappa}}=(1-i)\sqrt{\frac{\omega\rho C_{V}}{2\kappa}}. \tag{16}\] Applying this temperature profile to the thermal inertia (Eq. 10), the elastic equation of motion (Eq. 11) becomes an eigen-equation \[\begin{split}\rho h\omega^{2}\zeta_{0}&=\frac{Yh^{3}}{12(1-\sigma^{2})}[1+\Delta_{Y}(1+f(\omega))]\Delta\Delta\zeta_{0}\\ &=\frac{Y_{\omega}h^{3}}{12(1-\sigma^{2})}\Delta\Delta\zeta_{0},\end{split} \tag{17}\] where the modified Young's modulus \(Y_{\omega}=Y[1+\Delta_{Y}(1+f(\omega))]\) is frequency-dependent and the adiabatic degree \(f(\omega)\) ranges from \(-1\) to \(0\) for low and high vibration frequencies, identifying the isothermal and adiabatic extremes, \[f(\omega)=\frac{24}{m^{3}h^{3}}\left[\frac{mh}{2}-\tan{\left(\frac{mh}{2}\right)}\right]. \tag{18}\] The quantity \(\Delta_{Y}\), which is a measure of the _thermal relaxation strength_, acquires the form \[\Delta_{Y}=\frac{1+\sigma}{1-\sigma}\frac{Y\alpha^{2}T_{0}}{\rho C_{V}}. \tag{19}\] Letting \(12(1-\sigma^{2})\rho\omega^{2}/Y_{\omega}h^{2}=q^{4}\), Eq. 17 becomes \(\Delta\Delta\zeta_{0}=q^{4}\zeta_{0}\), whose general solution is \(\zeta_{0}=AJ_{0}(qr)+BY_{0}(qr)+CI_{0}(qr)+DK_{0}(qr)\). Here \(J_{0}\) and \(Y_{0}\) are the zeroth-order Bessel functions of the first and second kind, and \(I_{0}\) and \(K_{0}\) the corresponding modified Bessel functions. Due to the finite value of \(\zeta_{0}\) at \(r=0\), we have \(B=D=0\) and \(\zeta_{0}=AJ_{0}(qr)+CI_{0}(qr)\), with the coefficients \((A,C)\) to be fixed by the boundary condition. For the case of a clamped plate, the boundary condition (\(a\) is the plate radius) has the form \(\zeta_{0}\big{|}_{r=a}=0,\ \partial\zeta_{0}\big{/}\partial r\big{|}_{r=a}=0\), which is satisfied for \((q_{n}a)^{2}\equiv\mathcal{C}_{n}=\{10.21,39.38,89.10,\cdots\}\). The complex eigenfrequency then reads \[\omega=\omega_{0}\sqrt{1+\Delta_{Y}(1+f(\omega_{0}))}, \tag{20}\] with the unperturbed eigenfrequency of the \(n\)-th vibration mode \[\omega_{0}=q_{n}^{2}h\sqrt{\frac{Y}{12\rho(1-\sigma^{2})}}=\mathcal{C}_{n}\frac{h}{a^{2}}\sqrt{\frac{Y}{12\rho(1-\sigma^{2})}}. \tag{21}\] Due to the complex value of the frequency \(\omega\), the time dependence \(e^{i\omega t}\) of physical quantities decays along with the oscillation. Writing \(\omega=\omega_{0}(1+i\eta)\), the displacement decays as \(\zeta(t)\sim e^{i\omega_{0}t}e^{-\eta\omega_{0}t}\).
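As a quick numerical illustration of Eqs. 16-21, the short sketch below evaluates the unperturbed clamped-plate eigenfrequency and the thermoelastically shifted complex eigenfrequency; the material numbers are placeholders of roughly the right order for a thin FePS\({}_{3}\)-like plate, not fitted values.

```python
import numpy as np

def omega0_clamped_plate(n, h, a, Y, rho, sigma):
    """Unperturbed eigenfrequency of the n-th mode of a clamped circular plate (Eq. 21)."""
    C_n = [10.21, 39.38, 89.10]   # (q_n a)^2 eigenvalues quoted in the text
    return C_n[n] * h / a**2 * np.sqrt(Y / (12.0 * rho * (1.0 - sigma**2)))

def complex_eigenfrequency(omega0, h, Y, rho, sigma, alpha, C_V, kappa, T0):
    """Complex eigenfrequency omega = omega0*sqrt(1 + Delta_Y(1 + f(omega0))), Eqs. 16-20."""
    m = (1.0 - 1.0j) * np.sqrt(omega0 * rho * C_V / (2.0 * kappa))            # Eq. 16
    f = 24.0 / (m * h) ** 3 * (m * h / 2.0 - np.tan(m * h / 2.0))              # Eq. 18
    Delta_Y = (1.0 + sigma) / (1.0 - sigma) * Y * alpha**2 * T0 / (rho * C_V)  # Eq. 19
    return omega0 * np.sqrt(1.0 + Delta_Y * (1.0 + f))

# Placeholder parameters: h = 45 nm, a = 5 um, FePS3-like elastic constants,
# per-mass specific heat C_V in J/(kg K) and a membrane-limited kappa in W/(m K).
h, a = 45e-9, 5e-6
Y, rho, sigma = 103e9, 3375.0, 0.304
w0 = omega0_clamped_plate(0, h, a, Y, rho, sigma)
w = complex_eigenfrequency(w0, h, Y, rho, sigma, alpha=1e-5, C_V=500.0,
                           kappa=0.1, T0=100.0)
print("f0 = %.2f MHz, Im(omega)/Re(omega) = %.2e" % (w0 / (2 * np.pi) / 1e6,
                                                     abs(w.imag / w.real)))
```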
The damping of this oscillating system is captured by the damping factor \(Q^{-1}\), which is defined as the ratio of the energy loss per radian to the energy stored in the oscillator. Because the oscillation energy is quadratic in the displacement field, we have \(E(t)\sim e^{-2\eta\omega_{0}t}\), so the fractional energy loss per radian is \(1-e^{-2\eta}\approx 2\eta\). Thus the damping of the elastic oscillator is quantified by \(Q^{-1}=2|\text{Im}(\omega)/\text{Re}(\omega)|\). Shortening the parameter \(mh\) within the function \(f(\omega)\) into a single variable \(\xi\)[15], \[\xi=h\sqrt{\frac{\omega_{0}\rho C_{V}}{2\kappa}}, \tag{22}\] the thermoelastic damping \(Q^{-1}\) can be derived as \[Q^{-1}=\Delta_{Y}\left(\frac{6}{\xi^{2}}-\frac{6}{\xi^{3}}\frac{\sinh\xi+\sin\xi}{\cosh\xi+\cos\xi}\right)=\frac{1+\sigma}{1-\sigma}\frac{Y\alpha^{2}T_{0}}{\rho C_{V}}\left(\frac{6}{\xi^{2}}-\frac{6}{\xi^{3}}\frac{\sinh\xi+\sin\xi}{\cosh\xi+\cos\xi}\right). \tag{23}\] Since the thermoelastic variables such as \(\alpha\), \(\kappa\) and \(C_{V}\) are temperature dependent, it is easy to understand that the damping factor \(Q^{-1}\) also changes with \(T_{0}\), and it will show an anomaly in the presence of a second-order phase transition, at which the specific heat \(C_{V}\) shows a discontinuity. For convenience, in the following we will replace the environment temperature \(T_{0}\) by the symbol \(T\) without ambiguity. ## III Thermal observables for elastic plate hybrid with magnetic phase transition In this section we study the thermal observables of the elastic plate hybridized with magnetism over a wide range of temperature across the phase transition. To this aim we start by deriving the heat capacity and thermal conductivity due to the bosons. Then we show how the magnetoelastic coupling is incorporated into an effective thermoelastic free energy, and we derive the effective expansion coefficient \(\tilde{\alpha}\) and damping factor \(Q^{-1}\) for the thermal-magnetic-elastic vibrating system. In general, below the phase transition the material's heat capacity \(C=dQ\big{/}dT\) comes from the thermal excitation of bosons; these quasi-particles are mainly phonons for ordinary insulators and also include magnons for FM and AFM materials. If the temperature is homogeneous, then the Bose-Einstein density of excited bosons is uniformly distributed across the material. However, the existence of a temperature field leads to an excess number of quasi-particles staying out of equilibrium, which then transport heat according to the temperature gradient. If the environment temperature is close to the range of the magnetic phase transition, the coherence of the precession between neighbouring spins breaks down and an additional contribution to the specific heat should be taken into account. The decay of the magnetization \(\mathbf{M}\) upon heating leads to an accompanying decrease of the effective exchange field \(H_{E}\) and anisotropy field \(H_{A}\) in the magnon dispersion equation \[\omega_{\mathbf{k}}=\gamma\mu_{0}\sqrt{H_{A}^{2}+2H_{E}H_{A}+H_{E}^{2}(1-\psi_{\mathbf{k}}^{2})}, \tag{24}\] in which \(\psi_{\mathbf{k}}\) is the structure factor defined by \(\psi_{\mathbf{k}}=(1/z)\sum_{\mathbf{\delta}}e^{i\mathbf{k}\cdot\mathbf{\delta}}\) and \(\mathbf{\delta}\) is the vector connecting the \(z\) nearest neighbouring spins of opposite orientation. This energy renormalization [16; 17] should also be incorporated into the calculation of the magnon specific heat and thermal conductivity.
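As a minimal numerical sketch of the dispersion in Eq. 24, the snippet below evaluates the magnon energy for given effective fields, using the isotropic simplification \(\psi_{k}=\cos(\pi k/2k_{m})\) adopted below and interpreting \(\gamma\mu_{0}H\) as an energy \(g\mu_{B}\mu_{0}H\) (an assumption consistent with Eq. 29); the field values are placeholders of the order used later for FePS\({}_{3}\), not fitted results.

```python
import numpy as np

MU_B = 5.7884e-2   # Bohr magneton in meV/T
G_FACTOR = 2.0     # assumed g-factor

def magnon_energy(q, mu0_HE, mu0_HA):
    """Magnon energy hbar*omega (meV) from Eq. 24 with the isotropic psi_q = cos(pi*q/2),
    where q = k/k_m in [0, 1]; gamma*mu0*H is taken as g*mu_B*mu0*H (assumption)."""
    psi = np.cos(np.pi * q / 2.0)
    return G_FACTOR * MU_B * np.sqrt(mu0_HA**2 + 2.0 * mu0_HE * mu0_HA
                                     + mu0_HE**2 * (1.0 - psi**2))

q = np.linspace(0.0, 1.0, 101)
E = magnon_energy(q, mu0_HE=69.0, mu0_HA=138.0)   # placeholder fields in Tesla
print("gap at q=0: %.1f meV, zone boundary: %.1f meV" % (E[0], E[-1]))
```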
Mathematically the heat capacity due to the bosons is \[C_{V}=\frac{1}{V}\frac{\partial}{\partial T}\sum_{\mathbf{k}}\hbar\omega_{\mathbf{k}}\bar{n}_{\mathbf{k}},\quad\bar{n}_{\mathbf{k}}=\frac{1}{e^{\beta\hbar\omega_{\mathbf{k}}}-1}, \tag{25}\] where \(\bar{n}_{\mathbf{k}}\) is the Bose-Einstein equilibrium occupation of bosons with energy \(\hbar\omega_{\mathbf{k}}\). The thermal conductivity is defined as the coefficient of the heat flux due to the temperature gradient, \(\mathbf{q}=-\kappa\nabla T\). From kinetic transport theory this thermal flux can be calculated as \[\begin{split}\mathbf{q}&=-\frac{1}{V}\sum_{\mathbf{k}}\hbar\omega_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}(\tau_{\mathbf{k}}\mathbf{v}_{\mathbf{k}}\cdot\nabla\bar{n}_{\mathbf{k}})\\ &=-\frac{1}{V}\frac{\partial}{\partial T}\sum_{\mathbf{k}}\hbar\omega_{\mathbf{k}}\bar{n}_{\mathbf{k}}\tau_{\mathbf{k}}(\nabla T\cdot\mathbf{v}_{\mathbf{k}})\mathbf{v}_{\mathbf{k}},\end{split} \tag{26}\] in which an isotropic \(\kappa\) can be extracted if the particle velocity \(\mathbf{v}_{\mathbf{k}}\) is the same in every direction. However, if the particle velocity has a directional bias, then \(\kappa\) depends on the orientation and the thermal transfer shows anisotropy. In the simplest case, if the particle lifetime \(\tau_{k}=\tau_{0}\) and velocity \(\mathbf{v}_{k}=\bar{v}\) do not depend on the wavevector \(\mathbf{k}\), the thermal flux simplifies to \(\mathbf{q}=-\frac{\bar{v}^{2}\tau_{0}}{V}\frac{\partial}{\partial T}\sum_{\mathbf{k}}\hbar\omega_{k}\bar{n}_{k}\nabla T\equiv-C\bar{v}^{2}\tau_{0}\cdot\nabla T\), leading to the simple form \(\kappa=C\bar{v}^{2}\tau_{0}\). Once we know the dispersion relation \(\omega_{k}\) and the lifetime \(\tau_{k}\) of the mobile quasi-particles, we can determine the specific heat \(C_{V}\) and the thermal conductivity \(\kappa\), at least numerically. The elastic specific heat and thermal conductivity can be derived from the statistics of the low-lying phonon modes (acoustic modes) based on the general Eqs. 25 and 26 with the sound-wave dispersion relation \(\omega_{k}=\bar{v}k\), where \(\bar{v}\) is the Debye-averaged acoustic velocity, \[\begin{split} C_{\rm db}(T)&=\frac{\hbar^{2}}{2\pi}\frac{3}{k_{B}T^{2}}\int_{0}^{k_{\rm db}}dk\frac{k\omega_{k}^{2}e^{\beta\hbar\omega_{k}}}{(e^{\beta\hbar\omega_{k}}-1)^{2}},\\ \kappa_{\rm db}(T)&=\frac{\hbar^{2}}{4\pi}\frac{3\bar{v}^{2}}{k_{B}T^{2}}\int_{0}^{k_{\rm db}}dk\frac{\tau_{k}k\omega_{k}^{2}e^{\beta\hbar\omega_{k}}}{(e^{\beta\hbar\omega_{k}}-1)^{2}}.\end{split} \tag{27}\] The elastic zone boundary can be defined through the Debye temperature as \(k_{\rm db}=k_{B}T_{\rm db}/\hbar\bar{v}\). Note that here we have assumed that the elastic lattice is two-dimensional while the vibrations are still three-dimensional. The expressions can easily be restricted to the longitudinal or shear polarizations by replacing the factor 3 with 1 or 2, respectively. ### Specific heat and thermal conduction due to the magnon excitations Specific to the 2D AFM material, we set the external field \(H_{0}=0\), and according to Eq. 24 the dispersion relation depends on the direction of the wave vector \(\mathbf{k}\). Contrary to the detailed treatment in Ref. [7], in this work we simplify the detailed 2D lattice structure and make the homogeneous assumption (\(a=b\)) such that we rephrase the \(\psi_{\mathbf{k}}\) of Eq.
24 into the isotropic form \(\psi_{k}=\cos{(\pi k/2k_{m})}\), where \(k\in[0,k_{m}]\) is limited to the first Brillouin zone and \(k_{m}\) is defined from the spherical energy boundary assumption [16; 18], \[\sum_{\mathbf{k}}=\frac{V}{(2\pi)^{2}}\int d\mathbf{k}=\frac{Na^{2}}{(2\pi)^{2}}\int_{0}^{k_{m}}2\pi kdk=N, \tag{28}\] such that \(k_{m}a/2=\sqrt{\pi}\). Thus, the dispersion relation for the AFM magnon becomes \[\hbar\omega_{k}=\gamma\mu_{0}H_{E}\sqrt{\sin^{2}{(\pi k/2k_{m})}+\eta^{2}+2\eta}, \tag{29}\] where \(\eta=H_{A}/H_{E}\) is the ratio of the anisotropy field to the exchange field. For some AFM materials such as RbMnF\({}_{3}\), which has a very small anisotropy \(H_{A}=4.5\) Oe while the exchange field is as large as \(H_{E}=830\) kOe, we have \(\eta\approx 0\) and the compound is considered a typical 3D Heisenberg antiferromagnet [19; 20]. For other materials such as FeF\({}_{2}\) and the FePS\({}_{3}\) used in our experiment, the magnetic anisotropy is strong and comparable to the exchange field, resulting in \(\eta\gtrapprox 1\), which makes them quasi-Ising systems [19; 21]. As the environment temperature goes up, the spontaneous magnetization \(M(T)\) decays because of thermal magnon excitation [16; 22] and also because of the loss of coherence between neighbouring spins for \(T\lessapprox T_{N}\). Since \(M=-g\mu_{B}NS\), the effective spin magnitude \(S(T)\) decays, which results in the decrease of \(H_{E}=2Sz|J|/\mu_{0}\gamma\) and \(H_{A}=2SA/\mu_{0}\gamma\) in the dispersion relation. As a consequence, the temperature dependence of \(\omega(T)\) should be taken into account in deriving the magnon specific heat and thermal conductivity. For a simple treatment one can apply the molecular field approximation (mean field theory), in which the magnetization is \(M(T)=M_{0}B(x)\), where \(B(x)\) is the Brillouin function and \(x=\mu_{0}n_{w}M(T)g\mu_{B}S_{0}/k_{B}T\) is the normalized energy [23]. Although this mean-field approach does not provide the correct magnetization around the phase transition, it leads to good results for the magnon spectra at temperatures \(T<0.8T_{N}\)[16; 17]. With the derived dispersion relation, the heat capacity due to thermal magnon excitation in the 2D AFM model is \[\begin{split} C_{\rm mag}(T)&=\frac{\hbar^{2}}{2\pi}\frac{1}{k_{B}T^{2}}\int_{0}^{k_{m}}dk\frac{k\,\omega_{k}^{2}\cdot e^{\beta\hbar\omega_{k}}}{(e^{\beta\hbar\omega_{k}}-1)^{2}}\\ &=\frac{\hbar^{2}k_{m}^{2}}{2\pi k_{B}}\frac{1}{T^{2}}\int_{0}^{1}dq\frac{q\,\omega_{q}^{2}\cdot e^{\beta\hbar\omega_{q}}}{(e^{\beta\hbar\omega_{q}}-1)^{2}},\end{split} \tag{30}\] where the explicit temperature dependence of \(\omega_{k}(T)\) has been suppressed and we replace \(k\) with the normalized wave vector \(q=k/k_{m}\) ranging from 0 to 1. As a comparison, we plot the \(C_{\rm mag}(T)\) derived from the full 2-D integral over \((q_{x},\,q_{y})\) of the dispersion relation \(\hbar\omega_{\mathbf{q}}=\gamma\mu_{0}H_{E}\sqrt{(1-\psi_{\mathbf{q}}^{2})+\eta^{2}+2\eta}\) with \(\psi_{\mathbf{q}}=\cos{(q_{x}\pi/2)}\cos{(q_{y}\pi/2)}\). As shown in Fig. 2(a), the complete 2-D integral and the simplified one result in almost exactly the same curve, which validates that we can indeed ignore the direction of \((q_{x},\,q_{y})\) and shorten \(\psi_{\mathbf{q}}\) into a 1-D integral over \(q\) with \(\psi_{q}=\cos{(q\pi/2)}\). For the magnon thermal conductivity, we start from the heat flux (Eq.
26), which gives \[\mathbf{q}=\frac{-1}{(2\pi)^{2}}\int\,d\mathbf{k}\frac{1}{k_{B}T^{2}}\frac{\tau_{k}\,(\hbar\omega_{k})^{2}\,e^{\beta\hbar\omega_{k}}}{(e^{\beta\hbar\omega_{k}}-1)^{2}}(\nabla T\cdot\mathbf{v}_{k})\mathbf{v}_{k}. \tag{31}\] Using the 2-D dispersion relation we derive the velocity of the magnons to be \[\begin{split}(v_{x},v_{y})&=\frac{\gamma\mu_{0}H_{E}}{2\hbar k_{m}}\frac{\pi\psi_{k}}{\sqrt{(1-\psi_{k}^{2})+\eta^{2}+2\eta}}\times\\ &\left(\sin\frac{k_{x}\pi}{2k_{m}}\cos\frac{k_{y}\pi}{2k_{m}},\,\cos\frac{k_{x}\pi}{2k_{m}}\sin\frac{k_{y}\pi}{2k_{m}}\right),\end{split} \tag{32}\] with which the integral can be performed as \[\int(\mathbf{\nabla}\mathbf{T}\cdot\mathbf{v}_{k})\mathbf{v}_{k}\,d\mathbf{k}=\mathbf{\nabla}\mathbf{T}\,\frac{1}{2}\int(v_{x}^{2}+v_{y}^{2})\,d\mathbf{k}, \tag{33}\] where we have used the fact that \(\int v_{x}^{2}d\mathbf{k}=\int v_{y}^{2}d\mathbf{k}\) by symmetry. From the heat flux expression we extract the thermal conductivity and write it in the form of a 2-D integral, \[\begin{split}\kappa_{\text{mag}}&=\left(\frac{\gamma\mu_{0}H_{E}}{8}\right)^{2}\frac{1}{k_{B}T^{2}}\int\frac{\tau_{k}\,e^{\beta\hbar\omega_{q}}}{(e^{\beta\hbar\omega_{q}}-1)^{2}}\left[\left(\frac{\gamma\mu_{0}H_{E}}{\hbar}\right)^{2}-\left(\omega_{q}^{2}-\omega_{0}^{2}\right)\right](1-\cos q_{x}\pi\cdot\cos q_{y}\pi)d\mathbf{q},\\ &=\left(\frac{\gamma\mu_{0}H_{E}}{8}\right)^{2}\frac{\pi}{k_{B}T^{2}}\int_{0}^{1}\frac{\tau_{k}\,\omega_{q}^{2}e^{\beta\hbar\omega_{q}}}{(e^{\beta\hbar\omega_{q}}-1)^{2}}\frac{q\sin^{2}q\pi}{\sin^{2}q\pi/2+\eta^{2}+2\eta}dq,\end{split} \tag{34}\] where the second equality is reached through the homogeneous lattice assumption, with which the integral can be simplified to a 1-D one and \(\hbar\omega_{q}=\gamma\mu_{0}H_{E}\sqrt{\sin^{2}q\pi/2+\eta^{2}+2\eta}\). As shown in Fig. 2(b), we notice again that the difference between the complete 2-D integral and the simplified one is small enough in the case of \(\kappa_{\text{mag}}\), and in the following we shall use the 1-D integral over \(q\) for the thermal observables. At low temperatures the heat capacity and thermal conductivity share the same growing curve, due to the fact that \(\kappa\approx Cvl=Cv^{2}\tau\) and \(v,\tau\) are almost constant for small \(T\). At large enough temperatures the exchange field \(H_{E}(T)\) decays in step with the decreasing \(M(T)\), which leads to the softening of the magnons, and above the phase transition magnons no longer exist. Thus we see the drop of \(C_{\text{mag}}\) and \(\kappa_{\text{mag}}\) for \(T>T_{N}\). Additionally, the particle lifetime (or its inverse, the relaxation rate \(\tau^{-1}=\eta\)) plays an important role in the transport properties. In general, the relaxation rate for various particles, either bosons or fermions, has several origins [24], \(\eta=\eta_{\text{bd}}+\eta_{\text{pt}}+\eta_{\text{nlnsc}}\), where \(\eta_{\text{bd}}\) is the boundary deflection by the material edges, \(\eta_{\text{pt}}\) is the scattering with point defects, and \(\eta_{\text{nlnsc}}\) stands for the non-linear scattering among the particles themselves. Usually \(\eta_{\text{bd}}+\eta_{\text{pt}}=\eta_{0}\equiv\tau_{0}^{-1}\) is a constant which does not depend on the wavevector \(k\) or the temperature \(T\). The non-linear scattering has several origins for different particles, but it is generally proportional to \(T\) for the 3-particle scattering and to \(T^{2}\) for the 4-particle scattering process [25; 26].
Therefore \(\eta_{k}=\eta_{0}(1+b_{k}T+c_{k}T^{2})\) and the coefficients can be calculated by studying the detailed processes. However, in the present membrane setup both the phonon and the magnon lifetimes are limited by defect and boundary scattering [27]. Therefore we shall ignore the non-linear scattering between quasi-particles and take the lifetime \(\tau=\tau_{0}\) to be a constant which depends neither on the wave vector nor on the temperature. Figure 2: (a) The magnon specific heat and (b) thermal conductivity derived from the complete 2-D integral and the simplified 1-D integral, respectively. Here we assumed that the lifetime of the magnons is approximately 1.8 ps and does not depend on the modes, for simplicity. The results indicate that the difference between these two integration strategies is small, and we can use the simplified version for further calculations. ### Specific heat due to the break of spin coherence around phase transition As the environment temperature approaches the phase transition regime, the magnetic specific heat is dominated by the energy absorbed in breaking the spin coherence, and due to the nature of the second-order phase transition an anomaly of \(C_{M}\) near \(T_{N}\) should be expected [23]. The derivation of the anomaly of \(C_{M}\) depends on the detailed lattice structure. In this chapter we focus on the material FePS\({}_{3}\), which is an Ising-type 2D antiferromagnet on the honeycomb (hexagonal) lattice [6; 7; 8; 28]. According to the references [29; 30; 31], the partition function for the honeycomb lattice reads \[\frac{1}{N}\log Z(T)=\log 2+\frac{1}{16\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}\,d\theta_{1}d\theta_{2}\log\big{[}\cosh^{3}2K+1-\sinh^{2}2K\cdot P_{\mathbf{\theta}}\big{]}, \tag{35}\] where \(K=J^{\prime}/k_{B}T\equiv\beta J^{\prime}\) is the normalized coupling, in which \(J^{\prime}\) is the effective coupling energy from the exchange Hamiltonian \(H=-2J\sum\mathbf{S}_{i}\cdot\mathbf{S}_{j}\equiv-J^{\prime}\sum\mathbf{\hat{S}}_{i}\cdot\mathbf{\hat{S}}_{j}\), thus \(J^{\prime}=2JS^{2}\)[30]. The integrand parameter is \(P_{\mathbf{\theta}}=\cos\theta_{1}+\cos\theta_{2}+\cos\left(\theta_{1}+\theta_{2}\right)\)[31]. The critical point for the honeycomb lattice is reached when \(\sinh 2K_{c}=\sqrt{3}\), and the Neel temperature is \[T_{N}=\frac{2J^{\prime}}{k_{B}\log\left(2+\sqrt{3}\right)}. \tag{36}\] Thus one can derive the effective coupling energy \(J^{\prime}\) from the measured Neel temperature. Following the procedure of differentiating \(E_{\text{Is}}=-\frac{d\log Z}{d\beta}\) and \(C_{\text{Is}}=\frac{dE_{\text{Is}}}{dT}\), the specific heat due to the breaking of spin coherence reads \[\frac{1}{Nk_{B}}C_{\text{Is}}(T)=K^{2}\frac{\partial^{2}}{\partial K^{2}}\left[\frac{1}{N}\log Z\right]=\frac{K^{2}}{16\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}d\theta_{1}d\theta_{2}\,\frac{\partial^{2}}{\partial K^{2}}\log\big{[}\cosh^{3}2K+1-\sinh^{2}2K\cdot P_{\mathbf{\theta}}\big{]}. \tag{37}\] In the presence of magnetoelastic coupling, the purely elastic thermal expansion coefficient should be extended to the one including the magnetic contribution, \(\tilde{\alpha}=\alpha_{E}+\alpha_{M}\). The magnetic Gruneisen relation \(\alpha_{M}=\beta_{T}\rho\gamma_{M}C_{M}\) is analogous to the elastic counterpart (\(\alpha_{E}=\beta_{T}\rho\gamma_{E}C_{E}\)), meaning that the thermal and magnetic properties both originate from the variation of the spin coherence, and it is the magnetic Gruneisen parameter that connects them.
Therefore the overall thermal expansion coefficient for the hybrid system can be written in the form \[\tilde{\alpha}=\beta_{T}\rho\gamma_{E}C_{E}+\beta_{T}\rho\gamma_{M}C_{M}=\beta_{T}\rho\left(\gamma_{E}C_{E}+\gamma_{M}C_{M}\right)=\beta_{T}\rho\tilde{\gamma}C_{V}, \tag{44}\] which maintains the Gruneisen-relation formalism, where \(C_{V}=C_{E}+C_{M}\) is the total specific heat combining the elastic and magnetic ones, and with the effective Gruneisen parameter defined as \[\tilde{\gamma}=\frac{\gamma_{E}C_{E}+\gamma_{M}C_{M}}{C_{E}+C_{M}}. \tag{45}\] Although the elastic and magnetic Gruneisen parameters are both almost independent of temperature [35; 37], the effective Gruneisen parameter usually presents a peak at the phase transition \(T_{N}\) (as shown in Fig. 4). This phenomenon originates from the anomaly of the magnetic specific heat near the phase transition, rendering \(\tilde{\gamma}\approx\gamma_{E}\) for \(T\) far away from \(T_{N}\) and \(\tilde{\gamma}\approx\gamma_{M}\) for \(T\) close to \(T_{N}\). Usually \(\gamma_{M}\) is several times larger than the elastic \(\gamma_{E}\), and it can be theoretically predicted based on a detailed study of the magnetic structure [36]. In this work, however, we shall simplify the analysis by assuming a phenomenological factor \(\nu=\gamma_{M}\big{/}\gamma_{E}\), which can be determined by fitting the theoretical predictions of the thermal observables such as \(\tilde{\alpha}\) and \(Q^{-1}\) to the measured values. In this way the part of the thermal expansion mediated by magnetostriction can be effectively absorbed into the non-magnetic formalism simply by replacing \(\alpha_{E}\) with \(\tilde{\alpha}\). Together, the specific heat and thermal conductivity in the elastic and thermal dynamics equations (Eq. 11 and Eq. 13) should also be replaced by the total specific heat \(C_{V}=C_{E}+C_{M}\) and the total thermal conductivity \(\kappa=\kappa_{E}+\kappa_{M}\)[32]. The overall damping coefficient \(Q^{-1}\) for the elastic and magnetic hybrid plate then has the form (Eq. 23) \[Q^{-1}=\frac{1+\sigma}{1-\sigma}\frac{Y\tilde{\alpha}^{2}T}{\rho C_{V}}\left(\frac{6}{\xi^{2}}-\frac{6}{\xi^{3}}\frac{\sinh\xi+\sin\xi}{\cosh\xi+\cos\xi}\right),\quad\xi=h\sqrt{\frac{\omega_{0}\rho C_{V}}{2\kappa}}, \tag{46}\] where \(\tilde{\alpha}\), \(C_{V}\), and \(\kappa\) are thermal observables which can be measured and predicted based on the theory developed in this chapter. ## IV Model validation through the thermal observables measured for the 2D AFM material FePS3 Here we validate the theory developed in this chapter by calculating the linear thermal expansion coefficient \(\alpha_{L}\) and the damping factor \(Q^{-1}\) of the Ising-type 2D antiferromagnetic material FePS\({}_{3}\), whose phase transition temperature is about \(T_{N}=114\,\)K [9]. In the published paper (Ref. [9]), Siskins et al. measured the fundamental-mode vibration frequency \(f_{0}\) of the membrane-plate of FePS\({}_{3}\) in the setup of Fig. 1. According to Ref. [38], the resonance frequency of the round resonator in the membrane-plate regime can be approximated by \[f_{0}=\sqrt{f_{\rm membrane}^{2}+f_{\rm plate}^{2}}\,, \tag{47}\] in which the plate frequency is \(f_{\rm plate}=\omega_{0}\big{/}2\pi\) according to Eq. 21 and the membrane fundamental frequency is \[f_{\rm membrane}=\frac{2.4048}{2\pi a}\sqrt{\frac{N}{\rho h}}, \tag{48}\] where \(N=N_{0}+Yh\epsilon_{r}^{\rm th}\big{/}(1-\sigma)\) is the in-plane tension along the radial direction.
\(N_{0}\) is the initial tension introduced by fabrication and can be further tuned by the external gate voltage \(V_{G}\). The second part comes from the thermal expansion of the membrane, which becomes the sole source of the temperature dependence of the resonance frequency \(f_{0}\) if we assume the plate frequency \(f_{\rm plate}\) to be independent of the environment temperature, since the Young's modulus \(Y\) and the Poisson ratio \(\sigma\) are almost constant over the small range of \(T\) from \(0\) to \(200\,\)K in Siskins' experiment. The thermal strain is related to the linear expansion coefficients of the resonator and the silicon substrate by the relation \(d\epsilon_{r}^{\rm th}/dT=-(\alpha_{L}-\alpha_{\rm si})\). As a consequence, by measuring the temperature dependence of \(f_{0}(T)\) one can derive the thermal expansion coefficient of FePS\({}_{3}\) as \[\alpha_{L}=\alpha_{\rm si}-\left(\frac{2\pi a}{2.4048}\right)^{2}\frac{\rho(1-\sigma)}{Y}\frac{d(f_{0}^{2})}{dT}. \tag{49}\] The experimental measurements are presented in Fig. 5, and one indeed observes the \(\alpha_{L}\) anomaly around the phase transition. From the theoretical point of view, the linear expansion coefficient is one-third of the volume expansion coefficient of the hybrid system developed in the previous section, namely \(\alpha_{L}=\tilde{\alpha}/3\) based on Eq. 44. In order to derive the theoretical prediction of \(\alpha_{L}\), one needs to calculate the specific heats of the elastic and magnetic parts. Firstly, for the magnetic specific heat of Ising origin (Eq. 37), the effective coupling energy \(J^{\prime}\) is derived from the measured Neel temperature \(T_{N}=114\,\)K, and according to Eq. 36 we have \(J^{\prime}=6.48\,{\rm meV}\). Therefore the nearest-neighbour spin-to-spin coupling energy in the Hamiltonian \(H=-2J\sum\mathbf{S}_{i}\cdot\mathbf{S}_{j}\) has the value \(J=J^{\prime}/2S^{2}=0.81\,{\rm meV}\), since the atomic spin for FePS\({}_{3}\) is \(S=2\). One sees that the derived \(J\) is very close to the first-nearest-neighbour interaction (shown in Fig. 6) \(J_{1}=2J\approx 1.5\,{\rm meV}\) measured in the neutron scattering experiments [7; 8]. Using this derived \(J^{\prime}\) we plot \(C_{\rm Is}\) in Fig. 3(b). Secondly, for the magnetic specific heat of magnon origin (Eq. 30), it is necessary to determine the exchange and anisotropy fields on the sublattices in order to apply the dispersion relation in Eq. 29. However, owing to the magnetostriction effect the inter-atomic interactions are modulated by the strain and vary with the membrane thickness [28]. Here we simplify the analysis by selecting the effective fields as \(\mu_{0}H_{E}=69\,{\rm Tesla}\) and \(\mu_{0}H_{A}=138\,{\rm Tesla}\) in order to best fit the derived \(C_{M}(T)\) and \(\alpha_{L}(T)\) to the measured data. Figure 4: Temperature dependence of the effective Grüneisen parameter \(\tilde{\gamma}\) derived from Eq. 45. The elastic parameter is calculated to be \(\gamma_{E}=1.798\)[9] and the ratio is chosen to be \(\gamma_{M}/\gamma_{E}=\nu=4\). The \(\tilde{\gamma}\) starts from \(\gamma_{E}\) because \(C_{M}\approx 0\) at low temperatures.
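As a quick numerical check of the coupling constants quoted above, the minimal snippet below inverts Eq. 36 for \(J^{\prime}\) and then applies \(J=J^{\prime}/2S^{2}\); it reproduces the values \(J^{\prime}\approx 6.48\,\)meV and \(J\approx 0.81\,\)meV used in the text.

```python
import math

K_B = 8.617333e-2   # Boltzmann constant in meV/K

def coupling_from_neel(T_N, S):
    """Invert Eq. 36 for the honeycomb Ising lattice, J' = k_B * T_N * ln(2 + sqrt(3)) / 2,
    then J = J' / (2 S^2) for the Hamiltonian H = -2J sum_<ij> S_i . S_j."""
    J_eff = K_B * T_N * math.log(2.0 + math.sqrt(3.0)) / 2.0
    return J_eff, J_eff / (2.0 * S**2)

J_eff, J = coupling_from_neel(T_N=114.0, S=2.0)
print("J' = %.2f meV, J = %.2f meV" % (J_eff, J))   # ~6.48 meV, ~0.81 meV
```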
According to the relation \(H_{E}=2|J|zS/\mu_{0}\gamma\), the effective interaction between sublattices then becomes \(J_{\rm sub}\approx-1\,{\rm meV}\) and the anisotropy is \(A\approx 6\,{\rm meV}\), which are close to the measured values \(J_{2}=-0.04\,{\rm meV}\), \(J_{3}=-0.96\,{\rm meV}\), and \(A=3.78\,{\rm meV}\) quoted from Refs. [7; 8]. The calculated \(C_{\rm mag}\) is shown in Fig. 2(a) and the total magnetic specific heat \(C_{M}\) is shown in Fig. 7(a). Obtained from first-principles calculations, the elastic parameters of FePS\({}_{3}\) are \(Y=103\,{\rm GPa}\), \(\sigma=0.304\), \(\rho=3375\,{\rm kg\,m^{-3}}\) and \(\bar{v}=3823\,{\rm m\,s^{-1}}\)[9]. According to Ref. [10], the elastic specific heat of FePS\({}_{3}\) is a mixture of Debye and Einstein parts with the Debye temperature \(T_{\rm db}=236\,{\rm K}\) and the Einstein temperature \(T_{\rm ei}=523\,{\rm K}\). The suggested combination ratio is \(0.54\), and the elastic specific heat \(C_{E}=(1-0.54)C_{\rm db}+0.54C_{\rm ei}\) can be derived from Eq. 27. In Fig. 7(b) we present the calculated \(C_{E}\) as a dotted blue line and the total specific heat \(C_{V}=C_{E}+C_{M}\) as a solid red line. Our theoretical predictions fit the measured data shown in Fig. 8 well and therefore validate the choice of parameters and the applicability of our model. Furthermore, using these parameters we get the elastic Gruneisen factor \(\gamma_{E}=\frac{3}{2}\left(\frac{1+\sigma}{2-3\sigma}\right)=1.798\) and the compressibility \(\beta_{T}=1.14\times 10^{-11}\,{\rm Pa^{-1}}\). By assuming the ratio \(\nu=\gamma_{M}/\gamma_{E}=4\) and applying the derived specific heats, we calculate and plot the effective Gruneisen parameter \(\tilde{\gamma}\) as a function of temperature in Fig. 4. It is then straightforward to derive the overall linear expansion coefficient for the hybrid system, \(\alpha_{L}=\tilde{\alpha}/3\), based on Eq. 44. Bear in mind that if one uses the molar specific heat from Fig. 7(b), the density should also be chosen as the molar density, which is \(\rho=18443\,\mathrm{mol}\,\mathrm{m}^{-3}\) for FePS\({}_{3}\). As shown in Fig. 9, the theoretical prediction for \(\alpha_{L}\) fits the measured data well, which consolidates the scheme of merging the magnetoelastic coupling into the non-magnetic equations of motion for the hybrid system. Figure 5: (a) Solid blue line: the measured fundamental resonator frequency \(f_{0}\) as a function of temperature for the FePS\({}_{3}\) plate-membrane. Solid red line: the derivative of \(f_{0}^{2}\) with respect to \(T\). (b) Derived linear thermal expansion coefficient of the FePS\({}_{3}\) plate-membrane according to Eq. 49. Quoted from Fig. 2 in Ref. [9]. Figure 6: Schematic of the magnetic lattice of FePS\({}_{3}\), quoted from Ref. [8]. White dots mean the spin pointing out of the page and black dots mean the spin pointing into the page. \(J_{1},J_{2},J_{3}\) are the first-, second-, and third-nearest-neighbour interactions for the Hamiltonian \(H=-\sum_{i,j}J_{i,j}\mathbf{S}_{i}\cdot\mathbf{S}_{j}\)[7]. The magnon dispersion relation with the effective exchange field is calculated based on the sublattice structure indicated by the red and blue rhombi. The total spin of the magnetic Fe atom is \(S=2\) and the coordination number for the sublattice is \(z=2\). In order to calculate and plot the damping coefficient \(Q^{-1}\) according to Eq.
In order to calculate and plot the damping coefficient \(Q^{-1}\) according to Eq. 46, one still needs to know the temperature dependence of the thermal conductivity \(\kappa\), especially in hybrid materials whose thermal conduction has several different origins. For FePS\({}_{3}\) we have \(\kappa=\kappa_{\mathrm{ph}}+\kappa_{\mathrm{mag}}\), and we can ignore the scattering between phonons and magnons because the magnon energies in antiferromagnets are usually in the THz range while the phonon energies are usually of several GHz, which means the coupling between these two quasi-particles is small. As stated in the previous section, the particle lifetime is limited by boundary scattering and can be treated as a constant \(\tau=\tau_{0}\). The \(\kappa_{\mathrm{mag}}\) can be derived according to Eq. 34 together with the material constants and the fitting parameter \(\tau_{0,\mathrm{mag}}\approx 3.8\,\mathrm{ps}\)[39]. As for the phonon contribution, we simplify the analysis by utilizing the Debye averaged sound velocity and apply the fitting parameter \(\tau_{0,\mathrm{ph}}\approx 0.8\,\mathrm{ps}\), such that \(\kappa_{\mathrm{ph}}=C_{E}\bar{v}^{2}\tau_{0,\mathrm{ph}}\). The total thermal conductivity is plotted in Fig. 10(a), and we see that it is much smaller than the value measured for the bulk FePS\({}_{3}\) compound, which has \(\kappa\approx 1\,\mathrm{W}/\mathrm{mK}\) at room temperature [5]. This is due to the membrane geometry: a thickness of only \(h=45\,\mathrm{nm}\) limits the mobility of the phonons and thus leads to the small thermal conductivity. The transverse thermal time constant \(\tau_{z}=h^{2}\rho C_{V}/\pi\kappa\), which measures the time for establishing temperature equilibrium across the plate, is also plotted, and it is close to the Siskins measurement. With the parameter \(\xi=\pi\sqrt{f_{0}\,\tau_{z}}\) and based on the previously derived expansion coefficient \(\tilde{\alpha}\) and total specific heat \(C_{V}\), we derive the damping coefficient \(Q^{-1}\) shown in Fig. 11. The agreement between the theoretical prediction and the experimental data is good, and the drop of thermal transfer after the phase transition can be ascribed to the depletion of magnons as thermal carriers.
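Before evaluating \(Q^{-1}\), it is useful to see the magnitudes of the auxiliary quantities defined above. The sketch below simply evaluates \(\tau_{z}=h^{2}\rho C_{V}/\pi\kappa\) and \(\xi=\pi\sqrt{f_{0}\tau_{z}}\); the thickness and density are the membrane values from the text, while \(\kappa\), the specific heat per unit mass and \(f_{0}\) are illustrative placeholders not listed explicitly in this excerpt.

```python
import numpy as np

# Transverse thermal time constant and the dimensionless parameter xi entering Eq. 46.
# h and rho are the FePS3 membrane values from the text; kappa, the specific heat per
# unit mass C_V and the resonance frequency f0 are placeholder numbers for illustration.
# C_V is taken per unit mass so that tau_z = h^2 * rho * C_V / (pi * kappa) has units of seconds.

h = 45e-9        # membrane thickness, m
rho = 3375.0     # mass density, kg/m^3
C_V = 300.0      # total specific heat per unit mass, J/(kg K) -- illustrative
kappa = 0.3      # total thermal conductivity, W/(m K) -- illustrative (< 1 W/mK, cf. Fig. 10)
f0 = 40e6        # fundamental resonance frequency, Hz -- illustrative, read off Fig. 5

tau_z = h**2 * rho * C_V / (np.pi * kappa)   # time to reach thermal equilibrium across the plate
xi = np.pi * np.sqrt(f0 * tau_z)

print(f"tau_z = {tau_z * 1e9:.2f} ns,  xi = {xi:.2f}")
```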
Figure 7: (a) The magnetic specific heat \(C_{M}=C_{\mathrm{ls}}+C_{\mathrm{mag}}\) is the sum of the 2D Ising statistics and the magnon contribution. (b) Solid red: total specific heat \(C_{V}=C_{E}+C_{M}\) of FePS\({}_{3}\). It shows an anomaly around the phase transition because of the divergence of the magnetic \(C_{M}\). Dotted blue: the elastic specific heat \(C_{E}=(1-0.54)C_{\mathrm{db}}+0.54C_{\mathrm{ei}}\) according to Ref. [10]. We point out that there are 5 mol of atoms per 1 mol of molecules for the FePS\({}_{3}\) compound.
Figure 8: Measured specific heat of FePS\({}_{3}\), quoted from Takano's paper [10]. (a) The experimental data and Takano's prediction for \(C_{M}\). In his calculation, the magnetic specific heat instantly decays to zero, which does not fit the measurements, whereas our curve fits better. (b) The experimental data for the total specific heat. Note that there the temperature ranges from 0 to 300 K, while in our plot the temperature stops at 200 K.
Figure 9: Solid red: theoretically predicted linear expansion coefficient \(\alpha_{L}=\tilde{\alpha}/3\) based on Eq. 44, with the derived specific heat from Fig. 7 and the effective Grüneisen parameter \(\tilde{\gamma}\) from Fig. 4. Solid blue: experimentally derived \(\alpha_{L}\) from Eq. 49(b).
## V Summary and outlook
In conclusion, we studied the magnetoelastic effect on the thermal transfer within a thin AFM plate over a wide range of temperatures across the magnetic phase transition. Specifically, we developed a theory that merges the exchange magnetoelastic interaction into the thermoelastic free energy and further predicted the temperature dependence of observables such as the specific heat \(C_{V}\), the linear expansion coefficient \(\tilde{\alpha}\), and the damping factor \(Q^{-1}\) for the quasi-2D Ising AFM material FePS\({}_{3}\). Compared to the experimentally measured data, our theoretical predictions agree very well, especially for the specific heat and the linear expansion coefficient. As for the transport-related properties, the theoretical plot of \(Q^{-1}(T)\) shows an overall trend consistent with the measured curve, although it still has room for improvement. This is because in this work we have simplified the magnon and phonon velocities \(\mathbf{v}_{k}\) to be homogeneous and utilized an isotropic thermal conductivity for the analysis. For a quasi-2D material these assumptions may not be sufficient, and one can improve the description of the transport properties by studying the detailed lattice structure [8]. It may also be helpful to look for a double-peak effect [40] in \(\kappa(T)\), which could explain the secondary surge of \(Q^{-1}\) after \(T>T_{N}\). Nevertheless, our theoretical treatment builds a general scheme for studying the thermal observables of a magnetic-elastic-thermal integrated system. The key is generalizing the Grüneisen relation by incorporating the various contributions and arriving at an effective Grüneisen coefficient \(\tilde{\gamma}\) (Eq. 45). This quantity essentially describes the variation of the internal energy with respect to the volume change, and its temperature dependence represents the changing _weight_ of each component of the hybrid system in the internal energy. Therefore the scheme developed in this work can be extended to include other contributors, such as electrons in spintronic and spin-caloritronic devices.
2309.15338
Counterintuitive patterns on angles and distances between lattice points in high dimensional hypercubes
Let $\mathcal{S}$ be a finite set of integer points in $\mathbb{R}^d$, which we assume has many symmetries, and let $P\in\mathbb{R}^d$ be a fixed point. We calculate the distances from $P$ to the points in $\mathcal{S}$ and compare the results. In some of the most common cases, we find that they lead to unexpected conclusions if the dimension is sufficiently large. For example, if $\mathcal{S}$ is the set of vertices of a hypercube in $\mathbb{R}^d$ and $P$ is any point inside, then almost all triangles $PAB$ with $A,B\in\mathcal{S}$ are almost equilateral. Or, if $P$ is close to the center of the cube, then almost all triangles $PAB$ with $A\in \mathcal{S}$ and $B$ anywhere in the hypercube are almost right triangles.
Jack Anderson, Cristian Cobeli, Alexandru Zaharescu
2023-09-27T00:59:14Z
http://arxiv.org/abs/2309.15338v1
Counterintuitive patterns on angles and distances between lattice points in high dimensional hypercubes ###### Abstract. Let \(\mathcal{S}\) be a finite set of integer points in \(\mathbb{R}^{d}\), which we assume has many symmetries, and let \(P\in\mathbb{R}^{d}\) be a fixed point. We calculate the distances from \(P\) to the points in \(\mathcal{S}\) and compare the results. In some of the most common cases, we find that they lead to unexpected conclusions if the dimension is sufficiently large. For example, if \(\mathcal{S}\) is the set of vertices of a hypercube in \(\mathbb{R}^{d}\) and \(P\) is any point inside, then almost all triangles \(PAB\) with \(A,B\in\mathcal{S}\) are almost equilateral. Or, if \(P\) is close to the center of the cube, then almost all triangles \(PAB\) with \(A\in\mathcal{S}\) and \(B\) anywhere in the hypercube are almost right triangles. Key words and phrases: Hypercubes, lattice points, Euclidean distance 2020 Mathematics Subject Classification: 11B99; 11K99, 11P21, 51M20, 52Bxx ## 1. Introduction Recent developments in network communications [12, 17] or artificial intelligence [6] have shed new light on studies of graphs and special models based on sets explored in combinatorial geometry or related to lattice points in multidimensional spaces [1, 3, 7, 10, 11, 14, 15]. Our object in this article is to present a few results related to the fact that, in high dimensional hypercubes, a random pick of lattice points has essentially no chance of finding some that are at an 'exceptional distance' from each other when the dimension goes to infinity. (Here, an _exceptional distance_ is any one that is different from the average.) Let \(\mathcal{S}\subset\mathbb{R}^{d}\), \(d\geq 1\), be a finite set and let \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\). If we look from a distant point \(\boldsymbol{a}\) to the points in \(\mathcal{S}\), we find that they are all at about the same distance, and this common distance gets closer to a certain value the farther away from \(\mathcal{S}\) our point of view is. On the contrary, if our point of view is close to \(\mathcal{S}\), even in \(\mathcal{S}\) or in its convex envelope, we see a variety of distances ranging from zero to the diameter of \(\mathcal{S}\). But what we observe is largely influenced by the size of the space and its dimension. Our goal here is to highlight some counterintuitive phenomena, some of them somehow related to the ones that have been studied for the set of lattice points visible from each other in a hypercube [3]. In order to illustrate two of these phenomena, let us note that if we randomly pick triangles with vertices at the lattice points of a cube that is large enough, the likelihood of encountering a significant number of some special triangles is low. For instance, in Figure 1, we can see two types of selections, each with a distinct feature, in dimensions 2 and 3. The first type of choice conditions the triangles to have a common vertex, while the second one requires that two of the triangle's vertices be chosen randomly from the cube's vertices, with the third one remaining unrestricted. Then we can wonder, what are the odds of getting similar triangles in the first case or non-degenerate isosceles triangles in the second case? At first, the questions may appear uninteresting, as the probability is so small in both situations.
Furthermore, as the size of the cube and the dimension increase, the variety of these triangles increases immensely, and the attempt to randomly find the special ones seems completely in vain. Despite this, the situation is not like that at all, but rather the complete opposite. Thus, Theorem 1 shows that, if the dimension of the hypercube becomes large enough, then almost all triangles that have two vertices at the corners of the hypercube and the third at a lattice point inside are almost isosceles. And on the same note, if both the size of the hypercube and the dimension become sufficiently large, then Theorem 3 shows that almost all triangles with vertices anywhere on the lattice points of the hypercube, which have a certain common vertex, not only are nearly isosceles but also have a particular shape, being almost all almost similar. To make things precise, let \(N\geq 1\) be an integer and let \(\mathcal{W}=\mathcal{W}(d,N)\) be the maximal hypercube of lattice points from \([0,N]^{d}\). Since we are interested both in the discrete case and in the limit process, good coverage of the phenomenon is obtained if we choose \(\mathcal{S}\subseteq\mathcal{W}\). We measure the distance between points \(\boldsymbol{v}^{\prime},\boldsymbol{v}^{\prime\prime}\in\mathbb{R}^{d}\) with the Euclidean distance \[\mathfrak{d}(\boldsymbol{v}^{\prime},\boldsymbol{v}^{\prime\prime})=\big{(}(v_{1}^{\prime\prime}-v_{1}^{\prime})^{2}+\cdots+(v_{d}^{\prime\prime}-v_{d}^{\prime})^{2}\big{)}^{1/2}\] and, to compare sizes from different dimensions with each other, we use the _normalized distance_: \[\mathfrak{d}_{d}(\boldsymbol{v}^{\prime},\boldsymbol{v}^{\prime\prime})=\frac{1}{\sqrt{d}N}\big{(}(v_{1}^{\prime\prime}-v_{1}^{\prime})^{2}+\cdots+(v_{d}^{\prime\prime}-v_{d}^{\prime})^{2}\big{)}^{1/2}.\] Then the normalized distance between two opposite vertices, the farthest-apart points in \(\mathcal{W}\), is \(\mathfrak{d}_{d}\left((0,\ldots,0),(N,\ldots,N)\right)=1\). In direct contrast, besides '_the thickest_' hyperplane \(\mathcal{W}\), we also consider '_the thinnest_' one, that of dimension zero, the set of vertices of \([0,N]^{d}\), which we denote by \(\mathcal{V}=\mathcal{V}(d,N)\). For orientation in and around \(\mathcal{W}\), a useful reference from a point \(\boldsymbol{a}\) turns out to be the distance from \(\boldsymbol{a}\) to the center of the cube, \(\boldsymbol{c}\). That is why we denote \(r_{\boldsymbol{a}}:=\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{c})\). From an arbitrary point \(\boldsymbol{a}\), in Sections 2 and 5, we find exact formulas for the average squared distances to the points in \(\mathcal{V}\) or in \(\mathcal{W}\), respectively. Also, we calculate the second moments about these averages in both cases. They are the main tool that allows us to derive striking properties that most pairs or triples of points in the hypercube have in high dimensions. We mention that similar procedures were used recently in other settings. For example, in a continuum case, in order to provide a framework for studying multifractal geometry, the authors of [2] and [16] study the average distance and the asymptotic behavior of higher moments of self-similar measures on self-similar subsets of \(\mathbb{R}\), and on graph-directed self-similar subsets of \(\mathbb{R}\). Corresponding characteristic properties of lattice points that are visible from each other were observed in [3]. Averages of relative distances from points in geometric figures were also the object of study in the articles [5, 8, 9, 12, 13].
Figure 1. Random triangles, in 2D and 3D, with vertices of integer coordinates in \([0,100]\). In each image, the triangles are chosen in such a way that they meet one of the following conditions: A. All triangles have a common vertex, initially chosen randomly but later fixed. B. All triangles have two vertices randomly chosen from the vertices of the cube, while the third vertex is free.
To exemplify our results regarding, for example, the vertices of the hypercube, one may ask what the expected distance is from them to a fixed arbitrary point \(\mathbf{a}\) and what the probability is that such a distance is close to the average. In Section 3, we show that, for any fixed point \(\mathbf{a}\in\mathcal{W}\), almost all vertices are at a normalized distance from \(\mathbf{a}\) that is close to \(\sqrt{1/4+r_{\mathbf{a}}^{2}}\), so long as the dimension \(d\) is sufficiently large. As a consequence, it follows that almost all triangles formed from \(\mathbf{a}\) and two vertices of the hypercube will be nearly isosceles, since the distances from \(\mathbf{a}\) to each of the two vertices will both be close to the same value. **Theorem 1**.: _For all \(\varepsilon>0\), there exists an integer \(d_{\varepsilon}\) such that, for all integers \(d\geq d_{\varepsilon}\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), the proportion of triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) such that_ \[|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{1}})-\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{2}})|\leq\varepsilon,\] _where \(\mathbf{v_{1}},\mathbf{v_{2}}\in\mathcal{V}\), is greater than or equal to \(1-\varepsilon\)._ Another consequence arises from noticing that, for any vertex \(\mathbf{v}\in\mathcal{V}\), the square of the normalized distance from the center of the cube to \(\mathbf{v}\) is \(1/4\). As a result, for almost all vertices \(\mathbf{v}\), the square of the distance from \(\mathbf{a}\) to \(\mathbf{v}\) is almost the sum of the squares of the distances from \(\mathbf{c}\) to \(\mathbf{a}\) and from \(\mathbf{c}\) to \(\mathbf{v}\). Therefore, it is natural to ponder whether \((\mathbf{a},\mathbf{c},\mathbf{v})\) may be close to a right triangle, and in fact this is the case so long as \(\mathbf{a}\) is not too near to \(\mathbf{c}\). **Theorem 2**.: _For all \(\varepsilon>0\), there exists an integer \(d_{\varepsilon}\), and a function \(f(d)\leq 1/2\), such that for all integers \(d\geq d_{\varepsilon}\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\) with \(\mathfrak{d}_{d}(\mathbf{a},\mathbf{c})\geq f(d)\) (where \(\mathbf{c}\) is the center of the hypercube), the proportion of triangles \((\mathbf{a},\mathbf{c},\mathbf{v})\) with \(\mathbf{v}\in\mathcal{V}\) and whose angle \(\theta_{\mathbf{c}}(\mathbf{v})\) at \(\mathbf{c}\) satisfies_ \[|\cos\theta_{\mathbf{c}}(\mathbf{v})|\leq\varepsilon,\] _is greater than or equal to \(1-\varepsilon\)._ Precise estimates and the effective-bound versions of Theorems 1 and 2 are proved in Section 4. In the second part of our manuscript, starting with Section 5, we turn our focus to distances from a fixed point \(\mathbf{a}\) to any integer point \(\mathbf{w}\) in the cube.
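As a quick numerical illustration of this concentration phenomenon (not part of the proofs below), one can sample random vertices of the hypercube and compare the normalized distances to a fixed point \(\boldsymbol{a}\) with the predicted value \(\sqrt{1/4+r_{\boldsymbol{a}}^{2}}\). The dimension, the size \(N\) and the sample count in the following Python sketch are arbitrary choices.

```python
import numpy as np

# Sample random vertices v of [0, N]^d and check that the normalized distances
# d_d(a, v) from a fixed lattice point a concentrate around sqrt(1/4 + r_a^2),
# where r_a is the normalized distance from a to the center of the cube.

rng = np.random.default_rng(0)
d, N, samples = 1000, 10, 2000

a = rng.integers(0, N + 1, size=d)                    # a fixed lattice point a in W
r_a = np.linalg.norm(a - N / 2.0) / (np.sqrt(d) * N)  # normalized distance to the center

V = rng.integers(0, 2, size=(samples, d)) * N         # random vertices of [0, N]^d
dists = np.linalg.norm(V - a, axis=1) / (np.sqrt(d) * N)

predicted = np.sqrt(0.25 + r_a**2)
print(f"predicted sqrt(1/4 + r_a^2) = {predicted:.4f}")
print(f"observed  mean = {dists.mean():.4f},  std = {dists.std():.5f}")
print(f"fraction of vertices within 0.02 of the prediction: "
      f"{np.mean(np.abs(dists - predicted) < 0.02):.3f}")
```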
We similarly find that almost all points \(\mathbf{w}\in\mathcal{W}\) are at a normalized distance from \(\mathbf{a}\) which is close to \(\sqrt{1/12+1/(6N)+\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{c})}\), provided that the dimension \(d\) is sufficiently large. Furthermore, we will also show that almost all pairs of points in the cube are at a relative distance close to \(\sqrt{1/6+1/(3N)}\). As a consequence, we find that almost all triangles with one vertex at \(\mathbf{a}\) and the other two anywhere in \(\mathcal{W}\) are nearly identical. We summarise this fact in the following theorem, which, in explicit and effective form, we prove in Section 6. **Theorem 3**.: _For any \(\varepsilon>0\), there exist positive integers \(d_{\varepsilon}\), \(N_{\varepsilon}\), such that, for all integers \(d\geq d_{\varepsilon}\), \(N\geq N_{\varepsilon}\), and any point \(\mathbf{a}\in\mathcal{W}\), the proportion of triangles \((\mathbf{a},\mathbf{w}_{1},\mathbf{w}_{2})\), with \(\mathbf{w}_{1},\mathbf{w}_{2}\in\mathcal{W}\), in which_ \[\left|\mathfrak{d}_{d}(\mathbf{w}_{1},\mathbf{w}_{2})-\frac{1}{\sqrt{6}}\right|\leq\varepsilon,\text{ and }\left|\mathfrak{d}_{d}(\mathbf{a},\mathbf{w}_{j})-\sqrt{\frac{1}{12}+r_{\mathbf{a}}^{2}}\right|\leq\varepsilon,\text{ for }j=1,2\] _is greater than or equal to \(1-\varepsilon\). (Here, \(r_{\mathbf{a}}=\mathfrak{d}_{d}(\mathbf{a},\mathbf{c})\) denotes the normalized distance from \(\mathbf{a}\) to the center of \(\mathcal{W}\).)_ For a probabilistic description of some natural expectations in high dimensional hypercubes we refer the reader to [3, Section 8]. It is a super-fast approach to the subject, although, there, the discussion is done in a continuum and the positions of both the observer and the viewed point are variable, while in this paper, most of the time, the observer has a fixed position. ## 2. Distances between any fixed point and the vertices of the hypercube For any \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\in\mathcal{W}\), in the following we denote \(\|\boldsymbol{a}\|^{2}:=a_{1}^{2}+\cdots+a_{d}^{2}\) and \(\boldsymbol{|a|}:=a_{1}+\cdots+a_{d}\). Let \(\mathcal{V}\) denote the set of all vertices of \([0,N]^{d}\). This cube has \(2^{d}\) vertices and each of them has components equal to \(0\) or \(N\). Notice that if \(\mathcal{V}\) is seen as a subset of the set of lattice points \(\mathcal{W}\), then no two vertices in \(\mathcal{V}\) are visible from each other, since there are always other points of integer coordinates in \([0,N]^{d}\) that lie between them, provided that \(N\geq 2\). The set of points in \(\mathcal{W}\) that are visible from each other was the object of study in [3]. ### The average \(A_{\boldsymbol{a},\mathcal{V}}(d,N)\) Let \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\) be fixed and let \(A_{\boldsymbol{a},\mathcal{V}}(d,N)\) denote the average of the squares of the distances from \(\boldsymbol{a}\) to all vertices \(\boldsymbol{v}\in\mathcal{V}\).
We have \[\begin{split}A_{\boldsymbol{a},\mathcal{V}}(d,N)&=\frac{1}{\#\mathcal{V}}\sum_{\boldsymbol{v}\in\mathcal{V}}\mathfrak{d}^{2}(\boldsymbol{v},\boldsymbol{a})\\ &=\frac{1}{2^{d}}\sum_{\boldsymbol{v}\in\mathcal{V}}\left((v_{1}-a_{1})^{2}+\cdots+(v_{d}-a_{d})^{2}\right)\\ &=\frac{1}{2^{d}}\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{j=1}^{d}v_{j}^{2}-\frac{1}{2^{d-1}}\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{j=1}^{d}v_{j}a_{j}+\frac{1}{2^{d}}\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{j=1}^{d}a_{j}^{2}.\end{split}\] For any fixed \(j\), there are \(2^{d-1}\) vertices \(\boldsymbol{v}\in\mathcal{V}\) with the \(j\)-th component equal to \(0\), while the remaining ones have the \(j\)-th component equal to \(N\). Then \[A_{\boldsymbol{a},\mathcal{V}}(d,N)=\frac{1}{2^{d}}\sum_{j=1}^{d}2^{d-1}N^{2}-\frac{1}{2^{d-1}}\sum_{j=1}^{d}a_{j}2^{d-1}N+\frac{1}{2^{d}}\,\|\boldsymbol{a}\|^{2}\,2^{d}=\frac{1}{2}dN^{2}-\boldsymbol{|a|}N+\|\boldsymbol{a}\|^{2}\,.\] We state the result in the next lemma. **Lemma 1**.: _Let \(\mathcal{V}\) be the set of vertices of the hypercube \([0,N]^{d}\), where \(N\geq 1\) and \(d\geq 1\) are integers. Let \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\) be fixed. Then, the average of all the squares of distances from \(\boldsymbol{a}\) to points in \(\mathcal{V}\) is_ \[A_{\boldsymbol{a},\mathcal{V}}(d,N)=\frac{1}{\#\mathcal{V}}\sum_{\boldsymbol{v}\in\mathcal{V}}\mathfrak{d}^{2}(\boldsymbol{v},\boldsymbol{a})=\frac{1}{2}dN^{2}-\boldsymbol{|a|}N+\|\boldsymbol{a}\|^{2}\,. \tag{1}\] In particular, Lemma 1 says that the average of the squared distances from the origin to the vertices of \([0,N]^{d}\) equals \(dN^{2}/2\), which is the same as saying that the normalized root-mean-square distance is \(1/\sqrt{2}\). ### The second moment about the average distances to the vertices Starting with the definition of the second moment, which we denote by \(\mathfrak{M}_{2;\mathbf{a},\mathcal{V}}(d,N)\), we rearrange the terms in its defining summation to aggregate the average and make use of Lemma 1. Thus, writing \(A_{\mathbf{a},\mathcal{V}}\) for \(A_{\mathbf{a},\mathcal{V}}(d,N)\) for short, we have: \[\begin{split}\mathfrak{M}_{2;\mathbf{a},\mathcal{V}}(d,N)&:=\frac{1}{\#\mathcal{V}}\sum_{\mathbf{v}\in\mathcal{V}}\left(\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})-A_{\mathbf{a},\mathcal{V}}\right)^{2}\\ &=\frac{1}{2^{d}}\sum_{\mathbf{v}\in\mathcal{V}}\left(\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})-2\,\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})A_{\mathbf{a},\mathcal{V}}+A_{\mathbf{a},\mathcal{V}}^{2}\right)\\ &=\frac{1}{2^{d}}\left(\sum_{\mathbf{v}\in\mathcal{V}}\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})-2A_{\mathbf{a},\mathcal{V}}\sum_{\mathbf{v}\in\mathcal{V}}\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})+\sum_{\mathbf{v}\in\mathcal{V}}A_{\mathbf{a},\mathcal{V}}^{2}\right)\\ &=\frac{1}{2^{d}}\cdot\Sigma_{\mathbf{a},\mathcal{V}}-A_{\mathbf{a},\mathcal{V}}^{2}.\end{split} \tag{2}\] To find the sum denoted by \(\Sigma_{\mathbf{a},\mathcal{V}}\) in (2), we write it explicitly: \[\Sigma_{\mathbf{a},\mathcal{V}}:=\sum_{\mathbf{v}\in\mathcal{V}}\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})=\sum_{\mathbf{v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}h(v_{m},v_{n},a_{m},a_{n}), \tag{3}\] where \(h(v_{m},v_{n},a_{m},a_{n})=(v_{m}-a_{m})^{2}(v_{n}-a_{n})^{2}\) is the sum of the following nine monomials: \[\begin{split} h(v_{m},v_{n},a_{m},a_{n})=& v_{m}^{2}v_{n}^{2}-2v_{m}^{2}v_{n}a_{n}+v_{m}^{2}a_{n}^{2}\\ &-2v_{m}a_{m}v_{n}^{2}+4v_{m}a_{m}v_{n}a_{n}-2v_{m}a_{m}a_{n}^{2}\\ &+a_{m}^{2}v_{n}^{2}-2a_{m}^{2}v_{n}a_{n}+a_{m}^{2}a_{n}^{2}.
\end{split} \tag{4}\] Next we take into account the contribution of each monomial in (4) to the corresponding sum in (2). For this we separate the group of the \(d\) diagonal terms (those with \(m=n\)) from the group of the \(d^{2}-d\) off-diagonal terms, and then count the number of vertices with the non-zero components at the right place. We have: \[\begin{split} S_{1}(\mathbf{a},\mathcal{V})&=\sum_{ \mathbf{v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}v_{n}^{2}=N^{4} \left(d2^{d-1}+(d^{2}-d)2^{d-2}\right);\\ S_{2}(\mathbf{a},\mathcal{V})&=\sum_{\mathbf{v}\in\mathcal{ V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}v_{n}a_{n}=N^{3}\left(2^{d-1}|\,\mathbf{a} |+(d-1)2^{d-2}|\,\mathbf{a}|\right);\\ S_{3}(\mathbf{a},\mathcal{V})&=\sum_{\mathbf{v}\in\mathcal{ V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}a_{n}^{2}=N^{2}d2^{d-1}\,\|\mathbf{a} \|^{2}\,;\end{split} \tag{5}\] then \[\begin{split} S_{4}(\boldsymbol{a},\mathcal{V})&=S_{2}( \boldsymbol{a},\mathcal{V})=\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{m=1}^{d} \sum_{n=1}^{d}v_{m}a_{m}v_{n}^{2}\\ &=N^{3}\left(2^{d-1}\boldsymbol{|}\,\boldsymbol{a}\boldsymbol{| }+(d-1)2^{d-2}\boldsymbol{|}\,\boldsymbol{a}\boldsymbol{|}\right);\\ S_{5}(\boldsymbol{a},\mathcal{V})&=\sum_{\boldsymbol {v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a_{m}v_{n}a_{n}=N^{2}\left(2 ^{d-1}\,\|\boldsymbol{a}\|^{2}+2^{d-2}(\boldsymbol{|}\,\boldsymbol{a}\boldsymbol {|}^{2}-\|\boldsymbol{a}\|^{2})\right);\\ S_{6}(\boldsymbol{a},\mathcal{V})&=\sum_{\boldsymbol {v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a_{m}a_{n}^{2}=N2^{d-1} \boldsymbol{|}\,\boldsymbol{a}\,\|\,\|\boldsymbol{a}\|^{2}\,;\end{split} \tag{6}\] and \[\begin{split} S_{7}(\boldsymbol{a},\mathcal{V})&=S_{ 3}(\boldsymbol{a},\mathcal{V})=\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{m=1}^ {d}\sum_{n=1}^{d}a_{m}^{2}v_{n}^{2}=N^{2}d2^{d-1}\,\|\boldsymbol{a}\|^{2}\,; \\ S_{8}(\boldsymbol{a},\mathcal{V})&=S_{6}( \boldsymbol{a},\mathcal{V})=\sum_{\boldsymbol{v}\in\mathcal{V}}\sum_{m=1}^{d} \sum_{n=1}^{d}a_{m}^{2}v_{n}a_{n}=N2^{d-1}\boldsymbol{|}\,\boldsymbol{a}\,\| \,\|\boldsymbol{a}\|^{2}\,;\\ S_{9}(\boldsymbol{a},\mathcal{V})&=\sum_{\boldsymbol {v}\in\mathcal{V}}\sum_{m=1}^{d}\sum_{n=1}^{d}a_{m}^{2}a_{n}^{2}=2^{d}\,\| \boldsymbol{a}\|^{4}\,.\end{split} \tag{7}\] On adding the sums in (5), (6) and (7) as many times as indicated by the appearances of their defining monomials in (4), we find that the sum \(\Sigma_{\boldsymbol{a},\mathcal{V}}\) from (3) is equal to \[\begin{split}\Sigma_{\boldsymbol{a},\mathcal{V}}&= \left(S_{1}(\boldsymbol{a},\mathcal{V})-2S_{2}(\boldsymbol{a},\mathcal{V})+S_ {3}(\boldsymbol{a},\mathcal{V})\right)\\ &\qquad\qquad\qquad-\left(2S_{4}(\boldsymbol{a},\mathcal{V})-4S_ {5}(\boldsymbol{a},\mathcal{V})+2S_{6}(\boldsymbol{a},\mathcal{V})\right)\\ &\qquad\qquad\qquad\qquad\qquad+\left(S_{7}(\boldsymbol{a}, \mathcal{V})-2S_{8}(\boldsymbol{a},\mathcal{V})+S_{9}(\boldsymbol{a}, \mathcal{V})\right)\\ &=2^{d-2}\left((d^{2}+d)N^{4}-4(d+1)\boldsymbol{|}\,\boldsymbol{a }\boldsymbol{|}N^{3}-8\boldsymbol{|}\,\boldsymbol{a}\boldsymbol{|}\,\| \boldsymbol{a}\|^{2}\,N\right.\\ &\qquad\qquad\qquad\qquad\qquad\left.+4\big{(}\boldsymbol{|}\, \boldsymbol{a}\boldsymbol{|}^{2}+(d+1)\,\|\boldsymbol{a}\|^{2}\,\big{)}N^{2}+4 \,\|\boldsymbol{a}\|^{4}\right).\end{split} \tag{8}\] Then, inserting the results from (8) and (1) in formula (2), we arrive at a closed form expression for \(\mathfrak{M}_{2;\boldsymbol{a},\mathcal{V}}(d,N)\), which we state in the next lemma. 
**Lemma 2**.: _Let \(d,N\geq 1\) be integers and let \(\mathcal{V}\) be the set of vertices of the hypercube \([0,N]^{d}\). Then, the second moment about the mean \(A_{\boldsymbol{a},\mathcal{V}}(d,N)\) equals_ \[\mathfrak{M}_{2;\boldsymbol{a},\mathcal{V}}(d,N)=\frac{1}{\#\mathcal{V}}\sum_{\boldsymbol{v}\in\mathcal{V}}\left(\mathfrak{d}^{2}(\boldsymbol{v},\boldsymbol{a})-A_{\boldsymbol{a},\mathcal{V}}(d,N)\right)^{2}=\frac{1}{4}dN^{4}-\boldsymbol{|a|}N^{3}+\|\boldsymbol{a}\|^{2}\,N^{2}.\] ## 3. The average of the squares and the square root of the average Since the normalized second moment \(\mathfrak{M}_{2;\boldsymbol{a},\mathcal{V}}(d,N)/d^{2}N^{4}=o(1)\) as \(d\to\infty\), it follows that for any fixed \(\boldsymbol{a}\in\mathcal{W}\), almost all normalized distances from \(\boldsymbol{a}\) to the vertices of \(\mathcal{W}\) are close to \(\sqrt{A_{\boldsymbol{a},\mathcal{V}}(d,N)/dN^{2}}\). This is the object of the following theorem. **Theorem 4**.: _Let \(B_{\mathbf{a},\mathcal{V}}:=A_{\mathbf{a},\mathcal{V}}(d,N)/dN^{2}\) denote the average of the squares of the normalized distances from \(\mathbf{a}\) to the vertices of \([0,N]^{d}\). Let \(\eta\in(0,1/2)\) be fixed. Then, for any integers \(d\geq 2\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), we have_ \[\frac{1}{\#\mathcal{V}}\#\left\{\mathbf{v}\in\mathcal{V}:\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})\in\left[\sqrt{B_{\mathbf{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\ \sqrt{B_{\mathbf{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\right\}\geq 1-\frac{1}{d^{1-2\eta}}\,.\] Proof.: Let \(\eta,d,N,\mathbf{a}\) be as in the statement of the theorem. Since \[-|\mathbf{a}|N+\|\mathbf{a}\|^{2}\leq 0,\] from Lemma 2 we find that \[\frac{\mathfrak{M}_{2;\mathbf{a},\mathcal{V}}(d,N)}{d^{2}N^{4}}\leq\frac{1}{4d}\,. \tag{9}\] On the other hand, for any parameters \(b,T>0\), \[\begin{split}\frac{\mathfrak{M}_{2;\mathbf{a},\mathcal{V}}(d,N)}{d^{2}N^{4}}&=\frac{1}{\#\mathcal{V}}\times\sum_{\mathbf{v}\in\mathcal{V}}\left(\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\right)^{2}\\ &\geq\frac{1}{\#\mathcal{V}}\times\sum_{\begin{subarray}{c}\mathbf{v}\in\mathcal{V}\\ |\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}|\geq\frac{1}{bT}\end{subarray}}\left(\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\right)^{2}\\ &\geq\frac{1}{\#\mathcal{V}}\times\sum_{\begin{subarray}{c}\mathbf{v}\in\mathcal{V}\\ |\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}|\geq\frac{1}{bT}\end{subarray}}\frac{1}{b^{2}T^{2}}\,.\end{split} \tag{10}\] Then, on combining (9) and (10), we see that \[\frac{1}{\#\mathcal{V}}\#\left\{\mathbf{v}\in\mathcal{V}\colon\left|\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\right|\geq\frac{1}{bT}\right\}\leq\frac{b^{2}T^{2}}{4d}. \tag{11}\] Now, by Lemma 1 and the definition of \(B_{\mathbf{a},\mathcal{V}}\) in the hypothesis, we find that \[\sqrt{B_{\mathbf{a},\mathcal{V}}}=\sqrt{\frac{1}{2}+\frac{1}{dN^{2}}\left(\|\mathbf{a}\|^{2}-N|\mathbf{a}|\right)}\geq\sqrt{\frac{1}{2}-\frac{1}{4}}=\frac{1}{2}\,.
\tag{12}\] (Here we have taken into account the fact that the minimum of \(\|\boldsymbol{a}\|^{2}-N|\boldsymbol{a}|\) is attained in the middle of the hypercube, which is a consequence of the fact that, independently in each of the \(d\) coordinates, the minimum of \(x\mapsto x^{2}-Nx\) is reached for \(x=N/2\).) Then, using inequality (12) and the fact that \(\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})\geq 0\), it follows that \[\begin{split}\left|\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{v})-B_{\mathbf{a},\mathcal{V}}\right|&=\left|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})-\sqrt{B_{\mathbf{a},\mathcal{V}}}\right|\left(\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})+\sqrt{B_{\mathbf{a},\mathcal{V}}}\right)\\ &\geq\frac{1}{2}\left|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})-\sqrt{B_{\mathbf{a},\mathcal{V}}}\right|.\end{split}\] Therefore, we can tighten the restriction in the set on the left-hand side of inequality (11) by taking \(b=2\), and, as a consequence, we find that \[\frac{1}{\#\mathcal{V}}\#\left\{\mathbf{v}\in\mathcal{V}\colon\left|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})-\sqrt{B_{\mathbf{a},\mathcal{V}}}\right|\geq\frac{1}{T}\right\}\leq\frac{T^{2}}{d}.\] Finally, we take \(T=d^{\eta}\) and then see that this completes the proof of Theorem 4. ## 4. Triangles involving the Vertices of the Hypercube In this section we analyze the set of triangles in which at least one vertex is a corner of the hypercube. We count them to see how many of them are close to, or far from, the average.
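The closed forms in Lemmas 1 and 2, which drive the estimates above, are easy to confirm numerically. The following Python sketch (not part of the paper) checks them by brute force over all \(2^{d}\) vertices for a small dimension \(d\) and a randomly chosen point \(\boldsymbol{a}\).

```python
import itertools
import numpy as np

# Brute-force check of Lemma 1 and Lemma 2 over all 2^d vertices of [0, N]^d
# for a small dimension d and a random lattice point a.

rng = np.random.default_rng(1)
d, N = 6, 7
a = rng.integers(0, N + 1, size=d).astype(float)
abs_a, norm_a_sq = a.sum(), (a**2).sum()          # |a| and ||a||^2 in the paper's notation

sq_dists = np.array([((np.array(v) - a) ** 2).sum()
                     for v in itertools.product((0.0, float(N)), repeat=d)])

A_brute = sq_dists.mean()
A_formula = 0.5 * d * N**2 - abs_a * N + norm_a_sq                 # Lemma 1

M2_brute = ((sq_dists - A_brute) ** 2).mean()
M2_formula = 0.25 * d * N**4 - abs_a * N**3 + norm_a_sq * N**2     # Lemma 2

print(f"Lemma 1: brute force {A_brute:.6f}  vs  formula {A_formula:.6f}")
print(f"Lemma 2: brute force {M2_brute:.6f}  vs  formula {M2_formula:.6f}")
```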
From Theorem 4, we see that the right-hand side of (13) is bounded below by \[\begin{split}\left(1-\frac{1}{d^{1-2\eta}}-\frac{1}{2^{d}}\right) &\left(1-\frac{1}{d^{1-2\eta}}-\frac{2}{2^{d}}\right)\\ &=1-\frac{2}{d^{1-2\eta}}+\frac{1}{d^{2-4\eta}}-\frac{3}{2^{d}}+ \frac{3}{2^{d}d^{1-2\eta}}+\frac{2}{2^{2d}}\\ &\geq 1-\frac{2}{d^{1-2\eta}},\end{split}\] for \(d\geq 8\), since in that range \(1/d^{2-4\eta}-3/2^{d}\geq 0\). We now arrive at the following theorem on isosceles triangles. **Theorem 5**.: _Let \(\boldsymbol{a}\in\mathcal{W}\) be fixed and let \(\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}\) denote the set of triangles with distinct vertices \(\boldsymbol{a}\) and \(\boldsymbol{v}_{1},\boldsymbol{v}_{2}\in\mathcal{V}\). Let \(\eta\in(0,1/2)\) be fixed. Then, for any integers \(d\geq 8\) and \(N\geq 1\), we have_ \[\frac{1}{\#\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}}\#\left\{( \boldsymbol{v_{1}},\boldsymbol{v_{2}})\in\mathcal{T}_{\boldsymbol{a}, \mathcal{V}^{2}}\colon\,|\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}}) -\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{2}})|\leq\frac{2}{d^{\eta}} \right\}\geq 1-\frac{2}{d^{1-2\eta}}.\] Seeing that almost every such triangle is almost isosceles, we may wonder if any of these triangles can be equilateral, or perhaps right triangle. Let \(\boldsymbol{c}=\left(\frac{N}{2},\ldots,\frac{N}{2}\right)\) be the center of the hypercube. Notice that \(\boldsymbol{c}\) may belong or not to \(\mathcal{W}\), but in any case, if \(N\) had been odd, than the distance from \(\boldsymbol{c}\) to a point in \(\mathcal{W}\) would have been not greater than \(\sqrt{d}/2\). This is the same as saying that the normalized distance from \(\boldsymbol{c}\) to \(\mathcal{W}\) is at most \(1/(2N)\) and we may make reasoning with a point with integer coordinates that is close to \(\boldsymbol{c}\) instead of \(\boldsymbol{c}\). For simplicity we may assume here that \(N\) is even, but this is not necessary, since in fact, in the proofs of Theorems 4 and 5, we did not make use of the fact that the coordinates of \(\boldsymbol{a}\) are integers. Note that all vertices from \(\mathcal{V}\) are equally far from the center and \[\mathfrak{d}_{d}(\boldsymbol{v},\boldsymbol{c})=\frac{1}{\sqrt{d}N}\Big{(} \sum_{1\leq j\leq d}\left(N/2\right)^{2}\Big{)}^{1/2}=\frac{1}{2},\text{ for }\boldsymbol{v}\in\mathcal{V}, \tag{14}\] while for arbitrary \(\boldsymbol{a}=(a_{1},\ldots,a_{d})\), the normalized distance to \(\boldsymbol{c}\) is \[\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{c})=\frac{1}{\sqrt{d}N}\Big{(} \sum_{1\leq j\leq d}\left(a_{j}-N/2\right)^{2}\Big{)}^{1/2}=\frac{1}{\sqrt{d}N }\left(\frac{dN^{2}}{4}-\boldsymbol{\mid}\boldsymbol{a}\boldsymbol{\mid}N+ \boldsymbol{\mid}\boldsymbol{a}\boldsymbol{\mid}^{2}\right)^{1/2}. \tag{15}\] Now let us point out the following two observations. _Remark 1_.: (1) Taking \(\boldsymbol{a}\) in \(\mathcal{V}\), Theorem 4 tells us that the normalized distance between almost any two vertices in \(\mathcal{V}\) is close to \(1/\sqrt{2}\). (2) By Lemma 1, the normalized average of the squares of distances from \(\boldsymbol{a}\) to vertices in \(\mathcal{V}\) is \(B_{\boldsymbol{a},\mathcal{V}}=A_{\boldsymbol{a},\mathcal{V}}/(dN^{2})=1/2- \boldsymbol{|\boldsymbol{a}|}/(dN)+\left\|\boldsymbol{a}\right\|^{2}/(dN^{2})\). 
Then, by (14) and (15) this can further be expressed as \[B_{\boldsymbol{a},\mathcal{V}}=\frac{1}{4}+\left(\frac{1}{4}-\frac{\boldsymbol{|\boldsymbol{a}|}}{dN}+\frac{\left\|\boldsymbol{a}\right\|^{2}}{dN^{2}}\right)=\mathfrak{d}_{d}^{2}(\boldsymbol{v},\boldsymbol{c})+\mathfrak{d}_{d}^{2}(\boldsymbol{a},\boldsymbol{c}),\text{ for any }\boldsymbol{v}\in\mathcal{V}. \tag{16}\] In particular, (16) shows that the average \(B_{\boldsymbol{a},\mathcal{V}}\) depends only on the normalized distance \(\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{c})\), which we shall further denote by \(r_{\boldsymbol{a}}\). On combining Theorem 5, (14), and the observations from Remark 1, we see that almost all triangles in \(\mathcal{T}_{\boldsymbol{a},\mathcal{V}^{2}}\) have one side length close to \(1/\sqrt{2}\), while the other two are both close to \(\sqrt{1/4+r_{\boldsymbol{a}}^{2}}\). Since, by normalization, we know that \(0\leq r_{\boldsymbol{a}}\leq 1/2\), it follows that \[\frac{1}{2}\leq\sqrt{\frac{1}{4}+r_{\boldsymbol{a}}^{2}}\leq\frac{1}{\sqrt{2}}. \tag{17}\] In particular, if \(r_{\boldsymbol{a}}=1/2\), which occurs when \(\boldsymbol{a}\) is a vertex, we see that almost all triangles have each of their side lengths close to \(1/\sqrt{2}\). In other words, almost all triangles formed by three vertices of the hypercube are 'almost equilateral'. On the other hand, if \(r_{\boldsymbol{a}}=0\), which occurs when \(\boldsymbol{a}=\boldsymbol{c}\) is at the center of the hypercube, we see that almost all triangles have side lengths close to \(1/\sqrt{2}\), \(1/2\), and \(1/2\), respectively, that is, they are almost isosceles with an almost right angle at \(\boldsymbol{c}\). Making this more explicit, we argue similarly as in (13) to find the proportion of non-degenerate triangles \((\boldsymbol{a},\boldsymbol{v_{1}},\boldsymbol{v_{2}})\) such that \[\begin{split}\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}}),\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{2}})&\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right],\text{ and }\\ \mathfrak{d}_{d}(\boldsymbol{v_{1}},\boldsymbol{v_{2}})&\in\left[\frac{1}{\sqrt{2}}-\frac{1}{d^{\eta}},\frac{1}{\sqrt{2}}+\frac{1}{d^{\eta}}\right].\end{split} \tag{18}\] Firstly, for any \(0<\eta<1/2\), from Theorem 4, we know that for any vertex \(\boldsymbol{v}\in\mathcal{V}\), the proportion of vertices \(\boldsymbol{v_{1}}\in\mathcal{V}\) such that \[\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}})\not\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\text{ or }\mathfrak{d}_{d}(\boldsymbol{v_{1}},\boldsymbol{v})\not\in\left[\frac{1}{\sqrt{2}}-\frac{1}{d^{\eta}},\frac{1}{\sqrt{2}}+\frac{1}{d^{\eta}}\right],\] is bounded above by \[\frac{1}{d^{1-2\eta}}+\frac{1}{d^{1-2\eta}}=\frac{2}{d^{1-2\eta}}.\]
Therefore, where \(\boldsymbol{v}\in\mathcal{V}\) can be taken to be any vertex, the proportion of non-degenerate triangles formed by distinct vertices \((\boldsymbol{v_{1}},\boldsymbol{v_{2}},\boldsymbol{a})\) which satisfy the conditions in (18) is bounded below by \[\begin{split}&\frac{1}{\#\mathcal{V}}\left(\#\left\{\boldsymbol{v_{1}}\in\mathcal{V}\colon\,\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{1}})\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right]\right\}-1\right)\\ &\quad\times\frac{1}{\#\mathcal{V}}\left(\#\left\{\boldsymbol{v_{2}}\in\mathcal{V}\colon\,\mathfrak{d}_{d}(\boldsymbol{a},\boldsymbol{v_{2}})\in\left[\sqrt{B_{\boldsymbol{a},\mathcal{V}}}-\frac{1}{d^{\eta}},\sqrt{B_{\boldsymbol{a},\mathcal{V}}}+\frac{1}{d^{\eta}}\right],\text{ and }\mathfrak{d}_{d}(\boldsymbol{v},\boldsymbol{v_{2}})\in\left[\frac{1}{\sqrt{2}}-\frac{1}{d^{\eta}},\frac{1}{\sqrt{2}}+\frac{1}{d^{\eta}}\right]\right\}-2\right)\\ &\geq\left(1-\frac{1}{d^{1-2\eta}}-\frac{1}{2^{d}}\right)\left(1-\frac{2}{d^{1-2\eta}}-\frac{2}{2^{d}}\right)\geq 1-\frac{3}{d^{1-2\eta}},\end{split}\] for \(d\geq 6\). As a consequence, we now have the following theorem. **Theorem 6**.: _Let \(\mathcal{T}_{\mathbf{a},\mathcal{V}^{2}}\) be the set of triangles with distinct vertices \(\mathbf{a}\) and \(\mathbf{v}_{1},\mathbf{v}_{2}\in\mathcal{V}\). Let \(\eta\in(0,1/2)\) be fixed. Then, for any integers \(d\geq 6\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), we have_ \[\frac{1}{\#\mathcal{T}_{\mathbf{a},\mathcal{V}^{2}}}\#\left\{(\mathbf{v_{1}},\mathbf{v_{2}})\in\mathcal{T}_{\mathbf{a},\mathcal{V}^{2}}:\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}}\text{ satisfy (18)}\right\}\geq 1-\frac{3}{d^{1-2\eta}}.\] _where, for any vertex \(\mathbf{v}\in\mathcal{V}\), \(\theta_{\mathbf{v}}\) is the angle between the lines going from \(\mathbf{c}\) to \(\mathbf{a}\) and from \(\mathbf{c}\) to \(\mathbf{v}\), respectively._ In plain words, Theorem 7 says that as long as \(\mathbf{a}\) is not too close to the center of the cube, almost all triangles formed by \(\mathbf{a}\), \(\mathbf{c}\), and a vertex of the cube are almost right triangles. ## 5. The spacings between a fixed point and the lattice points in the hypercube In this section we first calculate the average of the squared distances from a fixed point to all the lattice points in \(\mathcal{W}\). Afterwards, we use the result to find the second moment about this mean. This is the opposite extreme, in terms of dimension, of the problem dealt with before. Here, the whole hypercube of lattice points plays the role previously played by the vertices. ### The average \(A_{\mathbf{a},\mathcal{W}}(d,N)\) Let \(\mathbf{a}=(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\) be fixed and denote \[A_{\mathbf{a},\mathcal{W}}(d,N):=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\mathfrak{d}^{2}(\mathbf{a},\mathbf{v})\,.\] Using the definitions and rearranging the terms, we find that \[\begin{split} A_{\mathbf{a},\mathcal{W}}(d,N)&=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\sum_{j=1}^{d}(v_{j}-a_{j})^{2}\\ &=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\sum_{j=1}^{d}v_{j}^{2}-\frac{2}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\sum_{j=1}^{d}a_{j}v_{j}+\|\mathbf{a}\|^{2}\,.\end{split} \tag{19}\] Here, changing the order of summation, the sum of the squares is \[\sum_{j=1}^{d}\sum_{\mathbf{v}\in\mathcal{W}}v_{j}^{2}=d\sum_{\mathbf{v}\in\mathcal{W}}v_{1}^{2}=d(N+1)^{d-1}\sum_{v=0}^{N}v^{2}=\frac{dN(N+1)^{d}(2N+1)}{6}. \tag{20}\] In the same way, the mixed sum in (19) can be written as \[\sum_{j=1}^{d}\sum_{\mathbf{v}\in\mathcal{W}}a_{j}v_{j}=|\mathbf{a}|(N+1)^{d-1}\sum_{v=0}^{N}v=|\mathbf{a}|\frac{N(N+1)^{d}}{2}. \tag{21}\] On inserting the results (20) and (21) in (19) we find a closed form expression for \(A_{\mathbf{a},\mathcal{W}}(d,N)\), which we state in the next lemma. **Lemma 3**.: _Let \(d,N\geq 1\) be integers, and let \(\mathbf{a}\in\mathbb{R}^{d}\) be fixed. Let \(\mathcal{W}\) be the set of lattice points in \([0,N]^{d}\).
Then, the average of all squares of distances from \(\mathbf{a}\) to points in \(\mathcal{W}\) is_ \[A_{\mathbf{a},\mathcal{W}}(d,N)=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\mathfrak{d}^{2}(\mathbf{a},\mathbf{v})=\frac{dN(2N+1)}{6}-|\mathbf{a}|N+\|\mathbf{a}\|^{2}. \tag{22}\] ### The second moment about the mean The second moment about the mean \(A_{\mathbf{a},\mathcal{W}}=A_{\mathbf{a},\mathcal{W}}(d,N)\) for the whole hypercube is defined by \[\mathfrak{M}_{2;\mathbf{a},\mathcal{W}}(d,N):=\frac{1}{\#\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\left(\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})-A_{\mathbf{a},\mathcal{W}}\right)^{2}.\] Rearranging the terms in the summation, we may rewrite \(\mathfrak{M}_{2;\mathbf{a},\mathcal{W}}\) as \[\begin{split}\mathfrak{M}_{2;\mathbf{a},\mathcal{W}}(d,N)&=\tfrac{1}{(N+1)^{d}}\sum_{\mathbf{v}\in\mathcal{W}}\big{(}\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})-2\,\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})A_{\mathbf{a},\mathcal{W}}+A_{\mathbf{a},\mathcal{W}}^{2}\big{)}\\ &=\tfrac{1}{(N+1)^{d}}\bigg{(}\sum_{\mathbf{v}\in\mathcal{W}}\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})-2A_{\mathbf{a},\mathcal{W}}\sum_{\mathbf{v}\in\mathcal{W}}\mathfrak{d}^{2}(\mathbf{v},\mathbf{a})+\sum_{\mathbf{v}\in\mathcal{W}}A_{\mathbf{a},\mathcal{W}}^{2}\bigg{)}\\ &=\tfrac{1}{(N+1)^{d}}\cdot\Sigma_{\mathbf{a},\mathcal{W}}-A_{\mathbf{a},\mathcal{W}}^{2}.\end{split} \tag{23}\] Here the terms collected in \(\Sigma_{\mathbf{a},\mathcal{W}}\) are the analogues of those from relation (3), so that their sum is \[\Sigma_{\mathbf{a},\mathcal{W}}=\sum_{\mathbf{v}\in\mathcal{W}}\mathfrak{d}^{4}(\mathbf{v},\mathbf{a})=\sum_{\mathbf{v}\in\mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}h(v_{m},v_{n},a_{m},a_{n}), \tag{24}\] where \(h(v_{m},v_{n},a_{m},a_{n})=(v_{m}-a_{m})^{2}(v_{n}-a_{n})^{2}\) is the same sum of nine monomials from (4). Next we calculate the contribution of each of the nine types of terms to the total sum. In the process, we change the order of summation and take care whether the terms are on the diagonal (that is, whether \(m=n\)) or not. We denote by \(T_{k}(N)\) the sum of the \(k\)-th powers of the first \(N\) positive integers, that is, \(T_{k}(N)=1^{k}+2^{k}+\cdots+N^{k}\).
Thus, we obtain: \[\begin{split} S_{1}(\mathbf{a},\mathcal{W})&=\sum_{\bm {v}\in\mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}v_{n}^{2}\\ &=d(N+1)^{d-1}T_{4}(N)+(d^{2}-d)(N+1)^{d-2}T_{2}^{2}(N);\\ S_{2}(\mathbf{a},\mathcal{W})&=\sum_{\mathbf{v}\in\mathcal{W }}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}v_{n}a_{n}\\ &=\big{|}\,\mathbf{a}\big{|}(N+1)^{d-1}T_{3}(N)+\big{|}\,\mathbf{a}\big{|} (d-1)(N+1)^{d-2}T_{1}(N)T_{2}(N);\\ S_{3}(\mathbf{a},\mathcal{W})&=\sum_{\mathbf{v}\in\mathcal{W }}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}^{2}a_{n}^{2}=\|\mathbf{a}\|^{2}\,d(N+1)^{d-1} T_{2}(N);\end{split} \tag{25}\] then \[\begin{split} S_{4}(\mathbf{a},\mathcal{W})&=S_{2}(\bm {a},\mathcal{W})=\sum_{\mathbf{v}\in\mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a _{m}v_{n}^{2}\\ &=\big{|}\,\mathbf{a}\big{|}(N+1)^{d-1}T_{3}(N)+\big{|}\,\mathbf{a} \big{|}(d-1)(N+1)^{d-2}T_{1}(N)T_{2}(N);\\ S_{5}(\mathbf{a},\mathcal{W})&=\sum_{\mathbf{v}\in\mathcal{W }}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a_{m}v_{n}a_{n}\\ &=\|\mathbf{a}\|^{2}\,(N+1)^{d-1}T_{2}(N)+\left(\big{|}\,\mathbf{a} \big{|}^{2}-\|\mathbf{a}\|^{2}\right)(N+1)^{d-2}T_{1}^{2}(N);\\ S_{6}(\mathbf{a},\mathcal{W})&=\sum_{\mathbf{v}\in\mathcal{W }}\sum_{m=1}^{d}\sum_{n=1}^{d}v_{m}a_{m}a_{n}^{2}=\big{|}\,\mathbf{a}\big{|}\,\| \mathbf{a}\|^{2}\,(N+1)^{d-1}T_{1}(N);\end{split} \tag{26}\] and \[S_{7}(\boldsymbol{a},\mathcal{W}) =S_{3}(\boldsymbol{a},\mathcal{W})=\sum_{\boldsymbol{v}\in \mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}a_{m}^{2}v_{n}^{2}=\left\|\boldsymbol{a }\right\|^{2}d(N+1)^{d-1}T_{2}(N); \tag{27}\] \[S_{8}(\boldsymbol{a},\mathcal{W}) =S_{6}(\boldsymbol{a},\mathcal{W})=\sum_{\boldsymbol{v}\in \mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}a_{m}^{2}v_{n}a_{n}=\left|\boldsymbol{a }\right|\left\|\boldsymbol{a}\right\|^{2}(N+1)^{d-1}T_{1}(N);\] \[S_{9}(\boldsymbol{a},\mathcal{W}) =\sum_{\boldsymbol{v}\in\mathcal{W}}\sum_{m=1}^{d}\sum_{n=1}^{d}a_ {m}^{2}a_{n}^{2}=\left\|\boldsymbol{a}\right\|^{4}(N+1)^{d}.\] On combining (25), (26), (27), and (4), we find that \(\Sigma_{\boldsymbol{a},\mathcal{W}}\) from (24) equals \[\Sigma_{\boldsymbol{a},\mathcal{W}}= \big{(}S_{1}(\boldsymbol{a},\mathcal{W})-2S_{2}(\boldsymbol{a}, \mathcal{W})+S_{3}(\boldsymbol{a},\mathcal{W})\big{)} \tag{28}\] \[\qquad\qquad\qquad-\big{(}2S_{4}(\boldsymbol{a},\mathcal{W})-4S_ {5}(\boldsymbol{a},\mathcal{W})+2S_{6}(\boldsymbol{a},\mathcal{W})\big{)}\] \[\qquad+\big{(}S_{7}(\boldsymbol{a},\mathcal{W})-2S_{8}( \boldsymbol{a},\mathcal{W})+S_{9}(\boldsymbol{a},\mathcal{W})\big{)}\] \[= (N+1)^{d}\bigg{(}\Big{(}\frac{1}{9}d^{2}-\frac{4}{45}d\Big{)}\,N^ {4}+\Big{(}\frac{1}{9}d^{2}+\frac{17}{90}d+\Big{(}-\frac{2}{3}d-\frac{1}{3} \Big{)}\left|\boldsymbol{a}\right|\Big{)}\,N^{3}\] \[\qquad\qquad\qquad+\Big{(}\frac{1}{36}d^{2}+\frac{1}{180}d+ \Big{(}-\frac{1}{3}d-\frac{2}{3}+\left|\boldsymbol{a}\right|\Big{)}\left| \boldsymbol{a}\right|\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \Big{(}\frac{2}{3}d+\frac{1}{3}\Big{)}\left\|\boldsymbol{a}\right\|^{2}\Big{)} \,N^{2}\] \[\qquad\qquad\qquad+\Big{(}-\frac{1}{30}d+\Big{(}\frac{1}{3}d+ \frac{2}{3}-2\left|\boldsymbol{a}\right|\Big{)}\left\|a\right\|^{2}\Big{)}\,N +\left\|\boldsymbol{a}\right\|^{4}\bigg{)}.\] Finally, inserting the results from (28) and (22) into (23), we obtain the needed formula for \(\mathfrak{M}_{2;\boldsymbol{a},\mathcal{W}}(d,N)\). **Lemma 4**.: _Let \(d,N\geq 1\) be integers, and let \(A_{\boldsymbol{a},\mathcal{W}}(d,N)\) be the average distance from a fixed point \(\boldsymbol{a}\) to the points in the hypercube \(\mathcal{W}\). 
Then, the second moment about \(A_{\boldsymbol{a},\mathcal{W}}(d,N)\) is_ \[\begin{split}\mathfrak{M}_{2;\boldsymbol{a},\mathcal{W}}(d,N)&=\frac{1}{(N+1)^{d}}\sum_{\boldsymbol{v}\in\mathcal{W}}\big{(}\mathfrak{d}^{4}(\boldsymbol{v},\boldsymbol{a})-2\,\mathfrak{d}^{2}(\boldsymbol{v},\boldsymbol{a})A_{\boldsymbol{a},\mathcal{W}}+A_{\boldsymbol{a},\mathcal{W}}^{2}\big{)}\\ &=\frac{4}{45}dN^{4}+\Big{(}\frac{17}{90}d-\frac{1}{3}\left|\boldsymbol{a}\right|\Big{)}\,N^{3}\\ &\quad+\Big{(}\frac{1}{180}d-\frac{2}{3}\left|\boldsymbol{a}\right|+\frac{1}{3}\left\|\boldsymbol{a}\right\|^{2}\Big{)}\,N^{2}+\Big{(}-\frac{1}{30}d+\frac{2}{3}\left\|\boldsymbol{a}\right\|^{2}\Big{)}\,N.\end{split}\] ## 6. The chance to find points in \(\mathcal{W}\) that are at some uncommon spacing from each other The formulas for the average and the second moment obtained in Section 5 allow us to estimate the chance of finding points in \(\mathcal{W}\) that are situated at an uncommon (away from the average) spacing from each other. It turns out that, as the dimension \(d\) gets larger, the probability of selecting at random two points of \(\mathcal{W}\) whose distance is noticeably smaller or larger than the average becomes smaller and smaller, reducing to zero as \(d\) tends to infinity. Following the same argument used in the proof of Theorem 4, we obtain the following result, which shows that for any fixed \(\boldsymbol{a}\in\mathbb{R}^{d}\), almost all normalized distances from \(\boldsymbol{a}\) to the points in \(\mathcal{W}\) are close to \(\sqrt{A_{\boldsymbol{a},\mathcal{W}}/dN^{2}}\). **Theorem 8**.: _Let \(\eta\in(0,1/2)\) be fixed. Let \(B_{\mathbf{a},\mathcal{W}}=A_{\mathbf{a},\mathcal{W}}/dN^{2}\) denote the normalized average of the squared distance from \(\mathbf{a}\) to points in \(\mathcal{W}\). Then, for any integers \(d\geq 2\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathbb{R}^{d}\), we have_ \[\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v}\in\mathcal{W}:\mathfrak{d}_{d}(\mathbf{a},\mathbf{v})\in\left[\sqrt{B_{\mathbf{a},\mathcal{W}}}-\frac{1}{d^{\eta}},\sqrt{B_{\mathbf{a},\mathcal{W}}}+\frac{1}{d^{\eta}}\right]\right\}\geq 1-\frac{51}{15}\frac{1}{d^{1-2\eta}}.\] We can continue our quest by looking for triplets of points in \(\mathcal{W}\). In the same way as we proceeded in Section 4, we see that, for almost all pairs of distinct points \((\mathbf{v_{1}},\mathbf{v_{2}})\in\mathcal{W}^{2}\), both components are situated at a distance close to \(\sqrt{B_{\mathbf{a},\mathcal{W}}}\) from \(\mathbf{a}\). This means that almost all triangles formed by \(\mathbf{a}\) and two other points in the cube are 'almost isosceles'. We can make the argument explicit, as we did in Theorem 5 for vertices, to find the following analogous result. **Theorem 9**.: _Let \(\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}\subset\mathcal{W}^{2}\) be the set of all pairs of integer points \((\mathbf{v_{1}},\mathbf{v_{2}})\) which form a non-degenerate triangle together with \(\mathbf{a}\). Let \(\eta\in(0,1/2)\) be fixed. Then, for any integers \(d\geq 2\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), we have_ \[\frac{1}{\#\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}}\#\left\{(\mathbf{v_{1}},\mathbf{v_{2}})\in\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}:\,|\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{1}})-\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{2}})|\leq\frac{2}{d^{\eta}}\right\}\geq 1-\frac{102}{15}\frac{1}{d^{1-2\eta}}.\] The sides of these triangles can be found explicitly.
To see this, we first use Lemma 3 to express the normalized average solely in terms of the distance from the center of the cube to \(\mathbf{a}\). Thus, using (15), we have \[\begin{split} B_{\mathbf{a},\mathcal{W}}&=\frac{1}{3}+\frac{1}{6N}-\frac{|\mathbf{a}|}{dN}+\frac{\|\mathbf{a}\|^{2}}{dN^{2}}\\ &=\frac{1}{12}+\frac{1}{6N}+\left(\frac{1}{4}-\frac{|\mathbf{a}|}{dN}+\frac{\|\mathbf{a}\|^{2}}{dN^{2}}\right)\\ &=\frac{1}{12}+\frac{1}{6N}+\mathfrak{d}_{d}^{2}(\mathbf{a},\mathbf{c}).\end{split}\] Here, employing Theorem 8, the first thing we note is that, for almost all points \(\mathbf{v}\in\mathcal{W}\), the squared distance \(\mathfrak{d}_{d}^{2}(\mathbf{c},\mathbf{v})\) is close to \(1/12+1/(6N)\). It also follows that, for almost all pairs of points \(\mathbf{v_{1}},\mathbf{v_{2}}\in\mathcal{W}\), their mutual distance \(\mathfrak{d}_{d}(\mathbf{v_{1}},\mathbf{v_{2}})\) is close to \(\sqrt{B_{\mathbf{v_{1}},\mathcal{W}}}\), which is itself close to \(\sqrt{1/6+1/(3N)}\). Therefore, with our earlier notation \(r_{\mathbf{a}}:=\mathfrak{d}_{d}(\mathbf{a},\mathbf{c})\), we find that almost all triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) have side lengths close to \(\sqrt{1/6+1/(3N)}\), \(\sqrt{1/12+1/(6N)+r_{\mathbf{a}}^{2}}\), and \(\sqrt{1/12+1/(6N)+r_{\mathbf{a}}^{2}}\). If \(r_{\mathbf{a}}=0\), which occurs when \(\mathbf{a}\) is the center of the cube, we see that almost all triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) are almost right triangles. On the other hand, if \(r_{\mathbf{a}}\) is close to \(\sqrt{1/12+1/(6N)}\), then almost all triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) are almost equilateral. In order to make these remarks explicit, we first use the analogue of (11) in the proof of Theorem 8 to see that \[\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v_{1}}\in\mathcal{W}:\left|\mathfrak{d}_{d}^{2}(\mathbf{c},\mathbf{v_{1}})-\left(\frac{1}{12}+\frac{1}{6N}\right)\right|\geq\frac{1}{2\sqrt{6}d^{\eta}}\right\}\leq\frac{102}{15}\frac{1}{d^{1-2\eta}}.\] Furthermore, if \(\mathbf{v_{1}}\) is a fixed point such that \[\mathfrak{d}_{d}^{2}(\mathbf{c},\mathbf{v_{1}})\in\left[\frac{1}{12}+\frac{1}{6N}-\frac{1}{2\sqrt{6}d^{\eta}},\ \frac{1}{12}+\frac{1}{6N}+\frac{1}{2\sqrt{6}d^{\eta}}\right],\] then \[\begin{split}\frac{1}{\#\mathcal{W}}&\#\left\{\mathbf{v_{2}}\in\mathcal{W}:\left|\mathfrak{d}_{d}(\mathbf{v_{1}},\mathbf{v_{2}})-\sqrt{\frac{1}{6}+\frac{1}{3N}}\right|\geq\frac{1}{d^{\eta}}\right\}\\ &\leq\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v_{2}}\in\mathcal{W}:\left|\mathfrak{d}_{d}^{2}(\mathbf{v_{1}},\mathbf{v_{2}})-\left(\frac{1}{6}+\frac{1}{3N}\right)\right|\geq\frac{1}{\sqrt{6}d^{\eta}}\right\}\\ &\leq\frac{1}{\#\mathcal{W}}\#\left\{\mathbf{v_{2}}\in\mathcal{W}:\left|\mathfrak{d}_{d}^{2}(\mathbf{v_{1}},\mathbf{v_{2}})-B_{\mathbf{v_{1}},\mathcal{W}}\right|\geq\frac{1}{2\sqrt{6}d^{\eta}}\right\}\\ &\leq\frac{102}{15}\cdot\frac{1}{d^{1-2\eta}}.\end{split}\] Then, we can argue just as we did in the proof of Theorem 6 to find the proportion of non-degenerate triangles \((\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}})\) such that \[\begin{split}\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{1}}),\mathfrak{d}_{d}(\mathbf{a},\mathbf{v_{2}})&\in\left[\sqrt{B_{\mathbf{a},\mathcal{W}}}-\frac{1}{d^{\eta}},\ \sqrt{B_{\mathbf{a},\mathcal{W}}}+\frac{1}{d^{\eta}}\right],\ \text{and} \\ \mathfrak{d}_{d}(\mathbf{v_{1}},\mathbf{v_{2}})&\in\left[\sqrt{\frac{1}{6}+\frac{1}{3N}}-\frac{1}{d^{\eta}},\ \sqrt{\frac{1}{6}+\frac{1}{3N}}+\frac{1}{d^{\eta}}\right],\end{split} \tag{29}\] and arrive at the following result.
**Theorem 10**.: _Let \(\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}\subset\mathcal{W}^{2}\) be the set of all pairs of integer points \((\mathbf{v_{1}},\mathbf{v_{2}})\), which together with \(\mathbf{a}\), form a non-degenerate triangle. Fix \(\eta\in(0,1/2)\). Then, for any integers \(d\geq 2\), \(N\geq 1\), and any point \(\mathbf{a}\in\mathcal{W}\), we have_ \[\frac{1}{\#\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}}\#\left\{(\mathbf{v_{1}},\mathbf{v_{2}})\in\mathcal{T}_{\mathbf{a},\mathcal{W}^{2}}:\mathbf{a},\mathbf{v_{1}},\mathbf{v_{2}}\text{ satisfy (29)}\right\}\geq 1-\frac{102}{5}\frac{1}{d^{1-2\eta}}.\]
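A small Monte Carlo experiment (again, not part of the paper) makes Theorems 8-10 tangible: for a large dimension, random lattice points of the cube are at normalized distance close to \(\sqrt{1/6+1/(3N)}\) from each other and close to \(\sqrt{1/12+1/(6N)+r_{\boldsymbol{a}}^{2}}\) from a fixed point \(\boldsymbol{a}\), so the triangles \((\boldsymbol{a},\boldsymbol{w}_{1},\boldsymbol{w}_{2})\) are nearly congruent to one another. The parameters in the sketch are arbitrary choices.

```python
import numpy as np

# Sample pairs of random lattice points w1, w2 of [0, N]^d and compare the normalized
# distances |w1 w2| and |a w1| with the predicted concentration values.

rng = np.random.default_rng(2)
d, N, samples = 1000, 10, 2000
norm = np.sqrt(d) * N

a = rng.integers(0, N + 1, size=d)
r_a = np.linalg.norm(a - N / 2.0) / norm           # normalized distance from a to the center

W1 = rng.integers(0, N + 1, size=(samples, d))
W2 = rng.integers(0, N + 1, size=(samples, d))

side_12 = np.linalg.norm(W1 - W2, axis=1) / norm   # normalized distance between w1 and w2
side_a1 = np.linalg.norm(W1 - a, axis=1) / norm    # normalized distance from a to w1

pred_12 = np.sqrt(1 / 6 + 1 / (3 * N))
pred_a1 = np.sqrt(1 / 12 + 1 / (6 * N) + r_a**2)
print(f"|w1 w2|: predicted {pred_12:.4f}, observed mean {side_12.mean():.4f}, std {side_12.std():.5f}")
print(f"|a  w1|: predicted {pred_a1:.4f}, observed mean {side_a1.mean():.4f}, std {side_a1.std():.5f}")
```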
2309.14026
Magneto-optical trap performance for high-bandwidth applications
We study the dynamics of a magneto-optical trap (MOT) operating at high-bandwidth. We find the absolute importance of high recapture efficiency between cycles to maintain a practical atom number. We develop a simple model accounting for MOT trapping forces and pressure induced collisions and validate with experimental data using $\mathrm{{}^{87}Rb}$. This is then applied to quantum sensing predicting a shot noise limited sensitivity of $\mathrm{10^{-7}g/\sqrt{Hz}}$ for a gravimeter at 100 Hz operation. The results are useful for understanding MOT operation at high-bandwidth, particularly in the context of developing mobile high-bandwidth quantum inertial sensors targeting dynamic environments and navigation applications.
Benjamin Adams, Sachin Kinge, Kai Bongs, Yu-Hung Lien
2023-09-25T10:43:26Z
http://arxiv.org/abs/2309.14026v1
# Magneto-optical trap performance for high-bandwidth applications ###### Abstract We study the dynamics of a magneto-optical trap (MOT) operating at high-bandwidth. We find the absolute importance of high recapture efficiency between cycles to maintain a practical atom number. We develop a simple model accounting for MOT trapping forces and pressure induced collisions and validate with experimental data using \({}^{87}\)Rb. This is then applied to quantum sensing predicting a shot noise limited sensitivity of \(1\times 10^{-7}\frac{\mathrm{g}}{\sqrt{\mathrm{Hz}}}\) for a gravimeter at 100 Hz operation. The results are useful for understanding MOT operation at high-bandwidth, particularly in the context of developing mobile high-bandwidth quantum inertial sensors targeting dynamic environments and navigation applications. ## I Introduction The magneto-optical trap (MOT) has been the workhorse of cold atomic and molecular physics since its first demonstration [1; 2]. It efficiently cools and traps target species to a sub-millikelvin temperature and is indispensable to the generation of quantum gases, i.e. BEC and degenerate Fermi gas [3; 4]. The exploration of these fields has resulted in numerous applications in fundamental research and increasingly real-world scenarios such as metrology [5], sensing [6], quantum simulation [7; 8], quantum information processing [9; 10] and so on. Despite the remarkable progress in cold atom physics over the past few decades, most experiments are still conducted in laboratory settings due to the optical, radiofrequency and vacuum requirements for generating and manipulating cold atoms. However, the potential of cold atom technology has been increasingly recognised with efforts made to move experiments out of the laboratory for real-world benefits. Notably, this trend is evident in the area of quantum gravity sensing, with various demonstrator systems performing trials in different application environments [11; 12; 13; 14]. Promising application areas include geophysics, space, civil engineering and oil and mineral prospecting. The potential of the technology is based on its inherent and unparalleled sensitivity, along with the capability of providing drift-free measurements compared to classical approaches. Inertial navigation presents another promising application area for this technology. However, its practical implementation is hindered by the low sampling rate or bandwidth of quantum sensors making them less suited to highly dynamic environments. This limitation primarily arises from the time required for atomic sample preparation, which mainly involves loading the atomic trap, also known as the MOT loading time. As a result, bandwidth is typically limited to roughly 1 Hz. To increase bandwidth, there are various approaches available. One such method is to perform interleaved measurements, starting the next measurement while the previous one is still underway. This approach has demonstrated sampling rates of 3.75 Hz with a measurement time of 801 ms, but it relies on a long drop distance, resulting in a large form factor [15]. While sensitive, this implementation competes with the goal of creating small, robust, deployable devices and does not significantly increase bandwidth. Another approach involves using sequential measurements with a considerably reduced cycle time. This method has the potential to increase measurement bandwidth while minimising dead time due to replenishing trapped atoms between cycles. 
This approach trades bandwidth for reduced sensitivity and system demands. However, achieving 100 Hz operation restricts the cycle time to 10 ms, leaving only a few milliseconds for loading. Consequently, this approach utilises a short drop distance to maintain a high atom number. This smaller displacement ensures that most atoms can be recaptured between cycles, leading to a significant bandwidth increase. Alternatively, one could consider a short loading time with a long measurement time and adopt a 2D MOT or Zeeman slower to enhance the loading rate [16; 17]. However, this approach will also conflict with the desire for simpler, compact deployable systems. Quantum sensing is not widely explored at high-bandwidth, although some atom interferometry has been performed, achieving sensitivities at the \(\sim\mathrm{\mu g}/\sqrt{\mathrm{Hz}}\) level [18; 19; 20; 21]. This raises the question of how MOT dynamics and bandwidth are fundamentally connected, and the implications for quantum sensing. In this paper, we explore high-bandwidth MOT dynamics in detail, making connections between MOT theory and experimental observations. We build a simple model and validate it with experimental data before discussing the critical nature of efficient recapture; optimum parameters and limitations of the mechanism are also explored. The results are then applied to quantum sensing, exploring the sensitivity performance limits of a high-bandwidth atom interferometer. This work highlights the utility of simple MOT physics in predicting the feasibility of MOT generation for a given bandwidth, duty cycle, trap size and other cloud properties. The study is performed with the \({}^{87}\)Rb D\({}_{2}\) (\(5^{2}\)S\({}_{1/2}\to 5^{2}\)P\({}_{3/2}\)) transition. However, the general findings apply to a broader range of cold atom experiments targeting higher bandwidth operation. ## II Model To simulate MOT dynamics we adopt the low-intensity theory of optical molasses for a two level atom in 1D, illustrated in Fig. 1 [22]. This framework can be extended to obtain an expression for the MOT restoring force, \[\mathrm{F}_{\mathrm{MOT}}=\hbar\mathrm{k}\frac{\Gamma}{2}\Bigg[\frac{\mathrm{s}}{1+\mathrm{s}+(\frac{2\delta_{+}}{\Gamma})^{2}}-\frac{\mathrm{s}}{1+\mathrm{s}+(\frac{2\delta_{-}}{\Gamma})^{2}}\Bigg], \tag{1}\] where \(\delta\) corresponds to the detuning from resonance, the \(\pm\) subscript accounts for the different detunings of the right and left directed beams, s denotes the saturation parameter and \(\Gamma\) is the natural linewidth of the transition. This force is numerically integrated to simulate atomic trajectories. Fig. 2 demonstrates the MOT restoring force acting on individual \({}^{87}\)Rb atoms with different initial velocities. This work concerns the \({}^{87}\)Rb D\({}_{2}\) (\(5^{2}\)S\({}_{1/2}\to 5^{2}\)P\({}_{3/2}\)) transition, for which \(\Gamma=2\pi\times 6.065(9)\) MHz and \(\lambda=780.241\) nm. ## III Dynamics ### Intensity dependence For modelling purposes, a simulation cycle is split into two distinct regimes, drop and recapture. For lower bandwidth applications, requirements on MOT loading time are less stringent, and so after dropping atoms, loading from background vapour is standard. The timescale for this is pressure dependent but typically takes a few 100 ms. Consequently, efficient recapture of atoms between cycles is essential for high-bandwidth operation.
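To make the model concrete, the following minimal Python sketch integrates Eq. (1) for a single \({}^{87}\)Rb atom in 1D, in the spirit of the trajectories in Figs. 2 and 3. It is an illustrative sketch rather than the authors' code: the Doppler and Zeeman shifts are folded into the two beam detunings as \(\delta_{\pm}=\delta\mp(\mathrm{k}v+\mu^{\prime}\mathrm{A}x/\hbar)\) with an assumed effective moment \(\mu^{\prime}=\mu_{B}\), gravity acts throughout, a simple Euler integrator is used, and the time step and initial conditions are arbitrary. Only the force law and the caption values (\(\Delta=-3\), A = 16 G/cm, duty cycle 0.75, s swept over 1-10) are taken from the text.

```python
import math

# 87Rb D2 constants (values quoted in the text) and physical constants
GAMMA = 2 * math.pi * 6.065e6      # natural linewidth (rad/s)
LAMBDA = 780.241e-9                # wavelength (m)
K = 2 * math.pi / LAMBDA           # wavevector (1/m)
HBAR = 1.054571817e-34             # J s
M_RB87 = 1.443e-25                 # kg
MU_B = 9.274e-24                   # Bohr magneton (J/T); mu' = MU_B is an assumption
G = 9.81                           # m/s^2

def mot_force(x, v, s, delta_gamma, A_gauss_cm):
    """1D MOT force of Eq. (1). delta_gamma is the detuning in units of Gamma and
    A_gauss_cm the field gradient in G/cm; Doppler and Zeeman shifts are folded into
    the two beam detunings delta_plus/minus (assumed sign convention)."""
    delta = delta_gamma * GAMMA
    grad = A_gauss_cm * 1e-4 * 1e2                 # G/cm -> T/m
    shift = K * v + MU_B * grad * x / HBAR         # Doppler + Zeeman (rad/s)
    lorentz = lambda d: s / (1 + s + (2 * d / GAMMA) ** 2)
    return HBAR * K * GAMMA / 2 * (lorentz(delta - shift) - lorentz(delta + shift))

def cycle_trajectory(s=3, delta_gamma=-3, A=16, f_cycle=100, duty=0.75,
                     n_cycles=4, dt=1e-6, x0=0.0, v0=0.0):
    """Euler integration of drop (light off, gravity only) and recapture
    (light on, MOT force + gravity) phases; returns the final (x, v)."""
    t_cycle = 1 / f_cycle
    t_drop = duty * t_cycle
    x, v = x0, v0
    for _ in range(n_cycles):
        t = 0.0
        while t < t_cycle:
            a = -G
            if t >= t_drop:                        # recapture phase
                a += mot_force(x, v, s, delta_gamma, A) / M_RB87
            v += a * dt
            x += v * dt
            t += dt
    return x, v

if __name__ == "__main__":
    for s in (1, 3, 5, 10):
        x, v = cycle_trajectory(s=s)
        print(f"s = {s:2d}: after 4 cycles x = {1e3 * x:+.3f} mm, v = {v:+.3f} m/s")
```

Sweeping s in this way mimics the intensity comparison of Fig. 3; the explicit Euler step is chosen only for brevity, and a higher-order integrator would be preferable for quantitative work.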
The recapture efficiency will not be 100%, but the atom number does not decay to zero as atoms are loaded from the background vapour during recapture. There are two main mechanisms inhibiting recapture: the finite MOT restoring time and collisions between atoms in the MOT and the background vapour. We start by considering the finite restoration time. During freefall atoms move primarily along the vertical and so trajectories are modelled in 1D. For high-bandwidth applications the drop time (T\({}_{\mathrm{drop}}\)) will be \(\sim 5\) ms, leading to an atom falling 0.13 mm. Given a typical trap radius of \(\sim 5\) mm, an atom will not fall far from the trap centre. However, despite this short distance, the recapture time is still finite, limited by the restoring force towards the MOT centre. Fig. 3 shows a numerical simulation of single atom trajectories over multiple cycles, highlighting that for insufficient power the restoring force is too weak and the atom will not be recaptured. This can be seen in the loss of periodicity for the s = 1 trajectory. Therefore, to maximise bandwidth in experiments, an intensity significantly above the saturation intensity is required to minimise recapture time. Figure 1: Two counter-propagating laser beams of frequency \(\omega\) less than the atomic resonance frequency \(\omega_{0}\) illustrating 1D optical molasses. Atom propagates with velocity \(v_{z}\) towards the rightmost beam. Figure 2: Numerical simulation of single atom trajectories for \({}^{87}\)Rb atoms with variable initial velocities illustrating under-damped motion occurring for s = 1, \(\Delta=-3\), A = 16 G/cm. Initial velocity v\({}_{0}\) (ms\({}^{-1}\)): 0.5 (green dashed), 0.2 (orange dash-dotted), 0.1 (blue solid). Figure 3: Single atom trajectories in a 100 Hz \({}^{87}\)Rb MOT for variable intensity. s = 1 (blue solid), 3 (yellow dotted), 5 (green dash-dot) and 10 (red dashed). \(\Delta=-3\), duty cycle = 0.75, A = 16 G/cm. The white and grey regions correspond to the drop and recapture phases respectively. ### Temperature dependence To extend this, the dynamics of an atomic cloud are explored by simulating 1000 atoms with numerical trajectories similar to those in Fig. 3. The atomic positions and velocities are normally distributed with widths \(\sigma_{\rm MOT}\) and \(\sigma_{\rm v}\) respectively. \(\sigma_{\rm MOT}\) is the cloud radius and \(\sigma_{\rm v}=\sqrt{\rm k_{B}T_{\rm MOT}/m_{\rm atom}}\) is the cloud's velocity spread, where \(\rm T_{\rm MOT}\) is the cloud temperature and \(\rm m_{\rm atom}\) is the mass of a single atom. To quantify capture, an atom is considered trapped if its final position is \(\rm|x_{f}|<0.1\) mm from the trap centre and its final speed is \(\rm|v_{f}|<\sigma_{v\,Doppler}\), where \(\sigma_{v\,Doppler}\) is the Doppler velocity. For cooling on the \({}^{87}\)Rb D\({}_{2}\) line, the Doppler cooling limit is \(\rm T_{D}=140\) μK, giving \(\sigma_{v\,Doppler}=0.12\,ms^{-1}\) [22]. The fraction of atoms satisfying the capture criteria at the end of the cycle is the restored fraction, \(\rm P_{\rm restored}\). Unless stated, we fix our bandwidth at \(\rm 100\,Hz\), giving a cycle length of 10 ms. Increasing the duty cycle increases the drop time and reduces the recapture time. When the recapture time is \(<3\) ms, there is insufficient time to restore atoms to the MOT centre and the recapture efficiency declines. The restored fraction tends to a finite value for short recapture times (\(\sim 0.05\)).
This results from the spatial extent of the MOT with respect to the capture region. For short recapture times, a fraction of atoms have not left the capture criteria region and are considered recaptured. Furthermore, our simple model applies a Gaussian intensity profile across the 1D trap, and so for higher temperatures and longer drop times, atoms move further away from the central most intense region and experience weaker restoring forces. In general, low temperature is critical for cold-atom experiments, with our simulations highlighting why this can aid recapture and bandwidth. Figure 4: Simulating restored atom fraction for a cloud of \({}^{87}\)Rb atoms in a 100 Hz MOT for variable duty cycle and cloud temperature. \(\rm T_{\rm MOT}\): 10 μK (blue solid), 100 μK (orange dashed), 1000 μK (green dash-dot). ### Pressure dependence During an operational cycle, atoms in the cloud can also be lost through collisions with atoms in the background vapour. The probability of this not occurring for an atom during a cycle is given by \(\rm P_{\rm no\,collision}\) in Eq. (2), \[\rm P_{\rm no\,collision}=e^{-\frac{T_{\rm cycle}}{\tau}}, \tag{2}\] where \(\tau\) is the mean free collision time and \(\rm T_{\rm cycle}\) is the time for a complete cycle (drop and recapture), as atoms can be lost from background collisions throughout an entire cycle. For recapture times \(>3\) ms, restoration losses are typically negligible (\(\rm P_{\rm restored}=1\)) and so Eq. (2) effectively represents the recaptured atom fraction for a single shot. Unless stated, we use MOT parameters of: \(\rm s=3\), \(\Delta=-3\), \(\rm A=14\,G/cm\), \(\rm T_{\rm MOT}=300\) μK, \(\sigma_{\rm MOT}=0.5\) mm, \(4\sigma_{\rm r}=20\) mm (\(1/\rm e^{2}\)) diameter, vapour pressure \(=2.9\times 10^{-7}\) mbar, \(\rm R=4.5\times 10^{9}\,s^{-1}\), \(\rm L=16.0\,s^{-1}\), \(\sigma_{0}=1\times 10^{-13}\,cm^{2}\), \(\rm C_{v}=21\,ms^{-1}\). Here \(\sigma_{\rm r}\) defines the trap size, \(\rm C_{v}\) is the capture velocity, and R and L define the MOT loading and loss rates respectively. A defines the trap field gradient and \(\sigma_{0}\) defines the collision cross section. More explicit details on these parameters will be given in the subsequent section. Fig. 5 shows the results of computing \(\rm P_{\rm no\,collision}\) and the mean free time over the \(10^{-9}-10^{-6}\) mbar range. For pressures approaching \(10^{-6}\) mbar, the collision timescale is comparable to the cycle time, reducing the recaptured fraction significantly. Note that the modelling only considers background collisions with \({}^{87}\)Rb atoms and assumes the absence of other species. Figure 5: \(\rm P_{\rm no\,collision}\) (red solid) and mean free time (blue dashed) for variable pressure for \(\rm T_{\rm cycle}=10\,ms\) [23]. ## IV Atom number ### MOT loading The rate of change of atoms in the MOT is given by the balance between loading and loss of atoms; integrating this gives the number of atoms after loading for a period of time t, Eq. (3a). R and L are the loading and loss rate of the MOT and are given by Eqs. (3b) and (3c) respectively. \(\mathrm{A_{s}}\) is the trap surface area (\(4\pi\sigma_{\mathrm{r}}^{2}\)), and the capture velocity \(\mathrm{C_{v}}\) is assumed to be \(21\,\mathrm{ms^{-1}}\) - see appendix A for details. \(\mathrm{n_{b}}\) is the number density of particles in the background vapour, \(\sigma_{0}\) is the collision cross section and \(\mathrm{v_{th}}\) is the average thermal velocity of the background gas. The number density of the particles is calculated from the ideal gas equation \(\mathrm{n_{b}}=\frac{\mathrm{P}}{\mathrm{k_{B}T}}\), with the vapour pressure obtained from the model in [24]. \[\mathrm{N(t)}=\frac{\mathrm{R}}{\mathrm{L}}(1-\mathrm{e^{-Lt}}). \tag{3a}\] \[\mathrm{R}=\frac{2\mathrm{A_{s}}\mathrm{C_{v}^{4}}\mathrm{n_{b}}}{\pi^{2}\mathrm{v_{th}^{3}}}. \tag{3b}\] \[\mathrm{L}=\frac{1}{\tau}=\mathrm{n_{b}}\sigma_{0}\mathrm{v_{th}}. \tag{3c}\] The rate equation sometimes includes an additional loss for inelastic collisions between atoms in the MOT. This changes the loss rate to \(\mathrm{L}\rightarrow\mathrm{L}+\beta\bar{\mathrm{n}}\), where \(\bar{\mathrm{n}}\) is the mean cloud density and \(\beta\) is a constant characterising this mechanism. This implies that two-body collisions can be neglected if \(\beta\bar{\mathrm{n}}\ll\mathrm{L}\). \(\beta\sim 1\times 10^{-11}\,\mathrm{cm^{3}s^{-1}}\) has been reported for a laser detuning of \(\delta=-\Gamma\) and an intensity of \(\mathrm{s}\approx 10\), which are fairly typical operating parameters [25]. Assuming a MOT of around \(10^{8}\) atoms with a radius of \(1\,\mathrm{mm}\) gives a number density of \(\bar{\mathrm{n}}\sim 1\times 10^{10}\,\mathrm{cm^{-3}}\). For typical pressures \(\mathrm{L}\sim 1-10\,\mathrm{s^{-1}}\), which is 1-2 orders of magnitude higher than the two-body loss term. This justifies why this term can be neglected in our simulations. For 100 Hz operation the MOT loading time is only a few ms. Even for relatively high pressures in the low \(10^{-7}\) mbar range, the loading rate is a few \(10^{9}\,\mathrm{s^{-1}}\). This means at most \(\sim 10^{7}\) atoms can be loaded from the background vapour after a few ms; a small fraction of the steady state population reached in the experimental data in Fig. 6. This highlights how efficient recapture of atoms between cycles is essential for high-bandwidth operation. In this regime the MOT composition is recapture dominated, with a small contribution from background loading. Consider a high-bandwidth MOT containing \(10^{7}\) atoms with a recapture period of \(\sim 1\,\mathrm{ms}\). Assuming recapture is \(90\%\) efficient with a MOT loading rate of \(\mathrm{R}\sim 10^{9}\,\mathrm{s^{-1}}\), the atom number will remain steady. By considering losses from the finite restoration time and collisions independently, an iterative equation is formed describing the shot to shot atom number, \[\mathrm{N_{i+1}}=\mathrm{N_{i}}\mathrm{P_{no\,collision}}\mathrm{P_{restored}}+\frac{\mathrm{R}}{\mathrm{L}}(1-\mathrm{e^{-L\,T_{reload}}}). \tag{4}\] \(\mathrm{N_{i}}\) denotes the atom number in the \(\mathrm{i^{th}}\) cycle. The first term describes the contribution from recaptured atoms, with \(\mathrm{P_{no\,collision}}\mathrm{P_{restored}}\) representing the constant shot to shot recapture fraction. The second term describes background loading and is the MOT loading equation with terms as defined in Eq. (3a). The time for loading and recapture is given by \(\mathrm{T_{reload}}\). Iterating until \(\mathrm{N_{i+1}}=\mathrm{N_{i}}\) gives the operational steady state atom number for the MOT. For higher pressure the loading rate is larger, so more atoms are loaded from the background but fewer atoms are recaptured due to more background collisions, and vice versa for lower pressure. Steady state corresponds to the point at which the number of atoms lost due to inefficient recapture is perfectly balanced by the atoms loaded from the background vapour. In Fig.
7 the behaviour of a traditional MOT is simulated and contrasted with a high-bandwidth MOT with a duty cycle of 0.65. In this configuration there are about \(20\%\) the number of atoms when compared with a MOT fully loaded from background vapour. Even with our relatively high pressure, without recapture it would take \(10\mathrm{x}\) longer to load this many atoms. This limits bandwidth to at most \(30\) Hz showing the importance of recapture in maximising bandwidth. Figure 7: Traditional non-dynamic MOT loading (solid), \(100\) Hz high-bandwidth MOT loading simulation at a duty cycle of 0.65 (dashed). Figure 6: Experimental MOT loading data. The following parameters are extracted, \(\mathrm{R}=4.5\times 10^{9}\,\mathrm{s^{-1}}\), \(\mathrm{L}=16.0\,\mathrm{s^{-1}}\) and a \({}^{87}\)Rb vapour pressure of \(2.9\times 10^{-7}\) mbar. Duty cycle A key parameter determining MOT operation is the duty cycle describing the useful fraction of the experimental cycle. In this context it denotes the free-fall time. The remaining portion constitutes time for recapturing and loading atoms back into the trap for the next cycle. Optimising duty cycle is important for experimental applications as increasing measurement time will compromise time available for reloading atoms into the MOT. Naturally, some balance must be achieved within a cycle. To investigate this we vary the parameter experimentally and compare with our simple dynamics model. Fig. 8 presents data at 100 Hz bandwidth, as drop time tends to 0 ms the atom number tends towards the value in Fig. 7 for non-dynamic MOT operation. For increasing drop times up to 6 ms the atom number decays gradually as less cycle time is devoted to reloading. In this regime, the recapture efficiency stays constant as the restoration force is sufficient to recapture atoms for reloading time \(>\) 3.5 ms (P\({}_{\text{restored}}=1\)). The imperfect recapture efficiency comes from the pressure induced collisions with the background vapour, P\({}_{\text{no\,collision}}=85\%\) at 100 Hz. For drop times \(>6.5\) ms the recapture mechanism fails and the atom number declines dramatically with a good fit between model and experimental data. This fit is slightly poorer at 50 Hz but still quite reasonable. Given the 1D model used, further discrepancies might be connected to the 3D nature of the light field, magnetic field and polarisation profiles. To validate our collision model we perform duty cycle scans with fixed cycle times of 2.5, 5, 10 and 20 ms. Using this data we extract the P\({}_{\text{no\,collision}}\) value as drop time tends to 0 ms and plot against Eq. (2) for our operating pressure of \(2.9\times 10^{-7}\) mbar. Fig. 10 presents this data showing a strong fit validating our collision model. To further highlight the importance of recapture we simulate longer drop times with a short reloading time. To model this, the reloading time is fixed, the drop time is incremented and the steady state atom number is computed. After falling \(2\sigma_{\text{r}}=10\) mm, an atom will fall out of the trap centre in \(\sim 45\) ms as reflected in the decline in Fig. 11. For drop times \(\ll 45\) ms the dynamics are recapture dominated as atoms do not fall out of the trapping region. For drop times \(>45\) ms the MOT is no longer in the trapping region and so recapture is not viable. Consequently, the MOT consists entirely of atoms loaded from the background vapour. 
For longer loading times the drop off is less pronounced highlighting the need for a significant increase in reloading time when leaving the recapture dominated regime. Our model is further validated by calculating and measuring the reloading time for a steady state MOT of \(10^{8}\) atoms. As anticipated, the recapture efficiency experiences a decline to zero at 45 ms of drop time. For small drop times the loading time required tends to the MOT restoration time for a \({}^{87}\)Rb atom (\(\sim 3\) ms) in this regime. When recapture fails, the time required is determined entirely by background loading and is given by \(\frac{1\times 10^{8}}{4.5\times 10^{8}}\sim 25\) ms. For lower pressures (\(\sim 10^{-8}\) mbar) this time will be significantly longer due to the reduced loading rate. Overall, a good fit is observed between the model and experiment. For experiments care is required to ensure sufficient loading time such that recapture is not compromised. Equally, excess time should be avoided to promote measurement bandwidth. To optimise this in different systems analysis similar to Fig. 8 could be performed by increasing the duty cycle until a sharp drop off in atomic signal is observed. This reflects the point at which the recapture mechanism fails determining the necessary trap loading time. Figure 8: Steady state atom number (red solid) and recapture efficiency P\({}_{\text{no\,collision}}\)P\({}_{\text{restored}}\) (blue dashed) for a 100 Hz MOT for variable duty cycle. Experimental data points are scattered. Figure 9: Steady state atom number (red solid) and recapture efficiency P\({}_{\text{no\,collision}}\)P\({}_{\text{restored}}\) (blue dashed) for a 50 Hz MOT for variable duty cycle. Experimental data points are scattered. ## V Discussion ### Application to Quantum Sensing Having validated our simple model for the high-bandwidth MOT we will now apply this to optimise an application. Atom interferometry (AI) was developed in the early 1990s and offers exceptional sensitivity to rotations and accelerations [26]. The technique underpins quantum sensing which shows huge promise for applications in inertial navigation [27; 21]. To explore this we predict the sensitivity performance limit of an atom interferometer operating at 100 Hz. Sensitivity is given by \(\frac{\delta\phi}{\phi}\), where \(\delta\phi\) denotes phase noise and \(\phi\) is the phase signal accumulated over the interrogation period. The noise on a single measurement \(\delta\phi_{\mathrm{s}}\) is limited by quantum projection noise \(\mathrm{N_{Q}}=\sqrt{\mathrm{N_{AI}}}\) and \(\delta\phi_{\mathrm{s}}=\eta\delta\phi_{\mathrm{Q}}=\eta\frac{\mathrm{N_{Q}}} {\mathrm{N_{AI}}}=\frac{\eta}{\sqrt{\mathrm{N_{AI}}}}\), where \(\mathrm{N_{AI}}\) denotes the number of atoms participating in the interferometer with \(\eta\geq 1\) accounting for excessive detection noise. The operating bandwidth is given by \(\mathrm{F}=\frac{1}{(\mathrm{T_{i}}+\mathrm{T_{P}})}\) where \(\mathrm{T_{i}}=\mathrm{T_{drop}}\) is the interrogation (drop) time and \(\mathrm{T_{P}}\) is the sensor preparation time incorporating reloading, cooling and detection. Using these definitions sensitivity can be expressed as in Eq. (5). \[\mathrm{S}=\frac{4\eta}{\mathrm{k_{e}g}\sqrt{\mathrm{N_{AI}}}\sqrt{\mathrm{F} \mathrm{T_{i}^{2}}}}\approx 2.5\times 10^{-8}\frac{\eta}{\sqrt{\mathrm{N_{AI}}}} \frac{\sqrt{\mathrm{F^{3}}}}{(1-\mathrm{FT_{p}})^{2}}. 
\tag{5}\] For optimal sensitivity the duty cycle requires optimisation to balance the recapture and interrogation periods. Assuming a certain bandwidth, duty cycle and shot noise limited detection the only unknown in Eq. (5) is atoms participating in the interferometer, n. To acquire this the recapture simulation is run for the chosen duty cycle and MOT parameters to obtain the recapture efficiency. The atom number is then computed using Eq. (4). A conservative 1% of atoms are assumed to complete the interferometer, \(\mathrm{N_{AI}=0.01\,\mathrm{N_{MOT}}}\). To account for sub-Doppler cooling, state preparation and launching, a 3 ms preparation time is allocated within the cycle time. We also adopt a cloud temperature of 10 uK following sub-Doppler cooling. Fig. 13 shows the sensitivity simulation at 100 Hz operation for variable duty cycle. For lower duty cycles there are more atoms but the sensitivity improvement from increased interrogation time dominates over the reduced atoms. For reloading times \(<\) 2 ms the capture processes are inhibited and the atom number falls to zero diminishing sensitivity. Fig. 13 suggests a performance limit of \(1\times 10^{-7}\frac{\mathrm{s}}{\sqrt{\mathrm{Hz}}}\) at 100 Hz operation. Given the finite recapture time it is interesting to consider optimal sensitivity for variable bandwidth. To explore this the simulation in Fig. 12 is reprocessed. By adding the drop and reloading time together and including an additional 3 ms of preparation time a certain cycle time and therefore bandwidth is defined. For this bandwidth \(10^{8}\) atoms are generated and so sensitivity can be computed with Eq. (5). Figure 11: Steady state atom number for variable drop time with a fixed loading time: 4.0 ms (blue solid), 10 ms (orange dashed) 50 ms (green dash-dot). Figure 12: Time to load \(10^{8}\) atoms for variable drop time (red solid), recapture efficiency \(\mathrm{P_{no\,collision}P_{restored}}\) (blue dashed). Experimental data points are scattered. Figure 10: Pressure induced collision model, theoretical model (line), experimental data (points). For increasing bandwidth the optimal duty cycle decreases gradually as the necessary reloading time represents a larger fraction of the cycle, see Fig. 14. At a certain bandwidth the cycle time is insufficient to interrogate, recapture and prepare atoms. For short drop time around 2 ms is required to recapture atoms and so with an additional preparation time of 3 ms the limiting bandwidth is \(\frac{1}{\rm 5ms}\simeq 200\) Hz. Given the performance limits it is worth summarising the advantages, disadvantages and future prospects of the high-bandwidth approach for quantum sensing. Quantum sensors offer low bias and high-stability enabling long term inertial navigation measurements not currently feasibly with classical sensors. High-bandwidth quantum sensors would therefore be attractive for navigation where measurement rates \(>\) 100 Hz are needed for operation on mobile platforms. As highlighted bandwidth and sensitivity present a compromise although the reduced free-falling distance at high-bandwidth makes the approach compelling for miniaturisation developing devices more robust to challenging environments [20]. The \(\sim\)\(\rm\mu g/\sqrt{Hz}\) sensitivity offered at high-bandwidth would be useful for inertial navigation with techniques such as large-momentum transfer potentially offering a route to clawing back sacrificed sensitivity [28]. 
Even presently ship-borne measurements have demonstrated sensitivities at the \(\sim\)\(\rm\mu g\) level [13]. Moreover, hybrid methods have been implemented to increase bandwidth using a quantum sensor to correct a classical device [29]. Further developments could offer potential for absolute positioning on a metre scale independent of environment without satellite navigation. Moreover, high-bandwidth operation would also be desirable for faster civil engineering surveys providing feedback on the condition of water pipes and identifying voids and mine shafts. ## VI Conclusions We show that a simple model simulating atomic trajectories and loss mechanisms performs rather well in explaining experimental MOT dynamics across a range of bandwidths. Traditionally bandwidth is not a primary concern and so traps are loaded to capacity with no concern for recapturing atoms limiting bandwidths to around 1 Hz. In this work we explore the full bandwidth range. At low bandwidth recapture efficiency tends to 0 due to background collisions and atoms falling outside of the trapping region. At high-bandwidth the finite MOT restoring force is critical limiting the recapture time to a few ms for \({}^{87}\)Rb and imposing a maximum bandwidth for MOT generation. We observe that the model provides a good fit to experimental data across a range of bandwidths accounting for pressure, temperature and spatial considerations of the trap. The model is then applied to quantum sensing projecting a performance limit of \(1\times 10^{-7}\,\rm g/\sqrt{Hz}\) at 100 Hz. This is computed by optimising duty cycle for a given bandwidth. Based on this it is deemed beneficial to devote cycle time to interrogation provided recapture is not compromised significantly. In summary, this work shows the power of a simple MOT physics model in predicting the feasibility of MOT generation for a given bandwidth, duty cycle and other trap and cloud properties. More generally, the ubiquitous nature of the MOT means this work could be applied to a broad range of experiments using different atomic species particularly for those targeting higher bandwidth operation. ## Acknowledgments We thank the support of the UK National Quantum Technologies Programme (NQTP) (EP/T001046/1), Defense Science and Technology Laboratory (Dstl) (DSTLXR1000141929) and Toyota Motor Europe. Figure 14: Sensitivity projection for variable bandwidth based on simulation in Fig. 12. For each bandwidth the cycle consists of an additional 3 ms of preparation. Figure 13: Optimising sensitivity by optimising balance between recapture and interrogation time, sensitivity (red solid), participating atoms (blue dashed). The optimised cycle consists of a 5 ms interrogation, 2 ms recapture and a set 3 ms of additional preparation (cooling, state preparation, launching). AI parameters: F = 100 Hz, \(\eta\) = 1, \(\rm N_{AI}=0.01\,\rm N_{MOT}\).
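As a rough consistency check of the numbers quoted above, the following Python sketch chains the collision model of Eq. (2), the steady-state recursion of Eq. (4) and the sensitivity expression of Eq. (5) for 100 Hz operation. It is an illustrative estimate rather than the authors' analysis: it reuses the quoted operating parameters (R = 4.5×10⁹ s⁻¹, L = 16 s⁻¹, 5 ms interrogation, 2 ms recapture, 3 ms preparation, η = 1, N\({}_{\rm AI}\) = 0.01 N\({}_{\rm MOT}\)), assumes P\({}_{\rm restored}\) ≈ 1 at the 2 ms reload used here, and reads the denominator of Eq. (5) as \(\mathrm{k_{eff}\,g\,\sqrt{N_{AI}}\,\sqrt{F}\,T_{i}^{2}}\) with \(\mathrm{k_{eff}}=2\mathrm{k}\), which reproduces the quoted prefactor \(2.5\times 10^{-8}\approx 4/(\mathrm{k_{eff}\,g})\) for the \({}^{87}\)Rb D\({}_{2}\) line.

```python
import math

# Quoted experimental parameters (from the text); interpretations noted inline.
R_LOAD = 4.5e9            # MOT loading rate R (atoms/s)
L_LOSS = 16.0             # background loss rate L (1/s)
F_CYCLE = 100.0           # operating bandwidth (Hz)
T_INTERROGATE = 5e-3      # interrogation (drop) time (s), Fig. 13 optimum
T_RELOAD = 2e-3           # recapture/reload time (s), Fig. 13 optimum
ETA = 1.0                 # detection-noise factor
PARTICIPATION = 0.01      # N_AI = 0.01 * N_MOT (conservative assumption in the text)
K_EFF = 2 * 2 * math.pi / 780.241e-9   # assumed two-photon effective wavevector (1/m)
G = 9.81                  # m/s^2

def steady_state_atom_number(p_restored=1.0):
    """Iterate Eq. (4): N_{i+1} = N_i * P_nc * P_restored + (R/L)(1 - exp(-L * T_reload))."""
    p_no_collision = math.exp(-L_LOSS / F_CYCLE)          # Eq. (2) with tau = 1/L
    n = 0.0
    for _ in range(10_000):                               # iterate to the fixed point
        n = (n * p_no_collision * p_restored
             + (R_LOAD / L_LOSS) * (1.0 - math.exp(-L_LOSS * T_RELOAD)))
    return n, p_no_collision

def sensitivity(n_mot):
    """Shot-noise limit, Eq. (5) as read above: S = 4*eta / (k_eff g sqrt(N_AI) sqrt(F) T_i^2)."""
    n_ai = PARTICIPATION * n_mot
    return 4 * ETA / (K_EFF * G * math.sqrt(n_ai) * math.sqrt(F_CYCLE) * T_INTERROGATE ** 2)

if __name__ == "__main__":
    n_mot, p_nc = steady_state_atom_number()
    print(f"P_no_collision per 10 ms cycle : {p_nc:.3f}")          # ~0.85
    print(f"steady-state MOT atom number   : {n_mot:.2e}")         # a few 1e7
    print(f"projected sensitivity          : {sensitivity(n_mot):.1e} g/sqrt(Hz)")
```

With these inputs the script returns P\({}_{\rm no\,collision}\) ≈ 0.85, a steady-state atom number of a few 10⁷, and a sensitivity of order 10⁻⁷ g/√Hz, in line with the figures quoted in the text; the exact values depend on the assumptions listed above.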
2309.13749
A Minkowski type inequality for manifolds with positive spectrum
The classical Minkowski inequality implies that the volume of a bounded convex domain is controlled from above by the integral of the mean curvature of its boundary. In this note, we establish an analogous inequality without the convexity assumption for all bounded smooth domains in a complete manifold with its bottom spectrum being suitably large relative to its Ricci curvature lower bound. An immediate implication is the nonexistence of embedded compact minimal hypersurfaces in such manifolds. This nonexistence issue is also considered for steady and expanding Ricci solitons.
Ovidiu Munteanu, Jiaping Wang
2023-09-24T20:39:13Z
http://arxiv.org/abs/2309.13749v1
# A Minkowski type inequality for manifolds with positive spectrum ###### Abstract. The classical Minkowski inequality implies that the volume of a bounded convex domain is controlled from above by the integral of the mean curvature of its boundary. In this note, we establish an analogous inequality without the convexity assumption for all bounded smooth domains in a complete manifold with its bottom spectrum being suitably large relative to its Ricci curvature lower bound. An immediate implication is the nonexistence of embedded compact minimal hypersurfaces in such manifolds. This nonexistence issue is also considered for steady and expanding Ricci solitons. ## 1. Introduction On a complete Riemannian manifold \((M,g),\) the Laplacian \(\Delta\) is a self-adjoint operator according to [15]. So the spectrum \(\sigma(M)\) of \(M,\) defined as the spectrum \(\sigma(-\Delta)\) of \(-\Delta,\) is a closed subset of \([0,\infty).\) The bottom spectrum is given by \[\lambda_{1}(M):=\min\{\lambda\in\sigma(M)\}.\] Alternatively, it is characterized as the best constant for the Poincare inequality \[\lambda_{1}\,\int_{M}\phi^{2}\leq\int_{M}\left|\nabla\phi\right|^{2}\] for all compactly supported smooth functions \(\phi\) on \(M.\) A result by McKean [26] says that \(\lambda_{1}(M)\geq\frac{(n-1)^{2}}{4}\) for an \(n\)-dimensional, simply connected, complete manifold \(M^{n}\) with sectional curvature \(K\leq-1.\) The famous Sullivan-Patterson theory [31, 33] computes the bottom spectrum for the quotient space \(\mathbb{H}^{n}/\Gamma\) of the \(n\)-dimensional real hyperbolic space \(\mathbb{H}^{n},\) where \(\Gamma\) is a discrete, finitely generated, group of isometries of \(\mathbb{H}^{n}.\) Namely, \(\lambda_{1}(\mathbb{H}^{n}/\Gamma)=\frac{(n-1)^{2}}{4}\) if \(d_{\Gamma}\leq\frac{n-1}{2}\) and \(\lambda_{1}(\mathbb{H}^{n}/\Gamma)=d_{\Gamma}\left(n-1-d_{\Gamma}\right)\) when \(d_{\Gamma}\geq\frac{n-1}{2},\) where \(d_{\Gamma}\) is the Hausdorff dimension of the limit set of \(\Gamma,\) that is, those points \(\theta\) in the ideal boundary at infinity \(S_{\infty}(\mathbb{H}^{n})\) of \(\mathbb{H}^{n}\) such that \(\theta=\lim_{i\to\infty}\gamma_{i}(x)\) for some \(x\in\mathbb{H}^{n}\) and a sequence of \(\gamma_{i}\in\Gamma.\) Another notable result, due to Brooks [4], is that \(\lambda_{1}(M)>0\) for a covering space \(M\) of a compact manifold \(N\) if and only if the covering group is nonamenable. Finally, we mention a result of Lee [21]. Recall that a Riemannian manifold \((M,g)\) is conformally compact if topologically it is the interior of a compact manifold \(\overline{M}\) with boundary \(N\) and its metric \(g=\rho^{-2}\,g_{\overline{M}}\) for some metric \(g_{\overline{M}}\) on \(\overline{M}\) and smooth function \(\rho\) on \(\overline{M}\) with \(\rho=0\) on \(N\) and \(d\rho\neq 0\) on \(N.\) Note that different pairs of \(\rho\) and \(g_{\overline{M}}\) induce the same conformal class on \(N.\) **Theorem 1.1** (Lee).: _Let \((M^{n},g)\) be a conformally compact Einstein manifold with its Ricci curvature normalized to be \(-(n-1).\) If its boundary \(N\) with the induced conformal metric has nonnegative scalar curvature, then \(\lambda_{1}(M)=\frac{(n-1)^{2}}{4}.\)_ A different proof of the result was given by X. Wang [34]. Concerning the upper bound of the bottom spectrum, we have the following classical result due to Cheng [9]. 
**Theorem 1.2** (Cheng).: _Let \(M^{n}\) be a complete Riemannian manifold with \(\mathrm{Ric}\geq-(n-1)\kappa\) for some nonnegative constant \(\kappa.\) Then_ \[\lambda_{1}(M)\leq\lambda_{1}\left(\mathbb{H}^{n}(-\kappa)\right)=\frac{(n-1) ^{2}}{4}\,\kappa.\] The rigidity issue has been studied by Li and the second author in [23, 24]. **Theorem 1.3** (Li-Wang).: _Suppose \((M^{n},g)\) is complete, \(n\geq 3,\) with \(\lambda_{1}\geq\frac{(n-1)^{2}}{4}\,\kappa\) and \(\mathrm{Ric}\geq-(n-1)\kappa.\) Then either \(M\) is connected at infinity or \(M^{n}=\mathbb{R}\times N^{n-1}\) for some compact \(N\) with \(g=dt^{2}+e^{2\sqrt{\kappa}\,t}\,g_{N}\) for \(n\geq 3\) or \(g=dt^{2}+\cosh^{2}(\sqrt{\kappa}\,t)\,g_{N}\) when \(n=3.\)_ Note that as \(\kappa\) goes to \(0,\) the result recovers a weak version of the famous Cheeger-Gromoll [5] splitting theorem for complete manifolds with nonnegative Ricci curvature. Our main purpose here is to establish the following Minkowski type inequality for complete manifolds with positive bottom spectrum. **Theorem 1.4**.: _Let \((M^{n},g)\) be a complete Riemannian manifold of dimension \(n\geq 5\) with \(\mathrm{Ric}\geq-\left(n-1\right)\) and_ \[\lambda_{1}\left(M\right)\geq\left(\frac{n-2}{n-1}\right)^{2}\left(2n-3\right).\] _Then for any compact smooth domain \(\Omega\subset M,\)_ \[\frac{2}{3}\sqrt{n}\ \lambda_{1}\left(M\right)\mathrm{Vol}\left(\Omega\right) \leq\int_{\partial\Omega}|H|^{\frac{2n-3}{n-1}}\,,\] _where \(H\) is the mean curvature of \(\partial\Omega.\)_ The result seems to be new even for the hyperbolic space \(\mathbb{H}^{n}.\) We remark that it is necessary to assume \(\lambda_{1}(M)>n-2.\) Indeed, for \(M^{n}=\mathbb{R}\times N^{n-1}\) with \(g=dt^{2}+\cosh^{2}(t)\,g_{N},\)\(\lambda_{1}(M)=n-2\) and \(\mathrm{Ric}\geq-(n-1)\) when \(\mathrm{Ric}_{\mathrm{N}}\geq-(n-2).\) Yet, the domain \(\Omega\) given by \(\left\{0<t<\varepsilon\right\}\) violates the inequality when \(\varepsilon\) is small. Certainly, this example also shows that the result can not hold for \(n=3.\) However, it remains unclear what to expect for \(n=4.\) One may wish to compare the result to the classical Minkowski inequality [27] for the Euclidean space \(\mathbb{R}^{n}\) and that for the hyperbolic space \(\mathbb{H}^{n}\)[16]. The advantage here is that no convexity is assumed for the domains. **Theorem 1.5** (Minkowski).: _If \(\Omega\subset\mathbb{R}^{n},\)\(n\geq 3,\) is a convex domain with smooth boundary \(\Sigma\) and \(\mathrm{H}\) is the mean curvature of \(\Sigma\) with respect to the outward unit normal, then there exists a sharp constant \(c(n)\) so that_ \[\mathrm{Vol}\left(\Omega\right)\leq c(n)\,\left(\int_{\Sigma}H\right)^{\frac{ n}{n-2}}.\] _Equality holds if and only if \(\Omega\) is a ball._ The convexity can be relaxed to mean convex and star shaped by the work of Guan-Li [17, 18], where they produced a different proof using a new mean curvature flow. In fact, their proof yields the more general Alexandrov-Fenchel inequalities of quermassintegrals and extends to other space forms as well. For more related results, we refer to [2, 3, 7, 13, 19]. An immediate consequence of our result is the nonexistence of compact minimal hypersurfaces. **Corollary 1.6**.: _Let \(\left(M^{n},g\right)\) be a complete Riemannian manifold of dimension \(n\geq 5\) with \(\mathrm{Ric}\geq-\left(n-1\right)\) and_ \[\lambda_{1}\left(M\right)=\frac{\left(n-1\right)^{2}}{4}.\] _Then \(M\) has no embedded compact minimal hypersurface. 
In particular, this holds for a conformally compact Einstein manifold with its boundary having nonnegative scalar curvature._ Note that the result is not true for \(n=3\). Indeed, for \(M^{3}=\mathbb{R}\times N^{2}\) with \(g=dt^{2}+\cosh^{2}(t)\,g_{N},\)\(\lambda_{1}(M)=1\) and \(\mathrm{Ric}\geq-2\) when \(\mathrm{Ric}_{\mathrm{N}}\geq-1.\) Yet, the hypersurface given by \(\{t=0\}\) is totally geodesic. The corollary follows from Theorem 1.4 by verifying that \(\Sigma\) must enclose a bounded domain \(\Omega\) in \(M.\) Indeed, observe that \(M\) must be connected at infinity as otherwise by Theorem 1.3, \[M=\mathbb{R}\times N,\ \ ds_{M}^{2}=dt^{2}+e^{2t}\,ds_{N}.\] Since \(f(t,y)=t\) on \(M\) is convex, by maximum principle, \(\Sigma\) must be one of the level sets of \(f.\) However, each level set has constant mean curvature \(n-1,\) which is a contradiction. The same argument shows that every double cover of \(M\) is connected at infinity as well. One then concludes from a result by Carron and Pedon [6] that the integral homology \(H_{n-1}(M,\mathbb{Z})=0.\) It then follows that \(\Sigma\) must enclose a bounded domain \(\Omega\) in \(M.\) We now quickly sketch the proof of Theorem 1.4. First, there exists \(v>0\) such that \(\Delta v=-\lambda_{1}(M)\,v\). Consider the function \(h=\ln v,\) for which \(\Delta h=-\lambda_{1}(M)-|\nabla h|^{2}\). Then \[\lambda_{1}\left(M\right)\mathrm{Vol}\left(\Omega\right) \leq \int_{\Omega}\left(\lambda_{1}\left(M\right)+\left|\nabla h\right| ^{2}\right)\] \[= -\int_{\Omega}\Delta h\] \[= \int_{\partial\Omega}h_{\nu},\] where \(\nu\) is the inward unit normal to \(\partial\Omega.\) The proof is then reduced to estimating \(\int_{\partial\Omega}h_{\nu}.\) To do so we consider the harmonic function \(u\) on \(M\setminus\Omega\) obtained as \(u=\lim_{R\rightarrow\infty}u_{R},\) where \(\Delta u_{R}=0\) on \(\left(M\setminus\Omega\right)\cap B_{p}(R)\) with \(u_{R}=1\) on \(\partial\Omega\) and \(u_{R}=0\) on \(\partial B_{p}(R).\) The upshot is to show \[c(n)\,\int_{\partial\Omega}h_{\nu}\leq\int_{\partial\Omega}\left(\left|\nabla u \right|^{\alpha}\right)_{\nu}-\int_{\partial\Omega}\left(u^{\beta}\right)_{ \nu}\left|\nabla u\right|^{\alpha},\] where \(\alpha=\frac{n-2}{n-1}\) and \(\beta=\frac{n-2}{3n-5}.\) For that, we drew inspiration from the monotonicity formulas for the Green's function on manifolds with nonnegative Ricci curvature [11, 12, 1], as well as on \(3\)-dimensional manifolds with scalar curvature bounded below [28]. In the process, the following generalized Poincare inequality also comes into play. **Proposition 1.7**.: _Let \(\left(M,g\right)\) be a complete manifold with \(\mathrm{Ric}\geq-\left(n-1\right)\) and \(\lambda_{1}\left(M\right)>0.\) Let \(K\subset M\) be an open subset with (possibly noncompact) boundary \(\partial K.\) Then the Poincare inequality_ \[\lambda_{1}\left(M\right)\int_{K}\phi^{2}\leq\int_{K}\left|\nabla\phi\right|^{ 2}-\int_{\partial K}h_{\nu}\,\phi^{2}\] _holds for any Lipschitz function \(\phi\) with compact support in \(\overline{K},\) where \(\nu\) is the outward unit normal to \(\partial K.\)_ Concerning the nonexistence of compact minimal hypersurfaces, we also extend our consideration to Ricci solitons. **Theorem 1.8**.: _Let \(\left(M^{n},g,f\right)\) be a steady Ricci soliton. 
If there exists a smooth compact embedded minimal hypersurface \(\Sigma\) in \(M,\) then \(\left(M,g\right)\) splits isometrically as a direct product \(\mathbb{R}\times\Sigma.\)_ A similar result is also established for expanding Ricci solitons. Recall that a gradient Ricci soliton is a manifold \(\left(M,g\right)\) such that there exists a smooth function \(f\) satisfying \[\mathrm{Ric}_{f}=\mathrm{Ric}+\mathrm{Hess}\left(f\right)=\lambda g\] for some constant \(\lambda\in\mathbb{R}.\) Solitons are classified as shrinking, steady or expanding, according to \(\lambda>0,\ \lambda=0\) or \(\lambda<0,\) respectively. The function \(f\) is called the potential. Customarily, the constant \(\lambda\) is assumed to be \(1/2,\)\(0,\) or \(-1/2\) by scaling. Obviously, Ricci solitons are natural generalizations of Einstein manifolds. More significantly, they are the self similar solutions to the Ricci flows, and play a crucial role in the study of singularities of the flows [10]. As pointed out in [29], an important feature of a steady Ricci soliton to us is that its bottom spectrum with respect to the weighted Laplacian \(\Delta_{f},\) defined by \(\Delta_{f}u=\Delta u-\left\langle\nabla u,\nabla f\right\rangle\), always achieves the maximum value \(\frac{1}{4}\) among all weighted manifolds \(\left(M,g,e^{-f}dv\right)\) with \(\mathrm{Ric}_{f}\geq 0\) and \(\left|\nabla f\right|\leq 1.\) The paper is arranged as follows. In Section 2, we prove our main result Theorem 1.4. In Section 3, we consider Ricci solitons and prove Theorem 1.8. ## 2. Minkowski inequality In this section, we prove Theorem 1.4. First, we make some general consideration. Throughout this section, unless otherwise noted, \(\left(M^{n},g\right)\) is assumed to be an \(n\)-dimensional complete manifold with positive spectrum \(\lambda_{1}\left(M\right)>0\) and its Ricci curvature \(\mathrm{Ric}\geq-\left(n-1\right).\) It is well-known [14] that there exists \(v>0\) such that \[\Delta v=-\lambda_{1}\left(M\right)v. \tag{2.1}\] Hence, \[h=\ln v \tag{2.2}\] satisfies \[\Delta h=-\lambda_{1}\left(M\right)-\left|\nabla h\right|^{2}. \tag{2.3}\] Also, by [35] (or see Chapter 6 in [22]), positive solutions of (2.1) satisfy the gradient estimate \[\left|\nabla h\right|\leq\frac{n-1}{2}+\sqrt{\frac{\left(n-1\right)^{2}}{4}- \lambda_{1}\left(M\right)}. \tag{2.4}\] The following generalized Poincare inequality will be of use later. **Proposition 2.1**.: _Let \(\left(M,g\right)\) be a complete manifold with \(\mathrm{Ric}\geq-\left(n-1\right)\) and \(\lambda_{1}\left(M\right)>0.\) Let \(K\subset M\) be an open subset with boundary \(\partial K\). 
Then_ \[\lambda_{1}\left(M\right)\int_{K}\phi^{2}\leq\int_{K}\left|\nabla\phi\right|^{ 2}-\int_{\partial K}\left\langle\nabla h,\nu\right\rangle\phi^{2}\] _holds for any Lipschitz function \(\phi\) with compact support in \(\overline{K},\) where \(\nu\) is the outward unit normal to the boundary \(\partial K.\) In particular,_ \[\lambda_{1}\left(M\right)\int_{K}\phi^{2}\leq\int_{K}\left|\nabla\phi\right|^ {2}+A\int_{\partial K}\phi^{2},\] _where_ \[A=\frac{n-1}{2}+\sqrt{\frac{\left(n-1\right)^{2}}{4}-\lambda_{1}\left(M\right)}.\] Proof.: According to (2.3), for any Lipschitz function \(\phi\) with compact support in \(\overline{K}\) we have \[\lambda_{1}\left(M\right)\int_{K}\phi^{2} = \int_{K}\left(-\Delta h-\left|\nabla h\right|^{2}\right)\phi^{2}\] \[= \int_{K}\left(\left\langle\nabla h,\nabla\phi^{2}\right\rangle- \left|\nabla h\right|^{2}\phi^{2}\right)\] \[-\int_{\partial K}h_{\nu}\phi^{2}.\] Observe that \[2\phi\left\langle\nabla h,\nabla\phi\right\rangle\leq\left|\nabla h\right|^{ 2}\phi^{2}+\left|\nabla\phi\right|^{2}.\] Therefore, \[\lambda_{1}\left(M\right)\int_{K}\phi^{2}\leq\int_{K}\left|\nabla\phi\right|^ {2}-\int_{\partial K}\left\langle\nabla h,\nu\right\rangle\phi^{2}.\] This proves the result. We will apply this Poincare inequality on the sublevel sets of the harmonic function \(u\) constructed below. Given a compact domain \(\Omega\subset M,\) according to [24], an unbounded component of \(M\setminus\Omega\) is parabolic if it has finite volume, and nonparabolic if it has infinite volume. Let \(E_{1},..,E_{k}\) be all the infinite volume connected components of \(M\setminus\Omega.\) Denote with \(E=E_{1}\cup\cdots\cup E_{k}\) and \[D = M\setminus E\] \[\Sigma = \partial D=\partial E. \tag{2.5}\] Alternatively, \(D\) is the union of \(\Omega\) with all the finite volume components of \(M\setminus\Omega.\) Consider the following function \(u_{i}\) with respect to a sequence \(R_{i}\rightarrow\infty.\) \[\Delta u_{i} = 0\text{\ \ on }B_{p}\left(R_{i}\right)\setminus D\] \[u_{i} = 1\text{\ \ on }\partial D\] \[u_{i} = 0\text{\ on }\partial B_{p}\left(R_{i}\right)\cap E. \tag{2.6}\] As \(\lambda_{1}\left(M\right)>0,\) from [23], the sequence \(\left\{u_{i}\right\}_{i=1}^{\infty}\) converges to a positive nonconstant harmonic function \(u:M\setminus D\rightarrow\left[0,1\right]\) such that \(u=1\) on \(\partial D.\) The strong maximum principle implies that \(\left|\nabla u\right|>0\) on \(\Sigma=\partial D\). Moreover, by [24] \[\int_{M\setminus\left(D\cup B_{p}\left(R\right)\right)}u^{2}\leq C\,e^{-2 \sqrt{\lambda_{1}\left(M\right)}R} \tag{2.7}\] for all \(R>0\) large enough. As \(D\) is the union of \(\Omega\) together with all the finite volume components of \(M\setminus\Omega,\) by [24] the following volume estimate holds. \[\operatorname{Vol}\left(D\setminus B_{p}\left(R\right)\right)\leq C\,e^{-2 \sqrt{\lambda_{1}\left(M\right)}R}. \tag{2.8}\] We denote with \[L\left(\alpha,\beta\right) = \left\{x\in M\setminus D:\alpha<u\left(x\right)<\beta\right\}\] \[\ell\left(t\right) = \left\{x\in M\setminus D:u\left(x\right)=t\right\}.\] Note that these sets may be noncompact in general. However, (2.7) implies that \[\operatorname{Vol}\left(L\left(\alpha,\beta\right)\right)\leq\frac{1}{\alpha^ {2}}\int_{M\setminus D}u^{2}<\infty. \tag{2.9}\] According to [25], \[\zeta=\int_{\ell\left(t\right)}\left|\nabla u\right| \tag{2.10}\] is a constant independent of \(t\in\left[0,1\right]\). 
Hence, for any function \(F,\) by the co-area formula, \[\int_{L\left(\alpha,\beta\right)}\left|\nabla u\right|^{2}F\left(u\right)= \zeta\int_{\alpha}^{\beta}F\left(t\right)dt. \tag{2.11}\] The gradient estimate for positive harmonic functions states that \[\left|\nabla u\right|\leq C\,u\text{\ \ on }M\setminus D, \tag{2.12}\] where the constant \(C\) depends only on the dimension \(n\) and the maximum of the mean curvature \(\max_{\Sigma}\left|H_{\Sigma}\right|.\) Recall the Bochner formula \[\frac{1}{2}\Delta\left|\nabla u\right|^{2}\geq\left|\nabla^{2}u\right|^{2}- \left(n-1\right)\left|\nabla u\right|^{2} \tag{2.13}\] and the improved Kato inequality \[\left|\nabla^{2}u\right|^{2}\geq\frac{n}{n-1}\left|\nabla\left|\nabla u\right| \right|^{2}. \tag{2.14}\] We begin with the following preliminary estimates. **Lemma 2.2**.: _Let \(\left(M^{n},g\right)\) be a complete manifold with \(\mathrm{Ric}\geq-\left(n-1\right)\) and \(\lambda_{1}\left(M\right)>0.\) There exists a constant \(C>0\) such that for all \(0<t<1,\)_ \[\int_{L\left(t,1\right)}\left(u+\left|\nabla\left|\nabla u\right|\right|^{2} \left|\nabla u\right|^{-1}\right)\leq C\left(1-\ln t\right) \tag{2.15}\] _and_ \[\int_{L\left(\frac{1}{2}t,t\right)}\left(u+\left|\nabla\left|\nabla u\right| \right|^{2}\left|\nabla u\right|^{-1}\right)\leq C. \tag{2.16}\] Proof.: We first prove (2.15). Let \(\psi\) and \(\chi\) be the cut-off functions \[\psi\left(x\right)=\left\{\begin{array}{cc}1&\text{on }B_{p}\left(R\right) \\ R+1-r\left(x\right)&\text{on }B_{p}\left(R+1\right)\setminus B_{p}\left(R\right) \\ 0&\text{on }M\setminus B_{p}\left(R+1\right)\end{array}\right.\] and \[\chi\left(x\right)=\left\{\begin{array}{cc}1&\text{on }L\left(t,1\right) \\ \frac{\ln u\left(x\right)-\ln\left(\frac{1}{2}t\right)}{\ln 2}&\text{on }L\left(\frac{1}{2}t,t \right)\\ 0&\text{otherwise}\end{array}\right..\] We extend \(u=1\) on \(D\), and let \(\phi=u^{\frac{1}{2}}\chi\psi\) in \[\lambda_{1}\left(M\right)\int_{M}\phi^{2}\leq\int_{M}\left|\nabla\phi\right|^ {2}\] to obtain \[\lambda_{1}\left(M\right)\int_{M}u\chi^{2}\psi^{2} \leq 2\int_{M}\left|\nabla u^{\frac{1}{2}}\right|^{2}\chi^{2}\psi^{2}+ 2\int_{M}u\left|\nabla\left(\chi\psi\right)\right|^{2}\] \[\leq \frac{1}{2}\int_{M}\left|\nabla u\right|^{2}u^{-1}\chi^{2}\psi^{ 2}+4\int_{M}u\left|\nabla\chi\right|^{2}\psi^{2}\] \[+4\int_{M}u\left|\nabla\psi\right|^{2}\chi^{2}.\] By (2.11) we immediately see that \[\int_{M}\left|\nabla u\right|^{2}u^{-1}\chi^{2}\psi^{2}\leq\int_{L\left(\frac {1}{2}t,1\right)}\left|\nabla u\right|^{2}u^{-1}=\zeta\ln\frac{2}{t}\] and that \[\int_{M}u\left|\nabla\chi\right|^{2}\psi^{2}\leq\frac{1}{\left(\ln 2\right)^{ 2}}\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{-1}=\frac{ \zeta}{\ln 2}. \tag{2.17}\] Finally, by (2.7) and (2.8) we have \[\int_{M}u\left|\nabla\psi\right|^{2}\chi^{2}\leq\frac{2}{t}\int_{M\setminus B _{p}\left(R\right)}u^{2}\leq\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M\right)}R}. \tag{2.18}\] This proves that \[\int_{L\left(t,1\right)\cap B_{p}\left(R\right)}u\leq C\left(1-\ln t\right)+ \frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M\right)}R}\] for all \(R\geq 1.\) Making \(R\rightarrow\infty\) implies that \[\int_{L\left(t,1\right)}u\leq C\left(1-\ln t\right) \tag{2.19}\] for all \(0<t<1.\) By (2.13) and (2.14) we have that \[\Delta\left|\nabla u\right|\geq\frac{1}{n-1}\left|\nabla\left|\nabla u\right| \right|^{2}\left|\nabla u\right|^{-1}-\left(n-1\right)\left|\nabla u\right|\] on \(M\setminus D\). 
It then follows that \[\frac{1}{n-1}\int_{M\setminus D}\left|\nabla\left|\nabla u\right| \right|^{2}\left|\nabla u\right|^{-1}\chi^{2}\psi^{2} \leq \int_{M\setminus D}\chi^{2}\psi^{2}\Delta\left|\nabla u\right|\] \[+\left(n-1\right)\int_{M\setminus D}\left|\nabla u\right|\chi^{2 }\psi^{2}.\] Note that by the gradient estimate (2.12) and (2.19) we have \[\int_{M\setminus D}\left|\nabla u\right|\chi^{2}\psi^{2} \leq C\int_{L\left(\frac{1}{2}t,1\right)}u\] \[\leq C\left(1-\ln t\right).\] Moreover, integrating by parts implies that \[\int_{M\setminus D}\chi^{2}\psi^{2}\Delta\left|\nabla u\right| = -\int_{M\setminus D}\left\langle\nabla\left(\chi^{2}\psi^{2} \right),\nabla\left|\nabla u\right|\right\rangle+\int_{\ell\left(1\right)} \left|\nabla u\right|_{\nu}\] \[\leq \frac{1}{2\left(n-1\right)}\int_{M\setminus D}\left|\nabla \left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\chi^{2}\psi^{2}\] \[+2\left(n-1\right)\int_{M\setminus D}\left|\nabla u\right|\left| \nabla\left(\chi\psi\right)\right|^{2}+\int_{\ell\left(1\right)}\left|\nabla u \right|_{\nu}\] \[\leq \frac{1}{2\left(n-1\right)}\int_{M\setminus D}\left|\nabla \left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\chi^{2}\psi^{2}\] \[+C\left(1-\ln t\right)+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M \right)}R},\] where we have used (2.12), (2.17) and (2.18) to obtain the last line. Plugging this estimate in (2.20) and making \(R\rightarrow\infty\) we obtain that \[\int_{L\left(t,1\right)}\left|\nabla\left|\nabla u\right|\right|^{2}\left| \nabla u\right|^{-1}\leq C\left(1-\ln t\right)\] as claimed. The second estimate (2.16) follows verbatim from the preceding argument by modifying the function \(\chi\) to \[\chi\left(x\right)=\left\{\begin{array}{cl}1&\text{on }L\left(\frac{1}{2}t,t \right)\\ \frac{\ln u\left(x\right)-\ln\left(\frac{1}{4}t\right)}{\ln 2}&\text{on }L \left(\frac{1}{4}t,\frac{1}{2}t\right)\\ \frac{\ln\left(2t\right)-\ln u}{\ln 2}&\text{on }L\left(t,2t\right)\\ 0&\text{otherwise}\end{array}\right..\] We are ready to prove the main result of this section. **Theorem 2.3**.: _Let \(\left(M^{n},g\right)\) be a complete Riemannian manifold of dimension \(n\geq 5\) with \(\operatorname{Ric}\geq-\left(n-1\right)\) and_ \[\lambda_{1}\left(M\right)\geq\left(\frac{n-2}{n-1}\right)^{2}\left(2n-3\right).\] _Then for any compact smooth domain \(\Omega\subset M,\)_ \[\frac{2}{3}\sqrt{n}\ \lambda_{1}\left(M\right)\operatorname{Vol}\left(\Omega \right)\leq\int_{\partial\Omega}\left|H\right|^{\frac{2n-3}{n-1}},\] _where \(H\) is the mean curvature of \(\partial\Omega.\)_ Proof.: As in (2.5) we let \(D\) be the union of \(\Omega\) with all the finite volume components of \(M\setminus\Omega.\) Define the harmonic function \(u\) on \(M\setminus D\) as the limit of a subsequence of \(\left\{u_{i}\right\}_{i=1}^{\infty}\) from (2.6). Let \(\psi\) and \(\chi\) be the cut-off functions \[\psi\left(x\right)=\left\{\begin{array}{cl}1&\text{on }B_{p}\left(R\right) \\ R+1-r\left(x\right)&\text{on }B_{p}\left(R+1\right)\setminus B_{p}\left(R\right) \\ 0&\text{on }M\setminus B_{p}\left(R+1\right)\end{array}\right.\] and \[\chi\left(x\right)=\left\{\begin{array}{cl}1&\text{on }L\left(t,1\right) \\ \frac{\ln u\left(x\right)-\ln\left(\frac{1}{t}t\right)}{\ln 2}&\text{on }L\left(\frac{1}{2}t,t \right)\\ 0&\text{otherwise}\end{array}\right..\] The Bochner formula (2.13) and the inequality (2.14) imply the inequality (cf. 
[23]) \[\Delta\left|\nabla u\right|^{\alpha}\geq-\left(n-2\right)\left|\nabla u\right| ^{\alpha}, \tag{2.21}\] where \[\alpha=\frac{n-2}{n-1}. \tag{2.22}\] For \[\beta>\frac{1}{n-1} \tag{2.23}\] to be specified later, we multiply (2.21) by \(u^{\beta}\chi^{2}\psi^{2}\) and integrate it over \(M\setminus D\) to obtain that \[-\int_{M\setminus D}\left(\Delta\left|\nabla u\right|^{\alpha}\right)u^{ \beta}\chi^{2}\psi^{2}\leq\left(n-2\right)\int_{M\setminus D}\left|\nabla u \right|^{\alpha}u^{\beta}\chi^{2}\psi^{2}. \tag{2.24}\] Integrating by parts one sees that the left side of (2.24) becomes \[-\int_{M\setminus D}\left(\Delta\left|\nabla u\right|^{\alpha} \right)u^{\beta}\chi^{2}\psi^{2} = \int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla u^{\beta}\right\rangle\chi^{2}\psi^{2}\] \[+\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla\chi^{2}\right\rangle u^{\beta}\psi^{2}\] \[+\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla\psi^{2}\right\rangle u^{\beta}\chi^{2}\] \[-\int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu},\] where \(\nu=\frac{\nabla u}{\left|\nabla u\right|}\) is the inward unit normal to \(\partial D=\ell\left(1\right).\) By Lemma 2.2, (2.12), and (2.7) we have that \[\left|\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla\psi^{2}\right\rangle u^{\beta}\chi^{2}\right| \leq 2\alpha\int_{L\left(\frac{1}{2}t,1\right)\setminus B_{p}(R)} \left|\nabla\left|\nabla u\right|\right|\left|\nabla u\right|^{\alpha-1}u^{\beta}\] \[\leq Ce^{-\sqrt{\lambda_{1}(M)}R}\int_{L\left(\frac{1}{2}t,1\right)} \left|\nabla\left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\] \[+Ce^{\sqrt{\lambda_{1}(M)}R}\int_{L\left(\frac{1}{2}t,1\right) \setminus B_{p}(R)}u^{2(\alpha+\beta)-1}\] \[\leq Ce^{-\sqrt{\lambda_{1}(M)}R}\left(1-\ln t\right)\] \[+\frac{C}{t}e^{\sqrt{\lambda_{1}(M)}R}\int_{L\left(\frac{1}{2}t,1 \right)\setminus B_{p}(R)}u^{2}\] \[\leq \frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}.\] Moreover, Lemma 2.2 also implies that \[\left|\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right| ^{\alpha},\nabla\chi^{2}\right\rangle u^{\beta}\psi^{2}\right| \leq C\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla\left|\nabla u \right|\right|\left|\nabla u\right|^{\alpha}u^{\beta-1}\] \[\leq Ct^{\beta-\frac{1}{n-1}}\int_{L\left(\frac{1}{2}t,t\right)} \left|\nabla\left|\nabla u\right|\right|^{2}\left|\nabla u\right|^{-1}\] \[+\frac{C}{t^{\beta-\frac{1}{n-1}}}\int_{L\left(\frac{1}{2}t,t \right)}\left|\nabla u\right|^{1+2\alpha}u^{2\beta-2}\] \[\leq Ct^{\beta-\frac{1}{n-1}}+\frac{C}{t^{\beta-\frac{1}{n-1}}}\int_ {L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{2(\alpha+\beta)-3},\] where in the last line we have applied (2.12). 
On the other hand, by (2.22), (2.23), and (2.11) we get \[\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{2( \alpha+\beta)-3} = \zeta\int_{\frac{1}{2}t}^{t}r^{2\beta-\frac{n+1}{n-1}}dr\] \[\leq \frac{1}{2\left(\beta-\frac{1}{n-1}\right)}t^{2\left(\beta-\frac {1}{n-1}\right)}\zeta.\] In conclusion, \[\left|\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{\alpha}, \nabla\chi^{2}\right\rangle u^{\beta}\psi^{2}\right|\leq Ct^{\beta-\frac{1}{n- 1}}.\] Hence, this proves that \[-\int_{M\setminus D}\left(\Delta\left|\nabla u\right|^{\alpha} \right)u^{\beta}\chi^{2}\psi^{2} \geq \int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{ \alpha},\nabla u^{\beta}\right\rangle\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}\] \[-\frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}-Ct^{\beta-\frac{1}{n-1}}.\] We now estimate the first term on the right hand side. Integration by parts implies that \[\int_{M\setminus D}\left\langle\nabla\left|\nabla u\right|^{\alpha}, \nabla u^{\beta}\right\rangle\chi^{2}\psi^{2} = -\int_{M\setminus D}\left|\nabla u\right|^{\alpha}\left(\Delta u ^{\beta}\right)\chi^{2}\psi^{2}\] \[-\int_{M\setminus D}\left|\nabla u\right|^{\alpha}\left\langle \nabla u^{\beta},\nabla\left(\chi^{2}\psi^{2}\right)\right\rangle\] \[+\int_{\ell(1)}\left(u^{\beta}\right)_{\nu}\left|\nabla u\right| ^{\alpha}.\] By (2.7) and (2.12) we have that \[\int_{M\setminus D}\left|\nabla u\right|^{\alpha}\left|\left\langle \nabla u^{\beta},\nabla\psi^{2}\right\rangle\chi^{2}\right| \leq C\int_{L\left(\frac{1}{2}t,1\right)\setminus B_{p}(R)}u^{ \alpha+\beta}\chi^{2}\] \[\leq \frac{C}{t^{2-(\alpha+\beta)}}\int_{L\left(\frac{1}{2}t,1\right) \setminus B_{p}(R)}u^{2}\] \[\leq \frac{C}{t}e^{-2\sqrt{\lambda_{1}(M)}R}.\] Moreover, (2.11) and (2.12) imply that \[\int_{M\setminus D}\left|\nabla u\right|^{\alpha}\left\langle \nabla u^{\beta},\nabla\chi^{2}\right\rangle\psi^{2} \leq C\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{ \alpha+2}u^{\beta-2}\] \[\leq C\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{ \beta-\frac{n}{n-1}}\] \[\leq \frac{C}{\beta-\frac{1}{n-1}}t^{\left(\beta-\frac{1}{n-1}\right)}\zeta.\] Plugging these estimates in (2.25) yields that \[-\int_{M\setminus D}\left(\Delta\left|\nabla u\right|^{\alpha} \right)u^{\beta}\chi^{2}\psi^{2} \geq \beta\left(1-\beta\right)\int_{M\setminus D}\left|\nabla u \right|^{\alpha+2}u^{\beta-2}\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}+ \int_{\ell(1)}\left(u^{\beta}\right)_{\nu}\left|\nabla u\right|^{\alpha}\] \[-\frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}-Ct^{\beta-\frac{1}{n-1}}.\] Therefore (2.24) becomes that \[\left(n-2\right)\int_{M\setminus D}\left|\nabla u\right|^{\alpha }u^{\beta}\chi^{2}\psi^{2} \geq \beta\left(1-\beta\right)\int_{M\setminus D}\left|\nabla u \right|^{\alpha+2}u^{\beta-2}\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}+ \int_{\ell(1)}\left(u^{\beta}\right)_{\nu}\left|\nabla u\right|^{\alpha}\] \[-\frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}-Ct^{\beta-\frac{1}{n-1}}.\] We now estimate the left hand side. By Young's inequality we have that \[\left|\nabla u\right|^{\alpha}\leq\frac{\alpha}{\alpha+2}A^{-2}\left|\nabla u \right|^{\alpha+2}u^{-2}+\frac{2}{\alpha+2}A^{\alpha}u^{\alpha},\] where \(A>0\) is a constant to be specified later. 
Hence, we obtain \[\int_{M\setminus D}\left|\nabla u\right|^{\alpha}u^{\beta}\chi^{2} \psi^{2} \leq \frac{\alpha}{\alpha+2}A^{-2}\int_{M\setminus D}\left|\nabla u \right|^{\alpha+2}u^{\beta-2}\chi^{2}\psi^{2}\] \[+\frac{2}{\alpha+2}A^{\alpha}\int_{M\setminus D}u^{\alpha+\beta} \chi^{2}\psi^{2}.\] Plugging this into (2.27) yields \[\Lambda_{1}\int_{M\setminus D}\left|\nabla u\right|^{\alpha+2}u^ {\beta-2}\chi^{2}\psi^{2} \leq \Lambda_{2}\int_{M\setminus D}u^{\alpha+\beta}\chi^{2}\psi^{2}\] \[+\int_{\ell(1)}\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}- \int_{\ell(1)}\left(u^{\beta}\right)_{\nu}\left|\nabla u\right|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}(M)}R}+Ct^{\beta-\frac{1}{n-1}},\] where \[\Lambda_{1} = \beta\left(1-\beta\right)-\frac{\alpha\left(n-2\right)}{\alpha+2 }A^{-2}\] \[\Lambda_{2} = \frac{2\left(n-2\right)}{\alpha+2}A^{\alpha}.\] We apply Proposition 2.1 for \(K=M\setminus D\) and \(\phi=u^{\frac{\alpha+\beta}{2}}\chi\psi\). As \(\partial K=\ell(1)\), and \(\nu=\frac{\nabla u}{\left|\nabla u\right|}\) is the outward unit normal to \(K\), we obtain that \[\lambda_{1}\left(M\right)\int_{M\setminus D}u^{\alpha+\beta}\chi^{2}\psi^{2} \leq\int_{M\setminus D}\left|\nabla\left(u^{\frac{\alpha+\beta}{2}}\chi\psi \right)\right|^{2}-\int_{\ell(1)}h_{\nu}.\] Note that \[\int_{M\setminus D}\left|\nabla\left(u^{\frac{\alpha+\beta}{2}} \chi\psi\right)\right|^{2} = \frac{\left(\alpha+\beta\right)^{2}}{4}\int_{M\setminus D} \left|\nabla u\right|^{2}u^{\alpha+\beta-2}\chi^{2}\psi^{2}\] \[+\int_{M\setminus D}u^{\alpha+\beta}\left|\nabla\left(\chi\psi \right)\right|^{2}\] \[+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla u^{\alpha+ \beta},\nabla\left(\chi\psi\right)^{2}\right\rangle.\] By (2.7), (2.22), and (2.23) we conclude that \[\int_{M\setminus D}u^{\alpha+\beta}\left|\nabla\psi\right|^{2} \chi^{2} \leq \int_{L\left(\frac{1}{2}t,1\right)\setminus B_{p}(R)}u^{\alpha+ \beta}\chi^{2}\] \[\leq \frac{2}{t}\int_{L\left(\frac{1}{2}t,1\right)\setminus B_{p}(R)} u^{2}\chi^{2}\] \[\leq \frac{C}{t}e^{-2\sqrt{\lambda_{1}(M)}R}.\] Similarly, by additionally using (2.12) we get that \[\frac{1}{2}\int_{M\setminus D}\left\langle\nabla u^{\alpha+\beta},\nabla \psi^{2}\right\rangle\chi^{2}\leq\frac{C}{t}e^{-2\sqrt{\lambda_{1}(M)}R}.\] By (2.11), (2.22), and (2.23) we have \[\int_{M}u^{\alpha+\beta}\left|\nabla\chi\right|^{2}\psi^{2} \leq C\int_{L\left(\frac{1}{2}t,t\right)}\left|\nabla u\right|^{2}u^{ \beta-\frac{n}{n-1}}\] \[\leq \frac{C}{\beta-\frac{1}{n-1}}t^{\beta-\frac{1}{n-1}}.\] Similarly, \[\int_{M}\left\langle\nabla u^{\alpha+\beta},\nabla\chi^{2}\right\rangle\psi^{ 2}\leq\frac{C}{\beta-\frac{1}{n-1}}t^{\beta-\frac{1}{n-1}}.\] Combining all these estimates we arrive at \[\lambda_{1}\left(M\right)\int_{M\setminus D}u^{\alpha+\beta}\chi^ {2}\psi^{2}\] \[\leq \frac{\left(\alpha+\beta\right)^{2}}{4}\int_{M\setminus D}\left| \nabla u\right|^{2}u^{\alpha+\beta-2}\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}h_{\nu}+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M \right)}R}+Ct^{\beta-\frac{1}{n-1}}.\] By Young's inequality, \[\left|\nabla u\right|^{2}\leq\frac{2}{\alpha+2}B^{-\alpha}\left|\nabla u \right|^{\alpha+2}u^{-\alpha}+\frac{\alpha}{\alpha+2}B^{2}u^{2}\] for a constant \(B>0\) to be specified later. 
This yields \[\int_{M\setminus D}\left|\nabla u\right|^{2}u^{\alpha+\beta-2} \chi^{2}\psi^{2} \leq \frac{2}{\alpha+2}B^{-\alpha}\int_{M\setminus D}\left|\nabla u \right|^{\alpha+2}u^{\beta-2}\chi^{2}\psi^{2}\] \[+\frac{\alpha}{\alpha+2}B^{2}\int_{M\setminus D}u^{\alpha+\beta} \chi^{2}\psi^{2}.\] Plugging this into (2.29), one concludes that \[\left(\lambda_{1}\left(M\right)-\frac{\alpha}{\alpha+2}\frac{ \left(\alpha+\beta\right)^{2}}{4}B^{2}\right)\int_{M\setminus D}u^{\alpha+ \beta}\chi^{2}\psi^{2}\] \[\leq \frac{\left(\alpha+\beta\right)^{2}}{2\left(\alpha+2\right)}B^{- \alpha}\int_{M\setminus D}\left|\nabla u\right|^{\alpha+2}u^{\beta-2}\chi^{2} \psi^{2}\] \[-\int_{\ell(1)}h_{\nu}+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M \right)}R}+Ct^{\beta-\frac{1}{n-1}}.\] By the assumption, \[\lambda_{1}\left(M\right)\geq\frac{\left(n-1\right)^{2}\delta^{2}}{4}\] with \[\delta=\frac{2\left(n-2\right)}{\left(n-1\right)^{2}}\sqrt{2n-3}. \tag{2.30}\] Therefore, \[\left(\frac{\left(n-1\right)^{2}\delta^{2}}{4}-\frac{\alpha}{\alpha+2} \frac{\left(\alpha+\beta\right)^{2}}{4}B^{2}\right)\int_{M\backslash D}u^{ \alpha+\beta}\chi^{2}\psi^{2}\] \[\leq \frac{\left(\alpha+\beta\right)^{2}}{2\left(\alpha+2\right)}B^{- \alpha}\int_{M\backslash D}|\nabla u|^{\alpha+2}\,u^{\beta-2}\chi^{2}\psi^{2}\] \[-\int_{\ell(1)}h_{\nu}+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M \right)}R}+Ct^{\beta-\frac{1}{n-1}}.\] We optimize this inequality by choosing \[B=\frac{\left(n-1\right)\delta}{\alpha+\beta}\] and obtain that \[\int_{M\backslash D}u^{\alpha+\beta}\chi^{2}\psi^{2} \leq \left(\frac{\alpha+\beta}{\left(n-1\right)\delta}\right)^{\alpha +2}\int_{M\backslash D}|\nabla u|^{\alpha+2}\,u^{\beta-2}\chi^{2}\psi^{2}\] \[-\frac{2\left(\alpha+2\right)}{\left(n-1\right)^{2}\delta^{2}} \int_{\ell(1)}h_{\nu}\] \[+\frac{C}{t}e^{-2\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}}.\] Plugging this into (2.28), we conclude that \[\Lambda\int_{M\backslash D}|\nabla u|^{\alpha+2}\,u^{\beta-2}\chi ^{2}\psi^{2} \leq -\frac{2\left(\alpha+2\right)}{\left(n-1\right)^{2}\delta^{2}} \Lambda_{2}\int_{l(1)}h_{\nu}\] \[+\int_{\ell(1)}\left(|\nabla u|^{\alpha}\right)_{\nu}-\int_{\ell (1)}\left(u^{\beta}\right)_{\nu}|\nabla u|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}},\] where \[\Lambda_{2}=\frac{2\left(n-2\right)}{\alpha+2}A^{\alpha}\] and \[\Lambda = \Lambda_{1}-\left(\frac{\alpha+\beta}{\left(n-1\right)\delta} \right)^{\alpha+2}\Lambda_{2}\] \[= \beta\left(1-\beta\right)-\frac{\alpha\left(n-2\right)}{\alpha+2 }A^{-2}\] \[-\left(\frac{\alpha+\beta}{\left(n-1\right)\delta}\right)^{\alpha +2}\frac{2\left(n-2\right)}{\alpha+2}A^{\alpha}.\] We optimize \(\Lambda\) by choosing \[A=\frac{\left(n-1\right)\delta}{\alpha+\beta}\] and obtain that \[\Lambda=\beta\left(1-\beta\right)-\frac{\left(n-2\right)\left(\alpha+\beta \right)^{2}}{\left(n-1\right)^{2}\delta^{2}}. \tag{2.32}\] Hence, (2.31) becomes \[\Lambda\int_{M\setminus D}\left|\nabla u\right|^{\alpha+2}u^{\beta-2} \chi^{2}\psi^{2} \leq -\frac{4\left(n-2\right)}{\left(\alpha+\beta\right)^{\alpha}\left( \left(n-1\right)\delta\right)^{2-\alpha}}\int_{\ell\left(1\right)}h_{\nu}\] \[+\int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu}-\int_{\ell\left(1\right)}\left(u^{\beta}\right)_{\nu}\left| \nabla u\right|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}}. 
\tag{2.33}\] Recall that \[\alpha = \frac{n-2}{n-1}\] \[\delta^{2} = \frac{4\left(n-2\right)^{2}}{\left(n-1\right)^{4}}\left(2n-3 \right),\] as specified in (2.22) and (2.30). We let \[\beta=\frac{n-2}{3n-5}. \tag{2.34}\] Note that for any \(n\geq 5\) we have \(\beta>\frac{1}{n-1},\) as required by (2.23). Furthermore, it follows that \(\Lambda=0\) by direct calculation. Consequently, (2.33) reduces to \[0 \leq -\frac{4\left(n-2\right)}{\left(\alpha+\beta\right)^{\alpha}\left( \left(n-1\right)\delta\right)^{2-\alpha}}\int_{\ell\left(1\right)}h_{\nu}\] \[+\int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu}-\int_{\ell\left(1\right)}\left(u^{\beta}\right)_{\nu}\left| \nabla u\right|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}}.\] However, by (2.3), \[\lambda_{1}\left(M\right)\operatorname{Vol}\left(D\cap B_{p}\left( R\right)\right) \leq \lambda_{1}\left(M\right)\int_{D}\psi^{2}\] \[= -\int_{D}\left(\Delta h+\left|\nabla h\right|^{2}\right)\psi^{2}\] \[= \int_{\ell\left(1\right)}h_{\nu}+\int_{D}\left\langle\nabla h, \nabla\psi^{2}\right\rangle\] \[-\int_{D}\left|\nabla h\right|^{2}\psi^{2}\] \[\leq \int_{\ell\left(1\right)}h_{\nu}+\int_{D}\left|\nabla\psi\right| ^{2},\] where \(\nu=\frac{\nabla u}{\left|\nabla u\right|}\) is the inward unit normal to \(\partial D=\ell\left(1\right).\) According to (2.8) it follows that \[\lambda_{1}\left(M\right)\operatorname{Vol}\left(D\cap B_{p}\left(R\right) \right)\leq\int_{\ell\left(1\right)}h_{\nu}+Ce^{-2\sqrt{\lambda_{1}\left(M \right)}R}.\] Making \(R\rightarrow\infty\) yields \[\lambda_{1}\left(M\right)\operatorname{Vol}\left(D\right)\leq\int_{\ell\left( 1\right)}h_{\nu}.\] As \(\Omega\subset D\), we conclude from (2.35) that \[\frac{4\left(n-2\right)\lambda_{1}\left(M\right)}{\left(\alpha+ \beta\right)^{\alpha}\left(\left(n-1\right)\delta\right)^{2-\alpha}}\text{Vol} \left(\Omega\right)\] \[\leq \int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu}-\int_{\ell\left(1\right)}\left(u^{\beta}\right)_{\nu}\left| \nabla u\right|^{\alpha}\] \[+\frac{C}{t}e^{-\sqrt{\lambda_{1}\left(M\right)}R}+Ct^{\beta- \frac{1}{n-1}}. 
\tag{2.36}\] Letting \(R\rightarrow\infty\) first and then \(t\to 0\) in (2.36) we arrive at \[\frac{4\left(n-2\right)\lambda_{1}\left(M\right)}{\left(\alpha+\beta\right)^{ \alpha}\left(\left(n-1\right)\delta\right)^{2-\alpha}}\text{Vol}\left(\Omega \right)\leq\int_{\ell\left(1\right)}\left(\left|\nabla u\right|^{\alpha} \right)_{\nu}-\int_{\ell\left(1\right)}\left(u^{\beta}\right)_{\nu}\left| \nabla u\right|^{\alpha}.\] Note that the mean curvature of \[\Sigma=\ell\left(1\right)=\partial\left(M\setminus D\right)\] satisfies \[H_{\Sigma}=-\frac{\left\langle\nabla\left|\nabla u\right|,\nabla u\right\rangle }{\left|\nabla u\right|^{2}}.\] Hence, \[\left(\left|\nabla u\right|^{\alpha}\right)_{\nu}=-\alpha H_{\Sigma}\left| \nabla u\right|^{\alpha}.\] This proves that \[\frac{4\left(n-2\right)\lambda_{1}\left(M\right)}{\left(\alpha+\beta\right)^{ \alpha}\left(\left(n-1\right)\delta\right)^{2-\alpha}}\text{Vol}\left(\Omega \right)\leq-\alpha\int_{\Sigma}H_{\Sigma}\left|\nabla u\right|^{\alpha}-\beta \int_{\Sigma}\left|\nabla u\right|^{\alpha+1}.\] From Young's inequality that \[\alpha\left|H_{\Sigma}\right|\left|\nabla u\right|^{\alpha}\leq\beta\left| \nabla u\right|^{\alpha+1}+\frac{\alpha}{\alpha+1}\left(\frac{\alpha^{2}}{ \left(\alpha+1\right)\beta}\right)^{\alpha}\left|H_{\Sigma}\right|^{\alpha+1},\] we conclude \[\Gamma\,\text{Vol}\left(\Omega\right)\leq\int_{\Sigma}\left|H_{\Sigma}\right| ^{\alpha+1}\leq\int_{\partial\Omega}\left|H\right|^{\alpha+1}, \tag{2.37}\] where \[\Gamma=\frac{4\left(n-2\right)}{\left(\alpha+\beta\right)^{\alpha}\left(\left( n-1\right)\delta\right)^{2-\alpha}}\frac{\alpha+1}{\alpha}\left(\frac{\left( \alpha+1\right)\beta}{\alpha^{2}}\right)^{\alpha}\lambda_{1}\left(M\right)\] and \[\alpha = \frac{n-2}{n-1},\ \ \ \ \beta=\frac{n-2}{3n-5},\] \[\delta = \frac{2\left(n-2\right)}{\left(n-1\right)^{2}}\sqrt{2n-3}.\] For \(n\geq 5\) we have that \[\delta < 1,\] \[\alpha+\beta < \frac{4}{3},\] \[\frac{\left(\alpha+1\right)\beta}{\alpha^{2}} > \frac{1}{2},\] \[\left(n-1\right)^{2-\alpha} < \sqrt{2}\left(n-1\right).\] Therefore, \[\Gamma>\frac{2}{3}\sqrt{n}\;\lambda_{1}\left(M\right).\] In conclusion, by (2.37) we have \[\frac{2}{3}\sqrt{n}\;\lambda_{1}\left(M\right)\operatorname{Vol}\left(\Omega \right)\leq\int_{\partial\Omega}\left|H\right|^{\frac{2n-3}{n-1}}.\] ## 3. Splitting of Ricci solitons In this section, we address the issue of nonexistence of compact minimal hypersurfaces in Ricci solitons. We begin with the case of steady solitons. Let \(\left(M,g,f\right)\) be a gradient steady Ricci soliton. Then the potential \(f\) satisfies the soliton equation \[\operatorname{Ric}+\operatorname{Hess}\left(f\right)=0.\] It is known [20] that \(f\) may be normalized so that \[S+\left|\nabla f\right|^{2}=1,\] where \(S\) is the scalar curvature. It is also known [8] that \(S>0\) unless \(\left(M,g\right)\) is Ricci flat. **Theorem 3.1**.: _Let \(\left(M^{n},g,f\right)\) be a steady Ricci soliton. Assume that there exists a smooth compact embedded minimal hypersurface \(\Sigma\) in \(M.\) Then \(\left(M,g\right)\) splits isometrically as a direct product \(\mathbb{R}\times\Sigma.\)_ Proof.: By the splitting theorem in [29] we may assume that \(M\) and its double covers all have one end. Hence, according to Proposition 5.2 in [6], the integral homology \[H_{n-1}\left(M,\mathbb{Z}\right)=\left\{0\right\}.\] In particular, \(\Sigma\) bounds a compact domain \(D\) in \(M.\) In [29] it was proved that \(\Delta_{f}\) has positive spectrum. Consequently, \(M\) is \(f\)-nonparabolic. 
This implies that there exists \(w>0\) on \(M\setminus D\) such that \[\Delta_{f}w = 0\;\text{ on }M\setminus D\] \[w = 1\text{ on }\Sigma\] \[\inf_{M\setminus D}w = 0.\] Moreover, \[\int_{M\setminus D}\left|\nabla w\right|^{2}e^{-f}<\infty. \tag{3.1}\] The Bochner formula implies that \[\frac{1}{2}\Delta_{f}\left|\nabla w\right|^{2}\geq\left|\nabla\left|\nabla w\right| \right|^{2}. \tag{3.2}\] We now prove, similar to Proposition 2.1, that \[0\leq\int_{M\setminus D}\left|\nabla\phi\right|^{2}e^{-f}-\int_{\Sigma}f_{ \nu}\phi^{2}e^{-f} \tag{3.3}\] for any smooth function \(\phi\) on \(M\setminus D\) that vanishes at infinity, where \(\nu=\frac{\nabla w}{\left|\nabla w\right|}\) is the outward unit normal to \(\partial\left(M\setminus D\right)=\Sigma=\left\{w=1\right\}.\) Note that \[\Delta_{f}\left(f\right) = \Delta f-\left|\nabla f\right|^{2}\] \[= -S-\left|\nabla f\right|^{2}\] \[= -1.\] Therefore, \[\int_{M\setminus D}\phi^{2}e^{-f} = -\int_{M\setminus D}\left(\Delta_{f}\left(f\right)\right)\phi^{2 }e^{-f}\] \[= \int_{M\setminus D}\left\langle\nabla\phi^{2},\nabla f\right\rangle e ^{-f}-\int_{\Sigma}f_{\nu}\phi^{2}e^{-f}\] \[\leq \int_{M\setminus D}\phi^{2}e^{-f}+\int_{M\setminus D}\left| \nabla\phi\right|^{2}e^{-f}\] \[-\int_{\Sigma}f_{\nu}\phi^{2}e^{-f}.\] This proves (3.3). Let \(\psi\) be a smooth function on \(M\setminus D\) with \(\psi=1\) on \(\Sigma\) and \(\psi=0\) outside a sufficiently large ball \(B_{p}(R).\) Setting \(\phi=\left|\nabla w\right|\psi\) in (3.3) we get that \[0 \leq \int_{M\setminus D}\left|\nabla\left(\left|\nabla w\right|\psi \right)\right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}\] \[= \int_{M\setminus D}\left|\nabla\left|\nabla w\right|\right|^{2} \psi^{2}e^{-f}+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla\left|\nabla w \right|^{2},\nabla\psi^{2}\right\rangle e^{-f}\] \[+\int_{M\setminus D}\left|\nabla\psi\right|^{2}\left|\nabla w \right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}.\] By (3.2), this yields \[0 \leq \frac{1}{2}\int_{M\setminus D}\left(\Delta_{f}\left|\nabla w \right|^{2}\right)\psi^{2}+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla \left|\nabla w\right|^{2},\nabla\psi^{2}\right\rangle e^{-f}\] \[+\int_{M\setminus D}\left|\nabla\psi\right|^{2}\left|\nabla w \right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}\] \[= \frac{1}{2}\int_{\Sigma}\left(\left|\nabla w\right|^{2}\right)_{ \nu}e^{-f}+\int_{M\setminus D}\left|\nabla\psi\right|^{2}\left|\nabla w \right|^{2}e^{-f}\] \[-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}.\] However, as \(\nu=\frac{\nabla w}{\left|\nabla w\right|}\) and \(H_{\Sigma}=0\), we see that \[\frac{1}{2}\left(\left|\nabla w\right|^{2}\right)_{\nu} = \left\langle\nabla\left|\nabla w\right|,\nabla w\right\rangle\] \[= \left(\Delta w\right)\left|\nabla w\right|\] \[= \left\langle\nabla f,\nabla w\right\rangle\left|\nabla w\right|\] \[= f_{\nu}\left|\nabla w\right|^{2}.\] Hence, (3.4) becomes an equality by letting \(R\rightarrow\infty\), which in turn forces (3.2) to be an equality. This implies the splitting of the manifold as a direct product \(\mathbb{R}\times\Sigma\). We refer to [29] for the details. An analogous result for expanding Ricci solitons holds true as well. 
Recall that an expanding Ricci soliton \(\left(M,g,f\right)\) satisfies the equation \[\mathrm{Ric}+\mathrm{Hess}\left(f\right)=-\frac{1}{2}g.\] We may normalize \(f\) (see [20]) such that \[S+\left|\nabla f\right|^{2}=-f.\] Moreover, the scalar curvature \(S\geq-\frac{n}{2}\) on \(M\) by [32]. **Theorem 3.2**.: _Let \(\left(M^{n},g,f\right)\) be an expanding gradient Ricci soliton with \(S\geq-\frac{n-1}{2}\) on \(M.\) Assume that there exists an embedded compact minimal hypersurface \(\Sigma\) in \(M.\) Then \(M\) splits isometrically as a direct product \(\mathbb{R}\times\Sigma.\)_ Proof.: Recall that by [30] such an expanding Ricci soliton must have one end or it splits as a direct product. Hence, by [6] we may assume as before that the integral homology \[H_{n-1}\left(M,\mathbb{Z}\right)=\left\{0\right\}\] and that \(\Sigma\) bounds a compact domain \(D\) in \(M.\) As \(\Delta_{f}\) has positive spectrum [30], \(M\) is \(f\)-nonparabolic. In particular, there exists function \(w>0\) on \(M\setminus D\) such that \[\Delta_{f}w = 0\text{ on }M\setminus D\] \[w = 1\text{ on }\Sigma\] \[\inf_{M\setminus D}w = 0.\] Moreover, \[\int_{M\setminus D}\left|\nabla w\right|^{2}e^{-f}<\infty.\] We now prove, similar to Proposition 2.1, that \[\frac{1}{2}\int_{M\setminus D}\phi^{2}e^{-f}\leq\int_{M\setminus D}\left| \nabla\phi\right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\phi^{2}e^{-f} \tag{3.5}\] for any smooth function \(\phi\) on \(M\setminus D\) that vanishes near infinity, where \(\nu=\frac{\nabla w}{\left|\nabla w\right|}\) is the unit normal to \(\Sigma=\left\{w=1\right\}.\) Direct calculation gives \[\Delta_{f}\left(f\right) = \Delta f-\left|\nabla f\right|^{2}\] \[= -\frac{n}{2}-S-\left|\nabla f\right|^{2}\] \[\leq -\frac{1}{2}-\left|\nabla f\right|^{2}.\] So we have \[\frac{1}{2}\int_{M\setminus D}\phi^{2}e^{-f} \leq -\int_{M\setminus D}\left(\Delta_{f}\left(f\right)\right)\phi^{2}e^ {-f}-\int_{M\setminus D}\left|\nabla f\right|^{2}\phi^{2}e^{-f}\] \[= \int_{M\setminus D}\left\langle\nabla\phi^{2},\nabla f\right\rangle e ^{-f}-\int_{\Sigma}f_{\nu}\phi^{2}e^{-f}-\int_{M\setminus D}\left|\nabla f \right|^{2}\phi^{2}e^{-f}\] \[\leq \int_{M\setminus D}\left|\nabla\phi\right|^{2}e^{-f}-\int_{ \Sigma}f_{\nu}\phi^{2}e^{-f}.\] This proves (3.5). We apply (3.5) to \(\phi=\left|\nabla w\right|\psi\), where \(\psi\) is a cut-off function as in Theorem 3.1 that \(\psi=1\) on \(\Sigma\) and that \(\psi=0\) outside the geodesic ball \(B_{p}(R)\) when \(R\) is large. 
It follows that \[\frac{1}{2}\int_{M\setminus D}\left|\nabla w\right|^{2}\psi^{2}e ^{-f} \leq \int_{M\setminus D}\left|\nabla\left|\nabla w\right|\right|^{2} \psi^{2}e^{-f}\] \[+\int_{M\setminus D}\left|\nabla w\right|^{2}\left|\nabla\psi \right|^{2}e^{-f}\] \[+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla\left|\nabla w \right|^{2},\nabla\psi^{2}\right\rangle e^{-f}\] \[-\int_{\Sigma}f_{\nu}|\nabla w|^{2}e^{-f}.\] Recall the Bochner formula \[\frac{1}{2}\Delta_{f}\left|\nabla w\right|^{2}\geq\left|\nabla\left|\nabla w \right|\right|^{2}-\frac{1}{2}\left|\nabla w\right|^{2}\ \ \text{on}\ M\setminus D.\] Plugging into (3.6) yields that \[\frac{1}{2}\int_{M\setminus D}\left|\nabla w\right|^{2}\psi^{2}e ^{-f} \leq \frac{1}{2}\int_{M\setminus D}\left(\Delta_{f}\left|\nabla w \right|^{2}\right)\psi^{2}e^{-f}+\frac{1}{2}\int_{M\setminus D}\left|\nabla w \right|^{2}\psi^{2}e^{-f}\] \[+\int_{M\setminus D}\left|\nabla w\right|^{2}\left|\nabla\psi \right|^{2}e^{-f}+\frac{1}{2}\int_{M\setminus D}\left\langle\nabla\left| \nabla w\right|^{2},\nabla\psi^{2}\right\rangle e^{-f}\] \[-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}\] \[= \frac{1}{2}\int_{\Sigma}\left(\left|\nabla w\right|^{2}\right)_{ \nu}e^{-f}+\frac{1}{2}\int_{M\setminus D}\left|\nabla w\right|^{2}\psi^{2}e ^{-f}\] \[+\int_{M\setminus D}\left|\nabla w\right|^{2}\left|\nabla\psi \right|^{2}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}.\] In conclusion, \[0 \leq \frac{1}{2}\int_{\Sigma}\left(\left|\nabla w\right|^{2}\right)_{ \nu}e^{-f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}\] \[+\int_{M\setminus D}\left|\nabla w\right|^{2}\left|\nabla\psi \right|^{2}e^{-f}.\] Making \(R\to\infty\), we get \[0\leq\frac{1}{2}\int_{\Sigma}\left(\left|\nabla w\right|^{2}\right)_{\nu}e^{ -f}-\int_{\Sigma}f_{\nu}\left|\nabla w\right|^{2}e^{-f}.\] Since \(\nu=\frac{\nabla w}{\left|\nabla w\right|},\) it follows that \[0\leq\int_{\Sigma}\left\langle\nabla\left|\nabla w\right|,\nabla w\right\rangle e ^{-f}-\int_{\Sigma}\left\langle\nabla f,\nabla w\right\rangle\left|\nabla w \right|e^{-f}.\] However, as \(w\) is \(f\)-harmonic, \[\left\langle\nabla\left|\nabla w\right|,\nabla w\right\rangle-\left\langle \nabla f,\nabla w\right\rangle\left|\nabla w\right|=-H_{\Sigma}\left|\nabla w \right|^{2}.\] Since \(\Sigma\) is minimal, this again shows the above inequality must be equality, which in turn forces the Bochner formula itself is also an equality. This suffices to conclude that \(M=\mathbb{R}\times\Sigma.\) One may refer to [30] for details. **Acknowledgment:** We wish to thank Pengfei Guan for his interest and comments. The first author was partially supported by the NSF grant DMS-1811845 and by a Simons Foundation grant.
2309.04235
Quasi-integrability and nonlinear resonances in cold atoms under modulation
Quantum dynamics of a collection of atoms subjected to phase modulation has been carefully revisited. We present an exact analysis of the evolution of a two-level system (represented by a spinor) under the action of a time-dependent matrix Hamiltonian. The dynamics is shown to evolve on two coupled potential energy surfaces, one of them binding while the other one scattering type. The dynamics is shown to be quasi-integrable with nonlinear resonances. The bounded dynamics with intermittent scattering at random moments presents the scenario reminiscent to Anderson and dynamical localization. We believe that a careful analytical investigation of a multi-component system which is classically non-integrable is relevant to many other fields, including quantum computation with multi-qubit system.
Rahul Gupta, Manan Jain, Sudhir R. Jain
2023-09-08T09:42:25Z
http://arxiv.org/abs/2309.04235v1
# Quasi-integrability and nonlinear resonances in cold atoms under modulation ###### Abstract Quantum dynamics of a collection of atoms subjected to phase modulation has been carefully revisited. We present an exact analysis of the evolution of a two-level system (represented by a spinor) under the action of a time-dependent matrix Hamiltonian. The dynamics is shown to evolve on two coupled potential energy surfaces, one of them binding while the other one scattering type. The dynamics is shown to be quasi-integrable with nonlinear resonances. The bounded dynamics with intermittent scattering at random moments presents the scenario reminiscent to Anderson and dynamical localization. We believe that a careful analytical investigation of a multi-component system which is classically non-integrable is relevant to many other fields, including quantum computation with multi-qubit system. ## 1 Introduction Evolution in the fields of ultracold atoms and quantum physics in the past few decades has led to recognition of these fields as a huge well-acclaimed arena for the exploration of popular subjects like quantum chaos [1], Feshbach resonances[2, 4, 3, 5, 6, 7, 8, 9, 10, 11, 12] and ultracold atomic mixtures[13, 14, 15, 16, 17], atom interferometry[18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32], atomic clocks [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43], quantum diffraction [44, 45] & quantum thermodynamics[46, 47, 48, 49]. This is due to rich internal structures, longer de Broglie wavelengths and tunable long-range interactions possessed by ultracold atoms. Furthermore, the research in the regime of lower temperatures has also been extended to molecules [50, 51]. Apart from these recent developments, there has been a sustained effort to realize parallels between atomic and condensed matter physics [52]. One of the ideas pursued with great interest is the localization of states in disordered systems, pioneered by Anderson [53]. Due to a common sense analogy between disorder and chaos, a connection between localization of wavefunctions of classically chaotic systems, and, disordered lattices of infinite [54] and finite extent [55] was brought out. Even in matter waves, the phenomenon of localization has been experimentally demonstrated [56]. Many years ago, an experiment carried out by the group led by Raizen [1] demonstrated the dynamical analog of Anderson Localization in a system of cold atoms. In this experiment, about one hundred thousand \({}^{23}\)Na atoms were trapped in a spherical volume of 300 \(\mu\)m at a temperature of 17 \(\mu\)K. At the end of the preparation step, the temperature was turned off and a modulated standing light field was switched on for 10 \(\mu\)s. The Hamiltonian describing the interaction of the sodium atom with the light field is given by [62] \[H_{0}=H_{\rm el}+\frac{p^{2}}{2m}+eF\cos\{k_{L}[x-\Delta L\sin\omega t]\}\cos \omega_{L}t. \tag{1}\] Here, \(H_{\rm el}\) contains the interaction of valence electrons with an atom. The last term denotes the electric dipole interaction of the electromagnetic field with an electron. Laser frequency and wavenumber are respectively denoted by \(\omega_{L}\) and \(k_{L}\), and \(\omega\) is the modulation frequency. Standing waves are generated by directing two counter-propagating laser beams into the trap, and, the modulation is achieved by passing one beam through an electro-optical phase modulator. 
The beam is made to strike a mirror in a cavity of length \(\Delta L\) which is moving with the modulation frequency, \(\omega\). The laser frequency was chosen close to the D\({}_{2}\) line of sodium. The electronic Hamiltonian can be reduced to a two-level system containing ground state \(\psi^{-}|g\rangle\) and an excited state, \(\psi^{+}|e\rangle\). \[\psi=\left(\begin{array}{c}\psi^{+}\\ \psi^{-}\end{array}\right)=\psi^{+}|e\rangle+\psi^{-}|g\rangle \tag{2}\] Taking the energy average of the two states as zero energy, the matrix elements of \(H_{\rm el}\) and \(eF\) together give \[H_{\rm el}+eF=\begin{pmatrix}\hbar\omega_{0}/2&\hbar\Omega\\ \hbar\Omega&-\hbar\omega_{0}/2\end{pmatrix}=\frac{\hbar\omega_{0}}{2}\sigma_{z}+\hbar\Omega\sigma_{x} \tag{3}\] where the transition frequency is denoted by \(\omega_{0}\), \(\Omega\) denotes Rabi frequency, and \(\sigma^{\prime}s\) are the Pauli matrices. Thus, \(H_{0}\) may be written as \[H_{0}=\frac{p^{2}}{2m}{\bf I}+\frac{\hbar\omega_{0}}{2}\sigma_{z}+\hbar\Omega \cos\{k_{L}[x-\Delta L\sin\omega t]\}\cos(\omega_{L}t)\sigma_{x}. \tag{4}\] where \({\bf I}\) denotes an identity matrix. After we present the general Hamiltonian below, in §2, we present the Hamiltonian under the Rotating Wave Approximation. Within this approximation, the case of adiabatic perturbation for the two cases of small and large detuning is considered. In §3, the exact solution for this matrix Hamiltonian is given. The method transforms the dynamics under the matrix Hamiltonian to dynamics on potential energy surfaces. Classical dynamics reveals the presence of nonlinear resonances in §4. The classical system obeys the Kolmogorov-Arnold-Moser (KAM) theorem [57], and hence is quasi-integrable [58]. In a related context of quantum Rabi model, a discussion on integrability [60] and symmetries [61] has been presented relatively recently. Special solutions are discussed as they have been used to analyze experiments carried out by different groups. For each case discussed at the quantum mechanical level, we also present classical phase space pictures and show that this atomic system presents a very interesting and deep instance of the association of quasi-integrability and dynamical localization. The phase space pictures exhibit certain misleading features in the approximated Hamiltonian, compared to the exact Hamiltonian obtained by systematic expansion in powers of \(\hbar\). _General Hamiltonian_ We now transform to a frame which is rotating with \(\omega_{L}\) about the \(z\)-axis in spin space: \[\psi_{\rm rot}=\exp\left(i\omega_{L}\sigma_{z}t/2\right)\psi. \tag{5}\] Substituting \(\psi\) in the Schrodinger equation, \(i\hbar\partial\psi/\partial t=H_{0}\psi\), we have the equation for the rotated wavefunction: \[H_{\rm rot}=\frac{p^{2}}{2m}{\bf I}+\frac{\hbar(\omega_{0}-\omega_{L})}{2}\sigma_{z}+\hbar\Omega\cos\{k_{L}[x-\Delta L\sin\omega t]\}.\] \[.\cos(\omega_{L}t)e^{i\omega_{L}\sigma_{z}t/2}\sigma_{x}e^{-i\omega_{L}\sigma_{z}t/2}. \tag{6}\] Using the standard identity, \(e^{i\omega_{L}\sigma_{z}t/2}\sigma_{x}e^{-i\omega_{L}\sigma_{z}t/2}=\sigma_{x}\cos\omega_{L}t-\sigma_{y}\sin\omega_{L}t\), we have the transformed Hamiltonian: \[H_{\rm rot}=\frac{p^{2}}{2m}{\bf I}+\frac{\hbar(\omega_{0}-\omega_{L})}{2}\sigma_{z}+\frac{\hbar\Omega}{2}\cos\{k_{L}[x-\Delta L\sin\omega t]\}.\] \[.[\sigma_{x}(1+\cos 2\omega_{L}t)-\sigma_{y}\sin 2\omega_{L}t].
\tag{7}\] This is the general Hamiltonian for the physical situation described above where there are terms oscillating with twice the \(\omega_{L}\). ## 2 Rotating Wave Approximation The Schrodinger equation for \(H_{\rm rot}\) is usually solved under the Rotating Wave Approximation (RWA) [62, 65]. Here the terms oscillating with frequency \(2\omega_{L}\) are neglected. This leads to a simplified Hamiltonian, \[H_{rot}^{\rm RWA}=\frac{p^{2}}{2m}{\bf I}+\hbar\Omega_{\rm eff}(\sigma_{z}\cos \alpha+\sigma_{x}\sin\alpha) \tag{8}\] where \[\Omega_{\rm eff}=\frac{1}{2}[(\omega_{0}-\omega_{L})^{2}+\Omega^{ 2}\cos^{2}\{k_{L}(x-\Delta L\sin\omega t)]\}]^{1/2},\] \[\tan\alpha=\frac{\Omega\cos[k_{L}(x-\Delta L\sin\omega t)]}{ \omega_{0}-\omega_{L}}. \tag{9}\] Let us rotate the state of this Hamiltonian further in the spin space by an angle \((-\alpha/2)\) about the \(y\)-axis, to obtain a new state, \(\psi^{\prime}=\psi^{\prime+}|e\rangle+\psi^{\prime-}|g\rangle=\exp(i\alpha \sigma_{y}/2)\psi_{rot}\) \[\psi^{\prime}=\left(\begin{array}{c}\cos(\alpha/2)e^{i\omega_{L}t/2}\psi^{+}+ \sin(\alpha/2)e^{-i\omega_{L}t/2}\psi^{-}\\ -\sin(\alpha/2)e^{i\omega_{L}t/2}\psi^{+}+\cos(\alpha/2)e^{-i\omega_{L}t/2} \psi^{-}\end{array}\right) \tag{10}\] in which the second term is diagonal. Consequently, the equation satisfied by \(\psi^{\prime}\) is \[i\hbar\frac{\partial\psi^{\prime}}{\partial t}=-\frac{\hbar}{2}\frac{\partial \alpha}{\partial t}\sigma_{y}\psi^{\prime}+e^{i\alpha\sigma_{y}/2}H_{\rm rot }^{\rm RWA}e^{-i\alpha\sigma_{y}/2}\psi^{\prime}=H_{\rm eff}^{\rm RWA}\psi^{ \prime}. \tag{11}\] But this will transform the kinetic term as [66]: \[e^{i\alpha\sigma_{y}/2}p^{2}{\bf I}e^{-i\alpha\sigma_{y}/2}\psi ^{\prime}=\left(p{\bf I}-\hbar{\bf A}\right)^{2}\psi^{\prime}={\bf\Pi}^{2} \psi^{\prime} \tag{12}\] \[{\bf A}=\frac{\sigma_{y}}{2}\frac{\partial\alpha}{\partial x}= \frac{-k_{L}\delta_{L}\Omega\sin[k_{L}(x-\Delta L\sin\omega t)]\sigma_{y}}{2 \left({\delta_{L}}^{2}+\Omega^{2}\cos^{2}[k_{L}(x-\Delta L\sin\omega t)]\right)} \tag{13}\] where \({\bf I}\) is an identity matrix. Now we can employ the well-known identity: \[e^{i\alpha(\hat{n}.\vec{\sigma})}\vec{\sigma}e^{-i\alpha(\hat{n}.\vec{\sigma} )}=\vec{\sigma}\cos 2\alpha+\hat{n}\times\vec{\sigma}\sin 2\alpha+\hat{n}( \hat{n}.\vec{\sigma})(1-\cos 2\alpha). \tag{14}\] While the "potential" part of the Hamiltonian becomes diagonal with these unitary transformations, the kinetic term modifies to \((p{\bf I}-\hbar{\bf A})^{2}\). This has terms of order 1, \(\hbar\), and \(\hbar^{2}\) - thus, a semiclassical expansion (and not a perturbative expansion) appears naturally. Moreover, since \({\bf A}\) has non-zero diagonal matrix elements, there is a possibility of a geometric phase appearing in the state of the atoms as the system evolves. This is indeed due to the cavity modulation. Dimensionally, \(\hbar{\bf A}/e\) is a magnetic vector potential. \(H_{\rm eff}^{\rm RWA}\) can be written as: \[H_{\rm eff}^{\rm RWA} =\frac{{\bf\Pi}^{2}}{2m}+\hbar\Omega_{\rm eff}\sigma_{z}-\frac{ \hbar}{2}\frac{\partial\alpha}{\partial t}\sigma_{y}, \tag{15}\] \[=\left[\frac{p^{2}}{2m}-\frac{\hbar^{2}}{4}\left(\frac{\partial \alpha}{\partial x}\right)^{2}\right]{\bf I}+\hbar\Omega_{\rm eff}\sigma_{z}+ \left(-\frac{\hbar}{2}\frac{\partial\alpha}{\partial t}-\hbar\frac{\partial \alpha}{\partial x}p+\frac{i\hbar^{2}}{2}\frac{\partial\alpha}{\partial x} \right)\sigma_{y}. \tag{16}\] Except for terms of order O(\(\hbar^{2}\)), each of the terms can make a significant contribution. 
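To make the quantities entering (8), (9) and (13) concrete, the short numerical sketch below evaluates the effective Rabi frequency \(\Omega_{\rm eff}\), the mixing angle \(\alpha\) and the \(\sigma_{y}\) component of the artificial gauge potential \({\bf A}\) on a few positions. The parameter values are illustrative placeholders (only the roughly 589 nm sodium D\({}_{2}\) wavelength reflects the physical setting) and are not the experimental numbers of [1].

```python
import numpy as np

# Illustrative (not experimental) parameter values, in angular-frequency units.
Omega   = 2 * np.pi * 10e6      # Rabi frequency
delta_L = 2 * np.pi * 1e6       # detuning omega_0 - omega_L
k_L     = 2 * np.pi / 589e-9    # laser wavenumber (sodium D2 line, ~589 nm)
dL      = 0.5e-6                # mirror modulation amplitude Delta L
omega   = 2 * np.pi * 1e6       # modulation frequency

def phase(x, t):
    """Argument of the modulated standing wave, k_L (x - Delta L sin(omega t))."""
    return k_L * (x - dL * np.sin(omega * t))

def omega_eff(x, t):
    """Effective Rabi frequency of Eq. (9)."""
    return 0.5 * np.sqrt(delta_L**2 + Omega**2 * np.cos(phase(x, t))**2)

def alpha(x, t):
    """Mixing angle of Eq. (9): tan(alpha) = Omega cos(phase) / delta_L."""
    return np.arctan2(Omega * np.cos(phase(x, t)), delta_L)

def gauge_A(x, t):
    """Coefficient of sigma_y in the artificial gauge potential A of Eq. (13)."""
    num = -k_L * delta_L * Omega * np.sin(phase(x, t))
    den = 2 * (delta_L**2 + Omega**2 * np.cos(phase(x, t))**2)
    return num / den

x = np.linspace(0.0, 589e-9, 5)   # a few points across one optical wavelength
print(omega_eff(x, 0.0))
print(alpha(x, 0.0))
print(gauge_A(x, 0.0))
```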
At this point, one of the possible simplifications occurs if \(\alpha\) is slowly varying with time. This leads us to consider applying the adiabatic approximation, which we discuss now. ### Adiabatic variation We may neglect the term \(\hbar\sigma_{y}d\alpha/dt\). But note that in this case: \[\hbar\sigma_{y}\frac{d\alpha}{dt}=\hbar\frac{\partial\alpha}{\partial x}p \sigma_{y}+\hbar\frac{\partial\alpha}{\partial t}\sigma_{y}\to 0. \tag{17}\] The adiabatic Hamiltonian is: \[H_{\rm ad}^{\rm RWA}=\left[\frac{p^{2}}{2m}-\frac{\hbar^{2}}{4}\left(\frac{ \partial\alpha}{\partial x}\right)^{2}\right]{\bf I}+\hbar\Omega_{\rm eff} \sigma_{z}+\left(\frac{\hbar}{2}\frac{\partial\alpha}{\partial t}+\frac{i \hbar^{2}}{2}\frac{\partial\alpha}{\partial x}\right)\sigma_{y}. \tag{18}\] It matters a lot if the detuning is small or large. This is because \[\frac{\partial\alpha}{\partial x}=-\frac{k_{L}\frac{\delta_{L}}{\Omega}\sin[k_{L}(x -\Delta L\sin\omega t)]}{\left(\frac{\delta_{L}}{\Omega}\right)^{2}+\cos^{2}[k _{L}(x-\Delta L\sin\omega t)]};\qquad\frac{\partial\alpha}{\partial t}=\frac{ \omega\frac{\delta_{L}}{\Omega}\sin[k_{L}(x-\Delta L\sin\omega t)]\cos\omega t} {\left(\frac{\delta_{L}}{\Omega}\right)^{2}+\cos^{2}[k_{L}(x-\Delta L\sin \omega t)]}. \tag{19}\] So either for small or large detuning, \[\delta_{L}\ll\Omega\quad\text{or}\quad\delta_{L}\gg\Omega\quad\Rightarrow \quad\frac{\partial\alpha}{\partial t},\frac{\partial\alpha}{\partial x}\to 0. \tag{20}\] #### 2.1.1 Small detuning Here, \(\omega_{0}\sim\omega_{L}\), thus \(\tan\alpha\to\infty\) or \(\alpha\sim\pi/2\). Considering (20) and keeping the terms up O(\(\hbar\)), the adiabatic Hamiltonian further simplifies to \[H^{\text{RWA}}_{\text{ad,s}}=\frac{p^{2}}{2m}\mathbf{I}+\hbar\Omega_{\text{ eff}}\sigma_{z}. \tag{21}\] Exploiting the smallness of detuning, we may expand binomially to obtain \[H^{\text{RWA},\pm}_{\text{ad,s}} =\frac{p^{2}}{2m}\pm\frac{\hbar\Omega}{2}\cos[k_{L}(x-\Delta L\sin \omega t)]\left[1+\frac{(\omega_{0}-\omega_{L})^{2}}{2\Omega^{2}\cos^{2}[k_{L}( x-\Delta L\sin\omega t)]}\right]\] \[+\mathcal{O}\left(\left(\frac{\omega_{0}-\omega_{L}}{\Omega} \right)^{3}\right). \tag{22}\] These provide the two potential energy surfaces on which the two-level system evolves, connected by tunneling. This can be seen by the fact that the intersection of the two curves occurs when \(\Omega_{\text{eff}}\) is zero, leading to \[x =\Delta L\,\sin\omega t+\frac{\pi}{2k_{L}}+i\log\left(\sqrt{1- \frac{\delta_{L}^{2}}{2\Omega^{2}}}-\frac{\delta_{L}}{\sqrt{2}\Omega}\right)\] \[\simeq\Delta L\,\sin\omega t+\frac{\pi}{2k_{L}}-i\frac{\sqrt{2} \delta_{L}}{2\Omega} \tag{23}\] for small detuning. The binding part of the potential in (22) supports eigenvalues. However, since the Hamiltonian is periodic in time, the eigenvalues are quasienergies. Owing to the imaginary part, these are more precisely "quasienergy resonances". #### 2.1.2 Large detuning We consider the case where we have RWA and adiabatic approximation but \(\delta_{L}\gg\Omega\). Then we have the Hamiltonian, \[H^{\text{RWA}}_{\text{ad,l}}=\begin{pmatrix}p^{2}/2m+\hbar\Omega_{\text{eff}}& 0\\ 0&p^{2}/2m-\hbar\Omega_{\text{eff}}\end{pmatrix}. \tag{24}\] This can be decomposed into two Hamiltonians: \[H^{\rm RWA,\pm}_{\rm ad,l}=\frac{p^{2}}{2m}\pm\frac{\hbar\delta_{L}}{2}\left[1+ \frac{\Omega^{2}}{2\delta_{L}^{2}}\cos^{2}[k_{L}(x-\Delta L\sin\omega t)]\right]+ \mathcal{O}\left(\left(\frac{\Omega}{\omega_{0}-\omega_{L}}\right)^{3}\right). 
\tag{25}\] The potential energy curves intersect when \[x(t)=\left(n+\frac{1}{2}\right)\frac{\pi}{k_{L}}+\Delta L\,\sin\omega t. \tag{26}\] Here the intersection points are real where the real part is the same as for small detuning. The potential energy curves support sharp quasienergies. ## 3 Exact solution We now return to the (7) and lift all the approximations considered in the last Section. The Hamiltonian is written as \[H_{\rm rot}=\frac{p^{2}}{2m}\mathbf{I}+\begin{pmatrix}a&b\\ b*&-a\end{pmatrix}\equiv\frac{p^{2}}{2m}\mathbf{I}+\mathcal{M} \tag{27}\] where \(a=\hbar(\omega_{0}-\omega_{L})/2\), \(b=b_{1}+ib_{2}\) with \[b_{1} =\frac{\hbar\Omega}{2}\cos[k_{L}(x-\Delta L\sin\omega t)](1+\cos 2 \omega_{L}t),\] \[b_{2} =\frac{\hbar\Omega}{2}\cos[k_{L}(x-\Delta L\sin\omega t)]\sin 2 \omega_{L}t. \tag{28}\] The matrix, denoted by \(\mathcal{M}\) in (27) can be diagonalized by a matrix \(\mathcal{S}\) to get the diagonal matrix, \(\mathcal{J}\). The matrices are \[\mathcal{S}=\begin{pmatrix}\frac{(a-\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}})(b_{1}+ib_ {2})}{b_{1}^{2}+b_{2}^{2}}&\frac{(a+\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}})(b_{1}+ib _{2})}{b_{1}^{2}+b_{2}^{2}}\\ 1&1\end{pmatrix} \tag{29}\] and \[\mathcal{J}=\begin{pmatrix}-\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}&0\\ 0&\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}\end{pmatrix}. \tag{30}\] Define \(\psi_{1}=\mathcal{S}^{-1}\psi_{\rm rot}\) with \(i\hbar\partial\psi_{\rm rot}/\partial t=\mathcal{H}\psi_{\rm rot}\). The equation for the time evolution of \(\psi_{1}\) is \[i\hbar\frac{\partial\psi_{1}}{\partial t}=-i\hbar\mathcal{S}^{-1}\frac{ \partial\mathcal{S}}{\partial t}\psi_{1}+\mathcal{S}^{-1}\frac{p^{2}}{2m} \mathbf{I}\mathcal{S}\psi_{1}+\mathcal{J}\psi_{1}. \tag{31}\] Now, \(\mathcal{S}^{-1}p^{2}\mathcal{S}=(\mathcal{S}^{-1}p\mathcal{S})^{2}=(p-i \hbar\mathcal{S}^{-1}\partial\mathcal{S}/\partial x)^{2}\). Here we again have a vector potential which is an artificial gauge field. The Hamiltonian is thus written as an expansion [66, 67], \[H=H_{0}+\hbar H_{1}+\hbar^{2}H_{2} \tag{32}\] with \(H_{0}\) has a simple form: \[H_{0}=\frac{p^{2}}{2m}\mathbf{I}+\begin{pmatrix}-\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2 }}&0\\ 0&\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}.\end{pmatrix} \tag{33}\] Writing \(\psi_{1}=(\psi_{1}^{(+)}\quad\psi_{1}^{(-)})^{T}\) with the superscript, \(T\) denoting the transpose, we have written the state with two components. The classical Hamiltonians corresponding to the states, \(\psi_{1}^{(\pm)}\) are \[H_{0}^{(\pm)}=\frac{p^{2}}{2m}\pm\frac{\hbar(\omega_{0}-\omega_{L})}{2}\left( 1+\frac{4\Omega^{2}}{(\omega_{0}-\omega_{L})^{2}}\cos^{2}[k_{L}(x-\Delta L\sin \omega t)]\cos^{2}\omega_{L}t\right)^{1/2}. \tag{34}\] Usually, \(\psi_{1}^{(+)}\) is subjected to a binding potential and \(\psi_{1}^{(-)}\) is evolving on a scattering potential. There are two potential energy surfaces, \(\pm\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}\) on which the full two-component wavefunction, \(\psi_{1}\) evolves. The potential energy surfaces meet at the solution of \[a^{2}+b_{1}^{2}+b_{2}^{2}=0. \tag{35}\] The solution is \[x =\Delta L\sin\omega t+\frac{1}{k_{L}}\cos^{-1}\left[\pm i\,\frac{ (\omega_{0}-\omega_{L})}{2\Omega}\sec(\omega_{L}t)\right]\] \[=\Delta L\sin\omega t+\frac{\pi}{2k_{L}}+i\,\frac{1}{k_{L}}\log \left[1\mp\frac{\delta_{L}}{2\Omega}\sec(\omega_{L}t)+\frac{\delta_{L}^{2}}{ 8\Omega^{2}}\sec^{2}(\omega_{L}t)\right]. 
\tag{36}\] For small detuning (\(\delta_{L}\ll\Omega\)), the potential curves intersect at \[x=\Delta L\sin\omega t+\frac{\pi}{2k_{L}}\mp i\,\frac{\delta_{L}}{2\Omega}\sec(\omega_{L}t)\pm i\,\frac{\delta_{L}^{3}}{48\Omega^{3}}\sec^{3}(\omega_{L}t). \tag{37}\]

Figure 1: Potential Energy Surface (PES) at (a) large detuning (\(\delta_{L}\gg\Omega\)) and (b) small detuning (\(\delta_{L}\ll\Omega\)). At large detuning the gap shrinks allowing a larger region for space for crossing of PES.

The complex value of crossing of the potential energy surfaces implies the tunneling of atoms. The tunneling across these surfaces where the underlying dynamics is nonlinear has some very interesting related phenomena like resonance assisted tunneling [63], which have been recently experimentally realized [64]. The Fig. 1 (a) and (b) show these crossings along the complex position plane. We note that the crossing gap at the null imaginary position plane vanishes as one reaches closer to resonance (at small detuning) and remains wide open at large detuning. In (34), for large detuning, \(\Omega^{2}/(\omega_{0}-\omega_{L})^{2}\ll 1\), a Taylor expansion immediately yields \[H_{0,l}^{(\pm)}=\frac{p^{2}}{2m}\pm\frac{\hbar(\omega_{0}-\omega_{L})}{2}\left(1+\frac{2\Omega^{2}}{(\omega_{0}-\omega_{L})^{2}}\cos^{2}[k_{L}(x-\Delta L\sin\omega t)]\cos^{2}\omega_{L}t\right). \tag{38}\] Among the two Hamiltonians, \(H_{0,l}^{(-)}\) is binding; it can be seen that the second term in the Taylor expansion of \(\cos[k_{L}(x-\Delta L\sin\omega t)]\) along with an overall negative sign will make this roughly parabolic for small arguments, at least. For the same reason, \(H_{0}^{(+)}\) is a scattering potential. The differences in Poincare sections for various cases can be seen in the following figure.

Figure 2: Comparison of Poincare sections for Hamiltonians under different approximations for the case of large detuning for the same set of parameters used in Fig. 3. (a) Shows the un-approximated case corresponding to the exact solution. (b) Shows the application of binomial approximation to the exact solution. (c) Corresponds to the RWA+Adiabatic approximation and (d) corresponds to the RWA+adiabatic+Binomial approximation. Initial conditions and number of evolution steps are kept the same for all cases here.

We found that the 3 island ring which is present in both un-approximated case and RWA+Adiabatic case vanishes if we make a binomial approximation implying origin of this resonance is purely because of higher order terms of (38) and (22). We also note that the chaos is more apparent in the binomial case but less severe in all other cases. We now study the classical mechanics of these Hamiltonians.
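As a quick cross-check of the diagonalization in (27)-(30), the snippet below builds the matrix \(\mathcal{M}\) from the entries (28) at one sample point and confirms numerically that its eigenvalues are \(\pm\sqrt{a^{2}+b_{1}^{2}+b_{2}^{2}}\), i.e. the two potential energy surfaces appearing in (33). The sample values are arbitrary dimensionless placeholders with \(\hbar=1\), not experimental parameters.

```python
import numpy as np

# Arbitrary dimensionless sample point (hbar = 1); not experimental parameters.
delta_L, Omega, k_L, dL, omega, omega_L = 1.0, 5.0, 1.0, 0.3, 1.0, 10.0
x, t = 0.4, 1.3

phi = k_L * (x - dL * np.sin(omega * t))        # argument of the standing wave
a = delta_L / 2.0                               # hbar*(omega_0 - omega_L)/2
b1 = 0.5 * Omega * np.cos(phi) * (1.0 + np.cos(2.0 * omega_L * t))   # Eq. (28)
b2 = 0.5 * Omega * np.cos(phi) * np.sin(2.0 * omega_L * t)

M = np.array([[a, b1 + 1j * b2],
              [b1 - 1j * b2, -a]])              # the matrix of Eq. (27)

root = np.sqrt(a**2 + b1**2 + b2**2)            # Eq. (30) / Eq. (33)
print(np.sort(np.linalg.eigvalsh(M)))           # numerically computed eigenvalues
print(-root, root)                              # the two potential energy surfaces
```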
The simplified Hamiltonian yields: \[H_{0}^{-}=\frac{p^{2}}{2}-\frac{4K}{\eta}\left[1+2\eta(1+\cos(x-\lambda\sin t ))\cos^{2}\gamma t\right]^{\frac{1}{2}} \tag{40}\] Now, using the same transformations (39), we write the Hamiltonians for large detuning, neglecting the constant terms: \[H_{0,l}^{-}\simeq\frac{p^{2}}{2}-4K\cos(x-\lambda\sin t)\cos^{2 }\gamma t, \tag{41}\] \[H_{\rm ad,l}^{\rm RWA,-}\simeq\frac{p^{2}}{2}-K\cos(x-\lambda \sin t). \tag{42}\] This clearly implies a drastic change in the equation since if \(\gamma\gg 1\), thus even if we use \(\langle\cos^{2}\gamma t\rangle=1/2\), the second term contributes double compared to the contribution coming from the usual case with adiabatic and RWA approximation. In order to understand the underlying phase space structure, we initialize 1000 ultracold atoms (blue dots) in one of the island in the Poincare section taken in steps of modulation time period \(T\) as shown in Fig. 3 (top) and look at its stroboscopic evolution in multiples of the modulation time period. We find that after each modulation period, atoms move from one island to another lying around the same larger elliptic-like orbit (Fig. 3 (middle)). Similarly, we find that the number of islands is equal to (or twice if \(n\) is even) the number of modulation periods \(n\) for the marked islands in Fig. 3 (bottom). In other words, these islands satisfies \(T_{\rm orbit}=nT\) or \(\Omega_{\rm orbit}/\omega=1/n\). To study the origin of these patterns in resonance structures, we write the dimensionless Hamiltonian (42) in action-angle variables. Let us write one of the RWA Hamiltonians as a perturbed harmonic oscillator: \[H_{0,\mathrm{l}}^{\mathrm{RWA},-} =\frac{p^{2}}{2}+\frac{Kx^{2}}{2}-\left(K\cos(x-\lambda\sin t)+ \frac{Kx^{2}}{2}\right) \tag{43}\] \[=H_{\mathrm{h.o.}}+\epsilon\Delta H. \tag{44}\] where \(\epsilon\) is introduced for book-keeping (eventually, we shall put \(\epsilon=1\)). Employing the oscillator action-angle variables, \((J,\theta)\), with \(x=\sqrt{\frac{J}{\pi\Omega}}\sin(\theta)\) and \(p=\sqrt{\frac{J\Omega}{\pi}}\cos(\theta)\) with \(K=\Omega^{2}\), the Hamiltonians are: \[H_{h.o.} =\frac{\Omega J}{2\pi} \tag{45}\] \[\Delta H =-\Omega^{2}\cos\left(\sqrt{\frac{J}{\pi\Omega}}\sin\theta- \lambda\sin t\right)-\frac{J\Omega}{2\pi}\sin^{2}\theta. \tag{46}\] We use the classical time-dependent perturbation theory [57] to calculate the associated action of this Hamiltonian up to first order in perturbation. For this, we transform the action variables in a way that the new Hamiltonian \(\bar{H}\) is only a function of the new action variable \(\bar{J}\) alone. We obtain \[\langle\Delta H\rangle =\frac{1}{2\pi}\int_{0}^{2\pi}dt\frac{1}{2\pi}\int_{0}^{2\pi}d \theta\Delta H(J,\theta,t)\] \[=-\Omega^{2}J_{0}\left(\sqrt{\frac{\bar{J}}{\Omega\pi}}\right)J_ {0}(\lambda)-\frac{\bar{J}\Omega}{4\pi} \tag{47}\] \[\bar{H}(\bar{J}) =\frac{\Omega\bar{J}}{2\pi}-\epsilon\Omega^{2}J_{0}\left(\sqrt{ \frac{\bar{J}}{\Omega\pi}}\right)J_{0}(\lambda)-\epsilon\frac{\bar{J}\Omega}{ 4\pi} \tag{48}\] Figure 3: Poincaré Sections taken in steps of modulation period using the same parameter as in [1]. (a) 1000 ultracold atoms (purple dots) are loaded in one of the islands of stability in the Poincare section taken in steps of the driving period T. (b) stroboscopic evolution of the ultracold atoms reveals that they evolve with a period 4T. (c) Similarly, loading on different islands of stability shows the existence of 3T, 11T/3, 4T and 5T periods predominantly. 
where \(J_{0}(.)\) is the cylindrical Bessel function of order zero. The new frequency is \[\Omega^{\prime}(\bar{J})=2\pi\frac{\partial\bar{H}}{\partial\bar{J}}=\Omega(1- \epsilon/2)-2\epsilon\pi\Omega^{2}J_{0}^{\prime}\left(\sqrt{\frac{\bar{J}}{ \Omega\pi}}\right)J_{0}(\lambda) \tag{49}\] where prime on the Bessel function denotes a derivative with respect to its argument. We subtract this \(\epsilon(\Delta H)\) from \(\epsilon\Delta H\) to obtain the oscillating part \(\epsilon\{\Delta H\}\). For calculating the integral, we expand the potential term using Jacobi-Anger expansion [59]\(e^{iz\sin\theta}=\sum_{n=-\infty}^{+\infty}J_{n}(z)e^{in\theta}\): \[\{\Delta H\} =-\sum_{n,m=-\infty}^{\infty}\Omega^{2}J_{n}\left(\sqrt{\frac{ \bar{J}}{\Omega\pi}}\right)J_{m}(\lambda)\cos(n\bar{\theta}-mt)+\frac{\bar{J} \Omega}{4\pi}\cos 2\bar{\theta} \tag{50}\] \[\equiv\sum_{n,m=-\infty}^{\infty}\Delta H_{n,m}(\bar{J},\bar{ \theta},t)+\frac{\bar{J}\Omega}{4\pi}\cos 2\bar{\theta} \tag{51}\] where both \(n,m\) are non-zero. The change in action \(\epsilon\Delta S\) can be calculated as \[\epsilon\Delta S =-\int^{t}dt\epsilon\{\Delta H\} \tag{52}\] \[=\sum_{n,m=-\infty}^{\infty}\epsilon\Delta S_{n,m}(\bar{J},\bar{ \theta},t)+\frac{\epsilon\bar{J}\Omega}{8\pi\bar{\Omega}(\bar{J})}\sin 2\bar{\theta} \tag{53}\] where \[\epsilon\Delta S_{n,m}=\frac{-\epsilon\Omega^{2}}{n\bar{\Omega}(\bar{J})-m}J_ {n}\left(\sqrt{\frac{\bar{J}}{\Omega\pi}}\right)J_{m}(\lambda)\sin(n\bar{ \theta}-mt) \tag{54}\] Consequent to the above, \[\bar{J}=J-\epsilon\frac{\partial\Delta S}{\partial\theta}(J,\theta,t)\ ;\ \bar{ \theta}=\theta+\epsilon\frac{\partial\Delta S}{\partial J}(J,\theta,t). \tag{55}\] The new action-angle variables can be calculated up to first order as \[\bar{J} =J+\epsilon\frac{n\Omega^{2}}{n\bar{\Omega}(J)-m}J_{n}\left( \sqrt{\frac{J}{\Omega\pi}}\right)J_{m}(\lambda)\cos(n\theta-mt)-\epsilon\frac {J\Omega}{4\pi}\cos 2\theta, \tag{56}\] \[\bar{\theta} =\theta+\epsilon\frac{-\Omega^{2}}{n\bar{\Omega}(J)-m}J_{n}^{ \prime}\left(\sqrt{\frac{J}{\Omega\pi}}\right)J_{m}(\lambda)\sin(n\theta-mt)+ \frac{\epsilon\Omega}{8\pi\bar{\Omega}(\bar{J})}\sin 2\theta. \tag{57}\] Thus we have obtained the action with resonant denominators which leads to resonant condition \[n\bar{\Omega}(\bar{J})=m\omega \tag{58}\] where \(\omega\) is the modulation frequency and \(\bar{\Omega}(\bar{J})\) is the frequency of the orbit, \(\omega\) is obtained when we substitute actual time, \(t\) in place of dimensionless time from (39). This explains the observed pattern in Fig. 3 : the orbital periods are integral multiples of the modulation period at the resonance. The strength of \((\mathrm{n},\mathrm{m})^{\mathrm{th}}\) resonance is determined by the product of two Bessel functions \(J_{n}(\sqrt{J/\Omega\pi})\) and \(J_{m}(\lambda)\). Using the first-order correction in the frequency \(\Omega(J)\), we plot it as a function of \(J\) in Fig. 4. We see that only the 1:3 resonance is allowed under first-order correction. This means that all other resonances in Fig. 3 must originate from the higher-order perturbation terms in correction for \(\bar{\Omega}\) and \(\bar{J}\). That explains the dominance of primary islands in (n,m)=(3,1) resonance and the presence of secondary islands in other resonances. For the expression without binomial approximation (42) where in Fig.2 we saw (3,1) resonance to be dominantly present, but without binomial approximation (25), this resonance is suppressed and doesn't appear. 
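The resonance condition (58) together with the first-order frequency (49) can be checked directly with standard Bessel-function routines. The sketch below uses the identity \(J_{0}^{\prime}(z)=-J_{1}(z)\) and placeholder values of \(K\) and \(\lambda\) (the experimental parameters of [1] are not quoted in this excerpt), so it only illustrates the recipe behind Fig. 4 rather than reproducing it.

```python
import numpy as np
from scipy.special import j0, j1

# Placeholder dimensionless parameters; the experimental K and lambda of Eq. (39)
# are not quoted here, so the numbers below only illustrate the procedure.
K, lam, eps = 0.36, 2.0, 1.0
Omega = np.sqrt(K)                      # harmonic frequency, K = Omega^2

def corrected_frequency(J):
    """First-order frequency of Eq. (49), using J_0'(z) = -J_1(z)."""
    z = np.sqrt(J / (Omega * np.pi))
    return Omega * (1.0 - eps / 2.0) + 2.0 * eps * np.pi * Omega**2 * j1(z) * j0(lam)

J = np.linspace(1e-6, 50.0, 4000)
freq = corrected_frequency(J)
print("range of corrected frequency:", freq.min(), freq.max())

# A resonance n*Omega_bar = m*omega (omega = 1) survives at first order only if
# the horizontal line at m/n intersects the curve of Eq. (49), as in Fig. 4.
for n, m in [(3, 1), (4, 1), (5, 1), (11, 3)]:
    hit = freq.min() <= m / n <= freq.max()
    print((n, m), "intersects" if hit else "no intersection at first order")
```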
Such differences between the approximated and the exact Hamiltonians can lead to significant corrections to both the quantum and the classical equations, despite being in the large-detuning limit. Similarly, very high-order resonances are enhanced by the binomial approximation, for which the chaotic region near the edges of phase space is visibly enlarged.

## 5 Dynamical localization

Let us imagine that we prepare the initial state of the atoms as a localized wavepacket. As the system evolves, the wavepacket spreads. The wavefunction of the two-state system is shown to evolve, in all versions of the description, on a pair of potential energy surfaces. The form of these potentials readily supports bounded dynamics on one of the potentials. The complex intersection points provide paths for tunneling. The succession of these two dynamical features leads to localization of the wavepacket. The physics of this is nothing but the well-known argument by Mott [68] and Anderson [53], adapted in recent times in quantum chaos [54, 55].

Figure 4: Only those resonances whose frequency ratio \(\Omega(J):\omega\) (with \(\omega=1\) here) intersects the curve \(\Omega(J)\) are allowed.

## 6 Conclusions

The matrix Hamiltonian driving a two-level atom has been unitarily transformed to a series of Hamiltonians arranged in powers of the Planck constant, which is the precise meaning of a semiclassical expansion. A successive application of these transformations brings out an effective Hamiltonian to any desired level of accuracy. In principle, one could perform computations to all orders of \(\hbar\). The system is shown to tunnel between two potential energy surfaces, and the underlying dynamics is quasi-integrable in the KAM sense. The analysis has been carried out in the past by employing physically appealing and rather standard approximations. We recapitulated these and then provided an exact solution, where by "exact" we mean in the sense described in the preceding paragraph. We have seen that a matrix Hamiltonian acts on a spinor eigenstate. At different orders of the Planck constant, there are different potential energy surfaces on which the system is shown to evolve. If one makes a binomial approximation in the Hamiltonian to treat the system, the detailed features in the Poincare surfaces of section differ. The approximated analysis has certain appeal insofar as tunneling between islands is seen clearly. However, to establish the existence of the islands and of tunneling, we show that the onset of islands of stability can already be seen from first-order perturbation theory. The analysis reveals a vector potential that is related to an artificial gauge field. We believe that knowing the form of this could be useful for experiments with cold atoms and in the developing fields of Hamiltonian engineering, quantum sensing and quantum interference. We have not developed these aspects here. As referred to in the Introduction, our results add to the discussion of integrability in matrix models for atomic systems, in particular to the work on the quantum Rabi model [60]. In the future, by adding nonlinear terms to incorporate interactions that allow control of atomic states, these works could be useful for critical quantum metrology [69]. Control of states of multi-qubit systems [70] and their protection [71] belongs to the present theme in a rather compelling manner.

**Acknowledgements** We thank Sandeep Joshi for several helpful discussions. RG acknowledges the fellowship support received from CSIR-HRDG.
2309.14740
Fraction Constraint in Partial Wave Analysis
To resolve the non-convex optimization problem in partial wave analysis, this paper introduces a novel approach that incorporates fraction constraints into the likelihood function. This method offers significant improvements in both the efficiency of pole searching and the reliability of resonance selection within partial wave analysis.
Xiang Dong, Chu-Cheng Pan, Yu-Chang Sun, Ao-Yan Cheng, Ao-Bo Wang, Hao Cai, Kai Zhu
2023-09-26T08:08:18Z
http://arxiv.org/abs/2309.14740v1
# Fraction Constraint in Partial Wave Analysis ###### Abstract To resolve the non-convex optimization problem in partial wave analysis, this paper introduces a novel approach that incorporates fraction constraints into the likelihood function. This method offers significant improvements in both the efficiency of pole searching and the reliability of resonance selection within partial wave analysis. ## 1 Introduction Partial wave analysis (PWA) is a powerful technique used in particle physics to study the angular distributions of particles produced in scattering or decay processes [1]. By decomposing the final-state wave functions into a sum of partial waves with different angular momentum quantum numbers, PWA allows people to extract valuable information about the underlying dynamics of the interaction[2, 3]. This method enables people to identify and study resonances, determine their properties such as masses and widths, and understand the contributing amplitudes and phase shifts. PWA is particularly useful in experiments involving complex final states or multiple particles, where it helps disentangle the different contributions and extract meaningful physical observables. PWA is widely employed in experiments involving hadron colliders, electron-positron colliders, and other facilities, making it an essential tool for studying the fundamental building blocks of matter and the forces that govern their interaction. However, PWA usually suffers from non-convex optimization problems. Non-convexity arises due to the complex nature of the underlying physics and the presence of multiple resonances, therefore numerous undetermined parameters in a fitting model [4]. Unlike convex optimization problems that have a unique global minimum, non-convex optimization problems have multiple local minima. This makes finding the best fit parameters challenging, as traditional optimization algorithms can get trapped in local minima and fail to find the global or near-global minimum. The non-convex nature of the problem introduces uncertainties and can lead to biased or inaccurate results. Overcoming these challenges requires the development and application of specialized non-convex optimization techniques that can effectively explore the parameter space and find the best fit solutions. In this paper, we propose to mitigate the non-convex optimization problem in PWA by modifying the likelihood function with an additional penalty term. This term is related to a sum of all resonance state fractions. After introduce the definition of the additional penalty term, we perform two simplified PWAs, one is without the penalty term but the other one with, on a toy Monte Carlo (MC) sample. General features are obtained for the proposed PWA method, and compared with the conventional one. Then we discuss how to obtain a crucial parameter in the penalty term by a scanning method, that is more practical in a real measurement than the previously pedagogical one. Meanwhile, we show the proposed method is helpful to select reasonable contributions of resonances. A short summary then ends this paper. ## 2 Fraction Constraints to the Partial Wave Analysis As mentioned in the introduction, there are usually many undetermined parameters in a PWA, so the fitting is essentially a non-convex optimization problem, that will result in a non-global minimum point, sometimes as an unreasonable result. 
To resolve this problem, we propose to add a penalty term to the traditional logarithm of the likelihood, \(-\ln L\), to construct a new target function \(\tilde{M}\): \[\tilde{M}=-\ln L+\lambda(\mathbf{SF}-\overline{\mathbf{SF}})^{2}\, \tag{1}\] where \(\mathbf{SF}\) is the sum of the fractions of total events, \(\mathbf{SF}=\sum_{k}\mathbf{F}_{k}\), \(\overline{\mathbf{SF}}\) is its expected value, \(k\) is the index of the amplitude, and \(\lambda\) is the strict-factor. The determination of \(\overline{\mathbf{SF}}\) and \(\lambda\) is based on the situations that will be discussed later. Explicitly, the fraction of each channel is defined as: \[\mathbf{F}_{k}=\frac{1}{N}\sum_{i=1}^{N}\frac{\left|c_{k}M_{k}\left(\zeta_{i} \right)\right|^{2}}{\left|\sum_{k}c_{k}e^{i\phi_{k}}M_{k}\left(\zeta_{i}\right) \right|^{2}}\, \tag{2}\] where \(N\) is the number of events, \(M_{k}\) is the (normalized) amplitude, \(\zeta_{i}\) represents both physical and nuisance parameters that may depend dynamically on the \(i\)th event, and \(c_{k}\) and \(\phi_{k}\) are the magnitude and phase of each amplitude. By introducing this additional term, we restrict the feasible region and transform the original optimization problem into a "constrained non-convex optimization", which is potentially more tractable. Since \(\mathbf{SF}\) represents only the contribution from the non-interference effect, the value of \(\mathbf{SF}\) is usually not 100%. When constructive interference dominates between resonance states, \(\mathbf{SF}\) will be less than 100%; when destructive interference dominates between resonance states, \(\mathbf{SF}\) will be greater than 100%. But whether the interference is constructive or destructive, we expect that the \(\mathbf{SF}\) of a reasonable physical solution will not deviate dramatically from 100%. Obviously, when \(\lambda\) is close to zero, \(\tilde{M}\) reduces to \(-\ln L\); but when \(\lambda\) is large enough, \(\mathbf{SF}\) is restricted to \(\overline{\mathbf{SF}}\), i.e., the interference effect is under control, the parameter space is reduced, and the convexity is improved. ## 3 Partial Wave Analysis without or with Fraction Constraints For demonstration, an MC sample containing 10,000 events has been generated based on a PWA model that describes the process \(\psi(2S)\rightarrow\phi K^{+}K^{-}\)[5] with various intermediate resonances decaying into \(K^{+}K^{-}\). For convenience, this PWA model is denoted as \(R_{0}\) and the MC sample is denoted as \(S_{0}\). In \(R_{0}\), resonances such as \(f_{0}(980)\)[6, 7], \(f_{2}(1270)\)[8], \(f_{2}^{\prime}(1525)\)[9], \(f_{0}(1710)\)[10], \(f_{2}(2150)\)[11], and \(f_{2}(2340)\)[12] are included and described according to the corresponding references. Their masses, widths, and relevant fractions are shown in Table 1. In the \(R_{0}\) model, covariant tensors are applied to describe the partial wave amplitudes. It should be noted that Table 1 lists the fractions of each resonance, and the sum of the fractions yields an \(\mathbf{SF}\) value of approximately 115%. The Dalitz plot corresponding to the generated events is shown in Fig. 1, and the distribution on the \(K^{+}K^{-}\) invariant mass spectrum is shown in Fig. 2. The presence of both narrow and broad resonances means that \(R_{0}\) is not a trivial model.
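To make the construction of \(\tilde{M}\) in Eqs. (1) and (2) concrete, the following is a minimal numerical sketch of how the fraction sum \(\mathbf{SF}\) and the modified target function could be evaluated for a set of toy amplitudes; the function and variable names are illustrative and are not taken from the analysis code used in the paper.

```python
import numpy as np

def fractions(c, phi, M):
    """Eq. (2): F_k for each amplitude.
    c, phi : arrays of shape (K,) with magnitudes and phases.
    M      : complex array of shape (K, N), normalized amplitudes per event."""
    coef = c * np.exp(1j * phi)                          # c_k e^{i phi_k}
    total = np.abs(np.einsum("k,kn->n", coef, M)) ** 2   # |sum_k c_k e^{i phi_k} M_k|^2
    per_k = np.abs(c[:, None] * M) ** 2                  # |c_k M_k|^2, shape (K, N)
    return np.mean(per_k / total, axis=1)                # average over the N events

def target(neg_log_L, c, phi, M, sf_expected=1.20, lam=1e-2):
    """Eq. (1): M~ = -lnL + lambda * (SF - SF_expected)^2, with SF as a fraction (1.20 = 120%)."""
    sf = fractions(c, phi, M).sum()
    return neg_log_L + lam * (sf - sf_expected) ** 2

# toy example with K = 2 amplitudes and N = 5 events
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 5)) + 1j * rng.normal(size=(2, 5))
print(target(neg_log_L=-1234.5, c=np.array([1.0, 0.5]),
             phi=np.array([0.0, 0.3]), M=M))
```

The defaults \(\overline{\mathbf{SF}}=120\%\) and \(\lambda=10^{-2}\) correspond to the settings used later in the paper.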
It should be noted that this MC sample is just designed for studying the PWA method, but does not intend to simulate the three-body decay \(\psi(2S)\rightarrow\phi K^{+}K^{-}\) in the real world. Firstly, we fit the MC sample \(S_{0}\) with the \(R_{0}\) model 300 times by using the target function \(-\ln L\). Figure 3 shows the obtained logarithm of the likelihood and the sum of the fractions. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(R_{0}\) & Name & \(F_{i}(\%)\) & Mass (GeV) & Width (GeV) \\ \hline 1 & \(f_{0}(980)\) & 39.5 & 0.979 & 0.107 \\ 2 & \(f_{2}(2340)\) & 37.1 & 2.548 & 0.324 \\ 3 & \(f_{2}^{\prime}(1525)\) & 24.7 & 1.522 & 0.089 \\ 4 & \(f_{0}(1710)\) & 8.30 & 1.676 & 0.163 \\ 5 & \(f_{2}(1270)\) & 3.16 & 1.290 & 0.196 \\ 6 & \(f_{2}(2150)\) & 2.22 & 2.162 & 0.159 \\ \hline & \(\mathbf{SF}\) & 115.0 & & \\ \hline \end{tabular} \end{table} Table 1: Resonances incorporated in PWA model \(R_{0}\), and their corresponding parameters. Figure 1: The Dalitz plot from the MC sample \(S_{0}\) generated by the \(R_{0}\) model. Figure 2: The \(K^{+}K^{-}\) invariant mass spectrum for the MC sample \(S_{0}\) generated by the \(R_{0}\) model. It is apparent that even when the fitting PWA model is perfectly matched to the data-producing model, there is still a large probability that the fitting results deviate significantly from the true values, while good fit results, in which the global minimum is found, always provide correct **SF** values. The red box of Fig. 3 represents a region enclosing good fits. The number of points in it is 41, which accounts for only about 14% of the total number of fits. The unreliability of the fitting results is the so-called non-convex problem, which is caused by the complexity of the PWA and results in various local minima of the likelihood function in the parameter space. One way to avoid this problem and find the global minimum is to re-fit the data a huge number of times with varied initial parameters, and this is a critical reason for the low efficiency of the PWA. Secondly, we redo the fits, replacing the target function \(-\ln L\) with \(\tilde{M}\). Usually, the expected sum of fractions \(\overline{\mathbf{SF}}\) can be determined by a scanning method that will be described in Sec. 4 along with the resonance selection. Here, we just adopt the result and set it to 120%, and set the strict-factor \(\lambda=10^{-2}\) based on practical experience. The results of 300 fits are shown in Fig. 4. There are 46 points in the red box of Fig. 4, which is slightly higher than the number in Fig. 3. It can be seen that the penalty term limits the range of **SF** as expected and increases the probability of the fitting result reaching the global optimum. Although calculating the penalty term **SF** requires additional computational resources, counter-intuitively the total fitting time required by \(\tilde{M}\) is less than that of \(-\ln L\). This timing reduction is mainly caused by the fewer attempts needed to find a minimum in a reduced parameter space. To investigate the impact on computation time, a time analysis is performed to obtain the results in Fig. 3 and Fig. 4. The consumed time is shown in Fig. 5. From it, the average fitting time for \(\tilde{M}\) is approximately 500 s, while the average fitting time for \(-\ln L\) is around 750 s. A significant speed-up is found. This result is obtained in our own testing environment, and factors such as the PWA program, fitting method, and hardware platform can affect the results.
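The 300-fit comparison described above amounts to repeated minimization from random starting points, with "good fits" counted inside a box around the best likelihood and the true \(\mathbf{SF}\). A schematic sketch of such a multi-start study is given below; the optimizer, starting-point ranges, and box thresholds are placeholders and do not reproduce the settings used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def multi_start_fit(objective, n_params, n_starts=300, seed=1,
                    sf_true=1.15, sf_tol=0.05, dnll_tol=5.0):
    """Repeat the fit from random initial parameters and count 'good' fits,
    i.e. fits close to the best -lnL whose SF is close to the true value.
    `objective(params)` is assumed to return (value, sf)."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_starts):
        x0 = rng.uniform(-1.0, 1.0, size=n_params)        # random starting point
        res = minimize(lambda p: objective(p)[0], x0, method="BFGS")
        results.append((res.fun, objective(res.x)[1]))
    best = min(v for v, _ in results)
    good = [(v, sf) for v, sf in results
            if v - best < dnll_tol and abs(sf - sf_true) < sf_tol]
    return results, good

# `good` plays the role of the red box in Figs. 3 and 4:
# len(good) / n_starts estimates the probability of reaching the global optimum.
```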
However, just like the role of penalty terms in the field of deep learning, the inclusion of penalty terms in this context serves to prevent large, ineffective attempts during the fitting process. These penalty terms provide additional gradients (on the boundaries of the parameter space) that are independent of the program, software, and hardware platforms used. To check the feasibility of the new PWA method, the fitting results corresponding to the global optimal points, without or with the penalty, are listed in Table 2 and Table 3 for comparison. It can be seen that the two fitting results, including both mean values and statistical uncertainties, are consistent with each other. To test the fit stability of the PWA with the additional penalty term, we have generated 300 sets of samples using the same \(R_{0}\) model only with various random number seeds, and performed fitting on each set. Figure 6 shows the distribution of the sum of fractions. A fit with a Gaussian function gives the result is \(1.13\pm 0.02\), that is consistent with the input value 1.14 considering the uncertainty. Figure 3: The distribution of likelihood values and **SF** values of the fitting results corresponding to the resonance combination \(R_{0}\). The red vertical line represents the true value of **SF**, and the red box contains the points of good fits. Figure 4: The likelihood value and SF value distribution of the resonance state combination \(R_{0}\) corresponding to the fitting result when \(\mathbf{SF}=120\%\) and \(\lambda=10^{-2}\). The red vertical line represents the true value of \(\mathbf{SF}\), and the red box contains the points of good fits. Figure 5: Compare the fitting time used by \(-\ln L\) and \(\tilde{M}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(R_{0}\) & Name & \(F_{i}(\%)\) & Mass (GeV) & Width (GeV) \\ \hline 1 & \(f_{0}(980)\) & \(39.2\pm 1.5\) & \(1.015\pm 0.043\) & \(0.102\pm 0.030\) \\ 2 & \(f_{2}(2340)\) & \(37.5\pm 1.6\) & \(2.571\pm 0.015\) & \(0.281\pm 0.017\) \\ 3 & \(f_{2}^{\prime}(1525)\) & \(23.5\pm 1.0\) & \(1.523\pm 0.002\) & \(0.084\pm 0.003\) \\ 4 & \(f_{0}(1710)\) & \(8.7\pm 0.9\) & \(1.671\pm 0.005\) & \(0.159\pm 0.010\) \\ 5 & \(f_{2}(1270)\) & \(2.7\pm 0.6\) & \(1.288\pm 0.013\) & \(0.181\pm 0.027\) \\ 6 & \(f_{2}(2150)\) & \(2.5\pm 0.6\) & \(2.152\pm 0.012\) & \(0.170\pm 0.026\) \\ \hline & \(\mathbf{SF}\) & \(114.0\) & & \\ \hline \end{tabular} \end{table} Table 2: Fitting results of the PWA model \(R_{0}\) with \(-\ln L\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(R_{0}\) & Name & \(F_{i}(\%)\) & Mass (GeV) & Width (GeV) \\ \hline 1 & \(f_{0}(980)\) & \(39.3\pm 1.6\) & \(1.017\pm 0.039\) & \(0.101\pm 0.035\) \\ 2 & \(f_{2}(2340)\) & \(37.5\pm 1.8\) & \(2.571\pm 0.016\) & \(0.282\pm 0.018\) \\ 3 & \(f_{2}^{{}^{\prime}}(1525)\) & \(23.6\pm 1.0\) & \(1.523\pm 0.002\) & \(0.084\pm 0.003\) \\ 4 & \(f_{0}(1710)\) & \(8.7\pm 1.0\) & \(1.671\pm 0.005\) & \(0.159\pm 0.010\) \\ 5 & \(f_{2}(1270)\) & \(2.7\pm 0.6\) & \(1.288\pm 0.014\) & \(0.182\pm 0.026\) \\ 6 & \(f_{2}(2150)\) & \(2.5\pm 0.6\) & \(2.152\pm 0.012\) & \(0.170\pm 0.027\) \\ \hline & **SF** & 114.3 & & \\ \hline \end{tabular} \end{table} Table 3: Fitting results of the PWA model \(R_{0}\) with \(\dot{M}\). Figure 6: The distribution of the sum of fractions in 300 test MC samples that are generated with the model \(R_{0}\). The red curve represents the Gaussian function utilized in the fit. 
## 4 Fraction Constraint Scanning and Resonant State Selection In the last section, both PWAs are performed with a perfect model, that is, exactly the one used in generating the MC sample. However, in a real PWA, determining which resonances should be included is an important and difficult issue to be addressed [13]. Typically, this is done by comparing the likelihood values of different combinations of resonances and calculating the corresponding significance. But how to determine a baseline, which is crucial for the significance calculation, is a frequently debated question in PWA. Furthermore, whether to include a resonance or not should go beyond the sole goodness of a fit. In addition to considering the significance of a resonance, more information, such as the branching fraction, the conservation of physical rules, the complexity of a PWA model, etc., needs to be considered. Some researchers have already borrowed mature theories from information theory, such as AIC and BIC [14], to balance the model complexity and the goodness of a fit. Similar to AIC and BIC, the fraction constraint method proposed by us tries to control the model complexity by introducing the penalty term. Using \(\tilde{M}\), we can quickly obtain the best fit results for different PWA models with various resonance combinations, when the strict-factor \(\lambda\) is set to a somewhat large value, such as \(10^{2}\). Based on this advantage, the value of \(\overline{\mathbf{SF}}\) is obtained by scanning in a series of fits, and the results are shown in Fig. 7. Here \(R_{-1}\) represents the PWA model subtracting resonance \(f_{2}(1270)\) from \(R_{0}\), and \(R_{-2}\) represents subtracting resonance \(f_{2}(2150)\); while \(R_{+1}\) represents adding resonance \(f_{0}(1370)\)[15], and \(R_{+2}\) represents adding resonance \(f_{2}(2010)\)[16]. From Fig. 7, it can be seen that there is a large gap between \(R_{-1}\) (\(R_{-2}\)) and \(R_{0}\). The difference in the y-axis, i.e., the logarithm of the likelihood, indicates that the models with subtracted resonances are not complex enough to describe the data, compared with \(R_{0}\). But the gap between \(R_{+1}\) (\(R_{+2}\)) and \(R_{0}\) is very small, indicating that the parameters of the models with additional resonances are overpopulated. Therefore, \(R_{0}\) is the best PWA model to describe the data. So the scan method can help to select a reasonable set of resonances in a PWA model. And from the scan curve the best \(\mathbf{SF}\) can be determined from the minimum, which should be considered as the expected value of \(\mathbf{SF}\). Figure 7: \(\mathbf{SF}\) scanning curves. The blue, green, yellow, red, and purple lines represent the PWA models \(R_{-2}\), \(R_{-1}\), \(R_{0}\), \(R_{+1}\), \(R_{+2}\), respectively. ## 5 Summary This article proposes the use of \(\tilde{M}\) instead of \(-\ln L\) in PWA by evaluating the likelihood value as a function of fraction constraints, thereby improving analysis efficiency. An analysis conducted on the MC sample demonstrates the reliability of the fitted center values and statistical uncertainties based on the new method. Additionally, the relationship between the likelihood value of the fitting results and the \(\mathbf{SF}\) value provides a fresh perspective on addressing the resonance selection issue. By constraining the \(\mathbf{SF}\) values, redundant resonances can be effectively reduced, thereby mitigating the overestimation of systematic uncertainties resulting from the selection of resonance states.
While the use of \(\tilde{M}\) instead of \(-\ln L\) does not offer a definitive solution to the increasingly complex nature of PWA driven by expanding data volumes, it has proven to enhance efficiency and minimize debates surrounding resonance states through practical implementation.
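As a closing illustration, the \(\overline{\mathbf{SF}}\) scan of Section 4 amounts to refitting with a large strict-factor over a grid of candidate \(\overline{\mathbf{SF}}\) values and comparing the resulting likelihood curves across resonance combinations. The sketch below assumes a placeholder `fit` callable standing in for a full PWA fit; it is not the analysis code used in the paper.

```python
import numpy as np

def scan_sf(fit, sf_grid=np.arange(0.90, 1.50, 0.05), lam=1e2):
    """For each candidate SF_expected, minimize M~ with a large strict-factor and
    record the -lnL of the result. `fit(sf_expected, lam)` is assumed to return
    the best -lnL found under that constraint (placeholder)."""
    return {float(sf): fit(sf, lam) for sf in sf_grid}

def compare_models(fits_by_model, sf_grid=np.arange(0.90, 1.50, 0.05)):
    """Scan several resonance combinations (e.g. R-2, R-1, R0, R+1, R+2) and
    report, for each, the SF at the curve minimum and the corresponding -lnL."""
    summary = {}
    for name, fit in fits_by_model.items():
        curve = scan_sf(fit, sf_grid)
        best_sf = min(curve, key=curve.get)
        summary[name] = (best_sf, curve[best_sf])
    return summary

# A large likelihood gap between a reduced model and R0 signals missing resonances;
# a negligible gap for an enlarged model signals redundant ones.
```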
2310.20586
Harmonization-enriched domain adaptation with light fine-tuning for multiple sclerosis lesion segmentation
Deep learning algorithms utilizing magnetic resonance (MR) images have demonstrated cutting-edge proficiency in autonomously segmenting multiple sclerosis (MS) lesions. Despite their achievements, these algorithms may struggle to extend their performance across various sites or scanners, leading to domain generalization errors. While few-shot or one-shot domain adaptation emerges as a potential solution to mitigate generalization errors, its efficacy might be hindered by the scarcity of labeled data in the target domain. This paper seeks to tackle this challenge by integrating one-shot adaptation data with harmonized training data that incorporates labels. Our approach involves synthesizing new training data with a contrast akin to that of the test domain, a process we refer to as "contrast harmonization" in MRI. Our experiments illustrate that the amalgamation of one-shot adaptation data with harmonized training data surpasses the performance of utilizing either data source in isolation. Notably, domain adaptation using exclusively harmonized training data achieved comparable or even superior performance compared to one-shot adaptation. Moreover, all adaptations required only minimal fine-tuning, ranging from 2 to 5 epochs for convergence.
Jinwei Zhang, Lianrui Zuo, Blake E. Dewey, Samuel W. Remedios, Savannah P. Hays, Dzung L. Pham, Jerry L. Prince, Aaron Carass
2023-10-31T16:23:37Z
http://arxiv.org/abs/2310.20586v1
Harmonization-enriched domain adaptation with light fine-tuning for multiple sclerosis lesion segmentation ###### Abstract Deep learning algorithms using magnetic resonance (MR) images have demonstrated state-of-the-art performance in the automated segmentation of multiple sclerosis (MS) lesions. Despite their success, these algorithms may fail to generalize across sites or scanners, leading to domain generalization errors. Few-shot or one-shot domain adaptation is an option to reduce the generalization error using limited labeled data from the target domain. However, this approach may not yield satisfactory performance due to the limited data available for adaptation. In this paper, we aim to address this issue by integrating one-shot adaptation data with harmonized training data that includes labels. Our method synthesizes new training data with a contrast similar to that of the test domain, through a process referred to as "contrast harmonization" in MRI. Our experiments show that combining one-shot adaptation data with harmonized training data outperformed the use of either one of the data sources alone. Domain adaptation using only harmonized training data achieved comparable or even better performance compared to one-shot adaptation. In addition, all adaptations only required light fine-tuning of 2 to 5 epochs for convergence. Multiple Sclerosis, Lesion Segmentation, Domain Adaptation, Synthesis-based Harmonization Further author information: (Send correspondence to Jinwei Zhang) Jinwei Zhang: E-mail: [email protected] ## 1 Introduction Multiple sclerosis (MS) is a central nervous system disorder characterized by inflammatory demyelination and axonal and neuronal degeneration [1]. T2-weighted (T2w) magnetic resonance imaging (MRI) using the fluid-attenuated inversion recovery (FLAIR) pulse sequence is routinely used for clinical diagnosis of MS lesions because it provides high lesion-to-brain contrast while simultaneously suppressing hyperintense cerebrospinal fluid (CSF) signals, which can cause partial-volume artifacts in T2w images [2]. Extensive manual editing is required for accurate delineation of MS lesions, though the task can be quite subjective. Therefore, automatic detection and segmentation of MS lesions is desired for better efficiency and reproducibility. State-of-the-art methods [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] employ deep learning (DL) to automate MS lesion segmentation using multi-contrast MRI scans, including FLAIR. However, these algorithms frequently face challenges in achieving consistent performance across different MRI scanners and imaging protocols. This has led to increased research interest in domain adaptation techniques, such as one-shot domain adaptation [5], spatially adaptive sub-networks [6], domain-invariant latent feature learning [7, 8, 9], contrast-adaptive generative modeling [10], and domain randomness via synthesis [11], to name just a few. An alternative to domain adaptation is image harmonization, which reduces inter-site variation to aid downstream comparisons and analysis of images across imaging sites, scanners, and over time [15, 16, 17]. Synthesis-based multi-contrast MRI harmonization has shown remarkable progress in recent years [18, 19, 20, 21]. In this paper, we use HACA3 [22], a new synthesis-based multi-site MRI harmonization approach, to enhance one-shot and even "zero-shot" domain adaptation performance for MS lesion segmentation. 
## 2 Method Dataset.We use the training data from the ISBI longitudinal dataset consisting of five people with MS (PwMS) [23] and an in-house dataset including ten PwMS; both datasets have corresponding manual delineations. We first pre-trained a segmentation network using four of the five training subjects from the ISBI dataset, with the remaining subject being used for validation. We then applied the pre-trained segmentation network to our in-house dataset, which comes from a different domain than the training and validation data. Network.We implemented a modified 3D UNet following the design choices of nnUNet [24]. For the convolutional building block in the UNet, we chose the "Conv+InstanceNorm+ReLU" configuration with 2 blocks for each level of the encoding or decoding path of the UNet and 4 downsampling/upsampling operations. The numbers of channels in each convolutional block at all levels along the encoding path were 32, 64, 128, 256, and 512. Three dimensional patches were cropped from skull-stripped [25] and white matter intensity normalized [26] T1-weighted (T1w) and FLAIR images, and these patches were concatenated along the channel dimension to be used as input. We generate the binary prediction of segmentation by thresholding the sigmoid of the output of UNet's last convolutional layer. Training.A batch size of two was used and the 3D patch size was set to \(112\times 112\times 112\) to fully utilize GPU memory during backpropagation. Heavy augmentations were employed on the fly, including random cropping, axis permutation, intensity shifts, as well as affine and elastic deformations. The loss function was the mean of the Dice similarity coefficient (DSC) and binary cross-entropy. The Adam optimizer was used with an initial learning rate of \(10^{-4}\) for 100 epochs, where each epoch involved the application of eight random augmentations to every training subject. Harmonization-based domain adaptation.For domain adaptation after training, all T1w and FLAIR images from the ISBI training dataset were transformed to match the contrast of our in-house test dataset using the synthesis-based multi-site harmonization of HACA3 [22]. HACA3 was trained on diverse MR datasets acquired from 21 sites, which included white matter lesion multi-contrast images with varying field strengths, scanner platforms, and acquisition protocols. An example of such image harmonization is shown in Fig. 1, where the FLAIR "source data" from "Site-01" in the ISBI training set was harmonized to match the contrast of "Site-02" of our in-house test set. Consistent gray and white matter contrast was observed between "synthetic/harmonized data" and "real data" for "Site-02". After harmonization, three domain adaptation strategies were evaluated: Figure 1: **Synthesis-based harmonization of one FLAIR axial slice.** (a) Source FLAIR slice on Site-01 (ISBI public training set). (b) Harmonized FLAIR slice from Site-01 to Site-02 (in-house test set) using HACA3 [22]. (c) Real FLAIR slice on Site-02. One-Shot Strategy Fine-tune (FT) the pre-trained network with only one of the ten subjects in the test domain and evaluate on the remaining nine test subjects. Zero-Shot Strategy FT with only the harmonized ISBI data and evaluate on the ten test subjects. Harmonization-enriched One-Shot Strategy FT with a combination of all harmonized ISBI training data and one of the test domain subjects, and evaluate on the remaining nine test subjects. 
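As a rough illustration of how the three fine-tuning sets just described could be assembled, the sketch below pairs the harmonized ISBI subjects with (or without) a single target-domain subject; the dataset objects and field names are hypothetical placeholders and do not correspond to the authors' code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Subject:
    name: str
    images: str   # path to T1w/FLAIR volumes (placeholder)
    labels: str   # path to the lesion delineation (placeholder)

def build_finetune_set(strategy: str,
                       harmonized_isbi: List[Subject],
                       target_subjects: List[Subject],
                       held_in_index: int = 0) -> List[Subject]:
    """Return the training subjects for one adaptation strategy.
    'one_shot'           : a single labeled subject from the test domain.
    'zero_shot'          : only the harmonized (synthesized) ISBI subjects.
    'harmonized_one_shot': harmonized ISBI subjects + one test-domain subject."""
    one_target = [target_subjects[held_in_index]]
    if strategy == "one_shot":
        return one_target
    if strategy == "zero_shot":
        return list(harmonized_isbi)
    if strategy == "harmonized_one_shot":
        return list(harmonized_isbi) + one_target
    raise ValueError(f"unknown strategy: {strategy}")

# In the ten-fold evaluation, held_in_index cycles over the ten test subjects
# and the remaining nine are used for testing.
```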
For the One-Shot and Harmonization-enriched One-Shot strategies, ten-fold cross-validation (CV) was employed to evaluate on all ten test subjects from the in-house data, where each of the ten subjects in the test domain was included in the training set for one fold. All FTs were conducted for 20 epochs, and the models were tested after every epoch. For comparison, two-fold CV FT with 4/1/5 training/validation/test subjects per fold was also employed for 100 epochs. This CV served as a "normal" performance estimation, assuming enough labeled data in new domains was provided for network adaptation. Evaluation metrics.The DSC, lesion-wise F1 score (L-F1), and Pearson's correlation coefficient of the lesion volumes between the ground truth and the prediction (VC) were utilized as the segmentation performance evaluation metrics in the experiment. ## 3 Results Quantitative comparison.Figure 2 shows DSC, L-F1, and VC scores of the three domain adaptation strategies after each FT epoch. First, the performance of all three strategies converged within just 2 to 5 epochs, exhibiting noticeable improvement over the pre-trained results (dashed purple lines) in terms of DSC and L-F1. However, a noticeable degradation in VC was observed for normal CV (dashed red line) and the One-Shot strategy (solid orange line), which was not observed for the Zero-Shot (solid blue line) and Harmonization-enriched One-Shot (solid green line) strategies. Second, the Harmonization-enriched One-Shot strategy consistently outperformed the other two strategies in terms of DSC and L-F1, and performed similarly well with the Zero-Shot strategy for VC. Notably, the Harmonization-enriched One-Shot strategy achieved a DSC score of above 0.6 after convergence, approaching inter-rater consistency [27]. Third, Zero-Shot outperformed One-Shot strategy in terms of DSC and VC, while these two strategies performed similarly for L-F1. Qualitative comparison.Figure 3 shows segmentation predictions with the corresponding ground-truth label and axial FLAIR slice (left column). The pre-trained prediction (Fig. 3a) exhibited false negative predictions throughout the entire image, which was addressed by the target site CV (Fig. 3b). False positive lesions (indicated by red arrows) were observed in the One-Shot strategy after 2 (Fig. 3c-1) and 20 (Fig. 3c-2) fine-tuning epochs, which were not observed in the Zero-Shot strategy (Fig. 3d) or the Harmonization-enriched One-Shot strategy (Fig. 3e) after 2 epochs (Fig. 3d-1 and Fig. 3e-1, respectively) or 20-epochs (Fig. 3d-2 and Fig. 3e-2) of fine-tuning. False negative lesions (indicated by yellow arrows) missed by the Zero-Shot strategy (Fig. 3d) were still captured by the One-Shot and Harmonization-enriched One-Shot strategies (Fig. 3c and Fig. 3e, respectively). No significant differences were observed between the 2 epoch (Fig. 3d-1 or Fig. 3e-1) and 20 epoch (Fig. 3d-2 or Fig. 3e-2) fine-tuning results of the Zero-Shot or Harmonization-enriched One-Shot strategies. ## 4 New Work to Be Presented We will present how to use synthesis-based harmonization to boost one-shot domain adaptation performance for MS lesion segmentation. This work has not been submitted or presented elsewhere before. ## 5 Discussion and Conclusion Domain adaptation through network fine-tuning has been successfully applied to different imaging problems in MRI, including under-sampled k-space reconstruction [28], biophysical inversion [29] with uncertainty quantification [30], and contrast translation [31]. 
In this work, we demonstrate the feasibility of leveraging synthesis-based MRI harmonization to enhance domain adaptation performance in MS lesion segmentation. Our experiments demonstrate that our Zero-Shot domain adaptation, utilizing solely public data synthesized to the target contrast, yields comparable or superior performance than a One-Shot strategy on the target domain. More notably, the combination of One-Shot and Zero-Shot adaptation, which we coin as Harmonization-enriched One-Shot domain adaptation, achieved DSC results approaching inter-rater performance. Additionally, only light fine-tuning of between 2 and 5 epochs was enough for an adequate adaptation of the pre-trained network. ###### Acknowledgements. This material is partially supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1746891 (Remedios). This work also received support from National Multiple Sclerosis Society RG-1907-34570 (Pham), CDMRP W81XWH2010912 (Prince), and the Department of Defense in the Center for Neuroscience and Regenerative Medicine. The opinions and assertions expressed herein are those of the authors and do not reflect the official policy or position of the Uniformed Services University of the Health Sciences or the Department of Defense.
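For reference, the evaluation metrics used in this study (DSC, lesion-wise F1, and the Pearson correlation of lesion volumes) can be sketched as follows. The lesion-matching convention (a connected component counts as detected if it overlaps the other mask by at least one voxel) is a common choice and is an assumption here, since the exact convention is not spelled out above.

```python
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    """Dice similarity coefficient between two binary numpy masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def lesion_f1(pred, gt):
    """Lesion-wise F1 under one common convention: a connected component is a
    true positive if it overlaps the other mask by at least one voxel."""
    gt_lab, n_gt = ndimage.label(gt)
    pr_lab, n_pr = ndimage.label(pred)
    tp = sum(1 for i in range(1, n_gt + 1) if pred[gt_lab == i].any())
    fn = n_gt - tp
    fp = sum(1 for j in range(1, n_pr + 1) if not gt[pr_lab == j].any())
    return 2.0 * tp / (2.0 * tp + fp + fn + 1e-8)

def volume_correlation(pred_masks, gt_masks):
    """Pearson correlation between predicted and ground-truth lesion volumes
    computed across subjects."""
    v_pred = np.array([m.sum() for m in pred_masks], dtype=float)
    v_gt = np.array([m.sum() for m in gt_masks], dtype=float)
    return np.corrcoef(v_pred, v_gt)[0, 1]
```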
2303.18121
BERTino: an Italian DistilBERT model
The recent introduction of Transformers language representation models allowed great improvements in many natural language processing (NLP) tasks. However, if on one hand the performances achieved by this kind of architectures are surprising, on the other their usability is limited by the high number of parameters which constitute their network, resulting in high computational and memory demands. In this work we present BERTino, a DistilBERT model which proposes to be the first lightweight alternative to the BERT architecture specific for the Italian language. We evaluated BERTino on the Italian ISDT, Italian ParTUT, Italian WikiNER and multiclass classification tasks, obtaining F1 scores comparable to those obtained by a BERTBASE with a remarkable improvement in training and inference speed.
Matteo Muffo, Enrico Bertino
2023-03-31T15:07:40Z
http://arxiv.org/abs/2303.18121v1
# BERTino: an Italian DistilBERT model ###### Abstract **English.1** The recent introduction of Transformers language representation models allowed great improvements in many natural language processing (NLP) tasks. However, if on one hand the performances achieved by this kind of architectures are surprising, on the other their usability is limited by the high number of parameters which constitute their network, resulting in high computational and memory demands. In this work we present BERTino, a DistilBERT model which proposes to be the first lightweight alternative to the BERT architecture specific for the Italian language. We evaluated BERTino on the Italian ISDT, Italian ParTUT, Italian WikiNER and multiclass classification tasks, obtaining F1 scores comparable to those obtained by a \(BERT_{BASE}\) with a remarkable improvement in training and inference speed. Footnote 1: Copyright ©2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). **Italiano.** La recente introduzione dei Transformers come modelli di rappresentazione del linguaggio naturale ha pernesso grandi avanzamenti sullo stato dell'arte in molte applicazioni di Natural Language Processing (NLP). Tuttavia, se da una parte i risultati raggiunti da queste architetture sono sorprendenti, dall'altra la loro fruibilita e limitata dall'elevato numero di parametri che costituiscono la loro architettura, con conseguenti elevate esigenze computazionali e di memoria. In questo lavoro presentiamo BERTino, un modello DistilBERT che e la prima alternativa _leggera_ all'architettura BERT specifica per la lingua italiana. Abbiamo valutato BERTino sui task ISDT tialiano, ParTUT italiano, WikiNER italiano e classificazione multiclasse, ottenendo punteggi F1 paragonabili a quelli ottenuti da un modello \(BERT_{BASE}\) con un notevole miglioramento nella velocita di addestramento e inferenza. ## 1 Introduction In recent years the introduction of Transformers language models allowed great improvements in many natural language processing (NLP) tasks. Among Transformer language models, BERT Devlin et al. (2018) affirmed itself as an high-performing and flexible alternative, being able to transfer knowledge from general tasks to downstream ones thanks to the pretraining-finetuning approach. The context-dependent text representations provided by this model demonstrated to be a richer source of information when compared to static textual embeddings such as Word2Vec Mikolov et al. (2013), GloVe Pennington et al. (2014), FastText Bojanowski et al. (2016) or Sent2Vec Pagliardini et al. (2018). However, despite the substantial improvements brought by BERT in the NLP field, the high number of parameters that constitute its network makes its usage prohibitive in resource-limited devices, both at training and inference time, and with a non-negligible environmental impact. To address the aforementioned problem, recent research proposes several approaches to reduce the size of the BERT network, such as DistilBERT Sanh et al. (2019), MobileBERT Sun et al. (2020) or pruning Gordon et al. (2020); McCarley et al. (2019). The experiments conducted in Virtanen et al. (2019), de Vries et al. (2019) and Martin et al. (2020) demonstrate that monolingual BERT models outperform the same multilingual BERT architecture Devlin et al. (2018), justifying the effort for pre-training Transformer models required for specific languages. 
In this work we present **BERTino**, a DistilBERT model pre-trained on a large Italian corpus. This model proposes to be the first general-domain, lightweight alternative to BERT specific for the Italian language. We evaluate BERTino on two Part Of Speech tagging tasks, Italian ISDT Bosco et al. (2000) and Italian ParTUT Sanguinetti and Bosco (2015), on the Italian WikiNER Nothman et al. (2012) Named Entity Recognition task and on a multi-class sentence classification. Comparing the scores obtained by BERTino, its teacher model and GilBERTo, the first obtains performances comparable to the other two architectures while sensibly decreasing the fine-tuning and evaluation time. In Section 2 we discuss the related works with a focus on DistilBERT, in Section 3 we describe the corpus and the pre-train followed by the results in Section 4. ## 2 Related work In this section we will give a brief outline of the inner workings for Transformers, then we overview some lightweight alternatives to BERT. The introduction of Transformer blocks Vaswani et al. (2017) in language representation models is a keystone in recent NLP. The attention mechanism adopted by the Transformer encoder allows to provide contextualized representations of words, which proved to be a richer source of information than static word embeddings. Attention mechanism processes all words in an input sentence simultaneously, allowing parallelization of computations. This is a non-negligible improvement with respect to models like ELMo Peters et al. (2018), which aim to provide contextualized text representations using a bidirectional LSTM network, processesion each word sequentially. Among language models that adopt Transformer technology, BERT Devlin et al. (2018) affirmed itself as a flexible and powerful alternative, being able to establish new state-of-the-art for 11 NLP tasks at the time of publication. In its base version, this model adopts an hidden size of 768 and is composed of 12 layers Transformer blocks), each of these involving 12 attention heads, for a total of 110 millions of parameters. As outlined in Section 1, the high number of parameters constituting BERT's network can result prohibitive for deployment in resource-limited devices and the computational effort is not negligible. For this reason, great effort has been devoted by researchers in order to propose smaller but valid alternatives to the base version of BERT. Gordon et al. (2020) studies how weight pruning affects the performances of BERT, concluding that a low level of pruning (30-40% of weights) marginally affects the natural language understanding capabilities of the network. McCarley et al. (2019) conducts a similar study on BERT weight pruning, but applied to the Question Answering downstream task specifically. Sanh et al. (2019) propose DistilBERT, a smaller BERT architecture which is trained using the knowledge distillation technique Hinton et al. (2015). Since the model that we propose relies on this training technique, we propose a brief description of knowledge distillation in section 2.1. DistilBERT leverages the inductive biases learned by larger models during pre-training using a triple loss combining language modeling, distillation and cosine-distance losses. DistilBERT architecture counts 40% less parameters but is able to retain 97% of natural language understanding performances with respect to the teacher model, while being 60% faster. Sun et al. 
(2020) propose MobileBERT, a compressed BERT model which aims to reduce the hidden size instead of the depth of the network. As DistilBERT, MobileBERT uses knowledge distillation during pre-training but adopts a \(BERT_{LARGE}\) model with inverted bottleneck as teacher. ### Knowledge distillation Knowledge distillation Hinton et al. (2015) is a training technique that leverages the outputs of a big network (called _teacher_) to train a smaller network (the _student_). In general, in the context of supervised learning, a classifier is trained in such a way that the output probability distribution that it provides is as similar as possible to the one-hot vector representing the gold label, by minimizing the cross-entropy loss between the two. By receiving a one-hot vector as learning signal, a model evaluated on the training set will provide an output distribution with a near-one value in cor respondence of the right class, and all near-zero values for other classes. Some of the near-zero probabilities, however, are larger than the others and are the result of the generalization capabilities of the model. The idea of knowledge distillation is to substitute the usual one-hot vector representing gold labels with the output distribution of the teacher model in the computation of the cross-entropy loss, in order to leverage the information contained in the near-zero values of the teacher's output distribution. Formally, the knowledge distillation loss is computed as: \[\mathcal{L}_{KD}=\sum_{i}t_{i}*\log(s_{i}) \tag{1}\] with \(t_{i}\) being the output distribution of the teacher model relative to the \(i^{th}\) observation, and \(s_{i}\) being the output distribution of the student model relative to the \(i^{th}\) observation. ## 3 BERTTino As outlined in section 1, we propose in this work BERTTino, a DistilBERT model pre-trained on a general-domain Italian corpus. As for BERT-like architectures, BERTTino is task-agnostic and can be fine-tuned for every downstream task. In this section we will report details relative to the pre-training that we conducted. ### Corpus The corpus that we used to pre-train BERTino is the union of PAISA (Lyding et al., 2014) and ItWaC (Baroni et al., 2009), two general-domain Italian corpora scraped from the web. While the former is made up of short sentences, the latter includes a considerable amount of long sentences. Since our model can receive input sequences of at most 512 tokens, as for BERT architectures, we decided to apply a pre-processing scheme to the ItWaC corpus. We split the sentences with more than 400 words into sub-sentences, using fixed points to create chunks that keep the semantic sense of a sentence. In this way, most of the long sentences contained in ItWaC are split into sub-sentences containing less than 512 tokens. A certain number of the final sentences still contain more than 512 tokens and they will be useful for training the parameters relative to the last entries of the network. The PAISA corpus counts 7.5 million sentences and 223.5 million words. The ItWaC corpus counts 6.5 million sentences and 1.6 billion words after preprocessing. Our final corpus counts 14 million sentences and 1.9 billion words for a total of 12GB of text. ### Pre-training **Teacher model** The teacher model that we selected to perform knowledge distillation during the pre-training of BERTTino is _dbmdz/bert-basetialian-xcl-uncased_, made by _Bavarian State Library2_. 
We chose this model because it is the Italian \(BERT_{BASE}\) model trained on the biggest corpus (81 GB of text), up to our knowledge. Following Sanh et al. (2019), we initialized the weights of our student model by taking one layer out of two from the teacher model. Footnote 2: [https://github.com/dbmdz/berts](https://github.com/dbmdz/berts) **Loss function** We report the loss function used to pre-train BERTino: \[\mathcal{L}=0.45\mathcal{L}_{KD}+0.45\mathcal{L}_{MLM}+0.1\mathcal{L}_{COS} \tag{2}\] with \(\mathcal{L}_{KD}\) being the knowledge distillation loss as described in equation 1, \(\mathcal{L}_{MLM}\) being the masked language modeling loss and \(\mathcal{L}_{COS}\) being the cosine embedding loss. Sanh et al. (2019) describe the cosine embedding loss useful to "align the directions of the student and teacher hidden states vectors". When choosing the weights of the three loss functions, we wanted our model to learn from the teacher and by itself in an equal way, so we set the same weights for both \(\mathcal{L}_{KD}\) and \(\mathcal{L}_{MLM}\). Moreover, we considered the alignment of student and teacher hidden states vectors marginal for our objective, setting \(\mathcal{L}_{COS}\) as 10% of the total loss. **Architecture** The architecture of BERTTino is the same as in DistilBERT. Our model adopts an hidden size of 768 and is composed of 6 layers (Transformer blocks), each of which involving 12 attention heads. In this way BERTTino's network results to have half the layers present in the \(BERT_{BASE}\) architecture. **Training details** To pre-train BERTTino we used a batch size of 6 and an initial learning rate of \(5\times 10^{-4}\), adopting Adam (Kingma and Ba, 2014) as optimization algorithm. We chose 6 as batch size due to the limited computational resources available. Results described in section 4 demonstrate that the small batch size that we adopted is sufficient to obtain a valid pre-trained model. We trained our model on 4 Tesla K80 GPUs for 3 epochs, requiring 45 days of computation in total. For some aspects of the training, we relied on the Huggingface Transformers repository (Wolf et al., 2019). ## 4 Results We tested the performances of BERTino on benchmark datasets: the Italian ISDT (Bosco et al., 2000) and Italian ParTUT (Sanguinetti and Bosco, 2015) Part Of Speech tagging tasks, and the Italian WikiNER (Nothman et al., 2012) Named Entity Recognition task. To complete the evaluation of the model, we also tested it on a multi-class sentence classification task. In particular, we focused on intent detection, a task specific to the context of Dialogue Systems, creating a novel italian dataset which is freely available at our repository3. The dataset that we propose collects 2786 real-world questions (2228 for training and 558 for testing) submitted to a digital conversational agent. The total number of classes in the dataset is 139. Footnote 3: [https://github.com/indigo-ai/BERTino](https://github.com/indigo-ai/BERTino) For the first two tasks mentioned, we fine-tuned our model on the training set for 4 epochs with a batch size of 32 and a learning rate of \(5\times 10^{-5}\), for the NER task we performed 5-fold splitting of the dataset and fine-tuned BERTino for 2 epochs per fold with a batch size of 32 and a learning rate of \(5\times 10^{-5}\), while for the multi-class classification task we fine-tuned our model for 14 epochs on the training set with a batch size of 32 and a learning rate of \(5\times 10^{-5}\). 
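For clarity, the pre-training objective of Eq. (2) can be sketched as follows: a soft-target distillation term, the masked-language-modeling cross-entropy, and a cosine-embedding term combined with the 0.45/0.45/0.10 weights reported above. This is a schematic PyTorch-style illustration with assumed tensor names and an optional temperature parameter (set to 1 to match the plain formula); it is not the authors' training code.

```python
import torch
import torch.nn.functional as F

def bertino_pretrain_loss(student_logits, teacher_logits,
                          student_hidden, teacher_hidden,
                          mlm_labels, temperature=1.0):
    """Weighted sum of Eq. (2): 0.45*L_KD + 0.45*L_MLM + 0.10*L_COS.
    student_logits, teacher_logits : (batch, seq_len, vocab)
    student_hidden, teacher_hidden : (batch, seq_len, hidden)
    mlm_labels                     : (batch, seq_len), -100 on unmasked tokens"""
    # L_KD: soft-target cross-entropy between teacher and student distributions
    # (Eq. 1, implemented with the conventional sign for minimization)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    l_kd = -(t * s).sum(dim=-1).mean()

    # L_MLM: usual masked-language-modeling loss against the gold tokens
    l_mlm = F.cross_entropy(student_logits.flatten(0, 1),
                            mlm_labels.flatten(), ignore_index=-100)

    # L_COS: align the directions of student and teacher hidden-state vectors
    target = torch.ones(student_hidden.shape[0] * student_hidden.shape[1],
                        device=student_hidden.device)
    l_cos = F.cosine_embedding_loss(student_hidden.flatten(0, 1),
                                    teacher_hidden.flatten(0, 1), target)

    return 0.45 * l_kd + 0.45 * l_mlm + 0.10 * l_cos
```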
To compare the results obtained, we fine-tuned the teacher model and a GilBERTo model4 on the same tasks with the same hyper-parameters. Tables 1, 2, 3 and 4 collect the F1 scores gathered in these experiments together with fine-tuning and evaluation time. All the scores reported represent the average computed over three different runs. Results show that the teacher model slightly outperforms BERTino, with an increase of the F1 score of 0,29%, 5,15%, 1,37% and 1,88% over the tasks analysed. However BERTino results to be a sensibly faster network with respect to the teacher model and GilBERTo, taking almost half of the time to perform both fine-tuning and evaluation. We can conclude from the last observation that BERTino is able to retain most of the natural language understanding capabilities of the teacher model, even with a much smaller architecture. Footnote 4: Available at [https://github.com/idb-ita/GilBERTo](https://github.com/idb-ita/GilBERTo) ## 5 Conclusions In this work we presented BERTino, a DistilBERT model which aims to be the first lightweight alternative to BERT specific for the Italian language. Our model has been trained on a general-domain corpus and can then be finetuned with good performances on a wide range of tasks like its larger counterparts. BERTino showed comparable performances with respect to both the teacher model and GilBERTo in the Italian ISDT, Italian ParTUT, Italian WikiNER and multi-class sentence classification tasks while taking almost half of the time to fine-tune, demonstrating to be a valid lightweight alternative to \(BERT_{BASE}\) models for the Italian language. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Italian ISDT} \\ \hline Model & F1 score & Fine-tuning time & Evaluation time \\ \hline BERTino & 0,9800 & 9’10” & 3” \\ Teacher model & 0,9829 & 16’32” & 6” \\ GilBERTo & 0,9804 & 18’11” & 5” \\ \hline \end{tabular} \end{table} Table 1: F1 scores obtained by BERTino and the teacher model in the Italian ISDT task. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Italian PariTUT} \\ \hline Model & F1 score & Fine-tuning time & Evaluation time \\ \hline BERTino & 0,9039 & 38’3” & 3’2” \\ Teacher model & 0,9176 & 67’2” & 5’21” \\ GilBERTo & 0,9136 & 66’33” & 5’9” \\ \hline \end{tabular} \end{table} Table 3: F1 scores obtained by BERTino and the teacher model in the Italian WikiNER task. The results reported are the average of the scores obtained in each of the 5 folds. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Multi-class sentence classification} \\ \hline Model & F1 score & Fine-tuning time & Evaluation time \\ \hline BERTino & 0,7766 & 5’4” & 6” \\ Teacher model & 0,7954 & 9’48” & 10” \\ GilBERTo & 0,7381 & 10’0” & 10” \\ \hline \end{tabular} \end{table} Table 4: F1 scores obtained by BERTino and the teacher model in the multi-class sentence classification task. \begin{table} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Italian PariTUT} \\ \hline Model & F1 score & Fine-tuning time & Evaluation time \\ \hline BERTino & 0,9193 & 1’19” & 1” \\ Teacher model & 0,9708 & 2’19” & 1” \\ GilBERTo & 0,9621 & 2’21” & 1” \\ \hline \end{tabular} \end{table} Table 2: F1 scores obtained by BERTino and the teacher model in the Italian ParTUT task.
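Finally, the student initialization described above (taking one Transformer block out of two from the teacher) can be illustrated schematically. The generic PyTorch blocks below stand in for the actual BERT layers, and the even-index selection is one natural reading of "one layer out of two"; the exact choice is an assumption.

```python
import copy
import torch.nn as nn

def init_student_from_teacher(teacher_layers: nn.ModuleList) -> nn.ModuleList:
    """Keep every second Transformer block of the teacher (here layers 0, 2, 4, ...)
    to initialize a student with half the depth."""
    kept = [copy.deepcopy(teacher_layers[i])
            for i in range(0, len(teacher_layers), 2)]
    return nn.ModuleList(kept)

# Example with placeholder blocks standing in for the 12 BERT_BASE layers:
teacher = nn.ModuleList([nn.TransformerEncoderLayer(d_model=768, nhead=12)
                         for _ in range(12)])
student = init_student_from_teacher(teacher)
assert len(student) == 6
```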
2309.11638
A survey on the semantics of sequential patterns with negation
A sequential pattern with negation, or negative sequential pattern, takes the form of a sequential pattern for which the negation symbol may be used in front of some of the pattern's itemsets. Intuitively, such a pattern occurs in a sequence if negated itemsets are absent in the sequence. Recent work has shown that different semantics can be attributed to these pattern forms, and that state-of-the-art algorithms do not extract the same sets of patterns. This raises the important question of the interpretability of sequential patterns with negation. In this study, our focus is on exploring how potential users perceive negation in sequential patterns. Our aim is to determine whether specific semantics are more "intuitive" than others and whether these align with the semantics employed by one or more state-of-the-art algorithms. To achieve this, we designed a questionnaire to reveal the intuitive semantics of each user. This article presents both the design of the questionnaire and an in-depth analysis of the 124 responses obtained. The outcomes indicate that two of the semantics are predominantly intuitive; however, neither of them aligns with the semantics of the primary state-of-the-art algorithms. As a result, we provide recommendations to account for this disparity in the conclusions drawn.
Thomas Guyet
2023-09-20T21:03:18Z
http://arxiv.org/abs/2309.11638v1
# A survey on the semantics of sequential patterns with negation Thomas Guyet\({}^{1}\) \({}^{1}\) Inria - Centre de Lyon, AlstroSight [email protected] **Abstract** A sequential pattern with negation, or negative sequential pattern [10], takes the form of a sequential pattern for which the negation symbol (\(\neg\)) may be used in front of some of the pattern's itemsets. Intuitively, such a pattern occurs in a sequence if negated itemsets are _absent_ in the sequence. Recent work [3] has shown that different semantics can be attributed to these pattern forms, and that state-of-the-art algorithms do not extract the same sets of patterns. This raises the important question of the interpretability of sequential patterns with negation. In this study, our focus is on exploring how potential users perceive negation in sequential patterns. Our aim is to determine whether specific semantics are more "intuitive" than others and whether these align with the semantics employed by one or more state-of-the-art algorithms. To achieve this, we designed a questionnaire to reveal the intuitive semantics of each user. This article presents both the design of the questionnaire and an in-depth analysis of the 124 responses obtained. The outcomes indicate that two of the semantics are predominantly intuitive; however, neither of them aligns with the semantics of the primary state-of-the-art algorithms. As a result, we provide recommendations to account for this disparity in the conclusions drawn. Keywords: pattern mining, sequential patterns, negation, interpretation, survey ## 1 Introduction Sequential pattern extraction is a classic class of data mining methods. Its objective is to extract subsequences (patterns) that frequently appear in a large dataset of sequences. A pattern is considered frequent when it appears in at least \(\sigma\) sequences, where \(\sigma\) is user-defined. For instance, consider the pattern \(\langle e\ (ca)\ d\rangle\), which indicates that "item \(e\) is followed by the itemset \((ca)\), i.e., \(c\) and \(a\) occurring simultaneously, and then by item \(d\)". In the table below, this pattern appears in 4 sequences (\(\boldsymbol{p_{0}}\), \(\boldsymbol{p_{2}}\), \(\boldsymbol{p_{3}}\), and \(\boldsymbol{p_{4}}\)). These frequent patterns can be efficiently enumerated thanks to the anti-monotonicity property of the support measure (i.e., the number of occurrences of a pattern). Intuitively, the support of a pattern decreases with the pattern's size. This property, utilized by most algorithms in the literature, prevents enumerating patterns that are larger than those known a priori not to be frequent. This trick ensures the complete exploration of the search space while keeping the algorithm efficient. \begin{table} \begin{tabular}{l l} \hline \(id\) & _Sequence_ \\ \hline \(\boldsymbol{p_{0}}\) & \(\langle e\ (caf)\ d\ b\ e\ d\rangle\) \\ \(\boldsymbol{p_{1}}\) & \(\langle c\ a\ d\ b\ e\ d\rangle\) \\ \(\boldsymbol{p_{2}}\) & \(\langle e\ (ca)\ d\rangle\) \\ \(\boldsymbol{p_{3}}\) & \(\langle d\ e\ (ca)\ b\ d\ b\ e\ f\rangle\) \\ \(\boldsymbol{p_{4}}\) & \(\langle c\ e\ b\ (fac)\ d\ e\ c\rangle\) \\ \hline \end{tabular} \end{table} Table 1: Example of a dataset containing five sequences over an alphabet of six items \(\Sigma=\{a,b,c,d,e,f\}\). Several studies [5, 7] have expanded the domain of sequential patterns by incorporating information about the absence of item occurrences.
Such patterns are termed "sequential patterns _with negation_" or "_negative sequential patterns_". Sequential patterns with negation take the form of sequential patterns in which negation symbols, \(\neg\), precede certain items. The negation symbol indicates that the specified item must be absent from a sequence for the pattern to be considered to occur. Intuitively, the pattern \(\langle a\neg b\ c\rangle\) is recognized in a sequence if the latter contains an \(a\) followed by a \(c\), and \(b\) is not present between the occurrences of \(a\) and \(c\). We advocate for a broader use of sequential patterns with negation in the process of mining datasets of sequences. This type of pattern holds particular significance for data analysts, as it has the potential to unveil meaningful insights from the absence of events. For instance, in the context of health, the non-administration of a certain drug (\(d\)) might trigger an illness (\(i\)). When analyzing a database using a conventional sequential pattern mining algorithm, frequent patterns might indicate an illness occurrence without other co-occurring events. However, in the conventional semantics of sequential patterns, the absence of other events related to the illness cannot be concluded from this pattern. Sequential patterns with negation, such as \(\langle\neg d\ i\rangle\), bring to light the frequent co-occurrence of drug absence and the occurrence of an illness. In this study, we would like to highlight possible interpretability issues of sequential patterns with negation. Indeed, since Besnard and Guyet [3] have demonstrated the existence of multiple semantics for these patterns, there is a risk of misinterpreting extracted patterns when the user and the algorithm do not share the same semantics. This concern is not solely theoretical; it manifests practically since the two state-of-the-art algorithms, eNSP [5] and NegPSpan [7], do not have the same semantics for the negation symbol [3]. As a result, the patterns extracted by each of these algorithms need to be interpreted differently by the user. Considering that a user does not necessarily seek to understand the intricacies of these patterns, we believe that the designers of pattern mining algorithms have to take care of possible misinterpretations of the outputs of their algorithms. Therefore, it is crucial to identify any possible disparity between the semantics used in an algorithm and the one that is perceived "intuitively" by users. In this article, we therefore investigate three questions: 1. Is there an "intuitive" semantics for patterns with negation? 2. Does the "intuitive" semantics correspond to the one actually employed by any of the algorithms? 3. What recommendations can be made regarding the use of patterns with negation? To address these questions, our methodology involved designing a questionnaire to uncover the intuitive semantics of potential users of pattern mining algorithms. The details of the methodology of this survey are described in Section 3. Section 5 presents the questions posed to users and makes explicit the potential alternative interpretations. The collected results from 124 participants are presented and analyzed in Section 6. We begin by introducing a brief overview of state-of-the-art algorithms for extracting sequential patterns with negation.
## 2 State-of-the-art in sequential pattern extraction with negations The first endeavor in negative pattern extraction was presented by Savasere et al. [9] in the context of itemset mining. Initial efforts toward sequential patterns with negation were made by Wu et al. [12] for association rules. Over time, several recent approaches have emerged to capitalize on advancements in pattern extraction techniques. The eNSP algorithm extracts negative patterns by leveraging set operations between sets of sequences matched by frequent sequential patterns [5]. This approach circumvents the direct enumeration of patterns with negation that leads to efficient algorithms. Since then, numerous alternatives to this algorithm have been proposed, focusing on item utility [13], repetitions [6], multiple support constraints [14], and more. Nonetheless, these methods do not rely on an antimonotonicity property of the support measure and they do not guarantee to extract all frequent patterns. An alternative to eNSP is NegPSpan[7], which employs a distinct pattern semantics to harness the antimonotonicity property. This enables efficient and complete extraction following conventional pattern mining principles. The completeness of the mining process makes the approach more reliable as it guarantees to the user to not miss interesting patterns. And the implementation benefits from decades of pattern mining research to maintain the efficiency. More recently, Wang et al. [11] introduced VM-NSP, an algorithm utilizing a vertical representation to enhance efficiency. For a comprehensive overview of recent developments in mining sequential pattern with negation, interested readers can refer to the work of Wang et al. [10]. In the initial stages, early approaches were compared without employing uniform pattern semantics. However, the recognition of distinct semantics has contributed to the clarification of the domain [3]. Specifically, eight semantics of patterns with negations have been delineated. These eight variations stem from different interpretations of the notion of non-inclusion, occurrence, and inclusion relation. These notions, detailed in Section 5, have informed the design of our questionnaire. ## 3 Survey on the Perception of Sequential Patterns with Negations The survey aims to identify the most intuitive semantics of sequential patterns with negation. The questionnaire is organized into three parts: 1. Evaluation of background knowledge in the domains of pattern mining and logic. In this part, participants are asked whether they are familiar with the concepts of pattern mining and whether they are computer scientists, logicians, or researchers. This information helps characterize potential biases within the participant group. 2. Verification of the understanding of sequential patterns (without negation) and the scope of negations. The general framework for the semantics of negative sequential patterns [3] makes assumptions about the definition of a classical sequential pattern and the intuitive scope of the negation. Two questions assess whether participants adhere to these definitions. Correct answers are required for inclusion in the survey analysis. 3. Identification of the intuitive semantics of sequential patterns with negation. This third part constitutes the core of the questionnaire. Participants are asked to determine which sequences they believe contain a given pattern (see example in Figure 1). The questions have been designed to unveil the semantics assumed by each participant. 
Thus, each participant is assigned one of the eight possible semantics. We refer to this questionnaire as revealing the intuitive semantics of participants, as they are not explicitly asked to state their preferred interpretations, but their interpretations are indirectly inferred from their answers. The questionnaire was distributed between December 2021 to March 2023. We used research mailing lists and non-research channels to collect responses from both experts and non-experts. The questionnaire is accessible via a standard web browser1. The questionnaire begins with explanations of sequential pattern concepts. It is designed to accommodate users with varying levels of mathematical comprehension by offering two versions: one employing letter notations and the other employing colored symbols. Figure 1 depicts the two alternative format to presenting a question. Footnote 1: [http://people.irisa.fr/Thomas.Guyet/negativepatterns/Survey/index.php](http://people.irisa.fr/Thomas.Guyet/negativepatterns/Survey/index.php) The questionnaire is entirely anonymous, and the collected data only include dates and answers to the questions. ## 4 General Framework We now introduce the syntax of sequential patterns with negation which restricts the general definition of sequential patterns with negation to the ones introduced by Besnard and Guyet [3]. In the following, let \([n]=1,\ldots,n\) denote the set of the first \(n\) integers, and let \(\mathcal{I}\) denote a set of items (alphabet). A subset \(A=a_{1},a_{2},\ldots,a_{m}\subseteq\mathcal{I}\) is called an _itemset_. A _sequence_\(\boldsymbol{s}\) is of the form \(\boldsymbol{s}=\langle s_{1},s_{2},\ldots,s_{n}\rangle\), where \(s_{i}\) is an itemset. **Definition 1** (Sequential pattern with negation).: _A sequential pattern with negation \(\boldsymbol{p}=\langle p_{1},\neg q_{1},\neg p_{2},\neg q_{2},\ldots,p_{n-1}, \neg q_{n-1},p_{n}\rangle\) is such that \(p_{i}\in 2^{\mathcal{I}}\setminus\emptyset\) for all \(i\in[n]\) and \(q_{i}\in 2^{\mathcal{I}}\) for all \(i\in[n-1]\). \(\boldsymbol{p}^{+}=\langle p_{1},p_{2},\ldots,p_{n}\rangle\) denotes the positive part of \(\boldsymbol{p}\)._ The semantics of patterns relies on the containment relation, which specifies how to determine whether a pattern occurs (is contained) or not in a sequence. This relation utilizes the notion of occurrence of a (positive) sequential pattern in a sequence, formally defined as follows: **Definition 2** (Occurrence of a sequential pattern).: _Let \(\boldsymbol{s}=\langle s_{1},s_{2},\ldots,s_{n}\rangle\) be a sequence and \(\boldsymbol{p}=\langle p_{1},p_{2},\ldots,p_{m}\rangle\) be a sequential pattern, \(\boldsymbol{e}=(e_{i})_{i\in[m]}\in[n]^{m}\) is an occurrence of the pattern \(\boldsymbol{p}\) in the sequence \(\boldsymbol{s}\) if \(p_{i}\subseteq s_{e_{i}}\) for all \(i\in[m]\) and \(e_{i}<e_{i+1}\) for all \(i\in[m-1]\)._ The understanding of this definition (explained at the beginning of the questionnaire) is verified through the following question. **Question 1** (Occurrence of a sequential pattern).: _Let \(\boldsymbol{p}=\langle(ca)\ d\ e\rangle\) be a sequential pattern, indicate in which sequences of Table 1 the pattern \(\boldsymbol{p}\) occurs._ The expected answers to this question are the sequences \(\boldsymbol{p_{0}}\), \(\boldsymbol{p_{3}}\), and possibly \(\boldsymbol{p_{4}}\). Sequence \(\boldsymbol{p_{0}}\) allows us to verify the understanding that \((ca)\) appears in \((caf)\) as per our definitions. 
Sequence \(\boldsymbol{p_{1}}\) verifies that all the elements of \((ca)\) appear together (and not just a subset). Sequence \(\boldsymbol{p_{2}}\) allows us to verify the understanding of the importance of the occurrence order in the sequence. Sequence \(\boldsymbol{p_{3}}\) lets us verify the understanding of the notion of a _gap_: it is possible to have itemsets in the middle of an occurrence (e.g., the occurrence of \(b\) between \(d\) and \(e\)). Lastly, the final sequence presents an itemset whose items are not ordered. If \(\boldsymbol{p_{4}}\) is not deemed to contain \(\boldsymbol{p}\), it would indicate a user's sensitivity to the order within an itemset (which is classically not the case). Likewise, the semantics of sequential patterns with negation are based on a containment relation. A pattern with negation, \(\boldsymbol{p}\), is contained in a sequence \(\boldsymbol{s}\) if \(\boldsymbol{s}\) contains a subsequence \(\boldsymbol{s}^{\prime}\) such that each positive set of \(\boldsymbol{p}\) (denoted as \(p_{i}\)) is included in an itemset of \(\boldsymbol{s}^{\prime}\) (in proper order), and all the negation constraints expressed by \(\neg q_{i}\) are also satisfied. The negation constraint on \(q_{i}\) then applies to the subsequence of \(\boldsymbol{s}^{\prime}\) located between the occurrence of the positive itemset preceding \(\neg q_{i}\) in \(\boldsymbol{p}\) and the occurrence of the positive itemset following \(\neg q_{i}\) in \(\boldsymbol{p}\). This definition determines the scope of the negation, which is specific to the framework we are working in. Ensuring that users share this definition is paramount. The subsequent question enables us to affirm this understanding. **Question 2** (Scope of the negation).: _Let \(\boldsymbol{p}=\langle c\neg d\ e\rangle\) be a pattern with negation, indicate the sequences of the table below in which, according to you, \(\boldsymbol{p}\) occurs._ Figure 1: Illustration of the two versions of the questionnaire: on the left, the classical view employing mathematical notations; on the right, the version employing colored shapes tailored for non-expert users. The use of colors and shapes provides redundancy while also catering to color-blind individuals. \begin{tabular}{l l} \hline \(id\) & _Sequence_ \\ \hline \(\mathbf{s_{0}}\) & \(\langle f\ f\ c\ b\ d\ a\ e\rangle\) \\ \(\mathbf{s_{1}}\) & \(\langle f\ c\ b\ f\ a\ e\rangle\) \\ \(\mathbf{s_{2}}\) & \(\langle b\ f\ c\ b\ a\rangle\) \\ \(\mathbf{s_{3}}\) & \(\langle b\ c\ b\ e\ d\rangle\) \\ \(\mathbf{s_{4}}\) & \(\langle f\ a\ c\ e\ b\rangle\) \\ \hline \end{tabular} In this question, it seems reasonable to consider that \(\mathbf{p}\) occurs in \(\mathbf{s_{1}}\), \(\mathbf{s_{3}}\) (since \(d\) is outside the assumed scope of the negation), and \(\mathbf{s_{4}}\). Participants who do not tick \(\mathbf{s_{4}}\) likely interpret the constraint \(\neg d\) as referring to the occurrence of an element other than \(d\) (which is not consistent with the definitions proposed above). If \(\mathbf{p_{0}}\) is deemed to contain \(\mathbf{p}\), it is likely that the constraint \(\neg d\) is understood to strictly follow \(c\) which is not a situation considered in our framework. ## 5 Questions on the semantics of negations In this section, we take up the questions of the third part of the questionnaire and we explain the different interpretations revealed by the answers given by the participants. There are three questions. 
Each question is dedicated to one dimension of the semantics of negative sequential pattern, and they cover all dimensions identified in [3]. ### Itemset non-inclusion **Question 3**.: _Let \(\mathbf{p}=\langle d\ \neg(af)\ b\rangle\) be a sequential pattern with negation, indicate the sequences of the table below in which, according to you, \(\mathbf{p}\) occurs._ \begin{tabular}{l l} \hline \(id\) & _Sequence_ \\ \hline \(\mathbf{i_{0}}\) & \(\langle e\ e\ d\ a\ b\ e\rangle\) \\ \(\mathbf{i_{1}}\) & \(\langle d\ (af)b\ c\rangle\) \\ \(\mathbf{i_{2}}\) & \(\langle e\ d\ (fc)\ b\rangle\) \\ \(\mathbf{i_{3}}\) & \(\langle e\ c\ d\ (ec)\ b\rangle\) \\ \(\mathbf{i_{4}}\) & \(\langle d\ (fa)\ b\ e\rangle\) \\ \hline \end{tabular} This question is designed to unveil the interpretation of the inclusion relation between itemsets. Each sequence in the table contains the positive part of the pattern, \(\mathbf{p}^{+}=\langle d\ b\rangle\), with only one itemset between the occurrences of \(d\) and \(b\). These sequences prompt inquiry into the non-inclusion of the \((af)\) itemset in \(a\), \((af)\), \((fc)\), \((ec)\), or \((fa)\). If a participant ticks the sequences \(i_{0}\), \(i_{2}\), and \(i_{3}\), we can deduce that they regard the presence of at least one element of the itemset \((af)\) to "validate" the negation. This is referred to as "partial non-inclusion". On the other hand, if only sequence \(\mathbf{i_{3}}\) is ticked, it suggests that the participant considers that all items in the itemset must be present to "validate" the negation. This is referred to as "total non-inclusion". Additionally, sequence \(\mathbf{i_{4}}\) is included to examine whether the order of items in the itemset matters to participants and whether their response aligns with their answer to sequence \(\mathbf{p_{4}}\) in Question 1. More formally, this question discriminates between two choices of inclusion between two itemsets, \(P\) and \(I\): * Partial non-inclusion: \(P\not\subseteq_{G}I\Leftrightarrow\exists e\in P\), \(e\notin I\) * Total non-inclusion: \(P\not\subseteq_{D}I\Leftrightarrow\forall e\in P,e\notin I\) Partial non-inclusion means that \(P\setminus I\) is non-empty, while total non-inclusion means that \(P\) and \(I\) are disjoint. In the following, the symbol \(\not\subseteq_{*}\) denotes a relation of non-inclusion between itemsets, either \(\not\subseteq_{G}\) or \(\not\subseteq_{D}\). ### Embedding of a pattern with negation **Question 4** (Embedding of a pattern with negation).: _Let \(\mathbf{p}=\langle f\ \neg(ea)\ d\rangle\) be the sequential pattern with negation, indicate the sequences from the table below in which, according to you, \(\mathbf{p}\) occurs._ The form of the pattern \(\mathbf{p}=\langle f\ \neg(ea)\ d\rangle\) mirrors that of the previous question, differing by a permutation of letters. Each sequence in the table contains the positive part of \(\mathbf{p}\), i.e. \(\langle f\ d\rangle\). The primary difference is that there are multiple itemsets between the occurrences of \(f\) and \(d\). Participants must decide which itemset(s) of the sequence to compare with the negated itemsets of the pattern. First and foremost, we anticipate participants to deduce that \(\mathbf{p}\) occurs in \(\mathbf{e_{3}}\) (there is clearly neither \(e\) nor \(a\) here) but that \(\mathbf{p}\) does not occur in \(\mathbf{e_{2}}\) (the itemset \((ea)\) is found in the scope of the negation). The sequence that unveil the participant semantics is \(\mathbf{e_{1}}\). 
Notably, this sequence comprises both elements of the negated itemset (\(e\) and \(a\)), but in two separated itemsets of the sequence. The participant who does not tick it (i.e. he/she considers that \(e\) does not occur in \(e_{1}\)) uses the notion of "soft-embedding": \(e\) and \(a\) would have to appear together to "validate" the negation (as in the case of \(e_{2}\)). The participant who ticks it consider that the negation constraint applies across the entire set of itemsets within the negation's scope. The interpretation is termed _strict-embedding_. Furthermore, \(\mathbf{e_{0}}\) unveils the notion of non-inclusion discussed earlier: in the case of partial non-inclusion, \(\mathbf{p}\) occurs in \(\mathbf{e_{0}}\), but not if we consider a total non-inclusion. Thus, this sequence serves to assess the consistency of responses. Two interpretations have been distinguished: strict- and soft-embeddings. They can be formally defined as follows: Let a sequence \(\mathbf{s}=\langle s_{1},\ldots s_{n}\rangle\) and a pattern with negation \(\mathbf{p}=\langle p_{1},\ldots\ \neg q_{1},\ldots\ \neg q_{m-1}\ p_{m}\rangle\). We say that \(\mathbf{e}=(e_{i})_{i\in[m]}\in[n]^{m}\) is a soft-embedding of \(\mathbf{p}\) in the sequence \(\mathbf{s}\) iff: * \(p_{i}\subseteq s_{e_{i}}\) for all \(i\in[m]\) * \(q_{i}\not\subseteq_{*}s_{j},\ \forall j\in[e_{i}+1,e_{i+1}-1]\) for all \(i\in[m-1]\) We say that \(\mathbf{e}=(e_{i})_{i\in[m]}\in[n]^{m}\) is a strict-embedding of \(\mathbf{p}\) in the sequence \(\mathbf{s}\) iff: * \(p_{i}\subseteq s_{e_{i}}\) for all \(i\in[m]\) * \(q_{i}\not\subseteq_{*}\bigcup_{j\in[e_{i}+1,e_{i+1}-1]}s_{j}\) for all \(i\in[m-1]\) Intuitively, the soft-embedding considers the non-inclusion of \(q_{i}\) for each of the itemsets within the positional range \([e_{i}+1,e_{i+1}-1]\) while the strict-embedding considers the non-inclusion across the union of the itemsets at those same positions. The interval corresponds to the itemsets of the sequence that lie strictly between the occurrences of the itemsets surrounding \(q_{i}\). ### Multiple occurrences **Question 5** (Multiple occurrences of a pattern with negation).: _Let \(\mathbf{p}=\langle b\ \neg e\ f\rangle\) be a negative sequential pattern, indicate the sequences of the table below in which, according to you, \(\mathbf{p}\) occurs._ \begin{tabular}{l l} \hline _id_ & _Sequence_ \\ \hline \(\mathbf{o_{0}}\) & \(\langle b\ a\ f\ d\ b\ d\ f\rangle\) \\ \(\mathbf{o_{1}}\) & \(\langle b\ a\ f\ d\ e\ b\ d\ f\rangle\) \\ \(\mathbf{o_{2}}\) & \(\langle d\ b\ e\ c\ a\ d\ f\ b\ d\ e\ f\rangle\) \\ \(\mathbf{o_{3}}\) & \(\langle b\ a\ f\ b\ a\ e\ f\rangle\) \\ \hline \end{tabular} In this question, each sequence contains multiple occurrences of the positive part of the pattern, \(p^{+}=\langle b\ f\rangle\). Notably, there are even non-nested occurrences of \(\langle b\ f\rangle\) in each sequence to underscore this. Given that the negation constraint pertains only to the item \(e\), whatever the choices of non-inclusion and embedding interpretations, the question centers on the interpretation of these multiple occurrences. Two alternative interpretations are anticipated: * The first interpretation considers that once an occurrence of the positive part, \(\langle b\ f\rangle\), fulfills the negation constraint, the sequence contains the pattern. This is termed a "weak occurrence". Ticking sequences \(\boldsymbol{o_{0}}\), \(\boldsymbol{o_{1}}\), and \(\boldsymbol{o_{3}}\) indicates alignment with this interpretation. 
* The second interpretation holds that if any occurrence of the positive part fails to satisfy the negation constraint, the sequence does not contain the pattern. This is termed a "strong non-occurrence". In Question 5, participants subscribing to this view solely ticked \(\boldsymbol{o_{0}}\), as all other sequences possess at least one occurrence of \(\langle b\ f\rangle\) with an interstitial \(e\). However, sequence \(\boldsymbol{o_{1}}\) might pose a challenge for those with this interpretation. It contains two minimal occurrences [8] of \(\langle b\ f\rangle\) that meet the negation constraint, alongside an occurrence involving the first \(b\) and the last \(f\) which does not satisfy the negation constraint.2 This subtlety may be difficult to detect for those unfamiliar with sequences. Hence, it is advisable to assess the interpretation solely based on the absence of \(\boldsymbol{o_{3}}\). When the participant ticks \(\boldsymbol{o_{1}}\), we assign to him/her a specific attention to minimal occurrences. Footnote 2: In sequence mining, a minimal occurrence [8] is an occurrence of a pattern whose extent within the sequence does not contain another occurrence of the same pattern. For instance, in the sequence \(\langle b\ b\ f\rangle\), the blue occurrence of \(\langle b\ f\rangle\) is minimal, but not the red one. Finally, the three dimensions of interpretation for negation combine to establish eight distinct semantics, each characterized by its containment relations as studied in [3]. The three questions above were strategically crafted to individually delve into each of the three dimensions underlying the semantics of sequential patterns with negation. Notably, this approach illustrates how the question design facilitates the assignment of a specific semantics to a participant based on their provided responses. ## 6 Analysis of the questionnaire answers By the conclusion of the survey period, we had amassed 124 fully completed questionnaires. Participants' self-assessed expertise in pattern mining is distributed as follows: 40 novices (level 0), 54 with knowledge of data science (level 1), and 27 who identified themselves as familiar with pattern mining (level 3). In terms of background, 79 participants identified themselves as computer scientists, 82 as researchers, and 23 as logicians. The average number of attempts made to comprehend the notion of pattern occurrence was \(1.27\pm 0.49\), with attempts ranging from 1 to 5. Notably, 102 participants answered correctly on their initial attempt. It is worth noting that among the participants with knowledge of data analysis (out of 24), 6 requires more than one attempt to arrive at the correct answer. The objective of the questionnaire analysis is to identify clusters of individuals who selected the same answers, i.e., who have the same intuitive semantics of sequential patterns with negation. This process unfolds in two stages: 1. Initially, we analyze the results question by question, focusing individually on each dimension of the semantics of negative sequential patterns 2. The analysis is then complemented by a global analysis of semantics. In the preceding section, we determined the expected responses for each question. We propose the utilization of formal concept analysis (FCA) to achieve a comprehensive overview of the outcomes. FCA is a data analysis technique that identifies concepts within a dataset. 
Each concept is defined by its intention, which represents the set of selected answers, and its extension, which enumerates all individuals who select those answers. These extracted concepts are "closed", meaning that their extension is maximal for their intention, and vice versa. FCA empowers us to succinctly represent the answers in a concept lattice. Through this lattice, we visualize all subgroups of individuals who provided identical answers. FCA has previously found application in questionnaire analysis [1]. For our practical implementation, we employed the GALACTIC tool [4] to construct our lattices. ### Analysis of each dimension of semantics In this section, we analyze the responses to questions 2 to 5. It should be noted that participants are required to answer Question 1 correctly to proceed with the questionnaire. As a result, the analysis of answers to this question may not be significant. First, we focus on the answers to the question regarding the scope of negations. Subsequently, we delve into the analyze of the three dimensions of the semantics of patterns with negation: the non-inclusion of itemsets, embeddings, and multiple occurrences. Tables 2 to 5 provide a synthetic account of each of the interpretations. Furthermore, Figures 2 to 4 depict the concept lattices obtained for each of these questions to give a more global picture of the responses. Regarding the scope of the negations, 101 participants provided answers that corresponded with the expected understanding of negation scope (see Table 2). It is interesting to note that 9 people who selected \(\mathbf{s_{1}}\) and \(\mathbf{s_{3}}\) did not select \(\mathbf{s_{4}}\). This discrepancy suggests that, for them, negating an itemset means negating the event itself, rather than negating the presence of the event.3 The remaining marginal differences (14 people) are assumed to be omissions or errors. As their grasp of the scope of negation might differ, these individuals were excluded from further results analysis, ensuring the interpretability of the responses. Therefore, the further analysis is based on 110 completed answers. Footnote 3: NB: In the following questions, all sequences have at least one “neutral” event where an itemset with negation is expected. Regarding the non-inclusion of itemsets (Table 3 and Figure 2), we can observe that the majority of participants (100) selected the response triple \(\mathbf{i_{0}}\), \(\mathbf{i_{2}}\), and \(\mathbf{i_{3}}\) aligning with the interpretation of partial non-inclusion (concept $8 in Figure 2). Only 3 people considered the total non-inclusion \begin{table} \begin{tabular}{l c c} \hline **Scope** & **Count** & **Percentage** \\ \hline Conform & 101 & 81.4\% \\ Conform except \(\mathbf{s_{4}}\) & 9 & 7.3\% \\ Alternative & 14 & 11.3\% \\ \hline \end{tabular} \end{table} Table 2: Results on the question of the scope of negation. Figure 2: Concepts extracted from the answers to Question 3: non-inclusion of an itemset. Each concept is illustrated by a box containing different elements: the generators on an orange background (representing possible answers to the questions), and the prototypes on a green background. The size of the extension is indicated with a #. Each concept indicates the intention as a set of ticked sequences (refer to the tables presented in the examples). 
In the responses to the questions, i0 indicates that the participant ticked the sequence \(\mathbf{i_{0}}\), and ni1 (prefixed with n) indicates that the participant _did not_ tick the sequence \(\mathbf{i_{1}}\). interpretation. An interesting observation pertains to the 22 participants who considered that the sequence \(\mathbf{i_{4}}\) contains the pattern. They believe that \((fa)\) is not incompatible with \((af)\). These participants spanned across varying levels of expertise: 8, 11, and 3, respectively, for levels 0, 1, and 2. Unsurprisingly, people knowledgeable in pattern mining (level 2) are, in proportion, less represented among people who are inclined to differentiate between \((fa)\) and \((af)\). Moving on to the analysis of the embeddings (Table 4 and Figure 3), the sequence \(\mathbf{e_{1}}\) allows us to distinguish the participants' intuition. For Table 4, we also ensure that the answers are correct for \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\); otherwise, we categorize the answer as "other". Once again, we observe a pronounced trend in the results. 97 participants subscribed to the soft-embedding interpretation (Concept $7 in Figure 3). Concept $3 corresponds to individuals who did not select \(\mathbf{e_{1}}\), indicative of a strict-embedding interpretation. Lastly, regarding the analysis of the inclusion relations (Table 5 and Figure 4), two balanced groups of participants emerge. 75 participants have exclusively identified the three sequences corresponding to the notion of a weak occurrence. They are represented by Concept $3 of Figure 4. On the other hand, 31 participants exclusively selected the sequence \(\mathbf{o_{0}}\) (Concept $1). The latter group preferred the interpretation of a strong occurrence. Among these 31 participants, 15 did not select the \(\mathbf{o_{1}}\) sequence, while 16 did (Concept $2). The latter group tends to align more with the notion of minimal occurrence. ### Global semantics analysis Questions 3 to 5 assign each participation to an interpretation of one of the three dimensions that constitute the semantics of a pattern with negation (according to the framework of Besnard and \begin{table} \begin{tabular}{l c c} \hline \hline **Interpretation** & **Count** & **Percentage** \\ \hline Strict-embedding & 97 & 88.2\% \\ Soft-embedding & 7 & 6.3\% \\ Other & 6 & 5.5\% \\ \hline \hline \end{tabular} \end{table} Table 4: Responses to the question of embeddings. Figure 3: Concepts extracted from responses to the Question 4 relating to embeddings (see Figure 2 for legend details). \begin{table} \begin{tabular}{l c c} \hline \hline **Interpretation** & **Count** & **Percentage** \\ \hline Partial non-inclusion & 100 & 90.9\% \\ Total non-inclusion & 3 & 2.7\% \\ Other & 7 & 6.4\% \\ \hline \hline \end{tabular} \end{table} Table 3: Responses to the question of non-inclusions (number and percentage). Guyet [3]. We now investigate if there are dominant semantics (combinations of interpretation choices for the three dimensions) among the eight possibilities. Figure 5 provides a summary of the survey responses. It presents the concept lattice that represents the semantics of patterns with negation. The five prototypes at the bottom level describe the five semantics (and their representation in the data) that the participants used. Among the 110 participants, 96 were assigned an intuitive semantic by the questionnaire. The remaining 14 participants had at least one question for which no clear interpretation was identified. 
These individuals are categorized in the intermediate concepts (prototypes $5, $6, and $10; generator $15; and concepts $3, $9, and $13). One noteworthy observation is that a significant proportion of participants can be attributed a semantic, suggesting that the same individuals likely provided "alternative" answers to different questions. This outcome reinforces the reliability of the collected answers. Furthermore, the figure highlights the main finding of this study: there are two primary intuitively used semantics: * the first is partial non-inclusion, with soft-embedding and strong-occurrences, accounting for 23.9% of participants, and * the second is partial non-inclusion, with soft-embedding and weak-occurrences, accounting for 69.8% The representation of the other semantics is marginal. Additionally, we sought to compare the populations defined by their choice of semantics by analyzing their responses to profile questions. To do this, we conducted a statistical test to compare the distributions of expertise levels using Student's t-test. The results show no significant difference between the groups. In conclusion, we find that the intuition of a semantics is not inherently linked to a particular expertise in computer science or data science. Figure 4: Concepts extracted from responses to Question 5 relating to multiple occurrences (see Figure 2 for legend details). \begin{table} \begin{tabular}{l c c} \hline **Interpretation** & **Count** & **Percentage** \\ \hline Weak occurrence & 75 & 69.2\% \\ Strong occurrence & 31 & 28.2\% \\ Other & 4 & 3.6\% \\ \hline \end{tabular} \end{table} Table 5: Responses to the question of multiple occurrences. ## 7 Preferred semantics vs state-of-the-art algorithms As a preliminary summary, the analyses reveal the absence of a single shared semantic among participants, but rather the presence of two dominant semantics. These results prompt a comparison with the choices made by two prominent algorithms in the field: * eNSP employs total non-inclusion, with soft-embedding and strong-occurrences * NegPSpan employs total non-inclusion, with soft-embedding and weak-occurrences Firstly, neither of the algorithms aligns with the participants' intuitive understanding, as both rely on total non-inclusion of itemsets, whereas partial non-inclusion appears to be the most intuitive. One possible explanation for this algorithmic choice is that partial non-inclusion is anti-monotonic, while the total non-inclusion is monotonic. The latter is less straightforward to exploit algorithmically. Therefore, the most intuitive semantics may not be the most suitable from an algorithmic perspective. In practice, this raises concerns about potential misinterpretation of patterns extracted by these state-of-the-art algorithms. Without explicitly defining their semantics, the results of this study indicate that the patterns will be interpreted differently from the intended interpretation used for their extraction. This poses a significant challenge for the practical use of these algorithms. In light of these findings, several recommendations emerge: 1. **Singleton-only negations**: Consider limiting negations to singletons only. This adjustment would make partial and total non-inclusions equivalent, potentially reducing confusion and aligning better with participants' intuition. 2. **Algorithmic Adaptations**: Develop alternative algorithms tailored to the partial non-inclusion semantics. 
While these adaptations are algorithmically feasible, their computational performance should be rigorously compared to existing algorithms to assess their efficiency and competitiveness. Given that NegPSpan adheres more closely with the intuition of a larger number of participants, consider favoring the extension and utilization of the NegPSpan algorithm. 3. **Distinct Syntaxes**: Promote the adoption of distinct syntaxes for each semantic interpretation. This approach can help differentiate and avoid confusion between different interpretations. This recommendation serves as a practical solution to address the challenges faced by the pattern mining community regarding sequential patterns with negations. Figure 5: Concepts extracted from the attributions made for each dimension. While preferred semantics have been identified through our survey, we recognize that all semantics might have their uses depending on the data context. Resolving this challenge might involve designing algorithms capable of extracting various types of negative sequential patterns. This avenue has been explored in [2] using a declarative pattern mining framework, although scalability to large datasets remains a limitation. ## 8 Discussion In this part, we discuss the methodology employed for conducting the survey. However, it's important to acknowledge several limitations associated with our approach. Firstly, the survey encompassed only a limited number of questions that enabled a precise profiling of participants. Consequently, our understanding of whether the surveyed population accurately represents potential users of pattern mining algorithms remains constrained. Additionally, the questionnaire was primarily disseminated through academic channels, which may introduce bias in the responses. A second limitation of the questionnaire is the lack of redundancy in the questions. Each dimension of the semantics of patterns with negation is addressed by only one question. This approach may be prone to errors. We chose to have a shorter questionnaire without repeating questions in order to prevent from early abandon of the participant and to maximize the number of complete answers. This was effectively the case because 100% of the answers were complete. Furthermore, redundant question might be prone to inconsistent answers that would lead to discard them. Then, we designed the questionnaire to separate the different dimensions as much as possible to avoid ambiguity in the analysis of results. The third limitation pertains to the relatively modest number of collected responses. Acquiring 124 completed questionnaires spanned several months, and an increased number of participants would have necessitated alternative dissemination strategies. Nonetheless, considering the nature of the questions and the results, we deemed this sample size to be sufficient for statistically significant analysis. Notably, the substantial disparities observed in the outcomes substantiate the validity of our findings. The quality of the collected responses is buttressed by two questions: a preliminary eliminatory question and a second question on the scope of negation, which were used to filter out participants who could bias the results. The very low number of such participants indicates that the response set is of good quality, suggesting that participants answered the questions conscientiously. 
Another potential bias of this questionnaire is the presentation of basic notions of sequential patterns, which may have influenced certain responses over others. It is noteworthy that the questions on non-inclusion and embedding exhibited low diversity of responses. We expected a more varied perception of the notion of non-inclusion of itemsets, but this diversity was not reflected in the participant panel. Considering the diversity observed in the responses to the multiple occurrence question, we believe that if there was significant heterogeneity in the previous questions, it would have emerged in the questionnaire responses. Among the presentation biases, the use of symbols (rather than letters) in the questionnaire format was reported as interesting by some participants. Using letters assumes an order in the items that does not exist. In practice, we observed that only 22.6% of participants were sensitive to the item order. The use of geometric symbols better captures the idea of set without order. Unfortunately, we did not collect information on the graphic mode of the participant used, so we cannot test this hypothesis. Lastly, the questionnaire is closely aligns with the analysis framework proposed by Besnard and Guyet [3], which makes specific assumptions about the syntax and semantics of patterns with negation. Two crucial assumptions revolve around insensitivity to item order within an itemset and the scope of negation. The latter assumption saw 11.3% of participants responding differently than anticipated. As we excluded these individuals from the analysis, it does not affect the conclusions, but it raises questions about the "intuition" held by these people. Further in-depth interviews could shed light on this matter. A third hypothesis pertains to the syntax of patterns with negation. A more comprehensive study could explore more extensive syntaxes, such as allowing consecutive negations or negations at the beginning or end of a pattern. While such possibilities are inherent in some state-of-the-art pattern extraction algorithms, they were not explored in this study. Conclusion This paper delves into the semantics of sequential patterns with negation from the perspective of potential users of algorithms that extract such patterns. Prior research has highlighted the inherent ambiguity in the notations employed for these patterns [3]. Our primary objective was to determine whether the patterns extracted by state-of-the-art algorithms could potentially lead to misinterpretation by users. To address this question, we conducted a survey targeting potential users with diverse profiles. The goal of the survey was to understand which of the identified semantics were preferred by or intuitive for the users. Analysis of the questionnaire responses, which involved 124 participants, revealed that two semantics dominate within the panel. A first important result is that there is no universally shared intuitive semantics among the participants. The second significant outcome underscores the discrepancy between user intuitive semantics and the semantics used in state-of-the-art pattern extraction algorithms with negation, such as eNSP and NegPSpan. As the partial non-inclusion arises when negation involves sets of items (e.g., \(\neg(ab)\)), patterns incorporating this form of constraint warrant special attention to ensure optimal user comprehension. 
Furthermore, the substantial majority preference (approximately 69% of participants) is for weak-embeddings, aligning with the choice made by the NegPSpan algorithm. This semantics also exhibits antimonotonicity properties when negations are restricted to singletons. Based on these findings, we offer the following recommendations for sequential pattern extraction methods with negation: * Limit the use of item set negation and prioritize item negation instead, * Alternatively, explore the extension of the NegPSpan algorithm, as its inclusion relation semantics aligns with the majority intuition, * Promote the use of specific syntaxes for each semantics in order to avoid confusion.
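To make the disparity addressed by these recommendations concrete, the following Python sketch (ours, not part of the survey; it is restricted to singleton negations, for which partial and total non-inclusion coincide) contrasts the weak-occurrence and strong-occurrence readings of the pattern \(\langle b\ \neg e\ f\rangle\) on the sequences of Question 5. These two readings correspond to the occurrence dimension that, according to Section 7, separates NegPSpan from eNSP.

```python
# Minimal sketch (ours): the two readings of multiple occurrences for the
# singleton-negation pattern <b  not-e  f>, evaluated on the Question 5 sequences.

def embeddings(seq, a, b):
    """All index pairs (i, j), i < j, with a in seq[i] and b in seq[j]."""
    return [(i, j) for i in range(len(seq)) if a in seq[i]
                   for j in range(i + 1, len(seq)) if b in seq[j]]

def gap_is_clean(seq, i, j, neg):
    return all(neg not in seq[k] for k in range(i + 1, j))

def weak_occurrence(seq, a, neg, b):
    """Pattern occurs if SOME occurrence of <a b> satisfies the negation."""
    return any(gap_is_clean(seq, i, j, neg) for i, j in embeddings(seq, a, b))

def strong_occurrence(seq, a, neg, b):
    """Pattern occurs only if EVERY occurrence of <a b> satisfies the negation."""
    embs = embeddings(seq, a, b)
    return bool(embs) and all(gap_is_clean(seq, i, j, neg) for i, j in embs)

sequences = {                                 # itemsets reduced to single items
    'o0': 'b a f d b d f'.split(),
    'o1': 'b a f d e b d f'.split(),
    'o2': 'd b e c a d f b d e f'.split(),
    'o3': 'b a f b a e f'.split(),
}
for name, items in sequences.items():
    seq = [{x} for x in items]
    print(name, weak_occurrence(seq, 'b', 'e', 'f'), strong_occurrence(seq, 'b', 'e', 'f'))
# The weak reading accepts o0, o1 and o3; the strong reading accepts only o0.
```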
2302.14432
Gap engineering and wave function symmetry in C and BN armchair nanoribbons
Many are the ways of engineering the band gap of nanoribbons including application of stress, electric field and functionalization of the edges. In this article, we investigate separately the effects of these methods on armchair graphene and boron nitride nanoribbons. By means of density functional theory calculations, we show that, despite their similar structure, the two materials respond in opposite ways to these stimuli. By treating them as perturbations of a heteroatomic ladder model based on the tight-binding formalism, we connect the two behaviours to the different symmetries of the top valence and bottom conduction wave functions. These results indicate that opposite and complementary strategies are preferable to engineer the gapwidth of armchair graphene and boron nitride nanoribbons.
Elisa Serrano Richaud, Sylvain Latil, Hakim Amara, Lorenzo Sponza
2023-02-28T09:21:28Z
http://arxiv.org/abs/2302.14432v2
# Impact of edge morphology and chemistry on nanoribbons' gapwidth ###### Abstract In this work, we scrutinise theoretically how the gap of C and BN armchair nanoribbons changes upon variations of the bond length between edge atoms and their distance from passivating species. Our DFT calculations indicate that the gap of C-based nanoribbons is more sensitive to the relaxation of the bonding length between edge atoms (morphology) whereas in BN-nanoribbons it is more sensitive to the distance between edge atoms and passivating hydrogens (chemical environment). To understand the origin of these two different behaviours, we solved a tight-binding ladder model numerically and at the first-order perturbation theory, demonstrating that the different dependence is due to the interference of the wavefunctions of the top valence and the bottom conduction states. ## I Introduction In recent decades, graphene and hexagonal boron nitride (BN) have attracted a great deal of interest because of their remarkable transport and optical properties [1; 2; 3; 4; 5]. A much explored way to modulate them is by adding extra confinement (as in 2D quantum dots, nanoribbons or nanotubes). The presence of confining edges endows them with novel size-dependent features dominated by the characteristics of the edge itself. This is why graphene and BN nanoribbons are often classified according to their edge shape, which can be zig-zag, armchair, fall in an intermediate chiral angle, or present structures that require a more general nomenclature [6]. In zig-zag nanoribbons, well localised edge-state are formed which confer antiferromagnetic properties to C-based zig-zag nanoribbons [6; 7; 8; 9; 10; 11; 12]. Instead, BN-based zig-zag nanoribbons have an indirect gap and display an intrinsic dipole moment [13; 14; 15; 16; 17; 18; 19]. At variance, both graphene [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] and BN [14; 15; 16; 17; 18] armchair nanoribbons (AGNR and ABNN), have no magnetic states and display a direct size-dependent gapwidth To take full advantage of this richness of properties, several methods have been explored including the application of external electromagnetic fields [9; 10; 14; 18; 27], strain [17; 24; 28] and edge engineering [17; 19; 21; 22; 23; 24; 25; 26; 29]. As a matter of fact, the edge characteristics are crucial for the performances of nanoribbons-based devices such as transistors, interconnects and logical devices [23; 29; 30; 31; 32; 33], photovoltaic applications [33; 34], or chemical sensing [35; 33]. Experimentally, edge engineering [34; 36; 37], chemical treatment [38] or selective passivation [29] have been demonstrated to have a significant impact on the device quality, precisely because of their action on the edges. Alterations of the electronic structure due to edge modifications can be divided into morphology effects (variation of the bondlengths) and chemistry effects (variation of the passivating species and their distance from the edges) [6; 26]. The sensitivity of AGNR and ABNNR gap to the passivation has been investigated by many authors [6; 17; 19; 21; 22; 23; 24; 25; 26; 29] who showed that its effect depends on the type of atoms involved, and/or on the number and position of the passivated sites. 
In most of these first-principle studies, the investigation stops at a stability level and the relation to the gapwidth is not explored. However, both effects seem to be decisive in determining the gap of nanoribbons and we deemed that the subject deserved a more focused study. In this article, we employ density functional theory (DFT) to study the evolution of the gap, the top valence (TV) and the bottom conduction (BC) states of AGNRs and ABNNRs as a function of the nanoribbon size upon variations of the distance between edge atoms and between these and the passivating species. Our objective is to compare the effect of morphological and chemical variations on the gapwidth and understand which of them is dominant and in which situation.
We demonstrate that the response of the gapwidth to changes of the distance between edge atoms (morphology) or between edge atoms and passivating atoms (chemical environment) is opposite in the two materials and we rationalise this different behaviour by means of a tight-binding model which we solved both numerically and perturbatively. ## II Structural and computational details All nanoribbons studied in this article have armchair edges passivated with H atoms. They form an infinite periodic structure in the \(y\) direction and are confined along \(x\). The extension of the periodic cell along \(y\) is the cell parameter \(a\), while the width is expressed by the number \(N_{a}\) which indicates the number of dimers aligned along \(y\) inside the unitary cell (number of rows). To indicate a specific structure we will attach the index \(N_{a}\) after the label of the material, as in Figure 1, so for instance AGNR5 designates an armchair graphene nanoribbon of size \(N_{a}=5\). Density functional theory calculations were carried out within the generalized gradient approximation using the PBE [41] exchange correlation potential as implemented in the Quantum ESPRESSO [42] simulation package. Long-range van der Waals corrections were included via the DFT-D2 method [43]. To avoid interactions between consecutive cells, we included 15 A and 20 A of empty space in the \(z\) and \(x\) directions respectively. In electron density calculations and relaxation runs, the periodic axis was sampled with 20 k-points centered in \(\Gamma\) (corresponding to 11 irreducible k-points). This mesh was dense enough to converge total energies in the smallest nanoribbons. For density of states (DOS) calculations, a five times denser sampling was adopted for all systems and the resulting spectra have been broadened with a Gaussian distribution with a width of 0.02 eV. We used norm-conserving pseudopotentials [44] and set the kinetic energy cutoff at 80 Ry in both materials. It is worth stressing that using a large vertical empty space and a high energy cutoff is essential even in the relaxation runs in order to prevent nearly free-electron states from hanging below the \(p_{z}\) states hence jeopardizing the gap description. In fact, as already well known for free-standing layers [45; 46; 47; 48; 49] and nanotubes [50; 51; 52] in BN nanomaterials there is a competition at the bottom conduction between \(2p_{z}\) and \(3s\) states, whose right alignment requires a dedicated convergence study. If sometimes one can overlook this issue in BN layers, because the two competing states originate direct and indirect band gaps, this is not the case in ABNNRs where both states give rise to a direct gap at \(\Gamma\). In non-relaxed structures, all atoms occupy the sites of a regular honeycomb lattice with an inter-atomic distance of 1.42 A. Structural relaxation runs have been performed with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm for all systems with the stopping criterion of all forces being lower than \(5\times 10^{-5}\) eV/A. We allowed variations of the cell parameter \(a\) and all atomic positions. As clarified in the following, we also run some calculations letting only specific atoms to move. In Figure 1 we report the relaxed structures of AGNR and ABNNR at \(N_{a}=5\) for sake of example, and we introduce some notable structural parameters. 
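To make these structural conventions concrete (the ribbon is periodic along \(y\) with cell parameter \(a\), confined along \(x\), and counts \(N_{a}\) rows of dimers with the two edge rows passivated by hydrogen), the Python sketch below builds the ideal, non-relaxed armchair geometry with all first-neighbour distances set to 1.42 A. It is an illustration of ours rather than the authors' input generator: the function name, the default H distance of 1.09 A and the way vacuum is attached to the cell are assumptions.

```python
# Illustrative sketch (not the authors' scripts): ideal armchair-nanoribbon
# coordinates for N_a rows of dimers, periodic along y, confined along x.
import numpy as np

def armchair_ribbon(n_a, d=1.42, d_h=1.09):
    """Ideal H-passivated armchair ribbon: N_a rows of dimers, periodic along y."""
    dx = np.sqrt(3.0) / 2.0 * d        # spacing between adjacent rows along x
    a_cell = 3.0 * d                   # cell parameter along the periodic y direction
    symbols, pos = [], []
    for j in range(n_a):
        x = j * dx
        y_low = 0.0 if j % 2 == 0 else 1.5 * d    # lower atom of the dimer in this row
        for dy, lab in ((0.0, 'C1'), (d, 'C2')):  # use ('B', 'N') for BN ribbons
            symbols.append(lab)
            pos.append((x, y_low + dy, 0.0))
        if j in (0, n_a - 1):                     # passivate the two edge rows
            s = -1.0 if j == 0 else 1.0           # missing neighbour sits at -x or +x
            for y_c, sy in ((y_low, -1.0), (y_low + d, 1.0)):
                pos.append((x + s * np.sqrt(3.0) / 2.0 * d_h, y_c + sy * 0.5 * d_h, 0.0))
                symbols.append('H')
    pos = np.array(pos)
    width = pos[:, 0].max() - pos[:, 0].min()
    cell = (width + 20.0, a_cell, 15.0)           # ~20 A vacuum along x, ~15 A along z
    return symbols, pos, cell

symbols, positions, cell = armchair_ribbon(n_a=8)
print(symbols.count('H'), "H atoms,", len(symbols) - symbols.count('H'), "ribbon atoms")
```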
In the AGNRs, the main modifications with respect to non-relaxed structures are a contraction of the distance between edge atoms \(d_{E}\) and between C and H \(d_{HC}\). In ABNNR, we observe a similar contraction of the B-N distance on the edges \(d_{E}\), and different contractions of the distances between H-B and H-N (\(d_{HB}\neq d_{HN}\)). We observed also that these modifications are basically independent on the size of the nanoribbon both qualitatively and quantitatively, so the structural parameters undergo minimal variations when comparing nanoribbons of different size. ## III Gap edge states ### Agnrs The electronic structure of AGNRs has been already studied in the past [6; 8; 9; 10; 11; 20; 21; 22; 23; 34]. Both non-relaxed and relaxed ribbons display a band gap at \(\Gamma\) of gapwidth \(\Delta_{N_{a}}\). Because of the 1D confinement, the gapwidth falls in one of the three families \(N_{a}=3m-1\), \(3m\) or \(3m+1\) (with \(m\in\mathbb{N}^{*}\)). Each family follows a different trend which asymptotically tends to zero for growing nanoribbon sizes and follows the general rule \(\Delta_{3m-1}<\Delta_{3m}<\Delta_{3m+1}\). This is depicted in Figure 2 where we plot the gapwidth of AGNRs versus \(N_{a}\) for both non-relaxed and relaxed structures (red dashed and solid blue curves). The effect of relaxation is to open the gap by about 0.1 eV in families \(N_{a}=3m+1\) and \(3m-1\), while in the \(N_{a}=3m\) the opening is observed only in small nanoribbons, while the gap closes in larger ones. Our results are in quantitative agreement with previous works both for relaxed [11; 12; 21; 25], and unrelaxed simulations [26]. To characterise better the gap states, we analyzed in more detail the nature of the TV and the BC states at \(\Gamma\) in the relaxed structures. In panels a) and b) of Figure 3, we report the band structure and the density of states (DOS) of the AGNR8, chosen as a representative example. For sake of comparison, in panel b) we also report the orbital-projected DOS and the DOS of an infinite graphene sheet with the same inter-atomic distance. The DOS around the gap (from -1.5 eV to 1.5 eV) displays neat van Hove singularities arranged more or less symmetrically with respect to the middle of the gap. As the inset of panel b) shows clearly, the states composing the gap are entirely of \(p_{z}\) character. They form a \(\pi\) bonding with nodes on the \(xy\) plane, as expected. Instead, the first empty \(\sigma\) state is found at 3 eV above the BC. To go deeper in the analysis of the gap-edge states, we look at the site-projected DOS. We integrated the bare data inside an interval of 0.1 eV encompassing the TV and BC (shaded bands in the inset of Figure 3b). The outcome of this analysis is summarised in Figure 3c), where the site-projected DOS of gap-edge states is reported as a function of the row index (note that the curves are plotted on the same \(y\) axis). At variance from what observed in zigzag nanoribbons [7], the gap states are not concentrated on the edge atoms, but rather delocalized throughout the full nanoribbon and present a modulation that nicely displays the characteristics of a static wave. This observation is confirmed by the wave-like modulation of the charge probability \(|\psi(\mathbf{r})|^{2}\) associated with the TV and BC states, reported aside panel c). 
The wavefunction plot shows also that there is no spill-out on the passivating hydrogens and that, with respect to the edge bondings \(d_{E}\), TV and BC states display respectively a bonding and an antibonding character.

Figure 2: Energy gap of graphene nanoribbons as a function of the width \(N_{a}\). Relaxed calculations (blue solid line), unrelaxed (red dashed line) and tight-binding numerical solution (black dotted) with parameters indicated in Table 1. The three families are reported with different symbols. A blue arrow at \(N_{a}=8\) indicates the nanoribbon chosen for the analysis presented in Figure 3.

Figure 3: Electronic structure of the relaxed AGNR8. a) Band structure. b) and Inset: Total density of states (thick black) and projected on \(p_{z}\) orbital character (red bullets) compared with the DOS of the graphene sheet (dashed blue). c) Row-projected DOS from the integration of the total DOS around the band-edge states (shaded areas of panel b) and charge density associated with the TV and BC states at \(\Gamma\).

### Abnnrs

The gapwidth of ABNNRs falls in the same three families with the same hierarchy [17; 18; 28]. This similarity with the graphene ribbons is actually quite general and can be understood from a simple tight-binding model (see section IV.2). The evolution of the ABNNRs gapwidth for sizes going from \(N_{a}\)=5 to 19 in the relaxed and non-relaxed configurations is presented in Figure 4 by the solid blue and the red dashed lines. The non-relaxed structures present a gap that monotonically tends to the limit \(N_{a}\rightarrow\infty\) in a way that is similar to non-passivated calculations [17]. We estimate the \(N_{a}\rightarrow\infty\) gapwidth at 3.885 eV from the weighted average of the curves extrapolated at \(1/N_{a}=0\) (cfr. inset of the Figure). This value is about 0.8 eV lower than the gapwidth of the isolated BN sheet (4.69 eV in PBE). All these aspects are consistent because, as it will become clearer later, in non-relaxed calculations, H atoms are too far to saturate efficiently the dangling bonds located at the edges of the ribbon. As a consequence, these form edge states inside the gap that lower the gapwidth similarly to what happens in non-passivated (bare) ribbons. As a result of the structural optimisation, the gapwidth of all families opens and tends to an asymptotic limit that is still about 0.1 eV lower than in the isolated monolayer, in agreement with similar calculations [14; 17]. This discrepancy is ascribed to a non-negligible edge contribution to the BC state, obviously absent in the isolated monolayer (cfr. the row-projected DOS analysis here below, and [14]). Finally, we note that the first empty \(\sigma\) state, i.e. the near free-electron state, is only 0.5 eV above the BC. Similarly to what was done before, in Figure 5 we report the band structure, the projected DOS and the row-resolved DOS of the TV and BC states of the representative ABNNR8 system. We verify that the TV and the BC states are formed essentially of N-centered and B-centered \(p_{z}\) orbitals respectively. The row-projected DOS of both TV and BC, reported in panel c), shows again a very nice static-wave-like modulation with nodes in rows 3 and 6, but at variance with the AGNR8 case, here the TV and BC states localize differently: while the TV states are delocalised on the entire nanoribbon as in the previous case, the BC states are clearly peaked at the edges.
The visualization of the associated charge density confirms that the TV state is characterised by a wavefunction equally delocalised on all the N atoms except those on rows 3 and 6. Instead, the BC state presents a wavefunction more concentrated on the edge B atoms with non negligible tails touching the passivating H and the edge nitrogens, in contrast to the isolated monolayer. The compared study of the TV and BC states of AGNRs and ABNNRs suggests that the gap of the two materials responds differently to modifications of the morphology and the passivation of the edges. To test this intuition, we have performed a detailed analysis by separating the two effects.

Figure 4: Energy gap of BN nanoribbons as a function of the size \(N_{a}\). Relaxed DFT (blue solid line), unrelaxed (red dashed line) and the numerical tight-binding solution (Table 1). The three families are reported with different symbols. Horizontal dashed lines indicate the gapwidth of the DFT hBN sheet (4.69 eV) and the asymptotic \(N_{a}=\infty\) limit (\(\sim\)3.885 eV). The blue arrow pointing at the ABNNR8 indicates the system analysed in Figure 5. Inset: extrapolation of non-relaxed calculations at \(1/N_{a}=0\). The red arrow in the inset indicates the \(N_{a}=\infty\) limit as the weighted average of the extrapolation of the three families.

Figure 5: Electronic structure of the relaxed ABNNR8. a) band structure; b) total density of states (thick black) and projected on \(p_{z}\) orbital character (red and green dotted for B and N states) compared to the hBN sheet DOS (dashed blue). c) Row-projected DOS integrated around the band-edge states (shaded areas of panel b). Insets: charge density of the TV and BC states at \(\Gamma\).

## IV Morphology vs chemistry of the edges

### Distinguishing the effects through selective relaxation in DFT

Several investigations can be found in the literature on the effects of edge reconstruction on the gapwidth of AGNR and ABNNR [19; 21; 22; 23; 24; 25; 26; 6; 12]. However, a study that systematically compares the effects of passivation and edge morphology is absent. Here we monitor the gapwidth in the family \(N_{a}=3m-1\) by relaxing separately the H-X distances \(d_{HX}\) (\(X\) = C, B or N) and the C-C or B-N distance on the edges \(d_{E}\). We did calculate the data of the other two families, but we do not report them because they have qualitatively the same behaviour. In Figure 6, a variation of \(d_{HX}\) is represented by a change in the line's type (color and dash), while a variation of \(d_{E}\) is represented by a change in the symbols (colour filled or empty). Let us examine first the case of AGNRs in panel a). We can start from a non-relaxed configuration where all atoms are equidistant, \(d_{HC}\)=\(d_{E}\)=1.42 A (empty bullets, red dashed line), then we reduce \(d_{HC}\) to its relaxed value 1.08 A (empty bullets, blue solid line). We observe that there is basically no variation of the AGNRs' gapwidth. Instead, contracting the edge bonds from \(d_{E}\)=1.42 A to \(d_{E}\)=1.36 A opens the gap by around 0.15 eV irrespective of the value of \(d_{HC}\). Consequently, we conclude that in AGNRs, the variations of the gapwidth induced by the relaxation and reported in Figure 2 are essentially due to changes of the bond length \(d_{E}\) between C atoms at the edge. Interestingly, this gap opening is approximately independent of the width of the ribbon. Passing now to the study of ABNNRs (bottom panel), we observe an opposite behaviour.
The gapwidth undergoes very small changes upon relaxation of \(d_{E}\), whereas the passage from the unrelaxed H-B and H-N distance (1.42 Å) to the relaxed values clearly opens the gap by about 0.8 eV. To be more precise, by changing separately the two distances \(d_{HB}\) and \(d_{HN}\) (not shown), we found that it is the bonding between H and B that plays a major role in the opening of the gapwidth, indicating a dominant contribution from conduction states, consistent with the observations we drew from Figure 5. According to this analysis, the gapwidth of ABNNRs is more sensitive to the passivation than to the morphology of the edge itself. Once again we notice that the gap opening is basically independent of \(N_{a}\). This clarifies why our non-relaxed DFT gapwidths look very similar to the non-passivated results of Topsakal and coworkers [17].

### Unperturbed tight-binding model

To further investigate the reasons for this different behaviour, we generalise to the heteroatomic case a ladder tight-binding model initially introduced for AGNRs. Changes in the edge passivation and morphology will subsequently be introduced through variations of the on-site and hopping parameters of the model, as suggested in [6; 12], and the modified Hamiltonian is solved both numerically and perturbatively [12]. Following references [6; 7; 8; 10; 12; 16; 20], the gap of an armchair nanoribbon whose TV and BC states are formed of \(p_{z}\) orbitals can be described with a ladder tight-binding model such as the one reported in Figure 7. The Hamiltonian of the model reads: \[H^{0}=\sum_{j,\mu}\left(\epsilon_{\mu j}\ket{\Phi_{\mu j}}+\sum_{j^{\prime},\mu^{\prime}}t_{\mu\mu^{\prime}jj^{\prime}}\ket{\Phi_{\mu^{\prime}j^{\prime}}}\right)\bra{\Phi_{\mu j}}. \tag{1}\] The index \(j\in[1,N_{a}]\) labels the position of a dimer in the \(x\) coordinate (row coordinate), while \(\mu=1,2\) indicates the atomic site within the dimer (\(C_{1}\) or \(C_{2}\) in AGNRs and \(B\) or \(N\) in ABNNRs). The basis function \(\langle\mathbf{r}|\Phi_{\mu j}\rangle=\Phi_{\mu}(\mathbf{r}-\mathbf{r}_{j})\) is the \(p_{z}\) orbital of the atom \(\mu\) of the dimer placed at \(\mathbf{r}_{j}=\hat{x}(j-1)a\). For \(\mu=1\), \(\Phi_{\mu}(\mathbf{r}-\mathbf{r}_{j})\) is centered on the bottom rung if \(j\) is odd and on the upper rung if \(j\) is even, and the opposite for \(\mu=2\). At the unperturbed level, \(\epsilon_{\mu j}\) does not depend on the row index \(j\) and is equal to \(\epsilon\) for \(\mu=1\) and \(-\epsilon\) for \(\mu=2\), with \(\epsilon\geq 0\). In the first-neighbour approximation, the hopping term \(t_{\mu\mu^{\prime}jj^{\prime}}=t\in\mathbb{R}\) if \(\mu\neq\mu^{\prime}\) and \(j-1\leq j^{\prime}\leq j+1\), and vanishes otherwise. The unperturbed solutions of this model are: \[E^{0}_{n\pm}=\pm\sqrt{\epsilon^{2}+\tau_{n}^{2}}=\pm\mathcal{E}_{n}\,, \tag{2}\] where \(\tau_{n}=t\left[1+2\cos\left(\theta_{n}\right)\right]\), the discrete index \(n\) comes from the confinement in the \(x\) direction and \(\theta_{n}=n\pi/(N_{a}+1)\). The eigenfunctions associated with these states read \[\Psi_{n\pm}=\sum_{j=1}^{N_{a}}\sum_{\mu=1,2}\sin\left(j\theta_{n}\right)D_{\mu }^{n\pm}\Phi_{\mu}(\mathbf{r}-\mathbf{r}_{j}) \tag{3}\] with \[\begin{split} D_{1}^{n\pm}&=\sqrt{\frac{\mathcal{E }_{n}\pm\epsilon}{(N_{a}+1)\mathcal{E}_{n}}}\\ D_{2}^{n\pm}&=\pm\text{sgn}\left(\tau_{n}\right) \sqrt{\frac{\mathcal{E}_{n}\mp\epsilon}{(N_{a}+1)\mathcal{E}_{n}}}\end{split} \tag{4}\] where the function \(\text{sgn}\left(x\right)=1\) if \(x\geq 0\) and \(-1\) if \(x<0\).
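As a quick numerical cross-check of the ladder model, the Hamiltonian of Eq. (1) can be diagonalised directly and its half-filling gap compared with \(2\min_n\sqrt{\epsilon^{2}+\tau_{n}^{2}}\) from Eq. (2). The sketch below is ours and uses illustrative parameter values, not the fitted ones of Table 1.

```python
import numpy as np

def ladder_hamiltonian(Na, eps, t):
    """Unperturbed ladder model of Eq. (1): on-site +/-eps on the two sites of each
    dimer, hopping t between sites of different type on rows j' = j-1, j, j+1."""
    H = np.zeros((2 * Na, 2 * Na))
    idx = lambda j, mu: 2 * j + mu          # j = 0..Na-1 (row), mu = 0 (+eps) or 1 (-eps)
    for j in range(Na):
        H[idx(j, 0), idx(j, 0)] = +eps
        H[idx(j, 1), idx(j, 1)] = -eps
        for jp in (j - 1, j, j + 1):        # first-neighbour rows
            if 0 <= jp < Na:
                H[idx(j, 0), idx(jp, 1)] = t
                H[idx(jp, 1), idx(j, 0)] = t
    return H

def analytic_gap(Na, eps, t):
    """2*min_n E_n with E_n = sqrt(eps^2 + tau_n^2), tau_n = t*(1 + 2cos(n*pi/(Na+1)))."""
    n = np.arange(1, Na + 1)
    tau = t * (1.0 + 2.0 * np.cos(n * np.pi / (Na + 1)))
    return 2.0 * np.sqrt(eps**2 + tau**2).min()

Na, eps, t = 8, 2.3, -2.6                   # illustrative values only
E = np.linalg.eigvalsh(ladder_hamiltonian(Na, eps, t))
print("numeric gap :", E[Na] - E[Na - 1])   # half filling: Na occupied states out of 2*Na
print("analytic gap:", analytic_gap(Na, eps, t))
```

The two printed values coincide, since the sine ansatz of Eq. (3) solves the open-boundary ladder exactly.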
At this point, it is worth stressing two aspects. First, if one sets \(\tau_{n}=0\), then the Hamiltonian becomes diagonal and equivalent to that of a non-interacting system. Consistently, the coefficients \(D_{\mu}^{n\pm}\) become those of a pure system: \(D_{1}^{n+}=-D_{2}^{n-}=\sqrt{2/(N_{a}+1)}\) and \(D_{1}^{n-}=D_{2}^{n+}=0\). If instead one takes the homoatomic limit, i.e. \(\epsilon\to 0\), then the coefficients become a bonding and antibonding pair, with \(D_{1}^{n\pm}=1/\sqrt{N_{a}+1}\) and \(D_{2}^{n\pm}=\pm\mathrm{sgn}\left(\tau_{n}\right)/\sqrt{N_{a}+1}\). The last occupied state (TV) \(\ket{\tilde{n},-}\) and the first empty state (BC) \(\ket{\tilde{n},+}\) are found at the integer quantum number \(\tilde{n}\) that minimizes the quantity \(\mathcal{E}_{n}\), i.e. that minimizes \(|\tau_{n}|\). If \(N_{a}=3m\) or \(3m+1\) with \(m\in\mathbb{N}^{*}\), then \(\tilde{n}=2m+1\). Note that the interacting term \(\tau_{2m+1}\) changes sign in passing from one family to the other. Instead, if \(N_{a}=3m-1\), then the integer \(\tilde{n}=2m\) and \(\tau_{\tilde{n}}=0\). These considerations lead to the unperturbed gap of a heteroatomic system (\(\epsilon>0\)): \[\Delta_{N_{a}}^{0}=\left\{\begin{array}{ll}2\epsilon&\text{for $N_{a}=3m-1$}\\ 2\mathcal{E}_{2m+1}&\text{for the other values of $N_{a}$}\end{array}\right. \tag{5}\] and the eigenstates of the TV and BC of the \(N_{a}=3m-1\) family are pure states. The gap of a homoatomic system (\(\epsilon=0\)) reads: \[\Delta_{N_{a}}^{0}=\left\{\begin{array}{ll}0&\text{for $N_{a}=3m-1$}\\ 2|\tau_{2m+1}|&\text{for the other values of $N_{a}$}\end{array}\right. \tag{6}\] and the eigenstates of the TV and BC of the \(N_{a}=3m-1\) family are the bonding and antibonding combinations of \(C_{1}\) and \(C_{2}\).

Figure 6: Gapwidth of the \(N_{a}=3m-1\) family of a) AGNRs and b) ABNNRs. Full (empty) symbols stand for relaxed (non-relaxed) edge-atom bondings. Blue solid (red dashed) lines for relaxed (non-relaxed) passivating-to-edge-atom bondings.

Figure 7: Scheme of the ladder model of width \(N_{a}=8\). The first-neighbour distance is \(a\); the index \(j\) defines the position of a dimer. Atoms \(\mu=1\) are placed above \(\mu=2\) if \(j\) is even, and below if \(j\) is odd.

### Distinguishing the effects through perturbation theory

As in [6; 12], we now add to \(H^{0}\) a perturbation Hamiltonian \(\delta H\) which consists in adding \(\delta t\) to the hopping term connecting the atoms of the edge rows (\(j=1,N_{a}\)) and in changing their on-site energy by \(\delta\epsilon_{\mu}\). The hopping perturbation \(\delta t\) accounts for changes in \(d_{E}\), so it is more strongly related to the edge morphology, while the on-site one \(\delta\epsilon\) takes into account variations of \(d_{HX}\) and of the passivating species. The perturbative correction to the energy of the generic state \(|n\pm\rangle\) reads \[\begin{split}\langle n,\pm|\delta H|n,\pm\rangle=2\sin^{2}( \theta_{n})\times\\ \times\left[(D_{1}^{n\pm})^{2}\delta\epsilon_{1}+(D_{2}^{n\pm})^ {2}\delta\epsilon_{2}+2D_{1}^{n\pm}D_{2}^{n\pm}\delta t\right]\end{split} \tag{7}\] In the heteroatomic case \(\epsilon>0\), the perturbative correction to the gap is always \(\delta\Delta=\langle\tilde{n},+|\delta H|\tilde{n},+\rangle-\langle\tilde{n},-|\delta H|\tilde{n},-\rangle\).
Using (7), the coefficients (4) or their appropriate limit, and remembering that \(\Delta_{N_{a}}^{0}=2\mathcal{E}_{\tilde{n}}\), the gap correction for the case \(\epsilon>0\) reads \[\delta\Delta=\left(\delta\epsilon_{1}-\delta\epsilon_{2}\right)/m \tag{8}\] for \(N_{a}=3m-1\); and \[\begin{split}\delta\Delta=\frac{8\sin^{2}\left(\theta_{2m+1} \right)}{(N_{a}+1)\Delta^{0}}\times\\ \times\left[\epsilon\left(\delta\epsilon_{1}-\delta\epsilon_{2} \right)+2\tau_{2m+1}\delta t\right]\end{split} \tag{9}\] for \(N_{a}=3m\) and \(N_{a}=3m+1\). Notice that, by construction, \(\tau_{2m+1}\) is the closest to zero among the accessible values, so the term \(2\tau_{2m+1}\delta t\) is always negligible. The result shows that in ABNNRs the variations of the gap are mostly due to the chemical environment of the edge atoms. This dependence comes ultimately from an interference between the TV and the BC wavefunctions. These two states are very close to pure states, so the mixed products \(D_{1}^{+}D_{2}^{+}\) and \(D_{1}^{-}D_{2}^{-}\) of equation (7) are systematically negligible, and they actually vanish in the family \(N_{a}=3m-1\), where the two states are perfectly pure. In the homoatomic case (\(\epsilon=0\)) the corrected gap can be obtained following the same approach as before, taking the appropriate limits of the coefficients (4). However, more attention must be paid when studying the family \(N_{a}=3m-1\). In fact, this case corresponds to the double limit \(\epsilon\to 0\) and \(\tau_{n}\to 0\). Even though the final eigenvalues do not depend on the order in which the two limits are taken, the eigenstates do, and therefore the perturbative corrections also depend on this choice. In DFT calculations and experiments, the system itself is well defined in the first place, because one works either with ABNNRs or with AGNRs. So, for comparisons with DFT to make sense, the right order in which the limits must be taken is: first \(\epsilon\to 0\), followed by \(\tau_{n}\to 0\). Finally, one has to pay attention to another point: in the \(N_{a}=3m-1\) family, the TV and the BC states are degenerate and the unperturbed gap is 0. So there is no reason to define \(\delta\Delta=\langle\tilde{n},+|\delta H|\tilde{n},+\rangle-\langle\tilde{n},-|\delta H|\tilde{n},-\rangle\) rather than its opposite. However, the correction must be positive, so it must be defined as the modulus of the difference above. Putting all these things together, one gets for the homoatomic (\(\epsilon=0\)) case \[\delta\Delta=\left\{\begin{array}{ll}\frac{2}{m}|\delta t|&\text{for $N_{a}=3m-1$}\\ \mathrm{sgn}\left(\tau_{2m+1}\right)\frac{8\sin^{2}(\theta_{2m+1})}{(N_{a}+1)} \delta t&\text{otherwise}\end{array}\right. \tag{10}\] This result shows that in AGNRs most of the variation of the gap is accounted for by \(\delta t\), i.e. by morphological changes of the bonding between edge atoms, and not by changes of their chemical environment. Once again this result can be understood from the symmetries of the TV and BC wavefunctions. In fact, when \(\epsilon=0\), the TV and BC states are perfect bonding and antibonding combinations at any \(N_{a}\), so their difference causes the terms in \((D_{\mu}^{n\pm})^{2}\) of equation (7) to always cancel out. This result, although in perfect agreement with [12], seems to be in blatant contradiction with results from 2H-passivated AGNRs [26], where the gap is found to be independent of the C-C edge distance.
Actually, these systems present a hybridisation of the \(sp^{3}\) type and their gapwidth cannot be described by this model.

### Validation of the perturbative approach

Besides the perturbative approach, we also solved the perturbed Hamiltonian \(H=H^{0}+\delta H\) numerically. For the unperturbed problem, we parametrized the model with values that fit the band structure of the isolated graphene and hBN monolayers. Instead, the perturbation parameters \(\delta\epsilon\) and \(\delta t\) have been adjusted to recover as best as possible the DFT curves reported in Figures 2 and 4. The best parameters are reported in Table 1. Subsequently, we explored how the gap changes upon variations of the perturbative parameters \(\delta t\) and \(\delta\epsilon_{\mu}\) in the range [-1 eV, +1 eV] in the nanoribbons of width \(N_{a}\)=11, 12 and 13, i.e. one representative nanoribbon per family. Guided by physical intuition, we took \(\delta\epsilon_{1}=\delta\epsilon_{2}=\delta\epsilon\) in the case of AGNRs, and \(\delta\epsilon_{1}=-\delta\epsilon_{2}=\delta\epsilon\) in the case of ABNNRs. Globally, the numerical and the perturbative gapwidths are in very good agreement for both ABNNRs and AGNRs in the range explored, confirming our conclusions. In all cases, the numerical solution displays a quadratic trend with respect to \(\delta\epsilon\) which adds on top of the invariance (AGNR) or the linear dependence (ABNNR) predicted by the perturbative approach. The deviations between the two approaches are larger for this parameter than for \(\delta t\), with the largest deviations of the order of 0.2 eV in the \(N_{a}=3m\) and \(N_{a}=3m+1\) families of ABNNRs. Instead, the deviations for the parameter \(\delta t\) are in general very small and never larger than 0.1 eV. Note however that for extreme values of \(\delta t\), the numerical solution may undergo a band crossing between the top valence and the bottom conduction bands, which would lead to a sudden closing of the gap, as is the case at \(\delta t=-0.9\) eV in AGNR13 and \(\delta t=0.9\) eV in AGNR12. This physics is not accessible in our first-order expansion and clearly sets the limit of applicability of the perturbative approach.

## V Conclusion

We have calculated with DFT the gapwidth of graphene and boron nitride armchair nanoribbons (AGNRs and ABNNRs) for ribbon sizes going from \(N_{a}=5\) rows to \(N_{a}=19\) rows, both for relaxed and unrelaxed structures. We have selectively relaxed specific interatomic distances and reported how the gapwidth changes upon variations of the bond length with the passivating atoms (chemistry-driven changes) and between edge atoms (morphology-driven changes). Thanks to this selective relaxation, we showed that the variations of the gapwidth in AGNRs are morphology-driven, while in ABNNRs they are chemistry-driven. To understand why, we adopted and extended the tight-binding approach introduced by Son and coworkers [12] and we demonstrated that the interference between the wavefunctions of the top valence and the bottom conduction states is at the origin of these two distinct responses. In the AGNR case, these states are basically a bonding and antibonding pair. As the two states are equally distributed on the atoms, the difference between BC and TV leads to a mutual cancellation of on-site changes, and only hopping terms survive. This explains the stronger dependence of the gapwidth on interatomic distances, and hence on the morphology of the edges rather than on the chemical environment.
At variance, in the ABNNR case, the TV and the BC states are basically pure states and the effective Hamiltonian is quasi non-interacting. As a result, the two states are mostly insensitive to variations in the hopping term and are instead strongly affected by on-site variations (chemical environment). Our results can help push forward the research on nanoribbon-based devices, as they clarify the roles played by edge engineering and selective passivation, and they provide the tools to investigate more complex scenarios.
2309.10526
NSOAMT -- New Search Only Approach to Machine Translation
Translation automation mechanisms and tools have been developed for several years to bring people who speak different languages together. A "new search only approach to machine translation" was adopted to tackle some of the slowness and inaccuracy of the other technologies. The idea is to develop a solution that, by indexing an incremental set of words that combine a certain semantic meaning, makes it possible to create a process of correspondence between their native language record and the language of translation. This research principle assumes that the vocabulary used in a given type of publication/document is relatively limited in terms of language style and word diversity, which enhances instantaneity and rigor in the translation process through indexing. A volume of electronic text documents was processed and loaded into a database, and analyzed and measured in order to confirm the previous premise. Although the observed and projected metric values did not give encouraging results, it was possible to develop and make available a translation tool using this approach.
João Luís, Diogo Cardoso, José Marques, Luís Campos
2023-09-19T11:12:21Z
http://arxiv.org/abs/2309.10526v1
# NSOAMT - New Search Only Approach to Machine Translation

###### Abstract

Translation automation mechanisms and tools have been developed for several years to bring people who speak different languages together. A "new search only approach to machine translation" was adopted to tackle some of the slowness and inaccuracy of the other technologies. The idea is to develop a solution that, by indexing an incremental set of words that combine a certain semantic meaning, makes it possible to create a process of correspondence between their native language record and the language of translation. This research principle assumes that the vocabulary used in a given type of publication/document is relatively limited in terms of language style and word diversity, which enhances instantaneity and rigor in the translation process through indexing. A volume of electronic text documents was processed and loaded into a database, and analyzed and measured in order to confirm the previous premise. Although the observed and projected metric values did not give encouraging results, it was possible to develop and make available a translation tool using this approach.

NSOAMT NLP Natural language processing Translation Text metrics

## 1 Introduction

Translation automation mechanisms and tools have been developed for several years to bring people who speak different languages together. In recent years most of these tools have been based on deep learning, in part due to the rise of AI technologies, but also due to the abstraction it provides over the semantics of the multiple languages involved. In this paper we describe a research project, named _New Search Only Approach to Machine Translation_ (NSOAMT), developed to tackle some of the issues (inaccuracies, etc.) of the other approaches. The idea is to develop a solution that, by indexing an incremental set of words that combine a certain semantic meaning, makes it possible to create a process of correspondence between their native language record and the language of translation. This research principle assumes that the vocabulary used in each type of publication/document is relatively limited in terms of language style and word diversity, which enhances instantaneity and rigor in the translation process through indexing. In this paper we present the results we found when putting such principles into practice, as we attempt to build a machine translation service based on such premises.

## 2 Problem statement

Although several general-purpose language translation services already exist, it is known that, for high-quality translations in specific domains, a human expert is still required ([1], [2], [3]). Natural language sentences are just sequences of words. In the eventuality that we could quantify and store the most commonly used sentences, could this domain expertise for machine translation purposes be crowd-sourced ([4])? If not, can the distance to this practical goal be measured (or at least guessed)?

## 3 State of the art

The evolution of natural language processing has come a long way since the early 1950s [5]. There are many technological approaches, but these can be mostly split into three major categories [6]: 1. symbolic rule-based processing systems 2. statistical approaches 3.
neural-network-based systems. Although the present focus is mostly on neural-network-based techniques (due to the popularity of Large Language Models [7]), the approach discussed in this article is best categorized as a "statistical approach".

## 4 Methodology

1. Import vast quantities of text documents, broken into sentences. 2. Identify common text fragments. 3. Crowdsource [4] the translation of common text fragments. This methodology is only viable today due to: * The Internet and the World Wide Web connecting several organizations, institutions, and private initiatives, making available several sources of text (see sec. 5.1). * The general availability of open-source libraries and tools, such as NLTK, that enable quick prototyping of some NLP techniques (see sec. 5.2.3). * The consequences of Moore's law [8], allowing for hardware capable of handling terabytes of text at a reasonable cost.

### Sentence model

Figure 1 illustrates the relational model for the documents and sentences ingested. The description of each table is: **document**: - one table row for each ingested document. The content column contains the UTF-8 plain text of the document before sentence separation. **sentence**: - one table row for each distinct sentence. The plainText column contains the UTF-8 text of the sentence, including punctuation and inner spaces. There are no duplicate sentences for the same plainText content, so it can be said that the rows in table sentence represent **distinct sentences**. (A count of rows in table sentence in a given context is the value of the **#distinct sentences** metric in that same context.) **sentencesource**: - one table row for each sentence occurrence extracted from a document. The column startOffset is the sentence sequence order number. (A count of rows in table sentencesource in a given scope is the value of the **#sentences** metric in that same scope.) **sentencetranslation**: - one table row for a possible (context-free) translation of a sentence.

### Metrics

When trying to translate a document, it is assumed that there is already a database that contains all possible translations for each of the sentences in the document. Logically, it can be assumed that the more sentences are imported, the better the chances of obtaining a correspondence between a new sentence and a sentence that already exists in the system. This line of thought thus allows measuring the current state of the system, as more and more files are imported, using the following metrics: **#sentences**: - count of sentences in the text, as parsed by sentence tokenizer software. The sentence separation is not as accurate as a human reader would perceive it, but an effort was made to make it consistent across all measurements. See sec. 5.2.3. **#distinct sentences**: - (sometimes abbreviated as **#d.sentences**) How many distinct sentences exist in the system; this gives an idea of the distance to the theoretical ceiling. These can be divided into subcategories: **#distinct sentences without repetitions**: - (also called **#unique d.sentences** for short) How many distinct sentences exist that have been referenced (used in a document) only once in the whole sentence database. (In the short form "#unique d.sentences", the "d." standing for "distinct" is redundant, as all unique sentences are distinct, but it makes clearer that it should be related to the "#distinct sentences" metric, and not the "#sentences" metric).
**#distinct sentences with repetitions**: - How many distinct sentences have been referenced more than once (regardless of the document, and regardless of the multiplicity of the repetition). For illustration purposes, Figure 2 shows a small example text document, which results in the metrics shown in Table 1. Description of the other metrics shown in Table 1: **#text characters**: - includes line breaks and other invisible characters, so it might vary for the same visual text content. For English-language texts it is also a good approximation of the volume of information bytes processed, as UTF-8 character encoding [9] is used.

Figure 1: Entity-relationship sentence diagram.

Figure 2: Example text document.

**#distinct sentences %** - the percentage is calculated as \(\frac{\#distinct\ sentences}{\#sentences}\). **#unique d.sentences %** - the percentage is calculated as \(\frac{\#unique\ d.sentences}{\#distinct\ sentences}\). **#non-unique sentences %** - the percentage of sentences with repetitions can be calculated using the expression \(\frac{\#sentences-\#unique\ d.sentences}{\#sentences}\). For this example, it is 50%. This metric is not usually shown (as the ratio between the two underlying values can be easily observed), but it should not be compared with the **#unique d.sentences %** metric.

### Theoretical limits

The feasibility of the project assumes that the vocabulary in use is limited. Based on this, it can be assumed that the number of feasible sentences resulting from possible vocabulary combinations is also limited. However, this claim contradicts the opinion of linguists, who often point to the potential for an infinite number of possible sentences to exist. It is also known that for an infinite number of sentences to exist, at least one of the following conditions must be met: * There are an infinite number of words; * A sentence can contain any number of words (unbounded). For the first condition (there are an infinite number of words) to hold, there would have to be an infinite number of symbols (letters), or words would have to have an unbounded number of letters. In Western languages the number of symbols is not infinite, and the longest word in the world contains 189,819 letters [10] (being, however, a technical word not used in natural language), so it can be admitted that there is a finite number of words, because neither condition is verified. It is true that new words that exceed this limit can be created, but it is also possible to admit that the number of words that will be created is also finite and that these same words will not be used in everyday life. In this way, it is feasible to admit that there is a finite number of words in use within a potentially infinite number of possible words [11]. To estimate the number of existing words, the Oxford dictionary, which contains about 600,000 words, can be used as a basis. This number is constantly increasing; however, the authors also assume that the number of new words will grow in a finite way, in the same way that the number of archaic words (i.e. words that are not used) also grows. The second condition (a sentence can contain any number of words) also holds, as shown by the examples found in [12]. Obviously, these examples are exceptions, and in common communication longer sentences have a lower level of understandability. This raises the question: "How many words can a sentence contain in order to maintain effective communication?"
It is possible to find studies that point out that, from 43 words onwards, a reader only understands 10% of the content. For this reason, some organizations (such as the UK Government) recommend a maximum limit of 25 words per sentence to maintain effective communication ([13]). Based on the previous numbers, a maximum limit was estimated for the universe of sentences with comprehensibility above 10%, by raising the number of words in the dictionary to the power of the number of words in a sentence.

\begin{table} \begin{tabular}{l||r} \hline \hline & Example (en) \\ \hline \hline \#documents & 1 \\ \hline \#text characters (UTF-8) & 140 \\ \hline \#sentences & 4 \\ \hline \#distinct sentences & 3 \\ \hline \#distinct sentences \% & 75\% \\ \hline \#d.sentences with repetitions & 1 \\ \hline \#d.sentences with repetitions \% & 33,33\% \\ \hline \#unique d.sentences & 2 \\ \hline \#unique d.sentences \% & 66,67\% \\ \hline \#non-unique sentences \% & 50,00\% \\ \hline \hline \end{tabular} \end{table} Table 1: Metrics for the example text in Figure 2

Thus, the number of possible sentences was limited to: \[\sum_{n=1}^{n=43}600000^{n}\approx 600000^{43}\approx 288.74\times 10^{246} \tag{1}\] This value is a theoretical ceiling, as it is not possible to randomly combine 43 words and generate, in all iterations, a grammatically correct sentence. Estimating the possible number of grammatically correct sentences is extremely complex because, to do so, one would have to characterize them in such a way that it would be possible to enumerate them. According to a 1953 work by Michael West, it was concluded that, out of 600,000 words, it is possible to create a list with approximately 2,000 words that represents a coverage of about 80% of commonly written text. This list was published under the name of "General Service List" (G.S.L.) [14]. In 2013 (60 years after the creation of the original list) the list was expanded to contain 2,818 words and was published under the name of "New General Service List" (N.G.S.L.) [15]. This new list increased the coverage to around 90%. Given this new information, it was possible to repeat the calculation, with a view to covering the maximum amount of text with the fewest possible sentences: \[\sum_{n=1}^{n=43}2818^{n}\approx 2818^{43}\approx 22.26\times 10^{147} \tag{2}\] Again, this represents a theoretical ceiling, the practical value being lower, for the same reason described above. Limiting this value to the advised 25 words, the universe of possible sentences is even smaller: \[\sum_{n=1}^{n=25}2818^{n}\approx 2818^{25}\approx 177.22\times 10^{84} \tag{3}\] These 2,818 words only represent the text written in everyday life. As the vocabulary used is circumstantial, when entering a specific context new words will have to be added to obtain the same level of coverage. With this motivation, 3 new lists were created, which do not repeat words from the N.G.S.L.: * "New Academic Word List" (N.A.W.L.) [16]: 92% coverage; * "TOEIC Service List" (T.S.L.) [17]: 99% coverage; * "Business Service List" (B.S.L.) [18]: 97% coverage. Therefore, Table 2 presents the limits of possible sentences, using the previous lists, as a "theoretical limit". (The number of possible sentences that "make sense" is expected to be lower.) Given the above, it became necessary to verify the assumption, starting by importing enough sentences to allow obtaining a satisfactory degree of correspondence, as well as a projection of the number of sentences needed, below the theoretical maximum limit.
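For reference, the ceilings in Equations (1)-(3) and in Table 2 are straightforward to reproduce with arbitrary-precision integer arithmetic; the short sketch below (ours, not part of the original work) evaluates the sums for the word-list sizes quoted in the text.

```python
def sentence_ceiling(vocab_size: int, max_words: int) -> int:
    """Upper bound on the number of word sequences of length 1..max_words
    drawn from a vocabulary of vocab_size words (cf. Eqs. (1)-(3))."""
    return sum(vocab_size ** n for n in range(1, max_words + 1))

for name, vocab in [("Oxford dictionary", 600_000), ("N.G.S.L.", 2_818),
                    ("N.A.W.L.", 3_778), ("T.S.L.", 4_018), ("B.S.L.", 4_518)]:
    for max_words in (25, 43):
        ceiling = sentence_ceiling(vocab, max_words)
        exponent = len(str(ceiling)) - 1
        mantissa = ceiling / 10 ** exponent   # big-int division, result in [1, 10)
        print(f"{name:18s} {vocab:7d} words, <= {max_words} words/sentence: "
              f"~{mantissa:.2f}e+{exponent}")
```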
## 5 Implementation

This section describes the software stack (both technology and implementation design choices) used to carry out the measurements and implement the resulting web site.

\begin{table} \begin{tabular}{l||c|c|c} \hline List of Words & \# total of words & Ceiling for 25 words & Ceiling for 43 words \\ \hline N.A.W.L. & 3778 & \(2.70\times 10^{89}\) & \(6.64\times 10^{153}\) \\ T.S.L. & 4018 & \(1.25\times 10^{90}\) & \(9.38\times 10^{154}\) \\ B.S.L. & 4518 & \(2.36\times 10^{91}\) & \(1.45\times 10^{157}\) \\ \hline \end{tabular} \end{table} Table 2: Number of words per word list and maximum possible combinations for the advisable sentence length (25 words) and the sentence length at which text becomes incomprehensible (43 words)

### Text sources

The text sources used were: [https://eur-lex.europa.eu/](https://eur-lex.europa.eu/) - Legislation documents from the European Union, available in 24 languages; HTML format. [https://dumps.wikimedia.org/](https://dumps.wikimedia.org/) - Wikipedia backup dumps. XML+Wikitext format. [https://arxiv.org/](https://arxiv.org/) - Open-access scholarly articles. PDF format. Download was performed by mirroring tools, with articles organized in monthly folders. (Only the latest version of each article was ingested.) Footnote 1: Non-PDF articles were discarded. **tBooks** - A collection of plain-text literature content, obtained from sources like [https://www.gutenberg.org/](https://www.gutenberg.org/), [https://chroniclingamerica.loc.gov/](https://chroniclingamerica.loc.gov/), [https://muse.jhu.edu/](https://muse.jhu.edu/), [https://market.cantook.com/](https://market.cantook.com/), [https://www.bookrix.com/](https://www.bookrix.com/), [https://archive.org/](https://archive.org/), [https://manybooks.net/](https://manybooks.net/), [https://www.smashwords.com/](https://www.smashwords.com/), [http://digital.library.upenn.edu/books/](http://digital.library.upenn.edu/books/). Plain text (UTF-8) format. We call this source aggregate **tBooks**. Footnote 2: The content extracted from these sources is not publicly accessible on the NSOAMT site, due to possible copyright issues.

### Ingestion pipeline

The first stage of the ingestion pipeline is loading a content item (in a specific electronic format) and splitting it into plain-text sentences (see Figure 3). The actual sequence and details of each transformation depend on the format and source of the text. See 5.2.2 for issues and caveats of each source/format. The last stage of the ingestion pipeline is loading the batches of parsed documents into the database. For a large source, such as arXiv, concurrent/parallel loading was needed, as shown in Figure 4. The high-level algorithm for ingestion is: 1. Format conversion from the source format (HTML, WikiText, PDF, etc.) to plain text (or plain-text groups). 2. Split the text into sentences (using 5.2.3). Apply sentence transformation procedures (such as hash calculation). 3. Insert the whole document into the database. 4. For each sentence (in order of occurrence in the document): 4.1. search whether the sentence already exists in the database: 4.1.1. if yes, associate the existing sentence with the current document. 4.1.2. if no, insert the new sentence into the database, and then associate it with the current document. 5. (Post-ingestion) Duplicate sentence elimination.

Figure 3: Ingestion pipeline general structure.

Steps 1 and 2 can be sped up using multiple processes (when each document and its sentences fit into memory).
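A minimal, database-agnostic sketch of steps 2-4 of this algorithm is given below (ours; the `db` helper methods are hypothetical stand-ins for the actual SQL layer, and the MD5 hash anticipates the sentence.md5hash column discussed in section 5.2.1):

```python
import hashlib
import nltk  # requires the "punkt" tokenizer data: nltk.download("punkt")

def ingest_document(content: str, db) -> None:
    """Steps 2-4: tokenize, insert the document, then de-duplicate sentences."""
    doc_id = db.insert_document(content)                            # step 3
    for offset, plain_text in enumerate(nltk.sent_tokenize(content)):  # steps 2 and 4
        md5 = hashlib.md5(plain_text.encode("utf-8")).hexdigest()
        sentence_id = db.find_sentence(md5, plain_text)              # step 4.1
        if sentence_id is None:
            sentence_id = db.insert_sentence(md5, plain_text)        # step 4.1.2
        db.link_sentence_to_document(doc_id, sentence_id, offset)    # sentencesource row
    db.commit()
```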
Steps 3 and 4 are performed in a single transaction (to avoid having non-parsed documents in the database), and can also be sped up using parallel execution, but there is a race condition between steps 4.1.1 and 4.1.2. Hence the need for a post-ingestion duplicate clean-up in step 5. Footnote 3: Normally, in a SQL database, the race condition would be avoided using a UNIQUE INDEX on the column sentence.plainText. But occasionally, long sentences are ingested, bumping into an undocumented PostgreSQL v12 limitation of unique indexes on large text values (which failed above 2704 plain-text bytes, well below the expected 8192-byte index page size limit).

Figure 4: Concurrent PDF batch ingestion.

#### 5.2.1 md5hash

The data model exhibited in Figure 1 shows a column sentence.md5hash, added for (read) indexing purposes. (SQL built-in indexing was not possible due to very long sentences; these long 'plain text' sentences are probably _garbage_ resulting from bad text extraction, but the decision was made to keep them for future research.) Footnote 4: Abnormally long sentences would not be indexed by a regular PostgreSQL v12 B-Tree INDEX. Neither full-text search, trigrams, nor inverted indexes work with these uncommonly large text strings, for that matter. The choice of MD5 [19] as the hashing algorithm (among MD5, SHA1, and others) was based on small storage requirements, speed of calculation, and the fact that the implementations in Python v3.6 hashlib and PostgreSQL v12 gave identical results. It is known that the MD5 algorithm has a lower collision resistance (when compared to other, more recent algorithms) [20], but as the purpose here was just to speed up the search (not cryptography-grade collision resistance), it suffices. Note that in the model of section 4.1 hash collisions are possible, expected, and well handled.

#### 5.2.2 Electronic document formats

**Plain text** - UTF-8 encoded text documents [9]. Pros: Simpler to inspect and compare resulting model data to the original text. Cons: Separation between text fragments is sometimes not clear. Example: titles, page headers, footers and footnotes, and sentences interrupted by line and page breaks are sometimes placed and mixed amongst the text content without any consistent convention. Sometimes it is possible to develop a special transformation that identifies these occurrences and splits the text into blocks (without these issues); sometimes not. (Usually this was not done, because conventions vary a lot between documents, even from the same source.) **WikiText** - Conversion to plain text done using [21]. Pros: Same as plain text. Additionally, the structure of Wikipedia's extracted text (and text style) splits into sentences very well, using NLTK's default sentence tokenizer API. Cons: None that came to mind, although it is a relatively short source (in terms of available volume of text). **Hyper Text Markup Language (HTML)** - Extraction of text from HTML [22] content is done using [23]. Pros: Block tags force separation of text fragments (forcing sentence breaks). Cons: Consistency of the formatting (and tag use) in the content layout has to be handled very much on a case-by-case basis. Page layout and navigation information filtering is also handled on a specific source-by-source basis. **Portable Document Format (PDF)** - Text extraction from PDF files [24] is performed using pdfminer.six [25]. Pros: Largest source volume of documents available (example: arXiv). Cons: Extraction of text from scientific articles in PDF format is problematic ([26] and [27]).
This results in many badly broken sentences. Some PDF files have internal text represented in ways that result in garbled extracted text, while others even break the extraction process (and as such, PDF file upload is not publicly available on the NSOAMT site).

#### 5.2.3 Sentence tokenizer

NLTK [28] is a Python framework for natural language processing. The default sentence tokenizer API was used to extract an ordered list of sentences from plain-text content. It is the core of the ingestion pipeline. Using the default API (without a custom tokenizer trained for a specific text style) does not always produce good results (specifically in text extracted from scientific articles), but the same tokenizer results were consistent across all measurements.

### Sentence validation

By sampling a few sentences, several examples with unusual (incorrect) grammar are easily spotted: ... , 2017, 1-10 Editors: Will be set by the publisher 7 1 0 2 v o N 7 ] G L. s c [ 3 v 1 0 0 0 0. 2 0 7 1 : v i X r a LEARNING THE DISTRIBUTION WITH LARGEST MEAN: TWO BANDIT FRAMEWORKS * Emilie Kaufmann 1 and Aur'elien Garivier 2 Abstract. ... As it stands now, the system includes a lot of partial sentences resulting from issues like text mis-extraction and sentence tokenization on titles, headers, footers, formulas, tabular data, graphics extracted as text, graphical text overlays, etc. There are at least two possible mitigation strategies for this problem: * Improve the quality of the text extraction (and sentence tokenization). * Exclude the _improperly_ extracted sentences from the metrics. Improving the quality of text extraction and sentence tokenization seems a never-ending battle (recognize/learn/train/develop text extractors for a never-ending variety of distinct specific documents and text styles). As such, the efforts were focused on filtering out _improperly_ extracted sentences (because it simply felt like a simpler and smaller task). The "LanguageTool" version 5.7 [29] seemed like a good candidate for an out-of-the-box linguistic tool that classifies sentences as valid or non-valid: it is open-source, suitably licensed, can be used on-premises and off-line, has a Python interface (language-tool-python), and its set of validation rules can be customized. In the results section (6), sentences that have been checked using this tool are referred to as **valid sentences**. Note that filtering sentences using such a tool provides no guarantee that we are working with a subset of "commonly used English" sentences that make sense (when read by a human). It just eliminates a large number of sentences that contain grammar rule violations (from the set of rules that the tool implements), some of which may be caused by bad text extraction, others by bad sentence tokenization, amongst other causes.

### Web interface

In order to share the knowledge developed and show the planned translation functionality, a web interface for translation based on the technology studied was developed and made publicly available. This site is available at [https://nsoamt.pdmfc.com](https://nsoamt.pdmfc.com). See Figure 5 for the web frontend homepage. A couple of functionalities were made available on this website. The main one is translation (see Figure 6). Here the user can write their own text or upload a text/plain file and translate it. The translation is made by parsing the input text into sentences and, for each sentence, searching whether it already exists in the database and whether it already has a translation (see Figure 7).
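As an illustration of this per-sentence lookup, the translation step can be sketched as follows (ours; `db.find_translation` is a hypothetical helper standing in for the query against the sentence and sentencetranslation tables):

```python
import hashlib
import nltk

def translate_text(text: str, db, target_lang: str = "pt"):
    """Return (sentence, translation-or-None) pairs for each sentence of `text`."""
    results = []
    for sentence in nltk.sent_tokenize(text):
        md5 = hashlib.md5(sentence.encode("utf-8")).hexdigest()
        translation = db.find_translation(md5, sentence, target_lang)
        results.append((sentence, translation))  # None -> highlighted in red below
    return results
```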
A list of highlighted sentences will then be presented, where green means translated and red means that the translation was not found. Since the input text needs to be separated into sentences, the concept of paragraph ends up being lost. Reconstructing the original translated text would be complex, would require more computation, and the correct reconstruction of the text is not guaranteed. One of the main obstacles encountered is the correct processing of uploaded texts, due to the wide variety of file formats. To ensure better results for the user, the file formats allowed for direct upload were restricted to text/plain, due to its simplicity and because tests showed better results for sentence division without much pre- and post-processing.

Figure 5: Application landing page.

Figure 6: Translation page.

Figure 7: Translation workflow.

Another functionality available on the website is searching the documents loaded into the database (see Figure 8). From the search page it is possible to open a detailed page for a specific document (see Figure 9). Here the user has access to more information, such as the MIME type and size, and can, if available, download the document for a complete analysis. This page also shows the list of sentences obtained from the parsing process. Each sentence comes with the number of times it appears in other documents and a small list of those documents. With this we can analyze the parsing process as well as the repetition of sentences (the centerpiece of this project). It is also possible to search the sentences that came from the parsing process of the uploaded documents (see Figure 10). From this page the user can go to the specific sentence page with more information about it. On the sentence information page (see Figure 11) it is possible to find the number of repetitions of the sentence and a list of documents where it appears.

## 6 Results

### Ingested text

The total volume of text ingested is shown in Table 3. Tables 4 and 5 break down the volume metrics by source and language.

### Common sentences

Tables 6 and 7 display the number of common distinct sentences between document sources.

Figure 8: Search documents page.

Figure 9: Document information page.

Figure 10: Search sentence page.

Figure 11: Sentence information page.

\begin{table} \begin{tabular}{l||r|r|r|r} \hline & arXiv 0802-2112 & Wikipedia (en) & EUR-LEX (en) & tBooks (en) \\ \hline \hline \#documents & 1,511,891 & 61 & 17,190 & 51,593 \\ \hline \#text characters (UTF-8) & 80,399,442,210 & 14,102,084,349 & 8,615,499,262 & 19,883,677,356 \\ \hline \#sentences (same source) & 761,978,703 & 130,111,846 & 24,109,494 & 233,973,998 \\ \hline \#distinct sentences (same source) & 557,332,655 & 114,423,765 & 8,112,606 & 206,490,528 \\ \hline \#distinct sentences \% (same source) & 73.14\% & 85.29\% & 33.65\% & 88.25\% \\ \hline \#d.sentences with repetitions (same source) & 18,914,498 & 2,426,673 & 1,712,047 & 5,471,817 \\ \hline \#d.sentences with repetitions \% (same source) & 3.39\% & 2.12\% & 21.10\% & 2.65\% \\ \hline \#unique d.sentences (same source) & 538,418,157 & 111,997,092 & 6,400,559 & 201,018,711 \\ \hline \#unique d.sentences \% (same source) & 96.61\% & 97.88\% & 78.90\% & 97.35\% \\ \hline \end{tabular} \end{table} Table 4: Volume of English text ingested from static sources

### Evolution of d.sentences with repetitions

A question is raised: "How much volume of text would need to be ingested to have a desired % of distinct sentences with repetitions?"
To project an answer to this question, we analyzed the evolution of the metrics on the arXiv data source alone, for the following reasons: 1. to avoid mixing writing styles too much (between data sources); 2. arXiv is the largest-volume data source, organized in monthly folder groups. Using the arXiv data source segmented by the years 2016 to 2020 (Table 8), a trend line was elaborated in Figure 12. Footnote 5: The trend line was calculated using LibreOffice's trend line feature, for the data points shown here. The \(R^{2}=0.985\) gives some confidence in the logarithmic trend, at least for interpolation. (Although not shown here for brevity, the logarithmic trend line was the best fit compared to other curves: linear, exponential, polynomial, etc.) Footnote 6: Also not shown here for brevity, but subdividing the same yearly arXiv data into months - and thus having 12 times more data points - was also visually consistent with the logarithmic trend shown. Using the trend line for extrapolation (assuming that the trend line would not change its type of curve), Table 9 shows the projected #text characters that would be required to achieve the desired %d.sentences with repetitions.

\begin{table} \begin{tabular}{l||r|r|r|r} \hline Common \#distinct sentences (en) & arXiv 0802-2112 & Wikipedia (en) & EUR-LEX (en) & tBooks \\ \hline \hline arXiv 0802-2112 & 761,978,703 & & & \\ \hline Wikipedia (en) & 46,531 & 130,111,846 & & \\ \hline EUR-LEX (en) & 5,448 & 28,130 & 24,109,494 & \\ \hline tBooks & 63,747 & 145,199 & 4,665 & 233,973,998 \\ \hline \hline \end{tabular} \end{table} Table 6: Common **#distinct sentences** between English sources

\begin{table} \begin{tabular}{l||r|r} \hline Common \#distinct sentences (pt) & Wikipedia (pt) & EUR-LEX (pt) \\ \hline \hline Wikipedia (pt) & 16,594,472 & \\ \hline EUR-LEX (pt) & 8,600 & 34,280,621 \\ \hline \hline Common \#distinct sentences (all sources) & 8,600 & \\ \hline \end{tabular} \end{table} Table 7: Common **#distinct sentences** between Portuguese sources

Table 5: Volume of Portuguese text ingested from static sources.

Even for the 5% objective (the nearest to the current 3.39% shown in Table 4), it does not seem practical to gather 3.77E+13 characters (\(\approx\) 37 TeraBytes) of text on short notice to verify this projection. But the projection is still verifiable on the total number of arXiv characters ingested. From Table 4, for 80,399,442,210 text characters: **Predicted:** 3.49% using the trend line. **Observed:** 3.39% distinct sentences with repetitions. Projections for higher %d.sentences with repetitions (25%, 50%, 75% and 100%) are also shown in Table 9 out of curiosity, and should not be taken very seriously, as it is well known that, for a logarithmic curve, small variations in the curve coefficients will cause large changes at distant points. These projections also assume that the curve does not change shape.

### Evolution of d.v.sentences with repetitions

This motivated the use of LanguageTool (section 5.3) to reduce the _noise level_ in sentences and analyze how it would affect the projections. (The "v" in "d.v.sentences" stands for "valid".)
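For transparency, the logarithmic fit and extrapolation behind these projections (here and in the next subsection) can be reproduced with a few lines of Python. The sketch below is ours and uses NumPy instead of LibreOffice, so the coefficients, and therefore the projected volumes, differ slightly from the reported ones.

```python
import numpy as np

# Cumulative arXiv points from Table 8: (#text characters, %d.sentences with repetitions)
chars = np.array([10_076_799_973, 18_498_004_627, 25_986_041_152,
                  32_503_697_718, 38_441_439_656], dtype=float)
pct = np.array([2.97, 3.15, 3.23, 3.27, 3.29])

a, b = np.polyfit(np.log(chars), pct, deg=1)   # pct ~= a*ln(chars) + b

predicted = a * np.log(80_399_442_210) + b     # full arXiv volume of Table 4
needed_for_5pct = np.exp((5.0 - b) / a)        # invert the fit for a 5% target

print(f"predicted %d.sentences with rep. at 8.04E+10 chars: {predicted:.2f}%")
print(f"projected #text characters needed for 5%: {needed_for_5pct:.2e}")
```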
Figure 12: Trend line for %d.sentences with rep. vs #text characters.

\begin{table} \begin{tabular}{l||c|c|c|c} \hline arXiv by year & \#text characters & \#d.sentences & \#d.sentences with repetitions & \%d.sentences with repetitions \\ \hline \hline 2020 & 10,076,799,973 & 72,042,632 & 2,139,053 & 2.97\% \\ \hline 2019+2020 & 18,498,004,627 & 130,817,330 & 4,114,811 & 3.15\% \\ \hline 2018+2019+2020 & 25,986,041,152 & 182,644,683 & 5,900,194 & 3.23\% \\ \hline 2017+2018+2019+2020 & 32,503,697,718 & 227,734,087 & 7,448,879 & 3.27\% \\ \hline 2016+2017+2018+2019+2020 & 38,441,439,656 & 268,911,716 & 8,849,748 & 3.29\% \\ \hline \end{tabular} \end{table} Table 8: arXiv text metrics for years 2016 to 2020.

\begin{table} \begin{tabular}{l||c|c|c|c|c} \hline \%d.sentences with repetitions & **5.00\%** & 25.00\% & 50.00\% & 75.00\% & 100.00\% \\ \hline \hline \#text characters & **3.77E+13** & 9.81E+48 & 1.82E+93 & 3.39E+137 & 6.29E+181 \\ \hline \end{tabular} \end{table} Table 9: arXiv projections for %d.sentences with repetitions vs #text characters.

In Table 10 we compare the full arXiv metrics (already displayed in Table 4) to the same metrics using only validated sentences. The data for the years 2016 to 2020 is shown in Table 11, and the trend line is plotted in Figure 13. The column #d.v.sentences is the number of d.sentences that pass the validation tool check. The column %valid is \(\frac{\#d.v.sentences}{\#d.sentences}\cdot 100.00\). The validation tool seems to reject approximately 50% of the sentences. It can also be noticed that the %d.v.sentences is higher than the %d.sentences (on the distinct but non-validated sentence data). The trend line predicts the need for 3.93E+10 text characters (\(\approx\) 28.2 GigaBytes of text) to achieve 5% distinct valid sentences with repetitions. (Table 12 shows text volume extrapolations based on the same trend line for higher percentages.) From Table 10, for 80,399,442,210 text characters (\(\approx\) 8.03E+10 text characters, \(\approx\) 74.9 GigaBytes of text), we observe 5.18% distinct valid sentences with repetitions: **Predicted:** 5.33% using the trend line. **Observed:** 5.18% distinct valid sentences with repetitions.

## 7 Conclusions

Some success was achieved in extrapolating the evolution of the number of distinct sentences with repetitions vs the volume of ingested text, using a trend line of logarithmic nature.

\begin{table} \begin{tabular}{l||r|r|r|r|r} \hline arXiv by year & \#text characters & \#d.sentences & \#d.v.sentences & \%valid & \%d.v.sentences with repetitions \\ \hline \hline 2020 & 10,076,799,973 & 72,042,632 & 36,257,428 & 50.33\% & 4.36\% \\ \hline 2019+2020 & 18,498,004,627 & 130,817,330 & 64,926,392 & 49.63\% & 4.66\% \\ \hline 2018+2019+2020 & 25,986,041,152 & 182,644,683 & 89,785,881 & 49.16\% & 4.82\% \\ \hline 2017+2018+2019+2020 & 32,503,697,718 & 227,734,087 & 110,992,373 & 48.74\% & 4.91\% \\ \hline 2016+2017+2018+2019+2020 & 38,441,439,656 & 268,911,716 & 130,019,555 & 48.35\% & 4.97\% \\ \hline \end{tabular} \end{table} Table 11: arXiv text metrics for years 2016 to 2020 for valid sentences.

Figure 13: Trend line for %d.v.sentences with rep. vs #text characters.
Table 10: Full arXiv metrics comparing sentences with validated sentences.

The discouraging aspect is that, assuming the trend line does not change its curve nature (from logarithmic to something else) at a hypothetical inflection point, it will not be practical to gather enough text volume even for modest repetition coverage (like 50%). Not enough text volume was gathered to show evidence that this hypothetical inflection point may exist. Also, at these large extrapolated text volumes, it would not be feasible to crowd-source translations.

## 8 Future work

The study showed interesting results for the text analysis and translation, but one key point needs to be resolved before further work: the projections based on the current sentence string model show that it should not be possible to gather enough text documents for modest translation coverage (and the volumes needed would also be too high for effective crowd-sourcing anyway). Can a different sentence model provide higher rates of common text matching? Preliminary experiments using character-level simplification techniques within a sentence (elimination of punctuation, digits, date tagging, custom sentence tokenizers, etc.) have shown residual improvements that were not considered significant enough to be shown here. Can a combination of techniques (such as: * syntax trees with sub-tree matching; * inspiration from transformers and attention models, and other neural-network techniques [30]) be mixed with such an approach? And will crowd-sourcing the translations of the most common text structures still be viable?
2309.09063
Blind Deconvolution of Sparse Graph Signals in the Presence of Perturbations
Blind deconvolution over graphs involves using (observed) output graph signals to obtain both the inputs (sources) as well as the filter that drives (models) the graph diffusion process. This is an ill-posed problem that requires additional assumptions, such as the sources being sparse, to be solvable. This paper addresses the blind deconvolution problem in the presence of imperfect graph information, where the observed graph is a perturbed version of the (unknown) true graph. While not having perfect knowledge of the graph is arguably more the norm than the exception, the body of literature on this topic is relatively small. This is partly due to the fact that translating the uncertainty about the graph topology to standard graph signal processing tools (e.g. eigenvectors or polynomials of the graph) is a challenging endeavor. To address this limitation, we propose an optimization-based estimator that solves the blind identification in the vertex domain, aims at estimating the inverse of the generating filter, and accounts explicitly for additive graph perturbations. Preliminary numerical experiments showcase the effectiveness and potential of the proposed algorithm.
Victor M. Tenorio, Samuel Rey, Antonio G. Marques
2023-09-16T18:07:16Z
http://arxiv.org/abs/2309.09063v1
# Blind Deconvolution of Sparse Graph Signals in the Presence of Perturbations ###### Abstract Blind deconvolution over graphs involves using (observed) output graph signals to obtain both the inputs (sources) as well as the filter that drives (models) the graph diffusion process. This is an ill-posed problem that requires additional assumptions, such as the sources being sparse, to be solvable. This paper addresses the blind deconvolution problem in the presence of imperfect graph information, where the observed graph is a _perturbed_ version of the (unknown) true graph. While not having perfect knowledge of the graph is arguably more the norm than the exception, the body of literature on this topic is relatively small. This is partly due to the fact that translating the uncertainty about the graph topology to standard graph signal processing tools (e.g. eigenvectors or polynomials of the graph) is a challenging endeavor. To address this limitation, we propose an optimization-based estimator that solves the blind identification in the vertex domain, aims at estimating the inverse of the generating filter, and accounts explicitly for additive graph perturbations. Preliminary numerical experiments showcase the effectiveness and potential of the proposed algorithm. Victor M. Tenorio, Samuel Rey, and Antonio G. Marques Dept. of Signal Theory and Communications, King Juan Carlos University, Madrid, Spain Graph Filter Identification, Sparse recovery, Graph Denoising, Robust Graph Signal Processing ## 1 Introduction In recent years, we have witnessed an exponential surge in data utilization, accompanied by a concurrent rise in its complexity. Addressing this growing complexity involves harnessing the inherent structure within the data. In this context, Graph Signal Processing (GSP) [1, 2, 3] emerges as a solution to this challenge, employing graphs to capture the underlying structure of the data and interpreting the data as signals defined on the nodes of the graph. This convergence of signal processing and graph theory facilitates navigating the intricacies of modern information, revealing insights and patterns that traditional methods might overlook. Leveraging the structure of the data becomes even more relevant in ill-posed problems where the available observations are insufficient to solve the task at hand. This is precisely the case of blind deconvolution of graph signals, which is an extension to graphs of the classical problem of blind system identification or blind deconvolution of signals in the time or spatial domain [4, 5]. Specifically in GSP, given a set of output signals that are assumed to be the output of a diffusion process driven by a graph filter (GFi), blind deconvolution tries to jointly identify the sources (nodes) of the diffusion process as well as the GFi that drove the diffusion [6, 7, 8, 9, 10]. This problem finds applications in multiple domains, such as in social analysis (identifying the sources and the propagation dynamics of a rumor in a social network) or in neuroscience (identifying the sources of neurological activity as well as the diffusion patterns in a brain network) [11]. The aforementioned approaches to the blind deconvolution problem assume perfect knowledge of the graph topology. This simplifying assumption allows to compute the frequency representation of the graph signals and the diffusing GFi, leveraging those representations in their proposed algorithms. 
Nonetheless, in real-world applications, the graph is prone to contain _imperfections_ due to, for example, noise in the observed links or outdated information when the graph varies with time. Furthermore, when in lieu of physical entities the graph captures pairwise relationships that need to be learned from the data, the limitations of the method used to learn the topology may give rise to perturbations. The ubiquity of graph perturbations and their potential impact on the performance of graph-based algorithms highlight the necessity to develop robust algorithms capable of effectively handling imperfect graph information. Unfortunately, this is not a trivial task, since even characterizing the influence that simple perturbations models (e.g., additive noise models) have on classical GSP tools (the eigenvectors of the graph-shift operator (GSO) that define the graph Fourier transform or the powers of the GSO that are used in a GFi) is quite challenging. Despite these challenges, several GSP works have started to look into this relevant limitation. Works in [12, 13] study the influence of perturbations in the spectrum of the graph while [14] focus on the effect of perturbations in GFis of order one. Rather than analyzing the effects of perturbations, [15, 16, 17] address the identification of (different types of) GFis while accounting for imperfections in the observed topology. Finally, the presence of perturbations has also been considered in non-linear methods, where current approaches range from studying the transferability of non-linear GFis to designing novel GFis robust to perturbations [18, 19, 20]. However, notwithstanding the rising focus on the uncertainty in the graph topology, there have been no efforts to tackle the problem of blind deconvolution from a robust standpoint. **Contributions.** To address the previous limitations, this paper poses the task of robust blind deconvolution of graph signals while considering imperfect topology knowledge. It carefully formulates the problem as a non-convex optimization program and, by relying on different convex relaxations, designs an algorithm to find a solution. The key aspects of the proposed approach are: 1) modeling the true (unknown) graph as an explicit optimization variable to account for perturbations in the observed topology; 2) optimizing over the inverse filter rather than the generating one to simplify the objective function; and 3) modeling the dependence of the inverse GFi on the graph via a commutativity constraint in the vertex domain, which bypasses the challenges of dealing with high-order polynomials and working on spectral domain of the graph. The postulated problem uses the observations of the output signals and an imperfect version of the GSO to jointly obtain the sources of the network diffusion, the underlying GFI driving the process, and an enhanced (denoised) estimate of the graph. Since the sources of non-convexity are reduced to a mere bilinearity, the optimization is tackled through an alternating approach, labeled as the Robust Blind Deconvolution over Graphs (RBDG) algorithm. To the best of our knowledge, this is the first work that does not assume perfect knowledge of the graph topology when solving the blind identification problem. ## 2 Blind Deconvolution in Gsp In this section, we introduce notation and some fundamentals of GSP. Then, we discuss blind deconvolution of graph signals, and present how previous works dealt with it. 
**Notation and GSP preliminaries.** Denote by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) a graph with \(\mathcal{V}\) and \(\mathcal{E}\) representing its node and edge set, respectively, and with \(|\mathcal{V}|=N\) nodes. Denote by \(\mathbf{A}\in\mathbb{R}^{N\times N}\) its (possibly weighted and directed) adjacency matrix, where \(A_{ij}\neq 0\) if and only if \((i,j)\in\mathcal{E}\). More generally, GSO \(\mathbf{S}\in\mathbb{R}^{N\times N}\) is a matrix encoding the structure of the graph, where \(S_{ij}\) can be non-zero only if \((i,j)\in\mathcal{E}\) or if \(i=j\). Classical examples of matrices playing the role of the GSO are the adjacency matrix or the combinatorial graph Laplacian \(\mathbf{L}=\text{diag}(\mathbf{d})-\mathbf{A}\), where the entries \(d_{j}=\sum_{j}A_{ij}\) represent the nodal degrees. Define also a graph signal as the mapping \(\mathcal{V}\rightarrow\mathbb{R}\), which can be conveniently represented by a vector \(\mathbf{x}\in\mathbb{R}^{N}\), where the entry \(x_{i}\) encodes the signal value at node \(i\in\mathcal{V}\). Finally, a fundamental role in graph signal deconvolution is played by GFs. A GFI is a graph-aware linear operator for graph-signals that can be represented as a matrix polynomial of the GSO of the form \[\mathbf{H}=\sum_{r=0}^{N-1}h_{r}\mathbf{S}^{r}, \tag{1}\] with \(\mathbf{h}=[h_{0},...,h_{N-1}]\) denoting the vector of filter coefficients [21]. Since \(\mathbf{S}^{r}\) encodes the r-hop neighborhood of the graph, GFs are widely used to model diffusion processes over networks [7]. **Blind deconvolution of graph signals.** Consider a diffusion process where the output signal \(\mathbf{y}\in\mathbb{R}^{N}\) is given by \[\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{w}, \tag{2}\] with \(\mathbf{x}\in\mathbb{R}^{N}\) being the input signal being diffused, \(\mathbf{H}\) a GFI modeling the diffusion process, and \(\mathbf{w}\in\mathbb{R}^{N}\) a random vector representing noise or model inaccuracies. Then, given a set of \(M\) observed signals \(\{\mathbf{y}_{i}\}_{i=1}^{M}\) generated according to (2), blind deconvolution aims to find the sources of the network diffusion \(\{\mathbf{x}_{i}\}_{i=1}^{M}\) as well as the GFI \(\mathbf{H}\) controlling the diffusion process. This is a challenging and ill-posed problem since both \(\mathbf{x}\) and \(\mathbf{H}\) are unknown. Therefore, to promote its tractability, a workhorse approach is to assume that there are only a few sources for the network diffusion (i.e. \(\mathbf{x}_{i}\) are sparse). Moreover, exploiting that \(\mathbf{H}\) is a GFI so only the coefficients \(\mathbf{h}\) are unknowns becomes critical. Early works dealing with the blind deconvolution problem in the context of GSP appear in [22, 7]. The approach put forth recovers a lifted rank-one matrix \(\mathbf{Z}=\mathbf{x}\mathbf{h}^{T}\), which exhibits certain desirable properties such as being row sparse and rank one. Later on, the works presented in [23, 8] review several existing methods and extend the previous approach to input signals defined as the combination of a few entries in a dictionary, as well as exploring the problem of graph signal sampling and analyzing its similarities with blind deconvolution. Differently, [22, 9] reformulate the problem to identify the frequency response of the inverse filter as a way to bypass the non-convex bilinear term \(\mathbf{H}\mathbf{x}_{i}\) that arises when jointly identifying the filter and the input signals. 
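As a purely illustrative aside, the observation model in (1)-(2) can be made concrete with a short numerical sketch; the graph model, filter order, noise level, and all variable names below are assumptions made only for this example and are not taken from the paper:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(seed=0)

# Graph and GSO: the adjacency matrix of a small graph plays the role of S here.
N = 20
graph = nx.watts_strogatz_graph(N, k=4, p=0.2, seed=0)
S = nx.to_numpy_array(graph)

# Graph filter H = sum_r h_r S^r as in (1); a low filter order is used purely for brevity.
h = rng.uniform(0.0, 1.0, size=3)
H = sum(h_r * np.linalg.matrix_power(S, r) for r, h_r in enumerate(h))

# Sparse sources (K active nodes per signal) and observations y_i = H x_i + w_i as in (2).
K, M = 2, 50
X = np.zeros((N, M))
for i in range(M):
    X[rng.choice(N, size=K, replace=False), i] = rng.normal(size=K)
W = 0.01 * rng.normal(size=(N, M))   # small additive noise
Y = H @ X + W                        # N x M matrix of observed graph signals
```

With data of this form, the blind deconvolution task is to recover both the sparse inputs and the filter (or its inverse) from the outputs alone.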
The work in [9] is further developed in [10], where the authors incorporate unrolling schemes [24] to strengthen their design and limit the impact of selecting the hyperparameters. ## 3 Robust Blind Deconvolution over Graphs After having established the notation and the problem context, along with an overview of prior approaches to address it, this section presents the formal problem statement and introduces our proposed algorithmic solution. As previously mentioned, we assume that we do not have access to the true GSO, but to a noisy version \(\bar{\mathbf{S}}=\mathbf{S}+\mathbf{\Delta}\). Here, \(\mathbf{\Delta}\) represents a perturbation matrix whose particular structure will depend on the perturbation at hand (e.g., creating/destroying links, or noisy edge weights) [17]. It is easy to note that the uncertainty encoded in \(\mathbf{\Delta}\) renders the blind deconvolution problem more challenging to solve. The blind identification problem accounting for the graph imperfections is formally stated next. **Problem 1**: _Let \(\mathcal{G}\) be a graph with \(N\) nodes, \(\mathbf{S}\) the true (unknown) GSO, and \(\bar{\mathbf{S}}\) be the perturbed (observed) GSO. Moreover, let \(\{\mathbf{y}_{i}\}_{i=1}^{M}\) be the observed output signals obtained from the unknown input signals \(\{\mathbf{x}_{i}\}_{i=1}^{M}\) as described in (2). The aim is to use the duplet \(\{\mathbf{y}_{i}\}_{i=1}^{M}\),\(\mathbf{S}\) to find i) the input signals \(\{\mathbf{x}_{i}\}_{i=1}^{M}\); ii) the filter driving the diffusion process \(\mathbf{H}\); and iii) an enhanced estimate of the real GSO \(\mathbf{S}\). To that end, the following assumptions are in order: (**ASI**) The input signals \(\mathbf{x}_{i}\) are sparse, i.e., \(\|\mathbf{x}_{i}\|_{0}=K\ \forall\ i\in\{1,...,M\}\), where \(K\ll N\). (**AS2**) \(\mathbf{H}\) is a polynomial of \(\mathbf{S}\). (**AS3**) \(\mathbf{S}\) and \(\bar{\mathbf{S}}\) are close according to some metric \(d(\mathbf{S},\bar{\mathbf{S}})\), i.e., the observed perturbations are "small" in some sense._ Assumptions (**AS1**) and (**AS2**), which limit the degrees of freedom of the bilinear model, are standard in the context of blind deconvolution of graph signals [9, 7]. By limiting the level of perturbations, Assumption (**AS3**) guarantees that matrices \(\mathbf{S}\) and \(\bar{\mathbf{S}}\) are "similar" and, as a result, that \(\bar{\mathbf{S}}\) contains meaningful information about \(\mathbf{S}\). For convenience, let us group the input and output signals in the columns of the \(N\times M\) matrices \(\mathbf{X}:=[\mathbf{x}_{1},...,\mathbf{x}_{M}]\) and \(\mathbf{Y}:=[\mathbf{y}_{1},...,\mathbf{y}_{M}]\), respectively. A natural optimization-based formulation for Problem 1 is \[\min_{\mathbf{X},\mathbf{H},\mathbf{S}} \|\mathbf{Y}-\mathbf{H}\mathbf{X}\|_{F}^{2}+\beta\|\mathbf{S}\|_{0}\] (3) s.to: \[d(\mathbf{S},\bar{\mathbf{S}})\leq\epsilon_{1} \tag{4}\] \[\|\mathbf{x}_{i}\|_{0}\leq K\ \forall\ i\in\{1,...,M\}\] (5) \[\|\mathbf{H}\mathbf{S}-\mathbf{S}\mathbf{H}\|_{F}^{2}\leq\epsilon_ {2}, \tag{6}\] where we formulated the objective function to minimize the error between the output signals \(\mathbf{Y}\) and the prediction \(\mathbf{H}\mathbf{X}\), along with an \(\ell_{0}\) term of the GSO to promote a sparse solution for \(\mathbf{S}\). The key constraint in our approach is (4), which relates to (**AS3**) and bounds the distance between the GSO \(\mathbf{S}\) and the observation \(\bar{\mathbf{S}}\). 
The choice of a suitable distance function will depend on the nature of the perturbation encoded in \(\mathbf{\Delta}\), with plausible examples being the \(\ell_{0}\) pseudo-norm when perturbations create/destroy links or the Frobenius norm when the weights of \(\bar{\mathbf{S}}\) present noise. Next, the constraint (5) is used to limit the sparsity of the signals \(\mathbf{X}\) while (6) promotes that \(\mathbf{H}\) is a GFI (i.e., a polynomial on \(\mathbf{S}\)) as stated in (**AS2**). Note that the commutativity exploits the fact that, since \(\mathbf{H}\) is a polynomial of \(\mathbf{S}\), the two matrices share the same eigenvectors. This simple observation prevents us from dealing with the spectrum of \(\mathbf{S}\) and with high-order polynomials, simplifying the optimization problem. Nevertheless, (3) is clearly a non-convex optimization problem, due to the bilinear terms in (3) and (6) and the \(\ell_{0}\) norms in (3) and (5). To design a convex solution, we rely on an alternating optimization approach. To that end, we make the following considerations: * Inspired by [9, 10], we replace the optimization variable \(\mathbf{H}\) by its inverse, denoted by \(\mathbf{G}:=\mathbf{H}^{-1}\). This change of variable simplifies the objective function by replacing the bilinearity \(\|\mathbf{Y}-\mathbf{H}\mathbf{X}\|_{F}^{2}\) with \(\|\mathbf{G}\mathbf{Y}-\mathbf{X}\|_{F}^{2}\), which is convex in both \(\mathbf{G}\) and \(\mathbf{X}\). A key aspect to note is that this variable change still allows us to use the commutativity term, since the inverse filter \(\mathbf{G}\) also shares eigenvectors with \(\mathbf{S}\).1 Footnote 1: To demonstrate this, let us write the generating GFI as \(\mathbf{H}=\mathbf{V}\text{diag}(\bar{\mathbf{h}})\mathbf{V}^{-1}\), where \(\mathbf{V}\) are the eigenvectors of \(\mathbf{S}\) and \(\bar{\mathbf{h}}\) is the frequency response of the filter. We can then write \(\mathbf{G}=\mathbf{V}\text{diag}(\bar{\mathbf{g}})\mathbf{V}^{-1}\), where \(\bar{g}_{i}=1/\bar{h}_{i}\) for all \(i\). Therefore, \(\mathbf{G}\) and \(\mathbf{S}\) share eigenvectors, and thus commute. * We replace the \(\ell_{0}\) norm by its convex surrogate, the \(\ell_{1}\) norm (to simplify exposition, iterative re-weighted \(\ell_{1}\) alternatives [25] are not included in the formulation but they are considered in the numerical section). * We move the constraints to the objective function by adding them as regularizers. * We assume that \(\bar{\mathbf{S}}\) contains perturbations that create and destroy links of \(\mathbf{S}\) and select the \(\ell_{1}\) norm as the distance function between \(\bar{\mathbf{S}}\) and \(\mathbf{S}\). Nevertheless, recall that any other convex distance can be readily employed. Taking into account the previous considerations, we end up with the following optimization problem \[\min_{\mathbf{X},\mathbf{G},\mathbf{S}} \|\mathbf{G}\mathbf{Y}-\mathbf{X}\|_{F}^{2}+\beta\|\mathbf{S}\|_ {1}+\lambda\|\mathbf{S}-\bar{\mathbf{S}}\|_{1} \tag{7}\] \[+\alpha\|\mathbf{X}\|_{1}+\gamma\|\mathbf{G}\mathbf{S}-\mathbf{ S}\mathbf{G}\|_{F}^{2}\] \[\text{s.to:}\quad\mathrm{Trace}(\mathbf{G})=1,\] where the constraint is used to prevent the trivial solution (\(\mathbf{X}=0,\mathbf{G}=0,\mathbf{S}=\bar{\mathbf{S}}\)). 
The problem in (7) is still non-convex due to the bilinearity in the commutativity term, but can be solved iteratively using an alternating minimization approach where, at each iteration \(t\), two steps are performed: * **Step 1** (filter and source identification): in this step, we find the optimal solutions for both \(\mathbf{G}\) and \(\mathbf{X}\) by solving the following problem \[\mathbf{G}_{(t)},\mathbf{X}_{(t)}=\text{arg}\min_{\mathbf{G}, \mathbf{X}} \|\mathbf{G}\mathbf{Y}-\mathbf{X}\|_{F}^{2}+\alpha\|\mathbf{X}\|_{1}\] (8) \[+\gamma\|\mathbf{G}\mathbf{S}_{(t-1)}-\mathbf{S}_{(t-1)}\mathbf{ G}\|_{F}^{2}\] \[\text{s.to:}\quad\mathrm{Trace}(\mathbf{G})=1,\] where we used the estimation of \(\mathbf{S}\) from the previous iteration, \(\mathbf{S}_{(t-1)}\). This problem, which is convex in both \(\mathbf{G}\) and \(\mathbf{X}\), can be solved using standard convex solvers. However, more efficient approaches, such as coordinate descent or proximal methods can be employed [26]. * **Step 2** (graph denoising): with the solutions of the previous step, we now aim to find a new estimation of the GSO by solving \[\mathbf{S}_{(t)}=\text{arg}\min_{\mathbf{S}} \beta\|\mathbf{S}\|_{1}+\lambda\|\mathbf{S}-\bar{\mathbf{S}}\|_{1}\] (9) \[+\gamma\|\mathbf{G}_{(t)}\mathbf{S}-\mathbf{S}\mathbf{G}_{(t)}\|_ {F}^{2},\] which is also convex and amenable to efficient approaches like the previous step. The complete pseudo-code is presented in Algorithm 1. The two steps (8)-(9), which are coupled by the commutativity term, are repeated for a fixed number of iterations \(T\) or until a stopping criterion is met. For the first iteration, the GSO is initialized to the imperfect observation \(\mathbf{S}_{(0)}=\bar{\mathbf{S}}\). The output of Algorithm 1 are the estimates for the inverse filter \(\bar{\mathbf{G}}=\mathbf{G}_{(T)}\), the source signals \(\hat{\mathbf{X}}=\mathbf{X}_{(T)}\), and the denoised GSO \(\hat{\mathbf{S}}=\mathbf{S}_{(T)}\). ``` Input:\(\mathbf{Y},\bar{\mathbf{S}}\) Output:\(\hat{\mathbf{G}},\hat{\mathbf{X}},\hat{\mathbf{S}}\). 1 Initialize \(\mathbf{S}_{(0)}\) as \(\mathbf{S}_{(0)}=\bar{\mathbf{S}}\). for\(t=1\)to\(T\)do 2 Compute \(\mathbf{G}_{(t)}\) and \(\mathbf{X}_{(t)}\) by solving (8) using \(\mathbf{S}_{(t-1)}\). Compute \(\mathbf{S}_{(t)}\) by solving (9) using \(\mathbf{G}_{(t)}\) and \(\mathbf{X}_{(t)}\). 3 end for 4\(\hat{\mathbf{G}}=\mathbf{G}_{(T)},\ \hat{\mathbf{X}}=\mathbf{X}_{(T)},\ \hat{\mathbf{S}}= \mathbf{S}_{(T)}\). ``` **Algorithm 1**Robust blind deconvolution with graph denoising. Note that, unlike [9], we solve the problem in the vertex domain, without relying on the frequency representation of the signals. This allows us to bypass the instability that the perturbations in \(\bar{\mathbf{S}}\) generate in the graph eigenvectors, yielding a more robust approach. Another advantage of the proposed algorithm is that, unlike [7], we do not require the sparsity pattern of the sources to be the same across all signals. Algorithm 1 incorporates four essential hyperparameters: \(\alpha\), \(\gamma\), \(\beta\), and \(\lambda\). These hyperparameters can be found via grid search or any other hyperparameter search algorithm. Since \((\alpha,\gamma,\beta,\lambda)\) are associated with regularizers derived from constraints, they can be interpreted as Lagrange multipliers; as a result, their values can be potentially tuned using tailored dual algorithms. Alternatively, an approach based on the unrolling technique, like the one proposed in [10], could be implemented. 
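For readability, a minimal sketch of the two alternating steps of Algorithm 1 is given below using cvxpy; the hyperparameter values are arbitrary placeholders, the reweighted \(\ell_{1}\) variants are omitted, and an efficient implementation would instead rely on the proximal or coordinate-descent updates mentioned above:

```python
import cvxpy as cp
import numpy as np

def rbdg(Y, S_bar, alpha=1e-2, beta=1e-2, lam=1e-1, gamma=1.0, T=10):
    """Sketch of the alternating minimization in Algorithm 1 (steps (8) and (9))."""
    N, M = Y.shape
    S_hat = S_bar.copy()                              # initialize S_(0) = S_bar
    for _ in range(T):
        # Step 1: filter and source identification, Eq. (8), with S fixed.
        G = cp.Variable((N, N))
        X = cp.Variable((N, M))
        step1 = cp.Minimize(
            cp.sum_squares(G @ Y - X)
            + alpha * cp.sum(cp.abs(X))
            + gamma * cp.sum_squares(G @ S_hat - S_hat @ G))
        cp.Problem(step1, [cp.trace(G) == 1]).solve()
        G_hat, X_hat = G.value, X.value

        # Step 2: graph denoising, Eq. (9), with G fixed.
        S = cp.Variable((N, N))
        step2 = cp.Minimize(
            beta * cp.sum(cp.abs(S))
            + lam * cp.sum(cp.abs(S - S_bar))
            + gamma * cp.sum_squares(G_hat @ S - S @ G_hat))
        cp.Problem(step2).solve()
        S_hat = S.value
    return G_hat, X_hat, S_hat
```

Given observations \(\mathbf{Y}\) and the perturbed GSO \(\bar{\mathbf{S}}\) (e.g., generated as in the earlier sketch), such a routine returns estimates of the inverse filter, the sparse sources, and the denoised graph, mirroring the outputs of Algorithm 1.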
Several of these options will be explored and compared in the journal version of this paper. ## 4 Numerical Experiments In this section, we numerically assess the capabilities of the proposed algorithm. To do so, we first explain the setup common to all experiments and then we present and analyze the results for three test cases. The code used to run the simulations is available in GitHub2. Footnote 2: [https://github.com/vmtcnoriorio/RobustBlindDeconvolution](https://github.com/vmtcnoriorio/RobustBlindDeconvolution) **Experiment setup**: Unless stated otherwise, we sample graphs with \(N=20\) nodes from the small-world random graph model [27], we assign \(K=2\) sources of the network diffusion by selecting \(K\) nodes uniformly at random for each graph signal and we generate \(M=50\) graph signals. We use the adjacency matrix as the GSO, and the coefficients of the GFi are sampled uniformly at random between 0 and 1. The graph is perturbed by rewiring a 10% of the total number of edges. The error represented in the figures is the normalized median error across 25 realizations. We compare Algorithm 1 (labelled as "RBD-G" in the figure legends) with: i) the scheme presented in [9] ("Ye et. al."); and ii) a slight modification of (3)-(6), where we augment the objective in (3) with the constraints (4)-(6), replace the \(\ell_{0}\) pseudo norm with the \(\ell_{1}\) norm, and set the distance function as the \(\ell_{1}\) norm of the difference. The modified problem in ii) is solved using an iterative 3-step process where in step 1 we estimate the GFi \(\mathbf{H}\), in step 2 we optimize \(\mathbf{X}\), and in step 3 we estimate \(\mathbf{S}\). This approach is labeled as "RBD-H" in the figures. Finally, the lines whose labels contain "-rew" represent versions of the previous algorithms where the \(\ell_{1}\) norms of \(\mathbf{X}\) and \(\mathbf{S}\) are replaced with a reweighted \(\ell_{1}\)-approach [25]. **Test case 1**: Here, we analyze the impact of perturbations on the recovered GFi \(\hat{\mathbf{G}}\). Figure 1-(a) shows the normalized error as we increase the perturbations in \(\hat{\mathbf{S}}\). In general, we observe that the error grows as the ratio of rewired links increases and that the proposed approach outperforms the alternatives in every perturbed scenario. As we can see, for the unperturbed case (left-most point of the horizontal axis), the algorithm that yields the most accurate filter estimate is the approach in [9], as we would expect. However, for the smallest perturbation value considered (second left-most point of the horizontal axis), the performance of the approach in [9] drops dramatically. In contrast, our robust algorithms, both using the reweighted version of the \(\ell_{1}\) norm and without it, obtain results in the order of \(10^{-4}\) and \(10^{-1}\), respectively, for all perturbation values. In other words, they are able to properly deal with perturbations in the graph and consistently obtain an accurate representation of the GFi driving the process. Finally, it is worth noting that the naive 3-step approach is not able to properly identify the GFi, most probably due to the increased complexity of introducing a third step along with the non-convexity introduced by the additional bilinear term \(\mathbf{H}\mathbf{X}\). **Test case 2**: Here our focus is on analyzing how the number of sources affects the quality of the estimation \(\hat{\mathbf{X}}\). 
Figure 1-(b) depicts the error in the recovered \(\hat{\mathbf{X}}\) as the number of non-zero entries (\(K\)) in each \(\mathbf{x}_{i}\) increases. From the results, it follows that our algorithms clearly outperform the alternatives. More precisely, we observe that the error remains below \(10^{-3}\) when \(K<5\) (which corresponds to 25% of the total number of nodes in the graph), and then starts deteriorating. Interestingly, comparing "RBD-G-rew" and "RBD-G", we observe that the reweighted \(\ell_{1}\) norm not only provides a better estimate of \(\hat{\mathbf{X}}\), but it is also more resilient to denser signals \(\mathbf{X}\). The results also illustrate that, for the considered setup (10% of errors in the graph), the alternatives are unable to properly identify the sources. **Test case 3**: In the last test case, we analyze the performance of the graph denoising task when increasing the number of available graph signals \(M\). Note that the approach in "Ye et al." does not perform denoising, and therefore we set \(\hat{\mathbf{S}}=\bar{\mathbf{S}}\) for this algorithm. We can see in Figure 1-(c) that, as expected, the normalized error of the GSO for the algorithms proposed in this work decreases as \(M\) increases. The alternatives are again unable to properly denoise the GSO. It is worth mentioning that, even when only \(M=20\) signals are observed (and hence \(M=N\)), our "RBD-G-rew" algorithm already obtains a normalized error of 0.1, providing a high-quality representation of the true GSO. ## 5 Conclusion This work addressed the problem of blind deconvolution in the context of GSP when, instead of perfectly knowing the GSO, only an imperfect (perturbed) observation \(\bar{\mathbf{S}}\) is available. Our robust estimator was designed as the solution to an optimization problem where the sparse sources, the (inverse) filter, and the true graph were jointly learned. Two critical aspects of our design were: a) declaring the true graph as an optimization variable, and b) exploiting the fact that the inverse of the generating filter is a polynomial of the true graph via a commutativity bilinear constraint. These features resulted in an optimization fully formulated in the vertex domain, whose only source of non-convexity is the bilinear constraint, which, in turn, can be solved via an alternating minimization approach. Preliminary numerical experiments showcased that our algorithm is able to identify the three variables in different scenarios. Future work includes the theoretical characterization of the proposed algorithm, designing more efficient optimization schemes to reduce the computational complexity, and using unrolling techniques to set the value of the regularization constants. Figure 1: Comparing the estimation performance of 5 blind deconvolution approaches. (a) shows the error in the recovered filter \(\hat{\mathbf{G}}\) when modifying the number of links perturbed in \(\bar{\mathbf{S}}\), where the values in the x-axis represent the proportion of the total number of links that have been perturbed, (b) represents the error in the sources of the diffusion \(\mathbf{X}\) with respect to the increase in the number of active sources \(K\), and (c) plots the error in the denoised GSO when increasing the number of samples \(M\).
2310.04424
Stability Analysis of Non-Linear Classifiers using Gene Regulatory Neural Network for Biological AI
The Gene Regulatory Network (GRN) of biological cells governs a number of key functionalities that enable them to adapt and survive through different environmental conditions. Close observation of the GRN shows that the structure and operational principles resemble an Artificial Neural Network (ANN), which can pave the way for the development of Biological Artificial Intelligence. In particular, a gene's transcription and translation process exhibits a sigmoidal-like property based on transcription factor inputs. In this paper, we develop a mathematical model of a gene-perceptron using a dual-layered transcription-translation chemical reaction model, enabling us to transform a GRN into a Gene Regulatory Neural Network (GRNN). We perform stability analysis for each gene-perceptron within the fully-connected GRNN sub-network to determine temporal as well as stable concentration outputs that will result in reliable computing performance. We focus on a non-linear classifier application for the GRNN, where we analyzed generic multi-layer GRNNs as well as an E.Coli GRNN that is derived from trans-omic experimental data. Our analysis found that varying the parameters of the chemical reactions can allow us to shift the boundaries of the classification region, laying the platform for programmable GRNNs that suit diverse application requirements.
Adrian Ratwatte, Samitha Somathilaka, Sasitharan Balasubramaniam, Assaf A. Gilad
2023-09-14T21:37:38Z
http://arxiv.org/abs/2310.04424v1
# Stability Analysis of Non-Linear Classifiers using Gene Regulatory Neural Network for Biological AI ###### Abstract The Gene Regulatory Network (GRN) of biological cells governs a number of key functionalities that enable them to adapt and survive through different environmental conditions. Close observation of the GRN shows that the structure and operational principles resemble an Artificial Neural Network (ANN), which can pave the way for the development of Biological Artificial Intelligence. In particular, a gene's transcription and translation process exhibits a sigmoidal-like property based on transcription factor inputs. In this paper, we develop a mathematical model of a gene-perceptron using a dual-layered transcription-translation chemical reaction model, enabling us to transform a GRN into a Gene Regulatory Neural Network (GRNN). We perform stability analysis for each gene-perceptron within the fully-connected GRNN sub-network to determine temporal as well as stable concentration outputs that will result in reliable computing performance. We focus on a non-linear classifier application for the GRNN, where we analyzed generic multi-layer GRNNs as well as an _E.Coli_ GRNN that is derived from trans-omic experimental data. Our analysis found that varying the parameters of the chemical reactions can allow us to shift the boundaries of the classification region, laying the platform for programmable GRNNs that suit diverse application requirements. ## 1 Introduction In recent years, the field of Artificial Intelligence (AI) has developed rapidly, resulting in sophisticated learning algorithms that have benefited a plethora of applications (e.g., manufacturing, economics, computer vision, robotics, etc.) [(1)]. Inspired by the functions of neurons, the ultimate vision of AI is to create human-like intelligence that one day will have a working capacity close to that of the brain. Based on the system application, AI can be categorized as software- or hardware-based. Software-based AI includes various forms of algorithms that depend on their structure as well as the training process (e.g., convolutional neural networks [(2)] and recurrent neural networks [(3)]), where a novel application is large language models such as the Generative Pre-trained Transformer (GPT) [(4)]. Neuromorphic processors are a hardware-based AI platform that architecturally consists of neurons and synapses constructed from memristor devices that communicate based on encoded neural spikes [(5)]. Presently, the vast majority of AI machines are constructed using instruction-encoded circuits, silicon-based semiconductors, and nanotechnology [(6)], [(7)], [(8)]. While this enables more efficient computer systems that have capabilities of learning and computing, it also results in significant challenges, such as deployment in wet non-silicon mediums (e.g., biological mediums), as well as utilizing large amounts of energy [(9)]. Current research has aimed to address these challenges, and one direction taken is through Biological AI, where computing is performed through living biological cells [(10)], [(11)]. A recent example is _DishBrain_, where the system is composed of living neurons that can be trained to play the game of "_Pong_" on a computer [(12)]. In other works, ANNs have been programmed into bacterial cells [(13)], [(14)]. Similarly, molecular circuits programmed to behave like ANNs have also been proposed, and one example is the Bio-molecular Neural Network (BNN) [(15)].
The underlying basis for all these approaches is the communication of molecules [(16)] that operate as part of the chemical reactions to enable computing operations. From the perspective of Gene Regulatory Networks (GRN), there is a connection between its structure and the operation of an ANN. In our recent work [17], we developed a model that transforms the gene-gene interaction within the GRN using weights, forming a **GRNN** while also exploring the impact of structural changes on the computing capacity. In this study, we investigate the behaviour of a fully-connected GRNN derived from a GRN, focusing on the stability analysis of the gene translation and transcription process during its computing operation. The stability analysis focuses on each perceptron of the GRNN, which we term a **gene-perceptron**. Figure 1 illustrates the mapping from ANN to GRNN. In a conventional ANN, a perceptron takes multiple inputs (\(x_{1}\) and \(x_{2}\)) and computes their weighted summation (\(\sum\)), which goes through an activation function (\(z(x)\)). In the context of the GRNN, the weights are represented by the Transcription Factor (TF) concentration corresponding to the half-maximal RNA concentration (\(K_{A}\)) and the gene-product copy number (\(C_{N}\)), which individually impact RNA and protein concentrations. Input-genes (\(g_{X_{1}}\) and \(g_{X_{2}}\)) have TFs that bind to the promoter region of the gene-perceptron \(g_{1,i}\), which is transcribed to RNA (\(R_{i}\)) and then translated to protein (\(P_{i}\)). This can be considered as a weighted summation, which results in regulatory effects on gene expression within the gene-perceptron. Based on the stability of each gene-perceptron at the steady state, the maximum-stable protein concentration (\([P]^{*}\)) represents the output. We mathematically model the chemical reactions of the transcription and translation process of a gene-perceptron, which we term the dual-layered transcription-translation reaction model (from here on we simply term this the dual-layered chemical reaction model). The dual-layered chemical reaction model can be integrated with a trans-omic data model (transcriptome and proteome) and the cellular GRN in order for us to identify active genes for specific environments, which will be the basis for us to create the GRNN. Based on this platform, we will perform stability analysis at the steady state of molecular production (RNA and protein) for the gene-perceptron. Once we prove the stability of the gene-perceptron, as an application we focus on a non-linear classifier relying on the maximum-stable protein concentration for different concentrations of TFs that act as inputs. To evaluate the model's performance, we analyze two generic multi-layer GRNN networks and an E.Coli GRNN. We also show that we can manipulate and shift the classification areas based on different parameter configurations. The contributions of this study can be outlined as follows: * **Developing GRNN inspired by ANN structures using dual-layer chemical reaction models:** Using the dual-layered chemical reaction model, we show that the gene transcription and RNA translation processes exhibit sigmoidal-like molecular concentration dynamics at their stable points. This behavior is governed by the weights, which are a function of the gene product copy number and the TF concentration corresponding to the half-maximal RNA concentration.
* **Stability analysis of GRNN:** We developed full mathematical models derived from the chemical reactions and apply Lyapunov's stability analysis for the gene-perceptron to determine stable protein concentrations as well as temporal production that will facilitate reliable GRNN computing. * **GRNN application for non-linear classifiers:** Using the stability analysis, we are able to determine the decision boundaries of the derived GRNNs to classify data within regions of protein concentration output. By varying parameters of the chemical reactions, we demonstrate how the classification area can be shifted, which can serve as a tool for engineering the GRN for several non-linear classifiers based on the application's requirements. ## System Modeling This section describes the mathematical models for the gene transcription and translation within gene-perceptrons, employing a dual-layered chemical reaction model (Figure 2) that breaks down the steps of the translation and transcription process. The production of RNAs depends on RNA polymerase, TFs and \(\sigma\) factors that bind to the promoter (\(Prom\)) [18], as well as the dissociation constant (\(k_{d}\)). Once the TF binds to the promoter \(Prom\), the transcription begins at the rate of \(k_{1}\). This is followed by the RNA degradation at the rate of \(d_{1}\) based on their half-life value [19], RNA binding proteins [20], as well as the degradosome components that include _RNase E_, _RNA helicase_, as well as _PNPase_[21]. Following the transcription of the RNAs is the translation into protein, which occurs at the rate of \(k_{2}\) facilitated by the Ribosome and Transfer RNA (tRNA). Once the RNA is translated, the protein molecules start to degrade gradually at the rate of \(d_{2}\). Significant factors that affect the degradation of protein are non-coding RNA, as well as energy-dependent and energy-independent Proteases. Overall, to maintain the concentration stability in the cell, RNA and protein production are balanced by the degradation process. By taking the dual-layered chemical reactions model into account, we model the concentration changes at the transcriptome and proteome using mathematical models. These models enable us to assess the stability of the gene-perceptron expression through the eigenvalue method and determine the stabilization time using the Lyapunov stability theorem. After determining if a particular gene-perceptron expression is stable, we determine the stability of the entire GRNN. Then, based on the application study, the classification ranges for each gene-perceptron in a network are determined at the equilibrium maximum-stable protein concentration state. Based on the sigmoidal input-output behavior and adjustable threshold, we deduce that gene-perceptrons in the GRNN possess conventional NN properties. For an overview of the algorithm mentioned above, please refer to Figure 3. Figure 1: Illustration of the mapping between components of an ANN and a GRNN. In this depiction, \(w_{i}\) and \(w_{i}(K_{A},C_{N})\) represent the weights of a perceptron in the ANN and GRNN, respectively, while the activation function \(z(x)\) is equivalent to a combination of the transcription process of RNA concentration \([R]_{i}\) as well as translation of maximum-stable protein concentration \([P]_{i}^{*}\). The chemical reactions are governed by the transcription rate \(k_{1}\), translation rate \(k_{2}\), degradation rate of RNA \(d_{1}\) and degradation rate of protein \(d_{2}\). Figure 3: Flow chart for the calculation of classification areas as well as stability based on the dual-layered transcription-translation chemical reaction model of each gene-perceptron. Figure 2: Illustration of the dual-layered transcription-translation chemical reaction model of the gene-perceptron. Each component corresponds to the synthesis and degradation of RNA and protein for the \(j^{\text{th}}\) gene-perceptron in the \(i^{\text{th}}\) layer (\(g_{i,j}\)) of the GRNN. Here, \(RnpB\), \(SsrA\) and \(SsrS\) are examples of non-coding RNA (ncRNA). Examples of energy-dependent proteases include \(Lon\), \(HflB\), \(ClpXP\) and \(HslUV\).
## Modelling Transcription of a Gene In this section, we discuss transcription and the corresponding RNA concentration model. During the transcription process, the RNA polymerase and TFs bind to the promoter region and then the \(\sigma\) factor attaches to the promoter region and unwinds the DNA (22). This is followed by the \(\sigma\) factor release from the polymerase, allowing for the elongation of the RNA chain. Based on (23), the concentration change over time \(t\) of RNA for a particular gene-perceptron \(i\) can be expressed as follows (chemical species are represented using uppercase letters (e.g., \(X\)), and their corresponding concentration is enclosed within brackets (e.g., \([X]\))) \[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\frac{[TF]^{n}}{K_{A_{i}}^{n}+[TF]^{n}}-d_{1_{i}}[R]_{i}. \tag{1}\] The gene-perceptron is activated by the TF, where \([R]_{i}\), \(k_{1_{i}}\), \([TF]\), \(d_{1_{i}}\), \(n\), \(C_{N_{i}}\) and \(K_{A_{i}}\) are the RNA concentration, transcription rate, concentration of TFs, degradation rate of RNA, Hill coefficient, gene product copy number and TF concentration when the production of RNA is at the half-maximal point for gene-perceptron \(i\), respectively. Given that the initial RNA concentration transcribed by a gene-perceptron is \([R]_{i}(0)\) (i.e., \([R]_{i}(t=0)=[R]_{i}(0)\)), the solution of Eq. 1 is derived as follows \[[R]_{i}=\frac{k_{1_{i}}C_{N_{i}}}{d_{1_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right)(1-e^{-d_{1_{i}}t})+[R]_{i}(0)e^{-d_{1_{i}}t}. \tag{2}\] In contrast, in the event that the gene-perceptron is repressed by the TF, the RNA concentration change over time \(t\) is represented as follows, \[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\frac{K_{A_{i}}^{n}}{K_{A_{i}}^{n}+[TF]^{n}}-d_{1_{i}}[R]_{i}. \tag{3}\] Eqs. 1 and 3 are expressed as mass-balance differential equations, taking the difference between the RNA synthesis, which is modelled using the Hill function, and the degradation of the RNA (24), (25), (26). The Hill coefficient \(n\) represents the number of TF molecules that bind simultaneously to the promoter \(Prom\) with reaction dissociation constant \(K_{d}\) when the gene-perceptron is transcribing RNA (23), and is represented as \(Prom+n\ TF\stackrel{K_{d}}{\rightleftharpoons}Prom_{nTF}\). The Hill coefficient is critical for the sigmoidal input-output characteristics of the gene-perceptron, as depicted in Figure 4. According to the plot, we can see that when we increase the Hill coefficient, the sigmoidicity increases for the maximum-stable protein concentration (\([P]^{*}\)) over the input-gene concentration (\([TF]\)). Thus, when a gene-perceptron possesses a higher Hill coefficient, it exhibits more sigmoidal-like behavior (for our analytical model we consider \(n=1\)).
Active TF, RNAP, PNpace, RNase E and tRNA corresponds to active TFs, RNA polymerase, Polyt ribonucleotide phosphorylase, Ribonuclease E and transfer RNA, respectively. ## Modelling Translation of a RNA In this section, we describe RNA-to-protein translation and associated models. Initially, the ribosome and tRNAs form a complex that draws the amino acids in the polypeptide chain to attach to the first codon position of the RNA [27]. This is followed by the tRNAs adding amino acids one by one to form a polypeptide chain while moving along the RNA [28]. Once the stop codon is detected, the polypeptide chain is released, dissociating the ribosome complex from the RNA and forming the protein [29]. This process can be summarized through the protein concentration change over time, and is modelled as follows for a particular gene-perceptron \(i\): \[\frac{d[P]_{i}}{dt}=k_{2_{i}}[R]_{i}-d_{2_{i}}[P]_{i}, \tag{4}\] where \([P]_{i},k_{2_{i}}\) and \(d_{2_{i}}\) are the protein concentration, translation rate and degradation rate of protein for gene-perceptron \(i\). Moreover, \([R]_{i}\) is the concentration of RNA from Eq. 1, and the TF activates the gene-perceptron \(i\) based on Eq. 3 if the TF represses the gene-perceptron. Similar to Eq. 1 and 3, Eq. 4 is modelled based on mass-balance differential equation taking the difference between the RNA produced at the transcriptome level which is translated into protein at the rate of \(k_{2_{i}}\) and the amount of protein that is degraded at the rate of \(d_{2_{i}}\) due to the factors presented in Figure 2. Provided that the initial protein concentration translated by a RNA for gene-perceptron \(i\) is \([P]_{i}(0)\) (i.e., \([P]_{i}(t=0)=[P]_{i}(0)\)), the solution of Eq. 4 is given by \[[P]_{i} =\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}}\left(\frac{[TF]^{n }}{[TF]^{n}+K_{A_{i}}^{n}}\right)\left(\frac{1}{d_{2_{i}}}-\frac{e^{d_{1_{i}}t} }{d_{1_{i}}+d_{2_{i}}}\right)\] \[+[R]_{i}(0)k_{2_{i}}\left(\frac{e^{d_{1_{i}}t}}{d_{1_{i}}+d_{2_{i} }}\right)+e^{-d_{2_{i}}t}[P]_{i}(0)-e^{-d_{2_{i}}t}\] \[\times[R]_{i}(0)k_{2_{i}}\frac{1}{(d_{1_{i}}+d_{2_{i}})}-e^{-d_{2_ {i}}t}\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}}\] \[\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right)\times\left( \frac{1}{d_{2_{i}}}-\frac{1}{(d_{1_{i}}+d_{2_{i}})}\right). \tag{5}\] ## Methods This section introduces the mathematical models for the stability analysis and RNA/Protein concentration changes over time, and subsequently demonstrates how to apply these mathematical models in the GRNNs. ### Gene Expression Stability Analysis In this section, we discuss the approach towards analyzing the stability of the gene-perceptron expression. Our view of the stability of the gene-perceptron is when the RNA transcription as well as the protein translation concentrations reach maximum over time and remain stable at that level exhibiting a sigmoidal behavior. To confirm the existence of transcription and translation upper bounds, we use eigenvalue-based stability analysis. This, in turn, ensures a stable classification region of the GRNN due to a protein concentration with minimum fluctuations that can result in minimized computing errors. Moreover, another crucial property that should be considered in GRNN computing is the time it takes the GRNN to reach stability, which is investigated using the Lyapunov function in the following sections. 
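Before the formal stability analysis, the dual-layered dynamics of Eqs. 1 and 4 can be integrated numerically to visualize the plateau that the analysis below formalizes; the following minimal sketch uses assumed placeholder rate constants and concentrations (they are not the parameter values used later in the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

def gene_perceptron(t, y, k1, k2, d1, d2, CN, KA, TF, n=1, activator=True):
    """Dual-layered model: RNA dynamics (Eq. 1 or 3) and protein dynamics (Eq. 4)."""
    R, P = y
    hill = TF**n / (KA**n + TF**n) if activator else KA**n / (KA**n + TF**n)
    return [k1 * CN * hill - d1 * R,     # d[R]/dt
            k2 * R - d2 * P]             # d[P]/dt

# Placeholder parameter values for a single activated gene-perceptron.
params = (0.8, 0.5, 0.2, 0.1, 1.0, 2.0, 3.0)   # k1, k2, d1, d2, CN, KA, TF
t_eval = np.linspace(0.0, 100.0, 500)
sol = solve_ivp(gene_perceptron, (0.0, 100.0), [0.0, 0.0], args=params, t_eval=t_eval)

R_t, P_t = sol.y
print(R_t[-1], P_t[-1])   # both trajectories flatten out at their maximum-stable levels
```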
### Stability of Gene-Perceptron based on Eigenvalues The stability of the gene-perceptron is governed by the concentration changes of the gene expression as well as protein translation using the Jacobian matrix of Eq. 1 and 4, which enables us to define the equilibrium point based on the eigenvalues. While we have only considered the case of gene transcription in Eq. 1, our approach is also applicable to the repression process defined in Eq. 3. Since we are analysing the stability of the gene-perceptron at the equilibrium point, we can represent the maximum-stable RNA \([R]_{i}^{*}\) and protein \([P]_{i}^{*}\) concentrations as follows: \[[R]_{i}^{*}=\frac{k_{1_{i}}C_{N_{i}}}{d_{1_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right), \tag{6}\] \[[P]_{i}^{*}=\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}d_{2_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right). \tag{7}\] The maximum-stable RNA and protein concentrations are determined for different TF concentrations. Additionally, we can vary gene-specific parameters such as \(C_{N_{i}}\) to achieve different non-linear classification ranges [30], implying that by engineering the cell, we can change its decision-making process. To determine the eigenvalues of Eq. 1 and 4 at the equilibrium points of Eq. 6 and 7, we use the Jacobian matrix given in Eq. 24 (please see Appendix). Hence, the eigenvalues are \(\lambda_{1}=-d_{1_{i}}\) and \(\lambda_{2}=-d_{2_{i}}\). Since all the eigenvalues (\(\lambda_{1}\) and \(\lambda_{2}\)) are negative, we can conclude that the gene-perceptron reaches a maximum-stable concentration level. ### Stability of a Gene-Perceptron using Lyapunov function To determine the temporal stability, we employ the Lyapunov stability theorem, which is based on the function \(V([R]_{i},[P]_{i})\) (from the Appendix Eq. 25) that satisfies the necessary conditions: \(V\left([R]_{i},[P]_{i}\right)=0\) when \([R]_{i}=[R]_{i}^{*}\) and \([P]_{i}=[P]_{i}^{*}\), where \([R]_{i}^{*}\) and \([P]_{i}^{*}\) are the RNA and protein concentrations at the equilibrium. Additionally, \(V\left([R]_{i},[P]_{i}\right)>0\) due to the quadratic nature of all terms. Finally, we consider the first derivative of Eq. 25, as given by Eq. 27, as the last condition to be satisfied for the stability of the gene-perceptron. Then, according to Lyapunov's theorem, if Eq. 27 is negative, the gene-perceptron is asymptotically stable, and if Eq. 27 is less than or equal to zero, the gene-perceptron is Lyapunov stable (see Eq. 25 - 27 in the Appendix for the complete derivation). Since it is difficult to directly determine the sign of the derivative of the Lyapunov function in Eq. 27 (please see the Appendix), we illustrate the temporal fluctuation of Eq. 27 in Figure 5. This provides us insights into the dynamic stability behavior of the gene-perceptron. Figure 4: Sigmoidicity fluctuations for different Hill coefficients. ## Gene Regulatory Neural Network Analysis While the previous sections presented the stability analysis of each individual gene-perceptron, the gene-perceptrons need to be integrated into a GRNN in order to perform the classification operation. We focus on two types of generic multi-layer GRNNs. In the first network, we consider direct gene relationships within the GRN from the inputs to the outputs, mimicking a multi-layer ANN. In the second case, we consider a Random Structured multi-layer GRNN with intermediate gene-perceptrons.
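As a numerical companion to the single-gene-perceptron analysis above, the sketch below evaluates the equilibrium concentrations of Eqs. 6-7 and the eigenvalues of the Jacobian of Eqs. 1 and 4; all parameter values are placeholders chosen only for illustration:

```python
import numpy as np

def steady_state(k1, k2, d1, d2, CN, KA, TF, n=1):
    """Maximum-stable RNA and protein concentrations, Eqs. 6-7."""
    hill = TF**n / (TF**n + KA**n)
    R_star = k1 * CN * hill / d1
    P_star = k1 * k2 * CN * hill / (d1 * d2)
    return R_star, P_star

def jacobian(k2, d1, d2):
    """Jacobian of the system (Eq. 1, Eq. 4) with respect to ([R], [P])."""
    return np.array([[-d1, 0.0],
                     [k2, -d2]])

k1, k2, d1, d2, CN, KA, TF = 0.8, 0.5, 0.2, 0.1, 1.0, 2.0, 3.0
print(steady_state(k1, k2, d1, d2, CN, KA, TF))
print(np.linalg.eigvals(jacobian(k2, d1, d2)))   # eigenvalues -d1 and -d2, both negative
```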
### Multi-layer GRNN This GRNN network, which is illustrated in Figure 6, consists of three hidden layer gene-perceptrons (\(g_{1,1},g_{1,2},g_{1,3}\)) and one output layer gene-perceptron (\(g_{2,1}\)) (\(g_{i,j}\) represents the \(j^{\text{th}}\) gene-perceptron in \(i^{\text{th}}\) layer in the sub-network). The concentrations that is output from layer 1 to layer 2 are \([TF]_{1,1}\), \([TF]_{1,2}\), \([TF]_{1,3}\) and \([P]\) is the output from gene-perceptron \(g_{2,1}\). The two input-genes (\(g_{X_{1}}\) and \(g_{X_{2}}\)) are TFs with corresponding concentrations (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)), respectively. The RNA concentration change over time \(t\), for the hidden layer gene-perceptrons, based on Eq. 1, can be expressed as, \[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\left(\frac{[TF]_{x_{1}}^{n}}{K_{A_{i}}^{ n}+[TF]_{x_{1}}^{n}}\right)\cdot\left(\frac{[TF]_{x_{2}}^{n}}{K_{A_{i}}^{n}+[TF]_{x_ {2}}^{n}}\right)-d_{1_{i}}[R]_{i}, \tag{8}\] for the activators, \(i=g_{1,1},g_{1,2}\). Since the gene-perceptron \(g_{1,3}\) has a repression from gene-perceptron \(g_{x_{3}}\), the changes in the RNA production based on Eq. 3, is given by \[\frac{d[R]_{g_{1,3}}}{dt}=k_{1_{g_{1,3}}}C_{N_{g_{1,3}}}\left( \frac{[TF]_{x_{1}}^{n}}{K_{A_{g_{1,3}}}^{n}+[TF]_{x_{1}}^{n}}\cdot\frac{K_{A_ {g_{1,3}}}}{K_{A_{g_{1,3}}}^{n}+[TF]_{x_{2}}^{n}}\right)\\ -d_{1_{g_{1,3}}}[R]_{g_{1,3}}. \tag{9}\] The RNA concentration changes of the output gene-perceptron \(g_{2,1}\) that consists of TFs from the gene-perceptrons \(g_{1,1},g_{1,2}\) and \(g_{1,3}\) with the output protein concentration that contribute as TF concentration (\([TF]_{1,1}=[P]_{g_{1,1}},[TF]_{1,2}=[P]_{g_{1,2}}\) and \([TF]_{1,3}=[P]_{g_{1,3}}\)) to accumulate in order to invoke the expression is given by, \[\frac{d[R]_{g_{2,1}}}{dt}=k_{1_{g_{2,1}}}C_{N_{g_{2,1}}}\left( \frac{[TF]_{1,1}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,1}^{n}}\right)\\ \cdot\left(\frac{[TF]_{1,2}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,2}^ {n}}\right)\cdot\left(\frac{[TF]_{1,3}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,3}^ {n}}\right)-d_{1_{g_{2,1}}}[R]_{g_{2,1}}. \tag{10}\] Each of the gene-perceptron also undergoes a translation process. Therefore, the protein concentration change for each gene-perceptron can be modelled using Eq. 4 for \(i=g_{1,1}\), \(g_{1,2}\), \(g_{1,3}\) and \(g_{2,1}\). The maximum-stable protein concentration can be derived by setting Eq. 8 -10 to zero to find \([R]_{i}^{*}\), which is then plugged into Eq. 4 and set to zero for \(i=g_{1,1},g_{1,2},g_{1,3}\) and \(g_{2,1}\), respectively. \[i=g_{1,1},g_{1,2}\Longrightarrow[P]_{i}^{*}=\frac{k_{1_{i}}k_{2_ {i}}C_{N_{i}}}{d_{1_{i}}d_{2_{i}}}\left(\frac{[TF]_{x_{1}}^{n}}{K_{A_{g_{1,3}} }^{n}+[TF]_{x_{1}}^{n}}\right)\\ \times\left(\frac{[TF]_{x_{2}}^{n}}{K_{A_{i}}^{n}+[TF]_{x_{2}}^{ n}}\right), \tag{11}\] \[[P]_{g_{1,3}}^{*}=\frac{k_{1_{g_{1,3}}}k_{2_{g_{1,3}}}C_{N_{g_{1,3}}}}{d_{1_{g_{ 1,3}}}d_{2_{g_{1,3}}}}\Bigg{(}\frac{[TF]_{x_{1}}^{n}}{K_{A_{g_{1,3}}}^{n}+[TF ]_{x_{1}}^{n}}\Bigg{)}\\ \times\left(\frac{K_{A_{g_{1,3}}}}{K_{A_{g_{1,3}}}+[TF]_{x_{2}}^{ n}}\right), \tag{12}\] Figure 5: Temporal stability of a Gene-perceptron based on the derivative of the Lyapunov function with respect to time. This shows that the gene-perceptron reaching stability over time. 
Figure 6: Multi-layer GRNN with two-input layer nodes, three hidden-layer gene-perceptrons (\(g_{1,1},g_{1,2},g_{1,3}\)) and one output layer gene-perceptron (\(g_{2,1}\)) and their corresponding output concentrations are transcription factors \([TF]_{1,1},[TF]_{1,2},[TF]_{1,3}\) and protein concentration \([P]\) respectively. There are two input-genes (\(g_{x_{1}}\), \(g_{x_{2}}\)) considered as two TFs with concentration of \([TF]_{x_{1}}\) and \([TF]_{x_{2}}\), respectively. In this context, \(g_{i,j}\) represents the \(j^{\text{th}}\) gene-perceptron in \(i^{\text{th}}\) layer in the GRNN. Input-gene activators and input-gene regressors are denoted by \((+)\) and \((-)\) edges, respectively. The weights \((w)\) of this GRNN is a function of the TF concentration corresponding to the half-maximal RNA concentration (\(K_{A_{i}}\)) and gene-product copy number (\(C_{N_{i}}\)) for the gene-perceptron \(i\) represented as \(w(K_{A_{i}},C_{N_{i}})\). \[[P]^{*}_{g_{2,1}}=\frac{k_{1_{g_{2,1}}}k_{2_{g_{2,1}}}C_{N_{g_{2,1}}} }{d_{1_{g_{2,1}}}d_{2_{g_{2,1}}}}\left(\frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[ TF]^{n}_{1,1}}\right)\] \[\times\left(\frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,2}}\right)\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,3}} \right). \tag{13}\] Eq. 11 - 13, which are the stable concentration quantity of proteins produced, is used to compute the classification areas for each gene-perceptron based on the value of concentration, which is further elaborated in the Results section as we present a case study. Subsequently, we apply the approach from the Methods Section to show the stability of the gene-perceptron in this GRNN. The overall stability of the GRNN based on the derived Lyapunov function of Eq. 27 (please see Appendix), which can be further expressed for \(l\) number of TFs connected to a gene-perceptron (\(i\)), is represented as follows \[\frac{dV}{dt}=-\prod_{j=1}^{l}\frac{C^{2}_{N_{i}}\cdot[TF]^{2n}_{ j}\cdot k^{2}_{1_{i}}\cdot e^{(-2t(d_{i_{1}}+d_{2_{i}}))}}{d_{1_{i}}d_{2_{i}}([TF]^{ n}_{j}+K^{n}_{A_{j}})^{2}(d_{1_{i}}-d_{2_{i}})^{2}}\] \[\times(d^{3}_{2_{i}}\cdot e^{(2d_{2}t_{i})}-2d_{1_{i}}d^{2}_{2_{i} }\cdot e^{(2d_{2}t_{i})}+d_{1_{i}}d_{2_{i}}\cdot e^{(2d_{2}t_{i})})\] \[\qquad\qquad+(d_{1_{i}}k^{2}_{2_{i}}\cdot e^{(2d_{1}t_{i})}+d_{2_ {i}}k^{2}_{2_{i}}\cdot e^{(2d_{2}t_{i})})-\] \[\qquad\qquad-(d_{1_{i}}k^{2}_{2_{i}}\cdot e^{(t(d_{1_{i}}+d_{2_{i }}))})+d_{2_{i}}k^{2}_{2_{i}}\cdot e^{(t(d_{1_{i}}+d_{2_{i}}))}), \tag{14}\] where \([TF]_{j}\) and \(K_{A_{j}}\) are concentration of \(j^{\text{th}}\) TF and corresponding half maximal RNA concentration for gene-perceptron \(i\), respectively. ### Random Structured GRNN As described earlier, relationship of gene-perceptrons within a GRN that have common TFs may have intermediate gene-perceptrons within the path of connections. We analyze how this impacts on the overall stability of the GRNN, where the network for this case is presented in Figure 7. In this form of networks, it is necessary to consider the RNA concentration change from the intermediate gene-perceptron (\(g_{2,1}\)) and its impact on the output layer gene-perceptron (\(g_{3,1}\)). 
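A compact sketch of how the steady-state outputs of Eqs. 11-13 compose into the feedforward GRNN of Figure 6 is given below; the gains and half-maximal concentrations are placeholder values (not those of Table 1), so the resulting decision is only indicative:

```python
import numpy as np

def act(TF, KA, n=1):
    """Hill term for an activating input."""
    return TF**n / (KA**n + TF**n)

def rep(TF, KA, n=1):
    """Hill term for a repressing input."""
    return KA**n / (KA**n + TF**n)

def p_star(gain, hill_terms):
    """Maximum-stable protein output, with gain = k1*k2*CN/(d1*d2) of the gene-perceptron."""
    return gain * np.prod(hill_terms)

# Placeholder gains and half-maximal concentrations (assumed, not taken from Table 1).
gain = {"g11": 1.0, "g12": 1.0, "g13": 1.0, "g21": 1.0}
KA = {"g11": 2.0, "g12": 1.5, "g13": 2.5, "g21": 0.8}

def grnn_forward(TF_x1, TF_x2, threshold=0.5):
    # Hidden layer (Eqs. 11-12): g11 and g12 are activated by both inputs, g13 is repressed by x2.
    p11 = p_star(gain["g11"], [act(TF_x1, KA["g11"]), act(TF_x2, KA["g11"])])
    p12 = p_star(gain["g12"], [act(TF_x1, KA["g12"]), act(TF_x2, KA["g12"])])
    p13 = p_star(gain["g13"], [act(TF_x1, KA["g13"]), rep(TF_x2, KA["g13"])])
    # Output layer (Eq. 13): g21 takes the three hidden-layer protein outputs as activating TFs.
    p21 = p_star(gain["g21"], [act(p11, KA["g21"]), act(p12, KA["g21"]), act(p13, KA["g21"])])
    return p21, p21 > threshold

print(grnn_forward(3.0, 3.5))   # output concentration and its classification at threshold 0.5
```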
The expressions for each gene-perceptrons, and their relative TFs from their immediate predecessor, is represented as follows: \[\frac{d[R]_{g_{2,1}}}{dt}=k_{1_{g_{2,1}}}C_{N_{g_{2,1}}}\left( \frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,1}}\right)-d_{1_{g_{2,1}} }[R]_{g_{2,1}}, \tag{15}\] \[\frac{d[R]_{g_{3,1}}}{dt}=k_{1_{g_{3,1}}}C_{N_{g_{3,1}}}\left( \frac{[TF]^{n}_{2,1}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{2,1}}\right)\cdot\left( \frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,2}}\right)\] \[\times\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,3}}\right)-d_{1_{g_{3,1}}}[R]_{g_{3,1}}. \tag{16}\] Here, the protein concentration from Eq. 5 can be derived from Eq. 15 (i.e., \([TF]_{1,1}=[P]_{1,1}\)), since the gene-perceptron \(g_{2,1}\) is activated by gene-perceptron \(g_{1,1}\). The RNA concentration models behaves similarly to the case without the intermediate gene-perceptron for the gene-perceptrons \(g_{1,1}\), \(g_{1,2}\)\(g_{1,3}\) and can be derived directly from Eq. 8 and 9. Using Eq. 4 we can determine the protein concentration change for each gene-perceptron Figure 7. Using the maximum-stable protein concentration derived from Eq. 15 and 16, we can determine \([R]^{*}_{i}\), which is then applied to Eq. 4 and used to determine the maximum-stable value for \(i=g_{2,1}\) and \(g_{3,1}\). This will result in the following maximum-stable protein production that is represented as follows \[[P]^{*}_{g_{2,1}}=\frac{k_{1_{g_{2,1}}}k_{2_{g_{2,1}}}C_{N_{g_{2,1}}}}{d_{1_{g _{2,1}}}d_{2_{g_{2,1}}}}\left(\frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n }_{1,1}}\right), \tag{17}\] \[[P]^{*}_{g_{3,1}}=\frac{k_{1_{g_{3,1}}}k_{2_{g_{3,1}}}C_{N_{g_{3,1}}}}{d_{1_{g _{3,1}}}d_{2_{g_{3,1}}}}\left(\frac{[TF]^{n}_{2,1}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n }_{2,1}}\right)\] \[\cdot\left(\frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,2}}\right)\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,3}} \right). \tag{18}\] We use Eq. 11 to determine \([P]^{*}_{i}\) for \(i=g_{1,1}\) and \(g_{1,2}\), while for \(i=g_{1,3}\) we use Eq. 12. For the stability analysis, Eq. 14 is used with \(l=2\) for \(g_{1,1}\), \(g_{1,2}\) and \(g_{1,3}\), \(l=1\) for \(g_{2,1}\) and \(l=3\) for \(g_{3,1}\) corresponding to the number of TFs for each gene-perceptron. ## 4 Results In this section, we perform the temporal stability analysis and obtain the classification areas for the two multi-layer GRNN network topologies (Figures 6, 7) as well as the GRNN derived from _E.Coli_ GRN. from the input-gene \(g_{X_{1}}\). The output-layer gene-perceptron (\(g_{2,1}\)) followed a similar trend as gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\) attaining Lyapunov stability within the initial 30 seconds because its immediate predecessors are all activators. Given the gene-perceptron's stability at the equilibrium (Figure 8), we can use Eq. 11 - 13 to calculate output protein \([P]_{i}^{*}\) for different input concentrations (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)). The calculated output protein \([P]_{i}^{*}\) is illustrated over varying input concentrations, highlighting the values above and below the threshold (\([P]^{*}=0.5\)). Decision boundaries reflect how the classification areas change based on the edge (activation or repression) connected to the target gene-perceptron and corresponding parameters in Eq. 11 - 13. 
The inputs (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)) vary, while parameters like gene product copy number (\(C_{N_{1}}\)), transcription rate (\(k_{1_{j}}\)), translation rate (\(k_{2_{j}}\)), RNA degradation rate (\(d_{1_{j}}\)), protein degradation rate (\(d_{2_{j}}\)) and TF concentration corresponding to the half maximal RNA concentration (\(K_{A_{j}}\)) are kept constant. We consider two parameters sets to determine the different classification regions, which are presented in Table 1. For the parameters set 1, we obtain the classification areas shown in Figure (a)a. The decision boundary and their top-view for each gene-perceptron are shown in the first and second row, respectively. The gene-perceptron \(g_{1,2}\) has the largest classification area above the threshold due its lower TF concentration corresponding to half maximal RNA concentration \(K_{A_{t}}\), compared to gene-perceptrons \(g_{1,1}\) and \(g_{1,3}\). Moreover, the decision boundaries for gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\) exhibits a similar shape classifying majority of the values above the threshold. In contrast, the gene-perceptron \(g_{1,3}\) covers larger area for the values below the threshold since it is repressed by the input-gene \(g_{x_{2}}\). The intersection of classification areas corresponding to hidden layer gene-perceptrons is represented by the output layer gene-perceptron \(g_{2,1}\), where the classification area above the threshold is approximately bounded by input concentrations, \(2.5\leq[TF]_{x_{1}}\leq 3.5\) and \(3.4\leq[TF]_{x_{2}}\). Due to the significant contribution from gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\) beyond the threshold, the output layer gene-perceptron \(g_{2,1}\) exhibits a rightward shift. For the parameter set 2 (Table 1), the lower \(K_{A_{t}}\) values have shifted the classification area above the threshold compared to parameter set 1. This shift is evident in Figure (b)b, particularly for the gene-perceptron \(g_{1,2}\), which results in classifying majority of the values above the threshold. Conversely, for the gene-perceptron \(g_{1,3}\), the classification area shifts below the threshold due to the repression from the input when reducing the half maximal RNA concentration \(K_{A_{t}}\). The classification range for the gene-perceptron \(g_{1,1}\) expands compared to parameter set 1, approximately bounded by \(2.3\leq[TF]_{x,1}\) and \(2.1\leq[TF]_{x,2}\). Considering all gene-perceptrons, the output layer gene-perceptron \(g_{2,1}\) shows a leftward shift in the decision boundary, becoming slightly more linear. Overall, modifying the half maximal RNA concentration \(K_{A_{t}}\) can significantly expand the classification area. Eq. 14 and the parameter set 1 from Table 2. Similar to the Figure 8, gene-perceptrons \(g_{1,1},g_{1,2},g_{3,1}\) and the intermediate gene-perceptron \(g_{2,1}\) exhibit consistent stability fluctuations due to their immediate predecessor being activators. Additionally, gene-perceptron \(g_{1,3}\) shows similar stability fluctuation patterns as the gene-perceptron \(g_{1,3}\) in the network without the intermediate gene-perceptron and this is because both are being influenced by their repressive predecessors. Following the temporal stability analysis, we apply Eq. 11 and 12 to determine the maximum-stable protein concentration (\([P]_{i}^{*}\)) for the gene-perceptrons \(g_{1,1},g_{1,2}\) and \(g_{1,3}\). However, unlike the GRNN in Figure 6, Eq. 
13 is not used to determine the classification area for the output-layer gene-perceptron. Instead, for the computation of \([P]_{i}^{*}\) for the gene-perceptrons \(g_{2,1}\) and \(g_{3,1}\), both Eq. 17 and 18 are employed due to the addition of the intermediate gene-perceptron compared to the multi-layer GRNN in Figure 6. The calculated protein concentration output \([P]_{i}^{*}\) values for different input concentrations, used to determine the classification area for each gene-perceptron, are presented in Figure 12. We also used two different sets of parameters from Table 2 to analyze different classification areas. Parameter set 1 results in the classification areas shown in Figure 12(a). As the gene-perceptron \(g_{2,1}\) serves as the intermediate gene-perceptron of \(g_{1,1}\), we observe similar classification areas and decision boundaries. Additionally, repression from the input-gene \(g_{x_{1}}\) to the gene-perceptron \(g_{1,3}\) results in a distinctive decision boundary, approximately within the range of \(3\leq[TF]_{x_{2}}\) and \(3\geq[TF]_{x_{1}}\). Overall, the gene-perceptron \(g_{3,1}\) represents the intersection of the hidden-layer gene-perceptrons, with the classification area beyond the threshold bounded by \(2.5\leq[TF]_{x_{2}}\leq 3.5\) and \(3\geq[TF]_{x_{1}}\). In contrast, reducing the TF concentration at the half-maximal RNA concentration (\(K_{A_{i}}\)) for a gene-perceptron, as shown in parameter set 2, alters the classification areas for both gene-perceptron \(g_{1,1}\) and its immediate intermediate gene-perceptron \(g_{2,1}\), as illustrated in Figure 12(b). The classification area significantly expands above the threshold, while dropping below it when lowering the TF concentration corresponding to the half-maximal RNA concentration \(K_{A_{i}}\), as it is inversely proportional to the maximum protein concentration \([P]_{i}^{*}\) based on Eqs. 8 and 17. Alterations made to gene-perceptron \(g_{1,1}\) notably impact \(g_{2,1}\), its immediate successor gene-perceptron in the GRNN. The other hidden-layer gene-perceptrons \(g_{1,2}\) and \(g_{1,3}\) remain unaffected between parameter sets 1 and 2. Parameter set 2 results in a leftward shift in the classification area of the output-layer gene-perceptron \(g_{3,1}\) compared to set 1. In summary, parameter adjustments lead to shifts in the decision boundary of the output-layer gene-perceptrons, with decreased \(K_{A_{i}}\) causing a leftward shift in the classification area. Figure 9: Parameter configurations for the Multi-layer GRNN depicted in Figure 6. Each graph depicts the classification area of each gene-perceptron for (a) parameter set 1 and (b) parameter set 2 (\(g_{2,1}\) is the output gene-perceptron that combines all classification areas of gene-perceptrons from the previous layer). Figure 10: Temporal stability of the gene-perceptrons for the Random Structured GRNN. Figure 11: Temporal stability for each gene-perceptron in the _E. coli_ GRNN. ### E.Coli GRNN Classification Analysis This section demonstrates the classification areas for the _E.coli_ GRNN illustrated in Figure 13(a), which is extracted from the trans-omic data of the _E.coli_ GRN (31). The network consists of two input-genes (\(b3025,b3357\)), two hidden layer gene-perceptrons (\(b1891\) and \(b1892\)) and one output layer gene-perceptron (\(b1071\)) with their corresponding TF concentrations \([TF]_{i}\) for \(i=b3025,b3357,b1891\) and \(b1892\), and protein concentration \([P]_{b1071}\). 
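The classification regions discussed in this section, and those reported next for the _E.coli_ GRNN, can be reproduced by sweeping the two input TF concentrations over a grid, propagating the steady-state protein level of each gene-perceptron through the network with the Hill expressions, and thresholding the output at \([P]^{*}=0.5\). The sketch below illustrates that procedure for a generic two-input, two-hidden, one-output topology; the \(K_{A}\) values and rates are placeholders rather than the values in Tables 1-3.

```python
import numpy as np

def hill(tf, K_A, n=1.0, repress=False):
    h = tf**n / (K_A**n + tf**n)
    return 1.0 - h if repress else h      # repression gives K^n/(K^n + TF^n)

def p_star(hill_terms, k1=1.0, k2=1.0, C_N=1.0, d1=1.0, d2=1.0):
    """Steady-state protein level from a set of Hill terms."""
    return (k1 * k2 * C_N) / (d1 * d2) * np.prod(hill_terms, axis=0)

# Grid over the two input TF concentrations.
tf1, tf2 = np.meshgrid(np.linspace(0.1, 5, 200), np.linspace(0.1, 5, 200))

# Hidden layer: two gene-perceptrons with hypothetical K_A values,
# the second one repressed by the second input.
P_h1 = p_star([hill(tf1, 1.2), hill(tf2, 1.8)])
P_h2 = p_star([hill(tf1, 2.0), hill(tf2, 1.0, repress=True)])

# Output gene-perceptron driven by the hidden-layer proteins acting as TFs.
P_out = p_star([hill(P_h1, 0.6), hill(P_h2, 0.6)])

decision = P_out > 0.5                    # region classified above threshold
print("fraction of the grid above the threshold:",
      round(float(decision.mean()), 3))
```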
In this specific GRNN, all TFs are considered activators. For the output layer gene-perceptron (\(i=b1071\)), we employ Eqs. 8, 4 and 11 with TFs \(x_{1}=b1891\) and \(x_{2}=b1892\) to calculate the RNA concentration change, the protein concentration change, and the maximum protein concentration (\([P]_{i}^{*}\)), respectively, using the parameter values in Table 3. Similar to the previous GRNNs, we based the stability analysis for this GRNN on Eq. 14. For the two hidden layer gene-perceptrons (\(i=b1891\) and \(b1892\)), we consider TFs \(j=b3025,b3357\), while for the output layer gene-perceptron \(i=b1071\), we evaluate stability with the TFs \(j=b1891,b1892\). In the previous GRNNs (Figures 8 and 10), we found that the gene-perceptrons with an immediate activator exhibit consistent stability fluctuations before reaching Lyapunov stability \(\left(\frac{dV}{dt}\approx 0\right)\). A similar behaviour is observed for the _E.Coli_ GRNN: Figure 11 shows the temporal stability of the gene-perceptrons (\(g_{1,1}\), \(g_{1,2}\) and \(g_{2,1}\)), where those influenced by immediate activator predecessors display uniform stability. Overall, the analysis indicates that all the gene-perceptrons in the GRNN eventually attained Lyapunov stability, ensuring network-wide stability, although over different time periods. Having established the stability of the GRNN, we ascertain the maximum-stable protein concentration to obtain the classification ranges. In order to compute the maximum-stable protein concentration (\([P]_{i}^{*}\)) for gene-perceptrons \(i=b1891\) and \(b1892\), we use Eq. 11 with the replacement of \(x_{1}\) and \(x_{2}\) by \(b3025\) and \(b3357\) as input genes. Furthermore, for the computation of the output concentration \([P]_{i}^{*}\) of gene-perceptron \(i=b1071\), Eq. 11 is used with TFs \(x_{1}=b1891\) and \(x_{2}=b1892\), with the assumption that the Hill coefficient \(n\) is equal to \(1\) in all simulations. Since \(K_{A_{i}}\) is the TF concentration corresponding to the half-maximal RNA concentration, there are two \(K_{A_{i}}\) values for each gene-perceptron because each has two TFs, as shown in Figure 13(a). The time-series gene expression data for _E.coli_ were used by first identifying the time at which a gene reaches its half-maximal expression level and then taking the expression level of its TF at that time point as \(K_{A_{i}}\). For the remaining parameters, which were obtained from the literature as shown in Table 3, the average values were used. The classification area from our analysis is shown in Figure 13(b). The classification area of gene-perceptron \(b1892\) has expanded towards the left when compared to \(b1891\), because the \(K_{A_{i}}\) values of both TFs (\(b3025\) and \(b3357\)) corresponding to \(b1891\) exceed the \(K_{A_{i}}\) values for \(b1892\). The classification area above the threshold of \(b1892\) is defined within the limits of \([TF]_{b3025}\geq 2.7\) and \([TF]_{b3357}\geq 2.7\), in contrast to \(b1891\) which is approximately bounded by \([TF]_{b3025}\geq 3.5\) and \([TF]_{b3357}\geq 3.8\). Figure 12: Parameter configurations for the Random Structured GRNN in Figure 7. Each graph depicts the classification area of each gene-perceptron for (a) parameter set 1 and (b) parameter set 2 (\(g_{3,1}\) is the output gene-perceptron that combines all classification areas of gene-perceptrons from the previous layer). 
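The \(K_{A_{i}}\) extraction step described above can be made explicit: find the time at which a gene first reaches half of its maximal expression, then read off the expression level of its TF at that time. The sketch below assumes the time-series data are plain NumPy arrays sampled on a common time grid; the synthetic curves and units are for illustration only.

```python
import numpy as np

def estimate_K_A(t, gene_expr, tf_expr):
    """TF expression level at the time the gene first reaches half of its
    maximum expression (used here as an estimate of K_A)."""
    half_max = 0.5 * np.max(gene_expr)
    idx = int(np.argmax(gene_expr >= half_max))   # first index at/above half-max
    return tf_expr[idx], t[idx]

# Illustrative synthetic time series (hours) for one gene and one of its TFs.
t = np.linspace(0.0, 10.0, 101)
tf_expr = 4.0 * (1.0 - np.exp(-0.5 * t))      # TF level rising toward 4.0
gene_expr = 6.0 * (1.0 - np.exp(-0.3 * t))    # target gene rising toward 6.0

K_A, t_half = estimate_K_A(t, gene_expr, tf_expr)
print(f"half-maximal expression reached at t = {t_half:.2f} h, "
      f"estimated K_A = {K_A:.2f}")
```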
Consistent with the decision boundary simulations performed on the two generic multi-layer GRNNs (Figures 9 and 12), the output-layer gene-perceptron (\(b1071\)) of this GRNN also exhibited an intersection of the classification areas driven by the hidden-layer gene-perceptrons. In line with this, as gene-perceptron \(b1891\) had the majority of its classification area below the threshold and gene-perceptron \(b1892\) had the majority above the threshold, the decision boundary of gene-perceptron \(b1071\) is approximately bounded by \([TF]_{b3025}\geq 2.9\) and \([TF]_{b3357}\geq 2.9\). Overall, gene-perceptrons within the GRNN derived from the E.coli GRN exhibit tunable decision boundaries by selecting sub-networks from the GRN at steady-state, and collectively they function as a multi-layer GRNN, showcasing aspects of biological AI. Figure 13: _E. coli_ GRNN classification analysis. (a) Fully-connected GRNN derived from the E.coli GRN. This network consists of two input-genes (\(b3025,b3357\)), two hidden layer gene-perceptrons (\(b1891\) and \(b1892\)), and one output layer gene-perceptron (\(b1071\)). (b) Classification regions of each gene-perceptron within the _E. coli_ GRNN, with gene-perceptron \(b1071\) as the output. ## 6 Conclusion In this study, we introduced a GRNN that can be derived from a cell's GRN by mathematically modelling the transcription and translation process, transforming a gene into a gene-perceptron. We also performed stability analysis for the GRNN, as it functions as a non-linear classifier. This is based on the eigenvalue method and Lyapunov's stability theorem, with the latter approach capable of determining the time at which the stability is achieved. The classification application was applied to two multi-layer GRNNs as well as a sub-network extracted from the E.coli GRN using trans-omic data. The simulations for different parameter settings of the two multi-layer GRNNs revealed that the TF concentration at the half-maximal gene expression level, \(K_{A_{i}}\), has a significant impact on the shifting of the classification boundary. Based on the outcomes of the stability analysis and simulations, we can conclude that the GRN exhibits NN properties, as the gene-perceptron demonstrated sigmoidal-like behavior for multiple inputs and a tunable decision boundary. Further, by engineering living cells it is possible to obtain desired non-linear classifiers based on our application. Our model has the potential to transform GRNs into GRNNs once suitable parameters are established for the dual-layered chemical reaction model. ## 7 Author Contributions A.R., S.S. and S.B. designed the theoretical framework of the study. The implementation of the analysis was done by A.R., while A.G. provided the knowledge for the biological aspect of this study. All the authors wrote and reviewed the final manuscript. ## Acknowledgments This publication has emanated from research conducted with the financial support of the National Science Foundation (NSF) under Grant Number 2316960. ## Declaration of Interests The authors declare no competing interests. ## Appendix ### RNA and Protein Concentration Model To model the RNA and protein concentration change, mass-balance differential equations based on the Hill function were used. 
Transcription of a gene-perceptron begins with TF and RNA polymerase binding to the promoter, which is modelled by \[[Prom.TF]=C_{N_{i}}\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}, \tag{19}\] where \([TF],n,K_{A_{i}},[Prom.TF]\) and \(C_{N_{i}}\) are the TF concentration, the Hill coefficient, the TF concentration corresponding to the half-maximal RNA concentration, the complex produced after TFs bind to the promoter, and the gene product copy number, respectively. The complex \(Prom.TF\) is transcribed into RNA at the rate \(k_{1_{i}}\), and the RNA subsequently degrades at the rate \(d_{1_{i}}\), which can be modelled as \[\frac{d[R]_{i}}{dt}=k_{1_{i}}[Prom.TF]-d_{1_{i}}[R]_{i}. \tag{20}\] By plugging Eq. 19 in Eq. 20 we can obtain Eq. 1. In contrast, if a gene-perceptron is repressed by a TF, Eq. 19 can be expressed as \[[Prom.TF]=C_{N_{i}}\frac{K_{A_{i}}^{n}}{K_{A_{i}}^{n}+[TF]^{n}}. \tag{21}\] Since the initial RNA concentration transcribed by a gene-perceptron is \([R]_{i}(0)\) (i.e., \([R]_{i}(t=0)=[R]_{i}(0)\)), the solution of Eq. 1 as given by Eq. 2 can be derived using the integrating factor \(IF=e^{\int d_{1_{i}}\,dt}=e^{d_{1_{i}}t}\), where \(t\) and \(d_{1_{i}}\) are the time and the RNA degradation rate, respectively. Transcribed RNA is then translated into protein at the proteome level. To solve the differential equation of protein concentration change in Eq. 4 we can follow two steps. **Step 1**: Replace the RNA concentration (\([R]_{i}\)) in Eq. 4 with the solution obtained for the differential equation of RNA concentration change from Eq. 2. **Step 2**: Using the integrating factor (\(IF=e^{\int d_{2_{i}}\,dt}=e^{d_{2_{i}}t}\)) and the initial RNA concentration (\([R]_{i}(0)\)), as well as the initial protein concentration \([P]_{i}(0)\) (i.e., \([P]_{i}(t=0)=[P]_{i}(0)\)), we can obtain the equation for the protein concentration in Eq. 5. By setting \(\frac{d\,[R]_{i}}{dt}=0\), we can obtain the maximum-stable RNA concentration at the steady state (\([R]_{i}^{*}\)) expressed by Eq. 6. In addition, the protein concentration at the steady state (\([P]_{i}^{*}\)) can be represented by Eq. 7, which is derived by plugging \(\frac{d\,[P]_{i}}{dt}=0\) in Eq. 4. ### Determining Gene-perceptron Stability In this section, we derive the stability of a gene-perceptron using the eigenvalues of the differential equations for RNA and protein concentration change (Eq. 1 and 4) and using Lyapunov's stability theorem. Based on (15), we applied the eigenvalue method to determine the stability of the gene-perceptrons. Suppose \(f\) and \(g\) are functions of \([R]_{i}\) and \([P]_{i}\) such that \[\text{Eq.}1 \Longrightarrow\frac{d\,[R]_{i}}{dt}=f\left([R]_{i},[P]_{i}\right), \tag{22}\] \[\text{Eq.}4 \Longrightarrow\frac{d\,[P]_{i}}{dt}=g([R]_{i},[P]_{i}). \tag{23}\] Then, the Jacobian matrix for Eqs. 1 and 4 at the equilibrium point is represented as \[J_{i}=\begin{bmatrix}\frac{\partial f}{\partial[R]_{i}}&\frac{\partial f}{\partial[P]_{i}}\\ \frac{\partial g}{\partial[R]_{i}}&\frac{\partial g}{\partial[P]_{i}}\end{bmatrix}=\begin{bmatrix}-d_{1_{i}}&0\\ k_{2_{i}}&-d_{2_{i}}\end{bmatrix}, \tag{24}\] for gene-perceptron \(i\). Using the characteristic equation \(|J_{i}-\lambda I|=0\) we can determine the eigenvalues for the above Jacobian matrix (Eq. 24) as \(\lambda_{1}=-d_{1_{i}},\lambda_{2}=-d_{2_{i}}\). 
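As a numerical sanity check on these derivations, the sketch below integrates the coupled RNA/protein mass-balance equations (Eq. 19 substituted into Eq. 20, together with the translation/degradation dynamics of Eq. 4), compares the long-time solution with the closed-form steady states of Eqs. 6-7, and confirms that the Jacobian of Eq. 24 has eigenvalues \(-d_{1_{i}}\) and \(-d_{2_{i}}\). All parameter values are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for one gene-perceptron with one activating TF.
k1, k2, C_N, d1, d2 = 0.9, 0.7, 1.0, 0.4, 0.25
TF, K_A, n = 2.0, 1.5, 1.0
hill = TF**n / (TF**n + K_A**n)

def rhs(t, y):
    R, P = y
    dR = k1 * C_N * hill - d1 * R      # Eq. 19 substituted into Eq. 20
    dP = k2 * R - d2 * P               # translation and degradation (Eq. 4)
    return [dR, dP]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], rtol=1e-8)
R_end, P_end = sol.y[:, -1]

# Closed-form steady states (Eqs. 6 and 7).
R_star = k1 * C_N * hill / d1
P_star = k2 * R_star / d2
print(f"R(end)={R_end:.4f} vs R*={R_star:.4f};  "
      f"P(end)={P_end:.4f} vs P*={P_star:.4f}")

# Jacobian at the equilibrium (Eq. 24) and its eigenvalues.
J = np.array([[-d1, 0.0], [k2, -d2]])
print("eigenvalues:", np.linalg.eigvals(J))   # expected: -d1 and -d2
```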
Hence, all the eigenvalues are negative, indicating that the gene-perceptron is stable, where \(\lambda\) is a scalar, \(I\) is a \(2\times 2\) identity matrix, \(d_{2_{i}}\) is the protein degradation rate, \(d_{1_{i}}\) is the RNA degradation rate and \(k_{2_{i}}\) is the translation rate. We use the Lyapunov function (\(V\)) to perform the temporal stability analysis, defined for Eqs. 1 and 4 as follows, \[V\left([R]_{i},[P]_{i}\right)=\left([R]_{i}-[R]_{i}^{*}\right)^{2}+\left([P]_{i}-[P]_{i}^{*}\right)^{2}. \tag{25}\] According to Lyapunov's stability theorem, \(V\left([R]_{i},[P]_{i}\right)=0\) when \([R]_{i}=[R]_{i}^{*}\) and \([P]_{i}=[P]_{i}^{*}\), where \([R]_{i}^{*}\) and \([P]_{i}^{*}\) are the RNA and protein concentrations at the equilibrium. It is clear that \(V\left([R]_{i},[P]_{i}\right)>0\), since all terms are quadratic. Finally, we consider the first derivative of Eq. 25 as the last condition for the stability, which is represented as \[\dot{V}([R]_{i},[P]_{i})=\frac{dV}{dt}=\frac{\partial V}{\partial[R]_{i}}\cdot\frac{d[R]_{i}}{dt}+\frac{\partial V}{\partial[P]_{i}}\cdot\frac{d[P]_{i}}{dt}. \tag{26}\] By plugging \(\frac{d[R]_{i}}{dt}\) and \(\frac{d[P]_{i}}{dt}\) from Eq. 1 and 4, differentiating Eq. 25 with respect to \([R]_{i}\) and \([P]_{i}\) to obtain \(\frac{\partial V}{\partial[R]_{i}}\) and \(\frac{\partial V}{\partial[P]_{i}}\), and finally replacing \([R]_{i}^{*},[P]_{i}^{*},[R]_{i}\) and \([P]_{i}\) with Eq. 6, 7, 2 and 5, we get Eq. 26, which is represented as follows \[\text{Eq.}26\Longrightarrow\frac{dV}{dt}=-\frac{C_{N_{i}}^{2}\cdot[TF]^{2n}\cdot k_{1_{i}}^{2}\cdot e^{-2(d_{1_{i}}+d_{2_{i}})t}}{d_{1_{i}}d_{2_{i}}([TF]^{n}+K_{A_{i}}^{n})^{2}(d_{1_{i}}-d_{2_{i}})^{2}}\] \[\cdot\Big[(d_{2_{i}}^{3}\cdot e^{2d_{2_{i}}t}-2d_{1_{i}}d_{2_{i}}^{2}\cdot e^{2d_{2_{i}}t}+d_{1_{i}}^{2}d_{2_{i}}\cdot e^{2d_{2_{i}}t})\] \[+(d_{1_{i}}k_{2_{i}}^{2}\cdot e^{2d_{1_{i}}t}+d_{2_{i}}k_{2_{i}}^{2}\cdot e^{2d_{2_{i}}t})\] \[-(d_{1_{i}}k_{2_{i}}^{2}\cdot e^{(d_{1_{i}}+d_{2_{i}})t}+d_{2_{i}}k_{2_{i}}^{2}\cdot e^{(d_{1_{i}}+d_{2_{i}})t})\Big], \tag{27}\] where we assume an initial RNA concentration of zero (\([R]_{i}(0)=0\)) and an initial protein concentration of zero (\([P]_{i}(0)=0\)). The above equation is used to determine the stability of the gene-perceptron for different parameter configurations.
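In practice, Eqs. 25-27 can be used by evaluating \(V\) and \(dV/dt\) along a simulated trajectory and checking that \(V\) decays toward zero, with \(dV/dt\approx 0\) marking the time at which Lyapunov stability is effectively reached. The sketch below does this numerically rather than through the closed-form Eq. 27; all parameters are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, C_N, d1, d2 = 0.9, 0.7, 1.0, 0.4, 0.25
TF, K_A, n = 2.0, 1.5, 1.0
hill = TF**n / (TF**n + K_A**n)
R_star = k1 * C_N * hill / d1            # Eq. 6
P_star = k2 * R_star / d2                # Eq. 7

def rhs(t, y):
    R, P = y
    return [k1 * C_N * hill - d1 * R, k2 * R - d2 * P]

t_eval = np.linspace(0.0, 60.0, 600)
sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], t_eval=t_eval, rtol=1e-8)
R, P = sol.y

V = (R - R_star) ** 2 + (P - P_star) ** 2                    # Eq. 25
dVdt = (2 * (R - R_star) * (k1 * C_N * hill - d1 * R)
        + 2 * (P - P_star) * (k2 * R - d2 * P))              # Eq. 26

# Report when |dV/dt| first drops below a small tolerance.
tol = 1e-4
stable_idx = int(np.argmax(np.abs(dVdt) < tol))
print(f"V(0)={V[0]:.3f}, V(end)={V[-1]:.2e}, "
      f"approx. stability time ~ {t_eval[stable_idx]:.1f} time units")
```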
2301.13619
Brillouin and Kerr nonlinearities of a low-index silicon oxynitride platform
Nonlinear optical effects including stimulated Brillouin scattering (SBS) and four-wave mixing (FWM) play an important role in microwave photonics, optical frequency combs, and quantum photonics. Harnessing SBS and FWM in a low-loss and versatile integrated platform would open the path to building large-scale Brillouin/Kerr-based photonic integrated circuits. In this letter, we investigate the Brillouin and Kerr properties of a low-index (n=1.513 @ 1550 nm) silicon oxynitride (SiON) platform. We observed, for the first time, backward SBS in SiON waveguides with a Brillouin gain coefficient of 0.3$\rm m^{-1}W^{-1}$, which can potentially be increased to 0.95$\rm m^{-1}W^{-1}$ by just tailoring the waveguide cross-section. We also performed FWM experiments in SiON rings and obtained the nonlinear parameter $\gamma$, of 0.02 $\rm m^{-1}W^{-1}$. Our results point to a low-loss and low-index photonic integrated platform that is both Brillouin and Kerr active.
Kaixuan Ye, Yvan Klaver, Oscar A Jimenez Gordillo, Roel Botter, Okky Daulay, Francesco Morichetti, Andrea Melloni, David Marpaung
2023-01-31T13:23:38Z
http://arxiv.org/abs/2301.13619v1
# Brillouin and Kerr nonlinearities of a low-index silicon oxynitride platform ###### Abstract Nonlinear optical effects including stimulated Brillouin scattering (SBS) and four-wave mixing (FWM) play an important role in microwave photonics, optical frequency combs, and quantum photonics. Harnessing SBS and FWM in a low-loss and versatile integrated platform would open the path to building large-scale Brillouin/Kerr-based photonic integrated circuits. In this letter, we investigate the Brillouin and Kerr properties of a low-index (n=1.513 @ 1550 nm) silicon oxynitride (SiON) platform. We observed, for the first time, backward SBS in SiON waveguides with a Brillouin gain coefficient of 0.3 m\({}^{-1}\)W\({}^{-1}\), which can potentially be increased to 0.95 m\({}^{-1}\)W\({}^{-1}\) by just tailoring the waveguide cross-section. We also performed FWM experiments in SiON rings and obtained the nonlinear parameter \(\gamma\), of 0.02 m\({}^{-1}\)W\({}^{-1}\). Our results point to a low-loss and low-index photonic integrated platform that is both Brillouin and Kerr active. + Footnote †: Corresponding author: [email protected] ## I Introduction Stimulated Brillouin scattering (SBS), which is an interaction between optical and acoustic waves, is currently revolutionizing photonic integrated circuit designs [1; 2; 3; 4; 5; 6; 7; 8]. Featuring a narrow-band (tens of MHz) gain resonance shifted around tens of GHz away from the pump light, the on-chip SBS plays a significant role in microwave photonics [9; 10; 11], narrow-linewidth integrated lasers [7; 12; 13], and on-chip nonreciprocal light propagation [3; 14]. Efficient on-chip SBS process requires simultaneously guiding both the optical and gigahertz acoustic waves in a waveguide, making it challenging to be realized in most integrated platforms. Several encouraging results have been demonstrated recently in various platforms, including chalcogenide [2], silicon [5], doped silica [15], aluminum gallium arsenide [16], and aluminum nitride [17]. In addition, SBS has also been observed in silicon nitride-based waveguides [7; 8; 18], opening the pathway to intersect Brillouin scattering with Kerr nonlinearities in low-loss and mature platforms. Silicon oxynitride (SiON) is another highly-developed integrated platform that has appealing properties including low propagation loss, wide transparency window, absence of multi-photon absorption effects, and stress-free fabrication [19; 20]. The optical and mechanical properties of SiON could be tuned continuously between those of SiO\({}_{2}\) and Si\({}_{3}\)N\({}_{4}\) at different nitrogen/oxygen (N/O) ratios [21; 22]. For example, a variety of SiON, known as Hydex (n=1.7 @ 1550 nm), has been widely used for Kerr-based nonlinear optic applications including optical frequency comb [23], optical neural network [24], and quantum photonics [25]. A slightly higher index SiON (n=1.83 @ 1550 nm) was also proposed in [20; 26] for Kerr-based applications. In both cases, the SiON platforms have a refractive index close to silicon nitride (n=1.98 @ 1550 nm) instead of silicon oxide (n=1.45 @ 1550nm). The relatively high refractive index induces a high nonlinear index, making it useful for Kerr-based nonlinear optic applications. But from the Brillouin perspectives, high refractive index SiON is less attractive due to the high content of the nitrogen that leads to a meager photoelastic coefficient p\({}_{12}\) because of the weak p\({}_{12}\) of the Si\({}_{3}\)N\({}_{4}\)[18]. 
Moreover, high-index SiON also has similar mechanical properties Figure 1: (a) Artistic representation of the SiON waveguides, showing the four-wave mixing process in an all-pass microring resonator and the backward stimulated Brillouin scattering (SBS) in a spiral waveguide. (b) The cross-section of the SiON platform in our work. (c) The chip photograph of the SiON microring resonators with a FSR of 50 GHz. (d) The chip photograph of the 5-cm SiON straight waveguide. to Si\({}_{3}\)N\({}_{4}\), such as high acoustic velocity that prevents acoustic confinement when cladded with SiO\({}_{2}\)[7; 8; 18]. In this paper, we investigate the Brillouin and Kerr properties of a SiON integrated platform with a relatively lower refractive index (n=1.513 @ 1550 nm). Contrasting to SiON platforms mentioned above, the SiON platform investigated here has a larger photoelastic coefficient p\({}_{12}\), lower acoustic velocity, and a larger cross-section, all of which lead to an enhanced SBS effect. We experimentally observed, for the first time to our knowledge, backward SBS in SiON waveguides. We also characterized the Brillouin gain coefficient \(g_{b}\) of the SiON waveguides with different widths. We found out the \(g_{b}\) of this SiON waveguide can potentially be increased to around 0.95 m\({}^{-1}\)W\({}^{-1}\) by simply tailoring the waveguide cross-section. This sufficiently large Brillouin gain coefficient, together with the low propagation loss, makes it possible to generate decent SBS gain for a plethora of Brillouin-based applications in this SiON platform. Furthermore, we also measured the nonlinear parameter \(\gamma\) and nonlinear index \(n_{2}\) of this SiON platform through four-wave mixing (FWM) experiments in a ring resonator. While the measured \(\gamma\) is an order of magnitude lower when compared to that of high-index SiON, we expect that with lower losses and higher pump power, the unique interplay between the SBS and Kerr effect such as Brillouin-assisted Kerr frequency comb [27; 28] could be observed in this integrated platform. ## Results We performed the backward SBS and four-wave mixing experiments in single-pass (spiral or straight) waveguides and microring resonators respectively, as shown in Fig. 1(a). The cross-section of this platform is shown in Fig. 1(b) [29; 30]. The 2.2 um-thick SiON layer has a refractive index \(n\) of 1.513 at 1550 nm. It is on top of a 15-um SiO\({}_{2}\) layer and is covered by a 7 um-thick SiO\({}_{2}\) upper cladding. The refractive index contrast \(\Delta n\) between the core and the cladding is 4.4%, enabling a bending radius of 600 um with negligible radiation losses. Fig. 1(c) shows the photograph of the microring resonators in this platform with a free spectral range (FSR) of 50 GHz and coupling coefficients varying from 0.05 to 0.8. Fig. 1(d) shows the photograph of several groups of 5-cm straight waveguides with different widths. The measured propagation loss of those straight waveguides is 0.25 dB/cm with coupling loss to lensed-tip fibers of approximately 3 dB/facet. ### Stimulated Brillouin Scattering in SiON Waveguides We developed a finite element model [8] in COMSOL to estimate the SBS response of the SiON waveguides. The simulated optical field and the corresponding acoustic response of the 2.2 um-wide SiON waveguide are shown in Fig. 2(a) and (b), respectively. The optical field is well confined around the SiON core area because of the total internal reflection (TIR). 
However, the TIR condition does not hold for the acoustic response because the acoustic velocity of the SiON (\(\sim\) 6.2 km/s) is higher than that of the SiO\({}_{2}\) (\(\sim\) 5.9 km/s). As a result, part of the acoustic field would leak into the cladding as shown in Fig. 2(b). Nevertheless, most of the acoustic field still Figure 2: (a) Simulated optical mode of the SiON waveguide. (b) Simulated acoustic response of the SiON waveguide. (c)-(h) Measured SBS gain spectra of the 2.0 μm, 2.2 μm, 2.3 μm, 2.4 μm, 2.6 μm, and 3.5 μm-wide SiON waveguides, respectively. (i) Brillouin gain coefficients and linewidth of the SiON waveguides with different widths. remains inside the SiON core because of the relatively large cross-section area [31]. This results in a large overlap between the optical and acoustic fields that leads to improved Brillouin gain coefficient. Extensive simulation results of the SBS gain coefficients are included in the Supplementary. To verify the simulation results, we characterized the SBS responses of the SiON waveguides with a pump-probe experimental apparatus [8; 18]. The pump and probe light are intensity-modulated and coupled into the opposite facets of the waveguide. We keep the pump frequency fixed at 1561 nm while sweeping the probe at frequencies down shifted from the pump by about 15 GHz. When the frequency difference between the pump and the probe is close to the Brillouin frequency shift of the SiON waveguide, the probe will experience the SBS gain and a peak will be detected at the lock-in amplifier (See the Supplementary for more details about the SBS experiment). Several 5 cm-long SiON waveguides are characterized to investigate the influence of waveguide width on the Brillouin gain spectra. The measured SBS responses of the 2.0 \(\upmu\)m, 2.2 \(\upmu\)m, 2.3 \(\upmu\)m, 2.4 \(\upmu\)m, 2.6 \(\upmu\)m, and 3.5 \(\upmu\)m-wide waveguides are shown in Fig. 2(c) to (h), respectively. All waveguides show a clear SBS peak well above the noise floor with the Brillouin frequency shift increases from 14.22 GHz for the 2.0 \(\upmu\)m-wide waveguide to 14.48 GHz for the 3.5 \(\upmu\)m-wide waveguide. Fig. 2(i) plots the measured Brillouin gain coefficient \(g_{b}\) and the SBS linewidth of the SiON waveguides with different widths (See the Supplementary for more details about the Brillouin gain coefficient calculation). The Brillouin gain coefficient \(g_{b}\) increases from 0.1 m\({}^{-1}\)W\({}^{-1}\) to 0.32 m\({}^{-1}\)W\({}^{-1}\) when the waveguide width increases from 2.0 \(\upmu\)m to 3.5 \(\upmu\)m. In the meantime, the linewidth of the SBS peak reduces from 358 MHz to 105 MHz. The increasing Brillouin gain coefficient and the narrowing of the SBS linewidth indicate an improvement in acoustic confinement when the SiON waveguides become wider. The Brillouin gain coefficient can be further increased by optimizing the cross-section of the waveguide through the genetic algorithm [8]. Fig. 3 (a) and (b) show the simulated optical mode and the acoustic response of a SiON waveguide with the same core refractive index but with an optimized cross-section for SBS gain. The dimension of such a waveguide is 4.0 \(\upmu\)m \(\times\) 3.2 \(\upmu\)m with a top cladding of 3 \(\upmu\)m and a bottom cladding of 10 \(\upmu\)m. Compared to the optical and acoustic fields of the waveguide structure in this work, less acoustic field is scattered into the cladding while the optical field is still well confined in the optimized waveguide structure. 
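For readers who wish to reproduce the kind of peak-gain and linewidth numbers quoted above from raw pump-probe data, one common analysis (shown here only as a sketch, not necessarily the exact procedure in the Supplementary) is to fit a Lorentzian to the measured on/off gain spectrum and convert the fitted peak gain into a gain coefficient through \(G_{peak}=g_{b}P_{pump}L_{eff}\). The pump power and the synthetic spectrum below are assumptions, chosen only to be roughly consistent with the waveguide parameters reported in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, peak, f_B, fwhm, offset):
    """SBS on/off gain spectrum model peaking at the Brillouin shift f_B."""
    return peak * (fwhm / 2) ** 2 / ((f - f_B) ** 2 + (fwhm / 2) ** 2) + offset

# Synthetic "measured" spectrum (frequency in GHz, gain in dB).
rng = np.random.default_rng(0)
f = np.linspace(13.8, 14.8, 401)
data = lorentzian(f, peak=0.0056, f_B=14.30, fwhm=0.150, offset=0.0)
data += rng.normal(0.0, 0.0003, f.size)

popt, _ = curve_fit(lorentzian, f, data, p0=[0.005, 14.3, 0.2, 0.0])
peak_dB, f_B, fwhm, _ = popt

# Convert the peak on/off gain (dB) to a gain coefficient g_b in 1/(W*m),
# using L_eff = (1 - exp(-alpha*L)) / alpha for a 5 cm, 0.25 dB/cm waveguide.
P_pump = 0.1                          # W, assumed on-chip pump power
alpha = 0.25 / 4.343 * 100.0          # 0.25 dB/cm converted to 1/m
L = 0.05                              # m
L_eff = (1.0 - np.exp(-alpha * L)) / alpha
g_b = peak_dB / 4.343 / (P_pump * L_eff)

print(f"Brillouin shift {f_B:.2f} GHz, linewidth {fwhm*1e3:.0f} MHz, "
      f"g_b ~ {g_b:.2f} 1/(W*m)")
```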
The Brillouin gain spectrum of the optimized waveguide structure is shown in Fig. 3 (c). The simulated peak Brillouin gain coefficient of this waveguide is 0.95 m\({}^{-1}\)W\({}^{-1}\), which is 3\(\times\) higher than the waveguide structure measured in this work. Furthermore, the propagation loss in this SiON platform can also be significantly lowered by reducing sidewall roughness and improving the thermal annealing process [30], allowing for longer effective waveguide length for the SBS process. Fig. 3 (d) estimates the SBS gain of both the measured and the optimized SiON waveguides with different propagation losses. The optimized Brillouin gain coefficient (around 0.95 m\({}^{-1}\)W\({}^{-1}\)), along with the improved propagation loss (around 0.1 dB/cm), can enhance the SBS gain from less than 0.5 dB to near 1.5 dB for a 60-cm waveguide, which is sufficient for applications like SBS-based narrow-bandwidth microwave photonic notch filters [8; 10]. ### Four-wave mixing in SiON Waveguides We further investigate the Kerr nonlinearities of this SiON platform. High-index SiON platforms are widely used for Kerr-based nonlinear optics applications because of the relatively large nonlinear parameter \(\gamma\)[19]. However, the nonlinear parameter \(\gamma\) is highly dependent on the refractive index and the geometry of the waveguide. The SiON waveguide in this work has a relatively lower refractive index and a larger cross-section compared with other SiON platforms [19; 20], and the nonlinear index \(n_{2}\) and nonlinear parameter \(\gamma\) of the SiON waveguide in this platform has never been characterized before. We devised a four-wave mixing (FWM) experiment for the nonlinear parameter characterization. Because of the limited effective length of the available samples, the FWM conversion efficiency of the straight waveguide is comparable with that of the fiber pigtails, making it difficult to accurately measure the \(n_{2}\) and the \(\gamma\). We use the all-pass ring resonators to enhance the FWM in the SiON waveguide so that the contribution from fibers in the setup can be neglected [32]. The ring resonator Figure 3: (a) Simulated optical mode and (b) simulated acoustic response and (c) simulated Brillouin gain spectrum of the optimized SiON waveguide. (d) Estimated SBS gain from the optimized and current SiON waveguides. applied in our experiment is made of the 2.2 um-wide SiON waveguide and it has a free spectral range (FSR) of 50 GHz and a power coupling coefficient of 0.05. The pump laser is locked close to the resonance of the ring resonator to mitigate the thermal influence on the ring resonator. The signal laser is set close to 2 \(\times\) FSR away from the pump signal and is combined with the pump light with a 99:1 coupler. The combined pump and signal are coupled into the all-pass ring resonator with a lensed fiber with a spot size of 2 um. The generated idler is then coupled out from the chip and sent to the optical spectrum analyzer to measure the conversion efficiency from the signal to the generated idler (See the Supplementary for details of the FWM experiment). To determine the field enhancement factor of the FWM process in the ring resonator, we first characterized the resonance response of the ring resonator with a vector network analyzer, as shown in Fig. 4 (a) (See the Supplementary for details of the characterization). 
The measured full-width at half-maximum (FWHM) is 612 MHz with an extinction ratio of 8.9 dB, corresponding to a loaded Q-factor of 330,000 and a propagation loss of 0.27 dB/cm. Fig. 4 (b) shows the measured FWM response of the 50 GHz SiON ring resonator. A clear peak is shown at 2 \(\times\) FSR down shifted from the pump frequency, which is the idler generated from the FWM process between the pump and signal in the ring resonator. The nonlinear index \(n_{2}\) and nonlinear parameter \(\gamma\) of the SiON waveguide in this platform can be estimated from the conversion efficiency between the signal and the idler (See the supplementary for details of the calculation). Fig. 4 (c) shows the measured conversion efficiency of the FWM process at different pump power. Based on this measurement, the calculated \(\gamma\) and \(n_{2}\) of the 2.2 um-wide SiON waveguide are 0.024 m\({}^{-1}\)W\({}^{-1}\) and 4.16 \(\times 10^{-20}\) m\({}^{2}\)/W, respectively. We also estimated the nonlinear parameter \(\gamma\) of the SiON waveguides with different widths based on the measured value of \(n_{2}\), as shown in Fig. 4 (d). The \(\gamma\) decreases from around 0.025 m\({}^{-1}\)W\({}^{-1}\) to 0.020 m\({}^{-1}\)W\({}^{-1}\) when the waveguide width reduces from 2.0 um to 3.5 um. ## Discussion For Brillouin-Kerr interactions, the balance between the nonlinearities needs to be considered. In microcavities, it is generally preferred to have larger Brillouin gain, as it is easier to inhibit cascading or other unwanted interactions via mode manipulation. Comparing the values of the measured \(g_{b}\) in Fig. 2 (i) and \(\gamma\) in Fig. 4 (a), the SiON waveguides reported here have an order of magnitude larger Brillouin gain compared to Kerr nonlinearity. This \(g_{b}/\gamma\) ratio is similar to previous demonstrations of Brillouin-assisted Kerr frequency combs in [27; 28], showing the potential to realize it in an integrated platform. In conclusion, we have investigated the Brillouin and Kerr properties of a SiON integrated platform with a relatively low refractive index. We observed, for the first time, the backward SBS response of those SiON waveguides. We also measured its nonlinear index \(n_{2}\) and nonlinear parameter \(\gamma\). These SiON waveguides can be fabricated in a versatile and low-loss integrated platform, and can potentially lead to a plethora of Brillouin and Kerr-based applications, including narrow-bandwidth microwave photonic filters, and narrow-linewidth lasers, and optical frequency combs. ## Author contributions D.M. and K.Y. developed the concept and proposed the physical system. K.Y. and Y.K. developed and performed numerical simulations. K.Y. performed the SBS characterisation with input from R.B., K.Y., and O.D. Y.K. and K.Y. performed the FWM experiments. O.A.J.G., F.M., and A.M. developed and fabricated the samples. K.Y., D.M., and Y.K. wrote the manuscript. D.M. led and supervised the entire project. ## Funding information This project is funded by the European Research Council Consolidator Grant (101043229 TRIFFIC) and Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) projects (740.018.021 and 15702). Figure 4: (a) Measured resonance response of the SiON ring resonator. (b) Measured four-wave mixing response of the SiON ring resonator. (c) Conversion efficiency of the four-wave mixing at different pump power. (d) The estimated nonlinear parameter \(\gamma\) of the SiON waveguides with different widths.
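As a supplementary numerical cross-check of the ring-resonator characterization above, the loaded Q follows directly from the measured resonance FWHM, and \(\gamma\) and \(n_{2}\) are related through \(\gamma=2\pi n_{2}/(\lambda A_{eff})\). The effective mode area used below is an assumed value (it is not stated in the text), so the result is only an order-of-magnitude consistency check.

```python
import math

c = 299_792_458.0                    # speed of light, m/s
lam = 1550e-9                        # pump wavelength, m
f0 = c / lam                         # optical carrier frequency, ~193.4 THz

# Loaded Q from the measured resonance FWHM of 612 MHz.
fwhm = 612e6
Q_loaded = f0 / fwhm
print(f"loaded Q ~ {Q_loaded:.2e}")  # ~3.2e5, close to the reported 330,000

# Relation between the nonlinear parameter and the nonlinear index:
#   gamma = 2*pi*n2 / (lambda * A_eff)
gamma = 0.024                        # 1/(W*m), measured value from the text
A_eff = 7e-12                        # m^2, assumed effective mode area
n2 = gamma * lam * A_eff / (2 * math.pi)
print(f"implied n2 ~ {n2:.2e} m^2/W")  # same order as the reported 4.16e-20
```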
2309.10532
A Cognitively-Inspired Neural Architecture for Visual Abstract Reasoning Using Contrastive Perceptual and Conceptual Processing
We introduce a new neural architecture for solving visual abstract reasoning tasks inspired by human cognition, specifically by observations that human abstract reasoning often interleaves perceptual and conceptual processing as part of a flexible, iterative, and dynamic cognitive process. Inspired by this principle, our architecture models visual abstract reasoning as an iterative, self-contrasting learning process that pursues consistency between perceptual and conceptual processing of visual stimuli. We explain how this new Contrastive Perceptual-Conceptual Network (CPCNet) works using matrix reasoning problems in the style of the well-known Raven's Progressive Matrices intelligence test. Experiments on the machine learning dataset RAVEN show that CPCNet achieves higher accuracy than all previously published models while also using the weakest inductive bias. We also point out a substantial and previously unremarked class imbalance in the original RAVEN dataset, and we propose a new variant of RAVEN -- AB-RAVEN -- that is more balanced in terms of abstract concepts.
Yuan Yang, Deepayan Sanyal, James Ainooson, Joel Michelson, Effat Farhana, Maithilee Kunda
2023-09-19T11:18:01Z
http://arxiv.org/abs/2309.10532v3
A Cognitively-Inspired Neural Architecture for Visual Abstract Reasoning Using Contrastive Perceptual and Conceptual Processing ###### Abstract We introduce a new neural architecture for solving visual abstract reasoning tasks inspired by human cognition, specifically by observations that human abstract reasoning often interleaves perceptual and conceptual processing as part of a flexible, iterative, and dynamic cognitive process. Inspired by this principle, our architecture models visual abstract reasoning as an iterative, self-contrasting learning process that pursues consistency between perceptual and conceptual processing of visual stimuli. We explain how this new Contrastive Perceptual-Conceptual Network (CPCNet) works using matrix reasoning problems in the style of the well-known Raven's Progressive Matrices intelligence test. Experiments on the machine learning datasets, the RAVEN family and PGM, show that CPCNet achieves higher accuracy than all previously published models while also using the weakest inductive bias. We also point out a substantial and previously unremarked class imbalance in the original RAVEN dataset, and we propose a new variant of RAVEN--AB-RAVEN--that is more balanced in terms of abstract concepts. ## Introduction Analogy-making--the process of comparing and contrasting two or more things to enable additional relational inferences of various kinds--has been argued to be one of the foundational aspects of human intelligence [12]. So, how do humans make analogies? Consider the simple analogy in Figure 1. What relationships do you notice? Initially, one might recognize fish and birds as animals that move around in the water and air, respectively. Fish and birds both have similar body structures in terms of their heads, fins/wings, and tails. However, one might further reflect that birds get propulsion from their wings, whereas many fish get propulsion from their tails. This alternate mapping (bird wings to fish tails, and bird tails to fish fins) is influenced by conceptual processing of the initial perceptual inputs of the figure, and can in turn influence further perceptual processing of which similarities we emphasize and how we build our analogical representations. Theories of human perceptual and conceptual systems (e.g., [1]), including in the context of analogy-making (e.g., [1]), have made observations about this kind of bidirectional interplay between perceptual and conceptual processing, and forms of this interplay have also been explored in knowledge-based (i.e., symbolic) computational models of analogical reasoning [12]. In this paper: 1. We propose a new, cognitively-inspired Contrastive Perceptual-Conceptual neural Network (CPCNet) that models this kind of interplay between perceptual and conceptual processing in the context of visual abstract reasoning tasks like the example shown in Figure 2. 2. Using the abstract reasoning datasets-RAVEN, I-RAVEN, RAVEN-FAIR, and PGM, we experimentally demonstrate that CPCNet is more effective than previous architectures by achieving the highest accuracy with the weakest inductive bias. Figure 1: A is to B as C is to D. But in what ways? Figure 2: An example item of Raven’s Progressive Matrices [13]. 3. Finally, we point out a substantial, previously unremarked class imbalance in the original RAVEN dataset, and we propose a new variant--AB-RAVEN--that is more balanced in terms of abstract concepts. 
## Approaches to Visual Abstract Reasoning Raven's Progressive Matrices (RPM) is a family of human intelligence tests created by Raven (1936) about a century ago. RPM is acclaimed as the best single-format intelligence test that exists to date for evaluating the core intelligence factors, such as general intelligence and fluid intelligence [14, 15]. RPM is a kind of visual abstract reasoning task, where human subjects are expected to discover abstract patterns and concepts from raw visual stimuli, and apply these abstract patterns to reason about the visual stimuli [16]. Figure 2 gives an example item of RPM. It consists of a matrix of images with the last entry missing and multiple (usually eight) answer choices. To solve such an item, the human subject needs to select an answer choice to complete the matrix so that the abstract patterns among rows and columns are consistent. For example, the abstract pattern in Figure 2 is that taking the union of the first two entries in a row (or a column) leads to the third entry in the row (or column), which leads to the correct answer of the fourth choice. In the recent surge of work using deep learning to tackle visual abstract reasoning, most deep neural network models have followed a standard image classification paradigm, as shown in Figure 2(a). Taking as input the raster images of matrix entries and answer choices, this paradigm repeatedly applies feature extractions as in the famous ImageNet work [13], decreasing the sizes of the spatial dimensions but increasing the size of the channel dimension, until a single vector can be obtained to represent the entire input problem, and then a MLP classification head is appended to predict the class label, which is the index of the correct answer choice. An alternate approach leverages the observations about human cognition outlined in the introduction above, i.e., that reasoning can often be enhanced by interleaving perceptual and conceptual processing, allowing each process to influence the other. Figure 2(b) illustrates this kind of approach. Taking the same raw visual stimuli as in the image classification paradigm, this alternate paradigm uses feature extractors, simulating early vision processing, to form an initial visual representation of input images. Then there follows two types of processing: (1) perceptual processing that refines the perceptual (visual) representation of input images, for example, refining blurry feature maps of lines and angles to clear feature maps of shapes, and (2) conceptual processing that enriches the representation of abstract concepts, i.e., the relations between input images. Then comes the main difference between the image classification paradigm and this paradigm--these two types of processing form a dynamic cycle, in which the perceptual and conceptual processing depend on each other's output. This cycle allows for direct interplay between perceptual and conceptual processing. The cycle keeps running for multiple steps until a consistency between perceptual and conceptual Figure 3: Two Paradigms for Solving RPM. Note that the sizes and numbers of tensors are diagrammatic, not representing the implementation. processing is reached (thus adding a computational requirement for checking or representing the consistency at every step). The resulting consistent representation takes on a dual role as perceptual and conceptual representation both and is used to predict the answer label. 
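To ground the contrast between the two paradigms, the following is a minimal sketch of the standard image-classification paradigm described above as applied to RPM: a convolutional feature extractor collapses the stacked context and answer-choice panels into a single vector, and an MLP head predicts the index of the correct choice. The layer sizes are arbitrary and are not taken from any specific published model.

```python
import torch
import torch.nn as nn

class ClassificationBaseline(nn.Module):
    """Image-classification-style baseline: 16 panels (8 context + 8 choices)
    stacked as input channels, one global feature vector, 8-way head."""
    def __init__(self, num_choices: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, num_choices),
        )

    def forward(self, panels):            # panels: (B, 16, H, W)
        return self.head(self.features(panels))

logits = ClassificationBaseline()(torch.randn(2, 16, 80, 80))
print(logits.shape)                       # torch.Size([2, 8])
```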
Figure 2(b) depicts reasoning on RPM-like problems as a complex, flexible, and dynamic process. While it is not difficult to mechanically construct a deep neural net that mimics this kind of processing, the real technical difficulty lies is how to optimize it properly---given its dynamics, how can we make sure the network steadily converges to achieve the consistency needed to drive robust reasoning? We describe our solution to this challenge in this paper. ## Detailed Motivation We focus on the interplay between perceptual and conceptual representation, not only because we expect it to provide added flexibility and dynamics to a reasoning process, but also because this kind of entangled representation is frequently implied in human cognitive studies [1]. This theory of representation could also be corroborated by the eye tracking studies on RPM [1], in which the subject's attention moved back and forth between the matrix entries, visiting each entry multiple times, rather than scanning the entries linearly (though other explanations also exist for such gaze phenomena). This section will explain the feasibility of our approach in terms of implementation and the rationale in terms of analogical reasoning. The effectiveness of this paradigm will be shown in the experiment section. ### Feasibility for implementation This subsection explains how this paradigm can be implemented and the implementation can behave as we described in the introduction section. Given the complex and dynamic nature of the cognitively-inspired paradigm, it is apparently inadvisable to mechanically compose multiple neural net modules into a neural net according to Figure 2(b). Also, it is possible that the dynamic nature of the interplay could stop the training and inference from converging, if conditions that encourage convergence are not found. In the feed-forward architectures commonly used in deep neural nets, we do not have this type of convergence issue. Thus, we can try to approximate the dynamic process with a feed-forward network. Observe that there are two paths in Figure 2(b) that give the paradigm its dynamic nature, i.e., changing its own internal states---(1) _Path 1_ starting from the perceptual representation through the conceptual representation and returning back to the perceptual representation, and similarly, (2) _Path 2_ starting from the conceptual representation through the perceptual representation and returning back to the conceptual representation. Therefore, if we unroll the cycles with these two paths and add the consistency computation mentioned above, we will have a feed-forward architecture, as shown in Figure 4, which **approximates** the fully iterative, cognitively-inspired paradigm in a feed-forward manner. We thus name our approach the Contrastive Perceptual-Conceptual Network (CPCNet). As indicated in the introduction section, this paradigm pursues consistency between perceptual and conceptual representations (the rationale will be explained later). There are two designs in the feed-forward architecture to make this happen. First, after each iteration of _Path 1_ and _Path 2_, the red paths in Figure 4 first compute the consistency information between the perceptual and conceptual representations. This consistency information could be computed as a shared component between the perceptual and conceptual representations through a linear mapping, or more complex non-linear operations could be used to have more representational power. 
Either way, the consistency information is then used to update the perceptual and conceptual representations, for example, deducting the consistency information from the perceptual and conceptual representations. Intuitively, this way, it would become easier and easier for later iterations to research a "full" consistency because the job of finding consistency is amortized over multiple iterations. Second, the above computation structure only makes the consistency more likely to happen. But it does not necessarily happen. Thus, we designed two classification heads at the end of the architecture, which classify the perceptual and conceptual representations, respectively. Then, during training, the loss function is used to pull predictions of the perceptual and conceptual representations toward the same correct answer label. If the classification heads are not too deep, the pressure of loss function will go back through the classification heads and pull the perceptual and conceptual representations toward a consistent position. Here, the meaning of "consistent" becomes more clear---consistent representations could be mapped to the same correct answer label though some simple mappings, like a two-layer MLP. This design here is very similar to the idea of supervised contrastive learning [14], but it does not require data augmentation or pre-training. Instead, it relies on the delicate architecture design, which is inspired by the interleaved human cognitive process, to achieve the contrastive effect. ### Rationale for analogical reasoning Visual abstract reasoning tasks like the RPM can be considered as analogy tasks, because when an RPM item is solved, the human subject is making analogies between rows or columns (or both). To explain the rationale more clearly, let's consider the simpler visual analogy _"A is to B as C is to D"_ in Figure 1 from the introduction in more depth. Suppose that a human subject has formed an initial visual representation for each analog in the analogy by looking at the figure for two seconds, but it is probably not the final, correct representation. According to the influential structure-mapping theory of analogy [1], the subject needs to construct a mapping \(F\) between the base domain \((A,B)\) and the target domain\((C,D)\). This mapping depends on how the analogs are represented. Given the initial visual representations of analogs, the fish and the bird are probably mapped to each other according to their appearance, e.g., _head to head, fins_ to wings, and tail to tail,_ and the air and the water are mapped in a holistic way. Then, if the subject's thinking moves to a higher level and tries to map the relations (i.e., \(G\) in Figure 1) in \((A,B)\) to the ones in \((C,D)\), she will find that they do not exactly match. In particular, many fish use tails for propulsion and fins for direction, whereas birds use wings for propulsion and tails for direction. This observation on \(G\) updates the mapping \(F\) and the representations of analogs--_fish fins to bird tails, fish tails to bird wings, fish heads to bird heads_, and _air to water holistically_. Given this clearer mapping \(F\), if the subject moves up to a higher level again and compare the relations \(G\), the mapping between \(B\) and \(D\) could be further refined to _air dynamics is mapped to fluid dynamics_ (rather than their colors) and thus the representation of water and air are also updated to focus on their dynamics properties. 
If the subject can give initial representations of analogs that can directly lead to the final correct mappings \(F\) and relations \(G\), she may not need to go through any iterative process like this. However, in real-life situations where stimuli have so many meanings and connections, the correct representations of analogs cannot always be formed immediately. This iterative process of working on \(F\), \(G\), and the representations of analogs is always needed to make and understand analogies. Given \(F\) corresponding to the perceptual processing and \(G\) corresponding to the conceptual processing, this iterative process is equivalent to the interplay between perceptual and conceptual processing. About the desire for consistency, its rationale follows from an assumption that the analogy is completely interpreted or understood only if the iterative process has ended, i.e., no updates are being made to representations of analogs anymore. In other words, it has been well recognized that analogical proportions enjoy central permutation as a characteristic property [10]. That is, _A is to B as C is to D_ if and only if _A is to C as B is to D_. This corresponds to interpretations of the analogy in Figure 1 in the horizontal or vertical direction. Two directions are equivalent. That one direction holds implies that the other direction also holds. Given this symmetry, \(G\) could also be regarded as a mapping between \((A,C)\) and \((B,C)\). If the interpretation of the analogy is unique, i.e., the mappings are unique, we will have \(F\circ G=G\circ F\), i.e., \(F\) and \(G\) are commutative. This equation is a very concise and beautiful description of analogy-making. And by pursuing consistency between perceptual and conceptual processing, we are actually pursuing equality in this equation, albeit in a data-driven and approximate way. ## Related Work There is a long line of research on computational solutions to RPM tasks, especially RPM-like datasets. Reviewing every one of them would be impossible here. We thus use a taxonomy to classify them into four categories and briefly describe each of them. More extensive reviews can be found in [13, 14, 15]. **Imagency-Based Approach.** Visual mental imagery refers to mental imagistic representation in human cognition [12]. It plays a crucial role in human visual reasoning ability. The most important characteristic of mental imagery is that human can experience mental imagery in the absence of the concurrent sensory input. The imagery-based approach simulates human mental imagery by directly operating on the raw visual input and, through mental operations like rotation, addition, and subtraction, it can solve a substantial portion of original RPM tests [13, 14]. **Logical Reasoning.** The computational models using logical reasoning work on symbolic representations of RPM items and reason in formal logic. For example, an entry image \(A\) in a matrix could be described by a set of propositions: "triangle(\(A\))=True, triangle-large(\(A\))=False, triangle-on-the-left(\(A\))=True, square(\(A\))=True, square-small(\(A\))=True, etc". The symbolic representations in these models are manually constructed or obtained through a preprocessing module. Representatives of this approach are ANALOGY [14], FAIRMAN,and BETTERMAN [13]. **Neuro-Symbolic Approach.** The neuro-symbolic models consist of two modules--a neural perception frontend and a symbolic reasoning backend. 
The neural perception frontend (usually implemented as neural nets) extracts the symbolic representation of entry images (including but not limited to logical representation and probability representation) which are based on a predefined formal representation system. The Figure 4: CPCNet: A Feed-Forward Architecture that unrolls the paradigm in Figure 2(b) symbolic reasoning backend performs symbolic manipulation or probability calculation according to the predefined formal representation system. Neuro-symbolic approach and the next approach--learning models--are data-driven approach, whereas the first two approaches are knowledge-based approaches. Examples of neuro-symbolic models for solving RPM-like tasks include ALANS2, PrAE, VAE-GPP, TRIVR, LoGe, NVSA and AMR(Zhang et al., 2022, 2021; Shi, Li, and Xue, 2021; He, Ren, and Bai, 2021; Yu et al., 2021; Hersche et al., 2023; Xu et al., 2023). **Learning Models.** Unlike the previous approach, learning approach does not rely on any predefined representation system of geometric objects and abstract patterns. Instead, the representations are learned from raw perceptual input and represented as feature vectors. When this paper is written, almost all of the popular deep learning architectures have been experimented on RPM tasks, such as CNN, ResNet family, recurrent neural networks, and attention models (Hu et al., 2021; Benny, Pekar, and Wolf, 2021; Sahu, Basioti, and Pavlovic, 2023; Wei, Chen, and Yuan, 2023). This approach has become more and more popular recently because of large RPM-like datasets created in the last five years (Barrett et al., 2018; Zhang et al., 2019). However, it needs to be pointed out that these datasets are not perfect. For example, the RAVEN dataset (Zhang et al., 2019) is flawed because of the context-blind issue, i.e., training only on the answer choices leads to good performance(Hu et al., 2021). Thus, two variants of RAVEN--I-RAVEN (Hu et al., 2021) and RAVEN-FAIR(Benny, Pekar, and Wolf, 2021)--were proposed to fix this issue. Besides different variants of RAVEN, the evaluation setting of learning models is a more complicated issue. We will elaborate more on this in the experiment section. ## CPCNet for Solving RPM-Like Datasets Based on the above discussion about the feasibility and rationale, we can now formalize our method for solving RPM-like problems. In the main paper, we describe what kind of operations are applied at each step; detailed implementation and hyper-parameters can be found in the supplementary material. We use the single-choice evaluation protocol, which is more challenging than the commonly-used multi-choice evaluation protocol, because comparing answer choices gives the model advantage over evaluating each single answer choice independently (see more about this in (Benny, Pekar, and Wolf, 2021)). Thus, by inserting each answer choice into the matrix and evaluating them individually, we actually turn every multi-choice item into eight binary classification items, where the input to our model is a real tensor \(x\) of shape \((R=rows,C=columns,H_{orig}=height,W_{orig}=width,channels=1)\) and the class label \(y\) indicates whether the inserted answer choice is correct. For the feature extractor that simulates early vision in Figure 4, we adopt a convolution-based encoder \(f_{E}\) whose output channel number is set to \(K>1\). 
Since the early vision usually does not involve forming abstract relations which are between entry images, \(f_{E}\) is applied to encode each matrix entry individually: \[z_{r,c} =f_{E}(x_{r,c})\;\forall(r,c)\in\{1,\ldots,R\}\times\{1,\ldots,C\} \tag{1}\] \[z =[z_{r,c}]_{r=1,\ldots,R,c=1,\ldots,C}\in\mathbb{R}^{R\times C \times H\times W\times K} \tag{2}\] where \(H<H_{orig}\) and \(W<W_{orig}\) as channels are increased from 1 to \(K\). Let \(z_{1}^{(0)}=z_{2}^{(0)}=z\) for the following Path 1 and 2, respectively. For each iteration \(i\in\{1,\ldots,L\}\) after \(f_{E}\), we need to define Path 1, Path 2, and the consistency computation between them. For Path 1, we define perceptual and conceptual processing as convolution-based modules \(h_{1}^{(i)}\) and \(g_{1}^{(i)}\), respectively. Similarly, for Path 2, we define the perceptual and conceptual processing as convolution-based modules \(h_{2}^{(i)}\) and \(g_{2}^{(i)}\). For the consistency computation, we define a two-layer MLP \(q^{(i)}\). The hyper-parameters of these modules are all set to values that preserve the input tensor shape \((R,C,H,W,K)\), i.e., the output channels of \(h_{1}^{(i)}\), \(g_{1}^{(i)}\), \(h_{2}^{(i)}\), and \(g_{2}^{(i)}\) and the output units of \(q^{(i)}\) are all set to \(K\). For RPM task, the abstract concepts lie in the row and column dimensions as the abstract concepts are represented by rows and columns. We thus apply the convolutions of conceptual processing \(g_{1}^{(i)}\) and \(g_{2}^{(i)}\) on the \((R,C)\) dimensions of the input tensor, and apply the convolutions of perceptual processing \(h_{1}^{(i)}\) and \(h_{2}^{(i)}\) on on the \((H,W)\) dimensions. And the consistency computation \(q^{(i)}\) is applied on the channel dimension. Note that dimensions when not being computed are treated as transparent, i.e., like extended batch dimensions. Let the outputs from Path 1 and 2 of Iteration \(i-1\) be \(z_{1}^{(i-1)}\) and \(z_{2}^{(i-1)}\), the computation of Iteration \(i\) is: \[u_{1} =h_{1}^{(i)}\circ g_{1}^{(i)}(z_{1}^{(i-1)}) \tag{3}\] \[u_{2} =g_{2}^{(i)}\circ h_{2}^{(i)}(z_{2}^{(i-1)})\] (4) \[v_{1} =q^{(i)}(u_{1})\] (5) \[v_{2} =q^{(i)}(u_{2})\] (6) \[z_{1}^{(i)} =u_{1}-v_{2}\] (7) \[z_{2}^{(i)} =u_{2}-v_{1} \tag{8}\] At last, we define two classification heads \(p_{1}\) and \(p_{2}\) for Path 1 and Path 2, respectively. \[\hat{y}_{1} =p_{1}(flatten(mean(z_{1}^{(L)}))) \tag{9}\] \[\hat{y}_{2} =p_{2}(flatten(mean(z_{2}^{(L)}))) \tag{10}\] where the \(mean\) takes the mean over the channel dimension of size \(K\) and the \(flatten\) flattens the input to a vector of length \(R\times C\times H\times W\). For training, we compute binary cross entropy losses for both \(\hat{y}_{1}\) and \(\hat{y}_{2}\) with respect to \(y\) and add them up as the final loss. For testing, we simply add up the \(z_{1}^{(L)}\) and \(z_{2}^{(L)}\) as a score of the input \(x\) and select the highest score among all the answer choices. ## Experiments ### Datasets and a Striking Observation We did our experiments on the RAVEN dataset (Zhang et al., 2019) and its variants. 
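As a concrete companion to Eqs. 3-8 above, the following PyTorch sketch implements a single unrolled CPCNet iteration. This is an illustrative re-implementation rather than the authors' released code: the kernel sizes, the toy tensor shapes, and the MLP width are assumptions; only the tensor layout \((R,C,H,W,K)\), the placement of the conceptual convolutions on \((R,C)\) and the perceptual convolutions on \((H,W)\), and the contrastive updates \(z_{1}^{(i)}=u_{1}-v_{2}\), \(z_{2}^{(i)}=u_{2}-v_{1}\) follow the equations above.

```python
import torch
import torch.nn as nn

class CPCIteration(nn.Module):
    """One unrolled CPCNet iteration i (cf. Eqs. 3-8): perceptual convs act
    on the (H, W) dims, conceptual convs on the (R, C) dims, and a shared
    two-layer MLP q extracts the consistency component on the channels."""
    def __init__(self, K: int):
        super().__init__()
        self.h1 = nn.Conv2d(K, K, 3, padding=1)   # Path 1 perceptual
        self.g1 = nn.Conv2d(K, K, 3, padding=1)   # Path 1 conceptual
        self.h2 = nn.Conv2d(K, K, 3, padding=1)   # Path 2 perceptual
        self.g2 = nn.Conv2d(K, K, 3, padding=1)   # Path 2 conceptual
        self.q = nn.Sequential(nn.Linear(K, K), nn.ReLU(), nn.Linear(K, K))

    @staticmethod
    def _on_rc(conv, z):      # treat (H, W) as batch, convolve over (R, C)
        B, R, C, H, W, K = z.shape
        x = conv(z.permute(0, 3, 4, 5, 1, 2).reshape(B * H * W, K, R, C))
        return x.reshape(B, H, W, K, R, C).permute(0, 4, 5, 1, 2, 3)

    @staticmethod
    def _on_hw(conv, z):      # treat (R, C) as batch, convolve over (H, W)
        B, R, C, H, W, K = z.shape
        x = conv(z.permute(0, 1, 2, 5, 3, 4).reshape(B * R * C, K, H, W))
        return x.reshape(B, R, C, K, H, W).permute(0, 1, 2, 4, 5, 3)

    def forward(self, z1, z2):
        u1 = self._on_hw(self.h1, self._on_rc(self.g1, z1))   # Eq. 3
        u2 = self._on_rc(self.g2, self._on_hw(self.h2, z2))   # Eq. 4
        v1, v2 = self.q(u1), self.q(u2)                       # Eqs. 5-6
        return u1 - v2, u2 - v1                               # Eqs. 7-8

# Toy shapes: batch 2, a 3x3 matrix, 10x10 feature maps, K = 8 channels.
z = torch.randn(2, 3, 3, 10, 10, 8)
z1, z2 = CPCIteration(K=8)(z, z)
print(z1.shape, z2.shape)     # both remain (2, 3, 3, 10, 10, 8)
```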
RAVEN items have 7 spatial config \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & Number & Position & Number/Position & Type & Size & Color \\ \hline Constant & 0 & 0 & 31803 & 19240 & 16451 & 21817 \\ \hline Progression & 1857 & 1809 & 0 & 19386 & 14049 & 12797 \\ \hline Arithmetic & 872 & 2933 & 0 & 0 & 13012 & 12686 \\ \hline Distribute-Three & 2925 & 2791 & 0 & 19447 & 16355 & 12767 \\ \hline Sum by Color & \multicolumn{4}{c|}{15187} & \multicolumn{4}{c|}{209810} \\ \hline \end{tabular} \end{table} Table 1: Numbers of RAVEN (I-RAVEN and RAVEN-FAIR) items containing each combination of abstract rules and attributes. \begin{table} \begin{tabular}{|c|l c c c c c c c c|} \hline & Model & Avg. Acc. & Center & 2x2Grid & 3x3Grid & L-R & U-D & O-IC & O-IG \\ \hline \multirow{3}{*}{Multi-Choice} & LEN & 78.3\% & 82.3\% & 58.5\% & 64.3\% & 87.0\% & 85.5\% & 88.9\% & 81.9\% \\ & MXGNet & 83.91\% & - & - & - & - & - & - & - \\ Evaluation & CoPINet & 91.42\% & 95.05\% & 77.45\% & 78.85\% & 99.10\% & 99.65\% & 98.50\% & 91.35\% \\ Protocol & DCNet & 93.58\% & 97.80\% & 81.70\% & 86.65\% & 99.75\% & 99.75\% & 98.95\% & 91.45\% \\ & SAVIR-T & 94.0\% & 97.8\% & 94.7\% & 83.8\% & 97.8\% & 98.2\% & 97.6\% & 88.0\% \\ \hline \multirow{9}{*}{Single-Choice} & WReN & 14.69\% & 13.09\% & 28.62\% & 28.27\% & 7.49\% & 6.34\% & 8.38\% & 10.56\% \\ & ARNe & 19.67\% & - & - & - & - & - & - & - \\ & NCD & 39.66\% & 45.45\% & 35.50\% & 39.50\% & 34.85\% & 33.40\% & 40.25\% & 30.00\% \\ & PraE & 65.03\% & 76.50\% & 78.60\% & 28.55\% & 90.05\% & 90.85\% & 48.05\% & 42.60\% \\ Single-Choice & ALANS & 74.4\% & 69.1\% & 80.2\% & 75.0\% & 72.2\% & 73.3\% & 76.3\% & 74.9\% \\ Evaluation & MRNet & 84.0\% & - & - & - & - & - & - & - \\ Protocol & NVSA & 87.7\% & 99.7\% & 93.5\% & 57.1\% & 99.8\% & 99.1\% & 98.1\% & 65.4\% \\ & SCL & 91.6\% & 98.1\% & 91.0\% & 82.5\% & 96.8\% & 96.5\% & 96.0\% & 80.1\% \\ & AlgebraicMR & 92.9\% & 98.8\% & 91.9\% & **93.1\%** & 99.2\% & 99.1\% & 98.2\% & 70.1\% \\ & Rel-AIR & 94.1\% & 99.0\% & 92.4\% & 87.1\% & 98.7\% & 97.9\% & 98.0\% & 85.3\% \\ & CPCNet(Ours) & **96.92\%** & **100.0\%** & **96.70\%** & 86.05\% & **100.0\%** & **99.90\%** & **99.90\%** & **95.90\%** \\ \hline & Human & 84.4 & 95.5\% & 81.8\% & 79.6\% & 86.4\% & 81.8\% & 86.4\% & 81.8\% \\ \hline \end{tabular} \end{table} Table 2: Numbers of AB-RAVEN items containing each combination of abstract rules and attributes. Figure 5: 7 spatial configurations of the RAVEN dataset. Each configuration is illustrated by a complete 3x3 matrix. In the **Center** configuration, each entry contains only one object located at the center. In **2x2Grid** and **3x3Grid**, objects can only be located on the grid positions in each entry. In **Left-Right** and **Up-Down**, each entry contains exactly two objects located at the fixed positions shown in the figure. **Out-InCenter** is a combination of two center configurations and **Out-InGrid** is a combination of a center configuration and a 2x2Grid configuration. 
urations to organize the geometric objects in matrix entries (see Figure 5). Each configuration has 6000, 2000, and 2000 items for training, validation, and test, respectively. A very striking result from almost all previous works on RAVEN is that the accuracies on 2x2Grid, 3x3Grid, and Out-InGrid are always significantly lower than on the other item configurations. Some argue that this is because specific abstract rules (i.e., concepts, relations, or patterns) are difficult to learn [22]; some argue that the noise attributes in the grid configurations cause this result [23]. Although these arguments are not wrong, the fundamental reason might be a much more mundane (but apparently previously unremarked) one--that RAVEN is an extremely imbalanced dataset in terms of the abstract rules represented in its problems. This point can be seen in Table 1, which counts dataset items for each combination of abstract rules and attributes. There are two types of combinations in this table--the red ones, which exist only in the 2x2Grid, 3x3Grid, and Out-InGrid configurations, and the green ones, which exist mainly in the other four configurations and appear in roughly 65% of the items of 2x2Grid, 3x3Grid, and Out-InGrid. Moreover, the sum of the green ones is roughly 15 times the sum of the red ones. Therefore, the red ones are much less represented, both within their own configurations and globally. This argument also applies to RAVEN's variants, such as I-RAVEN and RAVEN-FAIR, because they share the same set of abstract rules and attributes with the original RAVEN. Deep learning models require sufficient data to work properly, and every deep learning model has a lower limit on the amount of training data it needs. We thus hypothesize that it is because the red combinations in Table 1 are underrepresented that previous models usually perform relatively worse on 2x2Grid, 3x3Grid, and Out-InGrid. To verify this, we constructed a new variant of RAVEN which is more Balanced in terms of Abstract rules and attributes (we thus call it AB-RAVEN). Table 2 shows the statistics of AB-RAVEN. It was made more balanced by decreasing the number of non-grid training items and increasing the number of grid training items while keeping the overall size of the training set unchanged. The validation and test sets of AB-RAVEN remain the same as RAVEN's. If the hypothesis is true, training on this dataset will lead to a smaller gap between grid and non-grid (testing) accuracies. Meanwhile, this dataset can also check whether previous models' high accuracies on non-grid configurations are a result of the excessive number of non-grid training items. As can be seen in Table 2, AB-RAVEN is not perfectly balanced, but just more balanced than RAVEN. This is because making it perfectly balanced would violate the design of the 7 configurations, i.e., it would require removing all non-grid configuration items.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & Number & Position & Number/Position & Type & Size & Color \\ \hline Constant & 0 & 0 & 19574 & 17507 & 15263 & 21220 \\ \hline Progression & 4058 & 4040 & 0 & 17641 & 11998 & 11010 \\ \hline Arithmetic & 6329 & 6307 & 0 & 0 & 11508 & 10932 \\ \hline Distribute-Three & 6335 & 6421 & 0 & 17379 & 15080 & 10907 \\ \hline Sum & \multicolumn{2}{c|}{33490} & \multicolumn{4}{c|}{180219} \\ \hline \end{tabular} \end{table} Table 2: Numbers of AB-RAVEN items containing each combination of abstract rules and attributes.
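The rule-attribute statistics behind Tables 1 and 2 can be tallied with a short script. The sketch below is only illustrative: the `annotations` iterable of `(configuration, rule, attribute)` triples is a placeholder that a loader over RAVEN's per-item rule annotations would have to provide; it is not part of the dataset release.

```python
# Tally (rule, attribute) combinations, in the spirit of Tables 1 and 2.
from collections import Counter

GRID_CONFIGS = {"2x2Grid", "3x3Grid", "Out-InGrid"}

def tally(annotations):
    """annotations: iterable of (config, rule, attribute) triples, one per rule group."""
    counts = Counter()       # total count of each (rule, attribute) combination
    grid_counts = Counter()  # how often the combination appears in grid configurations
    for config, rule, attr in annotations:
        counts[(rule, attr)] += 1
        if config in GRID_CONFIGS:
            grid_counts[(rule, attr)] += 1
    return counts, grid_counts

# Toy usage with a hand-made annotation list; replace with a real loader.
toy = [
    ("Center", "Progression", "Type"),
    ("2x2Grid", "Arithmetic", "Number"),
    ("2x2Grid", "Arithmetic", "Number"),
    ("Left-Right", "Distribute-Three", "Color"),
]
counts, grid_counts = tally(toy)
for combo, n in counts.most_common():
    print(combo, n, "of which in grid configs:", grid_counts[combo])
```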
More details about AB-RAVEN is provided in the supplementary material. ## Results and Discussion We designed our model to avoid using meta-targets and structure information for auxiliary training because this kind of handcrafted auxiliary information is not always available for human RPM tests and general visual abstract reasoning tasks; instead, only the score of each answer choice is predicted individually and the highest score is selected as the answer, i.e., using the single-choice evaluation protocol, which is more difficult than the opposite--multi-choice evaluation protocol [1]. Although not discussed very often in literature, it has been shown by multiple works [22, 23, 24, 25, 26, 27, 28] that when using single-choice evaluation, i.e., not allowing the model to comparing answer choices before scoring them and thus not allowing it to use the backdoor of the original RAVEN, the original RAVEN is more challenging than I-RAVEN and RAVEN-FAIR, that is, the same model always achieves a higher accuracy on I-RAVEN and RAVEN-FAIR than on RAVEN. This makes sense because the way the answer choices of RAVEN were generated makes the answer choices more similar to each other than in I-RAVEN and RAVEN-FAIR and thus more confusing to the model when evaluated individually; on the contrary, the ways answer choices were generated in I-RAVEN and RAVEN-FAIR make the distractors differ from the correct answer by more attributes and thus less confusing to the model when evaluated individually. Therefore, due to the page limit, we report only the results on datasets of interest to us--RAVEN and AB-RAVEN--here. Other results can be found in the supplementary material. Table 3 shows that our model achieve the best average accuracy compared to previous models and the best configuration accuracies on 6 out of 7 configurations. Although our model's accuracy is only 2.82%\(\sim\)2.92% higher than the second and third highest accuracies of Rel-AIR and SAVIR-T, our model is solving the RAVEN dataset in a more difficult and general way--SAVIR-T uses the the easier multi-choice evaluation protocol and is designed to utilize the inductive bias that is specific to RPM [23], and while Rel-AIR uses the harder single-choice evaluation, Rel-AIR employs a separately-trained entry-encoder to explicitly extract values of size and position attributes, which are also specific to RPM rules. Many of the models in Table 3 more or less used inductive bias that is specific to RAVEN either in the model design or in the training procedure. On the contrary, our inductive bias, if we consider it as a kind of inductive bias, is the interplay and consistency between perceptual and conceptual processing, which is more meaningful for solving and understanding general visual abstract reasoning. In particular, CoPINet and DCNet, which have been reported to utilize the back \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Model & Avg. Acc. & Center & 2x2Grid & 3x3Grid & L-R & U-D & O-IC & O-IG \\ \hline CPCNet(ours) & 98.84\% & 99.75\% & 99.20\% & 94.95\% & 99.70\% & 99.80\% & 99.50\% & 98.95\% \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracies on the AB-RAVEN, using single-choice evaluation protocol. door of RAVEN Hu et al. (2021), also achieved lower accuracies than ours. Table 4 shows our model's accuracy on the AB-RAVEN dataset. Our model achieved a nearly perfect overall accuracy and, compared to Table 3, the accuracy gap between grid and non-grid configurations has been reduced from 7.07% to 1.98%. 
This verifies our hypothesis about dataset imbalance from the last subsection. Besides the RAVEN family, we also tested CPCNet on PGM, where it achieved the best accuracy of 98.4%. More details can be found in the supplementary material.

## Conclusion

In this paper, we investigated a cognitively-inspired paradigm for solving visual abstract reasoning tasks by leveraging the interplay between perceptual and conceptual processing. We discussed the feasibility of and rationale for this approach, and we designed a deep neural net architecture to simulate the interplay of processes suggested by the cognitively-inspired paradigm, i.e., the interplay and consistency between perceptual and conceptual processing. Experiments show that our CPCNet architecture is more effective than all previous models on RPM-like datasets. Moreover, on a dataset that is balanced in terms of abstract concepts (AB-RAVEN), this architecture can even achieve nearly perfect performance.
2301.00103
Renormalization of Ising cage-net model and generalized foliation
A large class of type-I fracton models, including the X-cube model, have been found to be fixed points of the foliated renormalization group (RG). The system size of such foliated models can be changed by adding or removing decoupled layers of $2$D topological states and continuous deformation of the Hamiltonian. In this paper, we study a closely related model -- the Ising cage-net model -- and find that this model is not foliated in the same sense. In fact, we point out certain unnatural restrictions in the foliated RG, and find that removing these restrictions leads to a generalized foliated RG under which the Ising cage-net model is a fixed point, and which includes the original foliated RG as a special case. The Ising cage-net model thus gives a prototypical example of the generalized foliated RG, and its system size can be changed either by condensing / uncondensing bosonic planon excitations near a 2D plane or through a linear depth quantum circuit in the same plane. We show that these two apparently different RG procedures are closely related, as they lead to the same gapped boundary when implemented in part of a plane. Finally, we briefly discuss the implications for foliated fracton phases, whose universal properties will need to be reexamined in light of the generalized foliated RG.
Zongyuan Wang, Xiuqi Ma, David T. Stephen, Michael Hermele, Xie Chen
2022-12-31T03:02:38Z
http://arxiv.org/abs/2301.00103v2
# Renormalization of Ising cage-net model and generalized foliation ###### Abstract A large class of type-I fracton models, including the X-cube model, have been found to be fixed points of the foliated renormalization group (RG). The system size of such _foliated_ models can be changed by adding or removing decoupled layers of 2D topological states and continuous deformation of the Hamiltonian. In this paper, we study a closely related model - the Ising cage-net model - and find that this model is not foliated in the same sense. In fact, we point out certain unnatural restrictions in the foliated RG, and find that removing these restrictions leads to a generalized foliated RG under which the Ising cage-net model is a fixed point, and which includes the original foliated RG as a special case. The Ising cage-net model thus gives a prototypical example of the generalized foliated RG, and its system size can be changed either by condensing / uncondensing bosonic planon excitations near a 2D plane or through a linear depth quantum circuit in the same plane. We show that these two apparently different RG procedures are closely related, as they lead to the same gapped boundary when implemented in part of a plane. Finally, we briefly discuss the implications for foliated fracton phases, whose universal properties will need to be reexamined in light of the generalized foliated RG. ## I Introduction The renormalization group (RG) plays a fundamental role in the characterization and classification of quantum phases of matter.[1; 2; 3] It is a piece of conventional wisdom that each phase - defined as a deformation class of quantum systems - is characterized by a unique RG fixed point, which encodes the universal long-distance and low-energy properties of the phase. Moreover, the existence of such a fixed point underlies the key role played by continuum quantum field theory as a tool to describe universal properties of phases (and phase transitions) while discarding extraneous non-universal information. Fracton models in three spatial dimensions (3D) [4; 5] provide exceptions to this conventional wisdom, and accordingly challenge our understanding of the relationships among quantum phases of matter, the renormalization group, and quantum field theory. This is nicely illustrated in the X-cube model,[6] perhaps the simplest fracton model. The defining characteristic of a fracton model is the presence of excitations of restricted mobility, and the X-cube model supports point-like excitations mobile in planes (planons), along lines (lineons), and for which an isolated excitation is fully immobile (fractons). The model is exactly solvable and has zero correlation length, so we might expect it to be a fixed point of the RG, as is the case for toric code and string-net models.[7; 8] However, the X-cube model on a lattice of linear size \(L\) is equivalent (under the application of a finite-depth circuit) to an X-cube model on a smaller lattice stacked with 2D toric code layers.[9] Therefore, when trying to coarse-grain the X-cube model, non-trivial 2D layers are left behind. These layers cannot be integrated out or otherwise removed, thus preventing the model from being a fixed point of any conventional RG procedure. 
This behavior is closely related to the striking system-size dependence of certain properties, such as the ground state degeneracy (GSD) and the number of types of fractional excitations, both of which grow exponentially in the linear system size.[9; 10] Similar phenomena occur in other fracton models, including Haah's cubic code[11]. It is interesting to ask whether some fracton models are fixed points of a suitably generalized RG. While there are many schemes and procedures for carrying out RG in different settings, it is important to emphasize that simply finding a new RG scheme is not enough. Instead, a more radical generalization of what we usually mean by RG is needed, because, for instance, any RG procedure that can have the fracton models as fixed points must allow for the increase / decrease in GSD and the addition / removal of fractional excitations in the process. Along these lines, it was found the X-cube model is a fixed point of a _foliated RG_ procedure.[9; 12; 13; 14] It is helpful to recall the conventional RG procedure for gapped phases[2; 3], which allows, in each RG step, for continuous deformations of the Hamiltonian that keep the gap open, and for the addition/removal of trivial gapped systems (those whose ground state is a product state). In the foliated RG, one also allows addition or removal of decoupled, gapped 2D systems. Such 2D systems can be topologically ordered and thus carry non-trivial GSD and fractional excitation types, hence allowing for these properties to change under RG. In the case of the X-cube model, we can remove 2D toric code layers under the foliated RG, thus making the model into a fixed point. More generally, a large class of type-I fracton models[6] - those where some of the fractional excitations are mobile - are fixed points of the foliated RG. The foliated RG leads to the closely related notion of _foliated fracton phases_.[10; 15] Foliated fracton phases, which we define in Appendix A, are a coarser equivalence relation on ground states than ordinary phases, and each foliated fracton phase contains a fixed point of the foliated RG. This fixed point captures certain universal properties that are the same everywhere in the foliated phase, and these properties are referred to as _foliated fracton order_. When a model belongs to a foliated fracton phase, it is a convenient shorthand terminology to refer to the model as being foliated. An interesting type-I fracton model that has not been investigated from this perspective is the Ising cage-net model.[16] The Ising cage-net model is very similar to the X-cube model in many ways. Both are exactly solvable models that can be obtained from a coupled layer construction, based on toric code layers in the X-cube case,[17; 18] and doubled-Ising string-net layers in the cage-net case.[16] Both have fracton excitations that are created at the corners of a rectangular membrane operator. Both have lineon excitations (abelian in the X-cube model and non-abelian in the cage-net model) that move in the \(x\), \(y\) and \(z\) directions. Both have other planon excitations that move in \(xy\), \(yz\) or \(zx\) planes. Despite these similarities, it has not been clear whether the Ising cage-net model is foliated in the sense defined above. It is important to emphasize that, while both involve a layer structure, the coupled-layer constructions of X-cube and cage-net models are very different from foliated RG and from the notion of foliated fracton phases. 
In particular, there is no obvious relationship between whether a model can be obtained by a coupled-layer construction and whether it is foliated. By analogy with the X-cube model, it is natural to guess that the Ising cage-net model is a foliated RG fixed point upon adding/removing doubled-Ising string-net layers. However, this cannot be the case, because the doubled-Ising string-net model contains non-abelian excitations with quantum dimension \(\sqrt{2}\), while the cage-net model has excitations with integer quantum dimension only.[16] While this argument does not rule out the possibility of a foliated RG fixed point with other 2D topological states as resources, in fact the Ising cage-net model is not foliated. This can be seen by studying the model's GSD, which has been computed by some of the authors in a separate paper.[19] It is found that the GSD does not grow by integer multiples when the system size grows by unity in the \(x\), \(y\) or \(z\) directions. The question is then open again: can we think of the Ising cage-net model as a fixed point of a suitably generalized RG? More specifically, can the foliated RG be generalized somehow to include the Ising cage-net model? In fact, we argue in this paper that the foliated RG _should_ be extended, independent of the Ising cage-net example. We do this by re-examining foliated RG from two complementary perspectives, one based on planon condensation, and the other based on quantum circuits, and point out that in both these pictures, the foliated RG has unnatural restrictions. These observations lead us to a generalized foliated RG under which, remarkably, the Ising cage-net model is a fixed point. The generalized foliated RG can be carried out either by condensing or uncondensing bosonic planon excitations supported near a 2D plane, or by acting with a quantum circuit, supported near a 2D plane, whose depth scales with the linear size of the plane. We show that either of these operations can be used to decrease or increase the system size of the Ising cage-net model, which is thus a generalized foliated RG fixed point. The two apparently different ways of carrying out the generalized foliated RG are closely related, through a connection that we explain between anyon condensation and a class of linear depth circuits that we refer to as _sequential circuits_. We note that the original foliated RG arises as a special case of the generalized procedure introduced here. In particular, for the X-cube model, instead of decoupling a toric code layer and removing it to decrease system size, we can condense the bosonic planon that effectively comes from the toric code layer (either \(e\) or \(m\)), which has the same effect as removing the layer. Alternatively, we can act with a certain linear-depth circuit (more specifically, a sequential circuit) whose effect is to condense the same bosonic planon. Therefore, we can use generalized foliation to study the X-cube model, the Ising cage-net model and many other type-I fracton models within a single framework. Just as foliated RG comes with the notion of foliated fracton phases and foliated fracton order, we expect that the generalized foliated RG comes with corresponding notions of generalized foliated Figure 1: Top: the foliated RG scheme, where a layer of topologically ordered state (shown in orange) can be added into or removed from a foliated fracton model via a finite depth circuit. 
Bottom: generalized foliated RG scheme realized by condensation of bosonic planons or a sequential linear depth circuit around the plane. fraction phases and generalized foliated fracton order. It will be interesting to study these notions in future work. The paper is structured as follows: In Sec. II, we review the original foliated RG by focusing on the X-cube model. In Sec. III, we review the Ising cage-net model, which is not foliated according to the original scheme. Section IV then briefly points out some unnatural restrictions within the original foliated RG, and proposes a generalized foliated RG where these restrictions are removed. In Sec. V, we show that the Ising cage-net model is foliated in terms of a generalized foliated RG defined by planon condensation. Then, in Sec. VI, we demonstrate that the generalized foliated RG can also be implemented by a planar linear depth circuit. The linear depth circuit has a special structure, and we dub it a sequential circuit; in Sec. VII we show how the sequential circuit we use is closely related to the condensation of planons via gapped boundaries. Finally, in Sec. VIII, we conclude with a brief discussion on the implications of and outlook for the generalized foliated RG. ## II Foliation in X-cube Before our discussion of the 'generalized foliation', it is instructive to review the original notion of foliation and see how the corresponding RG procedure is carried out for the X-cube. The X-cube model has a foliated structure, where layers of the toric code can be added to or removed from the X-cube via a finite depth circuit \(\mathcal{S}\). [9] Given an X-cube ground state \(|\Psi_{\text{X.C.}}\rangle\) of the system size \(L_{x}\times L_{y}\times L_{z}\) and a toric code ground state \(|\Psi_{\text{T.C.}}\rangle\), \(\mathcal{S}\) yields a \(|\Psi_{\text{X.C.}}\rangle\) of the size \(L_{x}\times L_{y}\times(L_{z}+1)\). In rest of this section, we review the finite depth circuit \(\mathcal{S}\) on the three-torus. Let us consider the X-cube Hamiltonian defined on a cubic lattice on the three-torus; and one copy of the toric code Hamiltonian defined on a square lattice on the two-torus. For both models, the local qubit DOFs are placed on the edges. The X-cube Hamiltonian [6] \[H_{\text{X.C.}}=-\sum_{v}\left(A_{v}^{x}+A_{v}^{y}+A_{v}^{z}\right)-\sum_{c}B _{c} \tag{1}\] contains three types of vertex terms \(A_{v}^{x}\), \(A_{v}^{y}\), and \(A_{v}^{z}\); and one type of cube term \(B_{c}\), as shown in Fig. 2. The toric code Hamiltonian [20] \[H_{\text{T.C.}}=-\sum_{v}Q_{v}-\sum_{p}B_{p} \tag{2}\] is a sum of local terms as shown in Fig. 3. To construct the circuit, we first insert a decoupled toric code into the X-cube. As depicted in Fig. 4, when the inserted toric code lies in the \(xy\)-plane, it bisects the \(z\)-direction edges in the X-cube model, thus creating new qubit edges \(k^{\prime}\) colored in orange. These new \(k^{\prime}\) edges are added to the system as product states whose Hamiltonian is chosen to be \(H_{0}=-\sum_{\{k^{\prime}\}}Z_{k^{\prime}}\). For each bisected edge \(i\) in the X-cube Hamiltonian, we substitute \(Z_{i}\to Z_{i^{\prime}}\) and \(X_{i}\to X_{i^{\prime}}\). The circuit \(\mathcal{S}\) is a product of two finite depth circuits \(\mathcal{S}_{2}\) and \(\mathcal{S}_{1}\), \(\mathcal{S}=\mathcal{S}_{2}\mathcal{S}_{1}\). Each is a product of the controlled-NOT (CNOT) gates. The circuit \(\mathcal{S}_{1}\) acts on the edges of the modified X-cube Hamiltonian, as shown in Fig. 4(a). 
Every CNOT gate in \(\mathcal{S}_{1}\) has an \(i^{\prime}\) edge serving as the controlled qubit and the corresponding \(k^{\prime}\) edge as the target. On the other hand, \(\mathcal{S}_{2}\) acts on both edges of the X-cube and those of the toric code. Every edge of the Figure 4: The insertion of a layer of toric code living on an \(xy\)-plane (blue colored square lattice) into a cubic lattice, which hosts the X-cube. The inserted layer bisects an edge \(i\) near the inserted plane into edges labeled by \(i^{\prime}\) and \(k^{\prime}\). For every bisected edge, the X-cube Hamiltonian is modified by replacing \(Z_{i}\to Z_{i^{\prime}}\) and \(X_{i}\to X_{i^{\prime}}\). The new edges \(k^{\prime}\) are product states with the Hamiltonian of \(H_{0}=-\sum_{\{k^{\prime}\}}Z_{k^{\prime}}\). Figure 3: (a) The vertex term \(Q_{v}\) in the toric code Hamiltonian. (b) The plaquette term \(B_{p}\). toric code serves as the controlled qubit for the CNOT gates whose targets are edges in the modified X-cube. An illustration of \(\mathcal{S}_{2}\) is given in Fig. 5b. The CNOT gate, acting by conjugation, has the actions of \[\begin{split} ZI\mapsto ZI,& IZ\leftrightarrow ZZ,\\ XI\leftrightarrow XX,& IX\mapsto IX,\end{split} \tag{3}\] where the first qubit is the control and the second is the target. All the CNOT gates in \(\mathcal{S}_{1}\) or \(\mathcal{S}_{2}\) commute with each other. Therefore, \(\mathcal{S}\) is a finite depth circuit. By direct computation, we see that \[\mathcal{S}\left(\tilde{H}_{\mathrm{X.C.}}^{(L_{x},L_{y},L_{z})}+H_{\mathrm{T. C.}}+H_{0}\right)\mathcal{S}^{\dagger}\cong H_{\mathrm{X.C.}}^{(L_{x},L_{y},L_{z}+1)}, \tag{4}\] where \(\tilde{H}_{\mathrm{X.C.}}\) is the modified X-cube Hamiltonian, and the symbol \(\cong\) denotes that the L.H.S. and the R.H.S. share the same ground space. ## III Ising cage-net In this section, we review the basic definition and properties of the Ising cage-net model. The Ising cage-net is an exactly solvable model obtained from the coupled layer construction [16], in which decoupled layers of the doubled-Ising string-net [21; 22; 23; 24; 25] are coupled together through the particle-loop (p-loop) condensation. Specifically, we take three stacks of the doubled-Ising string-net defined on a square-octagon lattice (see Fig. 6), and stack them together to form a truncated cubic lattice, as shown in Fig. 7. Each of the six faces of a cube is an octagonal plaquette. We call an edge \(l\), parallel to the \(\mu\)-direction for \(\mu\in\{x,y,z\}\), a \(\mu\)-principal edge, and denote it by \(l_{\mu}\). As a 2D lattice model, the doubled-Ising string-net is built from the Ising unitary modular tensor category [26; 27], which consists of an index set \(\{0,1,2\}\) and a set of symbols \((\delta_{ijk},d_{s},F_{kln}^{ijm},R_{l}^{ij})\). The model has a three-dimensional local Hilbert space of \(\mathrm{span}_{\mathbb{C}}\{\left|\left\langle 0\right\rangle,\left|1\right\rangle, \left|2\right\rangle\right\}\) for each edge of the square-octagon lattice. The states \(\left|0\right\rangle\), \(\left|1\right\rangle\), \(\left|2\right\rangle\) are dubbed as 0-string, 1-string, and 2-string respectively. The commuting projector Hamiltonian \[H_{\mathrm{D.I.}}=-\sum_{v}Q_{v}-\sum_{p}B_{p} \tag{5}\] consists of the vertex projector \(Q_{v}\) and the plaquette projector \(B_{p}=\sum_{s=0}^{2}(d_{s}/D)B_{p}^{s}\) (see Fig. 6). The symbol \(d_{s}\) takes values in \(d_{0}=d_{2}=1\), and \(d_{1}=\sqrt{2}\). 
\(D=\sum_{s}(d_{s})^{2}\) is _the total quantum dimension_ of the model. Figure 7: A truncated cubic lattice. It is formed by intersecting layers of the square-octagon lattice. Every cube has six octagonal faces. At the corners of each cube are octahedrons (see Fig. 9). The edges \(l\), parallel to \(\mu\) direction for \(\mu\in\{x,y,z\}\), are called the \(\mu\)-principal edges, which are denoted by \(l_{\mu}\). For the system of decoupled layers, a \(\mu\)-principal edge has a nine-dimensional local space given by the tensor product of \((\,\mathrm{span}_{\mathbb{C}}\{\left|0\right\rangle,\left|1\right\rangle, \left|2\right\rangle\})^{\otimes 2}\). Figure 5: An illustration of the finite depth circuit \(\mathcal{S}=\mathcal{S}_{2}\mathcal{S}_{1}\). (a) The action of the circuit \(\mathcal{S}_{1}\) when focus on an elementary cube of the original cubic lattice. The arrows, representing the CNOT gates, point from the controlled qubits to the targets. (b) \(\mathcal{S}_{2}\)’s action viewed at a cube. \(Q_{v}\)'s action is defined by \[Q_{v}\Bigg{|}\begin{array}{c}j\\ i\end{array}\Big{\rangle}=\delta_{ijk}\Bigg{|}\begin{array}{c}j\\ i\end{array}\Big{\rangle}, \tag{6}\] where the symbol \(\delta_{ijk}\) is symmetric under permutation of its indices. The non-zero elements are \(\delta_{000}=\delta_{011}=\delta_{211}=\delta_{022}=1\), up to permutations. The subspace where all the vertex terms \(Q_{\mu}\) are satisfied is called _the stable vertex subspace_\(\mathcal{H}_{Q_{\mu}}^{\mathrm{DL.25}}\). The plaquette operator \(B_{p}^{s}\)'s action are evaluated by the graphical rules, which are defined via the \(d\)- and \(F\)-symbols (Appendix B). \(B_{p}^{s}\) acts on a plaquette by fusing a loop of \(s\) into the edges as, for example, (7) For every ground state \(\ket{\Psi_{\mathrm{D.I.}}}\), which is a superposition of different configurations of closed loops satisfying \(Q_{v}\) at each vertex, \(B_{p}^{s}\) acts as \[B_{p}^{s}\ket{\Psi_{\mathrm{D.I.}}}=d_{s}\ket{\Psi_{\mathrm{D.I.}}}. \tag{8}\] Moreover, the \(B_{p}^{s}\) operators form a commutative fusion algebra of \[B_{p}^{i}B_{p}^{j}=\sum_{k=0}^{2}\delta_{ijk}B_{p}^{k}. \tag{9}\] The doubled-Ising string-net has nine topological excitations \(\{1,\psi,\psi,\sigma,\bar{\sigma},\sigma\psi,\psi\bar{\sigma},\sigma\bar{ \sigma},\psi\bar{\psi}\}\). In terms of the theory of anyons, these excitations come from a copy of the chiral Ising anyon \(\{1,\sigma,\psi\}\), and an anti-chiral copy \(\{1,\bar{\sigma},\bar{\psi}\}\). The fusion rules for the chiral Ising anyon are \[\begin{array}{c|ccc}\times&\mathbbm{1}&\sigma&\psi\\ \hline\mathbbm{1}&\mathbbm{1}&\sigma&\psi\\ \sigma&\sigma&\mathbbm{1}+\psi&\sigma\\ \psi&\psi&\sigma&\mathbbm{1}\end{array} \tag{10}\] The anti-chiral Ising anyon obeys the same fusion rules; we simply replace the anyon labels above with the barred version. Among the nine excitations, the non-abelian \(\sigma\bar{\sigma}\) and the abelian \(\psi\bar{\psi}\) are bosons. They are also the only non-trivial pure fluxon excitations. A _fluxon_ excitation violates exactly one \(B_{p}\) term and none of the \(Q_{v}\) terms. A fluxon string-operator \(W_{l}^{\mathrm{fluxon}}\) creates the fluxon and its anti-particle on the two adjacent plaquettes sharing the edge \(l\) (see Fig. 6). In particular, the \(\psi\bar{\psi}\) has a string-operator \[W_{l}^{\psi\bar{\psi}}=(-1)^{n_{1}(l)}, \tag{11}\] where \(n_{1}(l)=1\) if the edge \(l\) is in the state \(\ket{1}\), and \(n_{1}(l)=0\) otherwise. 
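As a small numerical sanity check on the data quoted above (written by us, not taken from the paper), the sketch below encodes the branching rules \(\delta_{ijk}\) and quantum dimensions \(d_{s}\) for the string labels \(0,1,2\), verifies the fusion-category identity \(d_{i}d_{j}=\sum_{k}\delta_{ijk}d_{k}\) and the total quantum dimension \(D=4\), and writes the \(\psi\bar{\psi}\) string operator of Eq. (11) as a diagonal matrix on a single edge.

```python
import itertools
import numpy as np

# Quantum dimensions of the string types 0, 1, 2.
d = {0: 1.0, 1: np.sqrt(2.0), 2: 1.0}

# Nonzero branching rules delta_ijk, stored as sorted label triples.
NONZERO = {(0, 0, 0), (0, 1, 1), (1, 1, 2), (0, 2, 2)}
def delta(i, j, k):
    return 1 if tuple(sorted((i, j, k))) in NONZERO else 0

# Check d_i d_j = sum_k delta_ijk d_k, e.g. d_1 * d_1 = d_0 + d_2 = 2.
for i, j in itertools.product(range(3), repeat=2):
    assert np.isclose(d[i] * d[j], sum(delta(i, j, k) * d[k] for k in range(3)))

# Total quantum dimension D = sum_s d_s^2 = 4.
D = sum(ds ** 2 for ds in d.values())
assert np.isclose(D, 4.0)

# The psi-psibar string operator of Eq. (11) is diagonal on one edge:
# +1 on |0> and |2>, and -1 on |1>.
W = np.diag([(-1) ** (s == 1) for s in range(3)]).astype(float)
assert np.allclose(W @ W, np.eye(3))  # it squares to the identity
print("checks passed; D =", D)
```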
To couple the stacks of the doubled-Ising string-net layers together, we condense the \(\psi\bar{\psi}\) p-loop. Illustrated in Fig. 8 is the smallest \(\psi\bar{\psi}\) p-loop created by the coupling operator \[V_{l_{\mu}}=W_{l_{\mu}}^{(\psi\bar{\psi})_{\mu\nu}}W_{l_{\mu}}^{(\psi\bar{\psi })_{\mu\rho}}, \tag{12}\] which is a product of \(\psi\bar{\psi}\) string-operators, from the \(\mu\nu\)- and \(\mu\rho\)-planes, acting on the edge \(l_{\mu}\). We add \(-V_{l_{\mu}}\) for every principal edge to the Hamiltonian of the decoupled layers. \(-V_{l_{\mu}}\) penalizes the presence of the states \(\ket{01}\), \(\ket{10}\), \(\ket{21}\), and \(\ket{12}\) on \(l_{\mu}\). Using the Brillouin-Wigner degenerate perturbation theory and treating doubled-Ising string-nets as perturbations, we arrive at the Ising cage-net. Hence, on a principal edge, the Ising cage-net has a five-dimensional local Hilbert space of \(\mathrm{span}_{\mathbb{C}}\{\ket{00},\ket{11},\ket{02},\ket{20},\ket{22}\}\). Other edges are unchanged. The Ising cage-net has a commuting Hamiltonian of \[H_{\mathrm{L.C.}}=-\sum_{\mu\nu,v}A_{v}^{\mu\nu}-\sum_{p_{e}}B_{p_{e}}-\sum_{p _{a}}\frac{1}{2}\left(B_{p_{a}}^{0}+B_{p_{a}}^{2}\right)-\sum_{c}B_{c}, \tag{13}\] where \(A_{v}^{\mu\nu}\) is the vertex projector in a \(\mu\nu\)-plane; \(B_{p_{e}}\) is the doubled-Ising string-net plaquette projector for a square plaquette; \(\frac{1}{2}\left(B_{p_{a}}^{0}+B_{p_{a}}^{2}\right)\) is a plaquette term associated with each octagonal plaquette \(p_{c}\); and \[B_{c}=\prod_{p_{a}\in c}\frac{\sqrt{2}}{2}B_{p_{a}}^{1} \tag{14}\] Figure 8: An elementary \(\psi\bar{\psi}\) particle-loop (p-loop), the red loop, created by the coupling operator \(V_{l_{\mu}}\) shown by the green tube. We represent a flux by a line segment normal to the hosting plaquette. Joining the segments together, we have the red loop. is the cube term. The vertex term acts as \[A_{v}^{\mu\nu}\left|\begin{array}{c}j\\ \end{array}\right|\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! with a finite-depth quantum circuit. The topological layer cannot itself be created with a finite-depth circuit from a product state. However, it is now well-understood that it can be created with a linear-depth circuit [28; 29]. 
Therefore, if we view foliated RG as a generalization of usual entanglement RG [2; 3], in which one is allowed to add ancillary degrees of freedom in a product state and then apply finite-depth circuits, moving to foliated RG corresponds to additionally allowing linear-depth circuits within a 2D subsystem of the 3D model. However, from this perspective, the current definition of foliated RG is restricted, in that we only allow the linear-depth circuit to act on the ancillae qubits and not on the 3D bulk. A more natural definition would be to allow the linear-depth circuit to act arbitrarily within a 2D layer on both the ancillae and the bulk. We remark that the kinds of linear-depth circuits involved here have a special structure that preserves the area law of entanglement, as discussed in more detail in Sec. VII. Second, we can also view foliated RG in terms of condensation. Namely, suppose we want to implement the inverse process of removing a single layer from the X-cube model, reducing its size in one direction. This can be achieved by condensing a planon within a single layer, corresponding to disentangling the toric code layer and then trivializing that layer by condensing a boson. In this case, the planon which we condense is very special: it can be viewed as being part of a 2D theory that is decoupled from the rest of the excitation spectrum of the 3D bulk. To be more general, if we allow condensation of planons in RG, we should allow condensation of arbitrary planons, not only those that are part of decoupled 2D theories. In light of the above, there are two natural ways to extend the notion of foliated RG: linear-depth circuits and planon condensation. In what follows, we will show that both approaches lead to a generalized foliated RG that is applicable to the Ising cage-net model. Then, in Sec. VII, we argue that these two approaches, while seemingly distinct, are in fact very closely related to each other. ## V RG via condensation How can the system size of the Ising cage-net model be increased / decreased? In this section, we show that it can be changed through condensation and uncondensation of bosonic planons. This is closely tied to the topic of anyon condensation in 2D systems, and we refer the reader to Ref. [30] and references therein for a review. Let us begin by considering the process of condensing planons in an \(xy\)-plane to decrease the system size in the \(z\) direction by one (Fig. 11). Recall from the last section that for each \(xy\)-plane there is a bosonic planon \(\psi\bar{\psi}\) which can be condensed. When \(\psi\bar{\psi}\) in plane \(z=0\) is condensed, the quasi-particle content of the model changes as follows: 1. Since \(\psi\bar{\psi}\) is the fracton dipole, fractons between planes \(z=0\) and \(z=1\) are identified with the corresponding fracton between planes \(z=-1\) and \(z=0\). 2. The planons \(\psi\) and \(\bar{\psi}\) on the \(z=0\) plane are identified. 3. The \(\sigma\bar{\sigma}\) planon on the \(z=0\) plane splits into two abelian bosonic planons \(e\) and \(m\) with a mutual \(-1\) braiding statistics. 4. The lineons in the \(z=0\) plane composed of \(\sigma_{xy}\sigma_{xz}\), \(\bar{\sigma}_{xy}\sigma_{xz}\), \(\sigma_{xy}\bar{\sigma}_{xz}\), and \(\bar{\sigma}_{xy}\bar{\sigma}_{xz}\) are all confined. 5. Planons and lineons on other planes are unchanged. After this step, we can further condense either \(e\) or \(m\). This gets rid of the remaining planons on the \(z=0\) plane without affecting other quasi-particle excitations. 
Now, we see that the quasi-particle content of the model is the same as that of an Ising cage-net model with the \(z=0\) plane removed. The planons and lineons on planes other than \(z=0\) are left intact. Moreover, the fracton between \(z=0\) and \(z=1\), which is now identified with the fracton between \(z=-1\) and \(z=0\), becomes the new fracton between \(z=-1\) and \(z=1\). Therefore, the size of the Ising cage-net model can be decreased by one in the \(z\) direction by first condensing the \(\psi\bar{\psi}\) planon in a plane, and then by condensing one of the split channels of the \(\sigma\bar{\sigma}\) planon on the same plane. We see that if we allow condensation of bosonic planons as a RG operation, we obtain a generalized foliated RG under which the Ising cage-net model is a fixed point. As noted in Sec. IV, the original foliated RG for the X-cube model can also be viewed in terms of such condensation. The condensation of planons is, of course, a singular process where the bulk gap needs to close and then reopen, corresponding to a phase transition between Figure 11: An illustration of the relevant \(xy\)-planes of a \(L_{x}\times L_{y}\times L_{z}\) Ising cage-net. Via the condensation process described in the text, we remove the \(z=0\) plane and obtain a \(L_{x}\times L_{y}\times(L_{z}-1)\) Ising cage-net. different standard phases (see Appendix A for the definition of standard phases). This means that, similar to the original foliated RG, the generalized foliated RG operations can move across certain phase boundaries. However, only certain phase boundaries can be crossed; the singularity involved in planon condensation is localized to a selected plane and is hence a "subsystem" singularity, not one in the full 3D bulk. A useful way to think about the condensation process is to use the fact that the Ising cage-net model can be obtained by gauging the planar \(Z_{2}\) symmetries of a subsystem symmetry protected topological (SSPT) model protected by such symmetries [31]. The planons being condensed correspond to the symmetry charges of the planar symmetries in the SSPT model. Hence the condensation of the planons in a given plane corresponds to breaking / removing that planar symmetry and reducing the size of the model. On the other hand, if we want to increase the size of the system by adding a plane at \(z=0\), we need to add the planar symmetry and the corresponding planar state back to the SSPT model and're-gauge' the planar symmetry. ## VI RG via planar linear depth circuit The planar linear depth circuit we construct for the Ising cage-net model is a direct generalization of a RG scheme that maps product states to ground states of a string-net model, introduced by Liu Y. _et al._[29]. In Sec. VI.1, we review this RG procedure for the string-net models. We describe carefully an initialization step that is nontrivial for non-abelian string-net models, which was not discussed in detail in Ref. [29]. In Sec. VI.2, we describe the RG scheme as a linear depth circuit for the Ising cage-net model. We will see that the initialization step is also important and nontrivial. ### String-net RG In this section, we will first describe an important step in the RG procedure - the 'controlled gate' which adds a plaquette to the string-net wave-function. After that, we will describe the full RG procedure starting from the string-net wave-function on the minimal lattice on a torus and then adding plaquetes row by row. A brief review of the string-net models is given in Appendix B.1. 
#### vi.1.1 Adding plaquettes via the controlled gate The controlled gate can be used to add a plaquette to the string-net wave-function. We present the definition and properties of the gate in this sub-section. Computational details of the results discussed here can be found in Appendix D. Suppose that on a trivalent lattice, a plaquette is added by adding an edge (the red edge in the diagrams below), and we want to extend the string-net wave-function from the original lattice to that including this new plaquette. When the edge is added, it is not entangled with the rest of the lattice and is in the state \(|0\rangle\). To merge the added edge into the lattice, first, map it to \(\sum_{s}\frac{d_{s}}{\sqrt{D}}|s\rangle\) where \(D\) is the total quantum dimension of the string-net. \[|0\rangle\mapsto\sum_{s}\frac{d_{s}}{\sqrt{D}}|s\rangle \tag{18}\] Then, we use this edge as the control to draw loops around the added plaquette. More specifically, we can represent the controlled gate \(G_{p}=\sum_{s}G_{p}^{s}\) graphically as in Eq. (19). The action of \(G_{p}^{s}\) is similar to the action of \(B_{p}^{s}\) which adds a loop \(s\) to a plaquette, but for the graphical evaluation of \(G_{p}^{s}\), we treat the control edge as if it is in the state \(|0\rangle\), i.e. \[G_{p}^{s}\] (19) \[=\delta_{ss^{\prime}}\sum_{\begin{subarray}{c}\alpha,\beta, \gamma,\\ \delta,\varepsilon,\eta,\tau\end{subarray}}F_{ss^{\prime}\alpha}^{\ell_{1} \ell_{1}0}F_{ss^{\prime}\alpha}^{\bar{\ell}_{1}a\ell_{1}}F_{ss^{\prime}\gamma }^{\bar{\ell}_{2}\ell_{2}a^{*}}F_{s\gamma^{\prime}\delta}^{\bar{\ell}_{2}b \ell_{2}}F_{s\delta\delta^{*}\varepsilon}^{\bar{\ell}_{3}\ell_{3}b^{*}}F_{s \delta^{*}\varepsilon}^{\bar{\ell}_{3}\ell_{3}}F_{s\eta^{*}\tau}^{\bar{\ell}_{4 }\epsilon\epsilon^{*}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\ where the red line with an arrow marks the control edge. We carry out the explicit graphical evaluation in Appendix D.1. Note that \(G_{p}^{s}\) can be defined on any polygonal plaquette. \(G_{p}^{s}\) is not a unitary on the full Hilbert space, but only between subspaces. More specifically, it is an isometry from \(\mathcal{V}_{p,s}^{\text{SN}}\) to \(\mathcal{H}_{p,s}^{\text{SN}}\), both of which involve the DOF around a plaquette \(p\). 
In \(\mathcal{V}_{p,s}^{\text{SN}}\), the control edge is set to \(\ket{s}\) while the other edges come from the string-net wave-function on the lattice with the control edge missing (pretending that it is set to \(\ket{0}\)). The vertices containing the control edge then involve configurations in which the \(s\)-string on the control edge meets the edges of the original lattice; the corresponding graphical configurations, together with the target subspace \(\mathcal{H}_{p,s}^{\text{SN}}\) and the explicit evaluation of \(G_{p}^{s}\) on them, are worked out in Appendix D.

The full RG procedure for generating the string-net wave-function consists of two steps. Step 1 is the initialization step: three edges around a vertex are mapped into one of the ground states of the string-net on the minimal lattice on the torus, and this minimal structure is then grown to the full extent of the lattice by copying the string states along the non-contractible loops (Fig. 12); this step has linear depth. Step 2 is also of linear depth. The minimal lattice has only one plaquette. In step 2, we add more plaquettes to the lattice using the controlled gate introduced in Sec. VI.1.1. The plaquettes cannot be added all at once, because the controlled gates commute only when they do not act on each other's control edge. A linear depth circuit is hence needed to add all the plaquettes to the square-octagon lattice. A particular sequence for adding these plaquettes is shown in Fig. 13. Firstly, all the square plaquettes (red circles) can be added at the same time because they do not overlap with each other. The small circle indicates the control edge while the big circle indicates the action of \(G_{p}^{s}\). Secondly, we add the octagon plaquettes in row one (labeled (1) in Fig. 13). All controlled gates in row one commute with each other, so they can be added in one step. Then we add row two, row three, etc., until the next-to-last row (labeled \((L_{y}-1)\) in Fig. 13). For the last row, we need to choose the control edges sideways, because we need un-entangled edges to serve as control edges. Due to this change, the plaquettes in the last row need to be added sequentially, as the controlled gates no longer commute. As shown in the figure, we can add them in the order of (green labels) (1), (2),..., \((L_{x}-1)\). We do not need to act on the last plaquette (labeled \(\tilde{p}\)), as the constraint due to the last plaquette is already implied by that of the largest plaquette that we started from, combined with all the small plaquettes added so far. Therefore, at this point, we have finished the linear depth RG procedure that starts from a product state and maps it to the string-net wave-function on the square-octagon lattice.

### Ising cage-net

In this section, we use the controlled gate of Eq. (19) to build up the RG circuit that enlarges an Ising cage-net ground state on the three-torus by one layer. We will start, in Sec. VI.2.1, by introducing finite depth circuits that grow cages on the cage-net ground state. They serve as the building blocks of the full planar linear depth RG circuit, which we discuss in Sec. VI.2.2.
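Before the cage-growing steps, it may help to see the controlled gate of Eqs. (18) and (19) in its simplest, abelian incarnation. For the \(\mathbb{Z}_{2}\) string-net (the toric code) there are only two string types with \(d_{0}=d_{1}=1\) and trivial \(F\)-symbols, so the map of Eq. (18) reduces to a Hadamard on the new edge and \(G_{p}\) reduces to CNOTs from that edge onto the other edges of the plaquette. The sketch below, our own toy illustration rather than the doubled-Ising gate itself, verifies this for a single square plaquette.

```python
# Z2 (toric-code) analogue of the controlled gate: Hadamard on the new edge,
# then CNOTs from it onto the other three edges of a square plaquette.
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2.)

def op_on(single, site, n):
    """Embed a single-qubit operator on `site` of an n-qubit register."""
    out = np.array([[1.]])
    for q in range(n):
        out = np.kron(out, single if q == site else I2)
    return out

def cnot(control, target, n):
    """CNOT as P0(control) + P1(control) X(target)."""
    P0 = np.array([[1., 0.], [0., 0.]])
    P1 = np.array([[0., 0.], [0., 1.]])
    return op_on(P0, control, n) + op_on(P1, control, n) @ op_on(X, target, n)

n = 4                                   # four edges of one square plaquette
psi = np.zeros(2 ** n); psi[0] = 1.0    # all edges in |0> (no strings)

psi = op_on(H, 0, n) @ psi              # Eq. (18) with d_0 = d_1 = 1, D = 2
for target in (1, 2, 3):                # G_p: draw the 1-loop controlled on edge 0
    psi = cnot(0, target, n) @ psi

# Resulting state: (|0000> + |1111>)/sqrt(2), an eigenstate of B_p = XXXX.
Bp = op_on(X, 0, n) @ op_on(X, 1, n) @ op_on(X, 2, n) @ op_on(X, 3, n)
assert np.allclose(Bp @ psi, psi)
print(np.round(psi, 3))                 # amplitude 0.707 on |0000> and |1111>
```

For the doubled-Ising case the same structure holds, except that the controlled loop is fused into the surrounding edges with the \(F\)-symbols of Eq. (19) instead of simple bit flips.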
#### vi.2.1 Adding cages via the controlled gate In 2D, we have seen that a plaquette can be added to the string-net wave function, via the controlled gates, after an edge is added to the lattice. We can extend this procedure to 3D cage-net states. Suppose that we start with the Ising cage-net ground state on the truncated cubic lattice (Fig. 7) and add a plane in the \(xy\) direction. At each point where the added plane bisects the \(z\) direction edges, an octahedron is added, as shown in Fig. 14, to ensure the trivalent structure in each of the coupled planes. In the added plane, octagonal plaquettes fill in the space between the octabndrons. Every edge of the added octahedrons carries a three dimensional Hilbert space spanned by \(\{|0\rangle,|1\rangle,|2\rangle\}\). We start with these edges all set to the state \(|0\rangle\). The principal edges on the octagons each carry a five dimensional Hilbert space spanned by \(\{|00\rangle,|02\rangle,|20\rangle,|22\rangle,|11\rangle\}\), which is a subspace of the tensor product Hilbert space of two three dimensional DOFs \(\{|0\rangle,|1\rangle,|2\rangle\}\otimes\{|0\rangle,|1\rangle,|2\rangle\}\) that come from the two intersecting planes. We start with these principal edges in the state \(|00\rangle\). Figure 12: The initialization step in the RG circuit for generating the string-net wave-function. Left: pick three edges around a vertex and map them into one of the ground states of the string-net on the minimal lattice. Right: grow the minimal structure by copying the string states \(|i\rangle\) and \(|j\rangle\) along non-contractible loops so that they reach the full extent of the lattice. Figure 13: Adding loops to plaquettes in step 2 of the RG circuit for generating the string-net wavefunction. The state has been initialized into one of the ground states on the minimal lattice (black lines). First, loops are added to the square plaquettes (shown in red) in a single step. Then, loops are added to octagon plaquettes in row (1), (2),... \((l_{y}-1)\) sequentially. For the last row, loops are added to octagon plaquette in column (1), (2),...., \((L_{x}-1)\) sequentially. No action is needed in the last plaquette \(\tilde{p}\). We describe first the process to add one cube into the new layer, which consists of two steps: 1. add the octahedrons to the cage-net wave-function; 2. grow a cage structure in the upper truncated cube of Fig. 14. In step one, we first need to copy the state of the bisected \(z\)-principal edge onto some of the octahedron edges so that the vertex rules are satisfied at the octahedrons' vertices. Suppose the bisected edge is in the state \(|xy\rangle\). The copying process can be achieved with the controlled gates \(\sum_{xy}|xy\rangle\langle xy|\otimes|x\rangle\langle 0|\) and \(\sum_{xy}|xy\rangle\langle xy|\otimes|y\rangle\langle 0|\) as indicated by the blue and green arrows in Fig. 15. Then, we add the square plaquettes to the cage-net wave-function. This can be done as described in the previous section on how to add a square plaquette to the doubled-Ising string-net wave function, as the square plaquettes remain unaffected when the doubled-Ising layers are coupled into Ising cage-net. More specifically, for each square plaquette, we pick an edge in the state \(|0\rangle\) as the control edge, map it to \(\sum_{s}\frac{d_{s}}{\sqrt{D}}|s\rangle\), and use it as the control in the controlled gate \(G_{p}\) that adds loops into the plaquette. Step 2, which adds a cage structure to the cube, is more complicated. 
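The 'copying' gates used in step one can be written down explicitly for a single principal edge and one new octahedron edge. The sketch below (an illustrative check of ours, not the authors' code) builds \(V=\sum_{xy}|xy\rangle\langle xy|\otimes|x\rangle\langle 0|\) on \(\mathbb{C}^{5}\otimes\mathbb{C}^{3}\), with the five-dimensional principal-edge basis \(\{|00\rangle,|11\rangle,|02\rangle,|20\rangle,|22\rangle\}\), and checks that \(V^{\dagger}V\) is the projector onto the subspace where the new edge is in \(\ket{0}\), so the gate is an isometry exactly on the states it is meant to act on.

```python
# The copy gate V = sum_{xy} |xy><xy| (x) |x><0| on a principal edge (C^5)
# tensored with one new octahedron edge (C^3).
import numpy as np

principal_basis = [(0, 0), (1, 1), (0, 2), (2, 0), (2, 2)]   # allowed |xy> states

def ket(dim, i):
    v = np.zeros(dim); v[i] = 1.0
    return v

V = np.zeros((5 * 3, 5 * 3))
for idx, (x, y) in enumerate(principal_basis):
    # |xy>|0>  ->  |xy>|x>  (copy the first label x onto the new edge)
    V += np.outer(np.kron(ket(5, idx), ket(3, x)),
                  np.kron(ket(5, idx), ket(3, 0)))

# V^dagger V equals the projector onto new-edge state |0>, so V is an isometry
# on the subspace where the new octahedron edge is initialized in |0>.
P0 = np.kron(np.eye(5), np.outer(ket(3, 0), ket(3, 0)))
assert np.allclose(V.T @ V, P0)

# On that subspace, V acts as the stated copy, e.g. |20>|0> -> |20>|2>.
src = np.kron(ket(5, principal_basis.index((2, 0))), ket(3, 0))
dst = np.kron(ket(5, principal_basis.index((2, 0))), ket(3, 2))
assert np.allclose(V @ src, dst)
print("copy gate is an isometry on the new-edge |0> subspace")
```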
As shown in Fig. 16, first we add loops to the bottom and top faces and then to the side faces. More specifically, first we pick a principal edge on the bottom face in the state \(|00\rangle\) as the control. We will use the convention where the first \(|0\rangle\) comes from the \(xy\) plane while the second \(|0\rangle\) comes from the vertical \(xz\) and \(yz\) planes. Map the control edge as \[|00\rangle\mapsto\sum_{s}\frac{d_{s}}{\sqrt{D}}\left|s0\right\rangle, \tag{26}\] Note that this takes the controlled edge out of the five dimensional subspace of \(\{|00\rangle,|02\rangle,|20\rangle,|22\rangle,|11\rangle\}\) but keeps it in the nine dimensional space of \(\{|0\rangle,|1\rangle,|2\rangle\}^{\otimes 2}\). This will also happen to other principal edges as we implement the procedure, but at the end of the process of growing a cube, all principal edges will be back to the five dimensional subspace. Now, using the \(|s\rangle\) state as the control, apply the controlled gate to the bottom face \(p_{b}\) and top face \(p_{t}\) as \[G_{p_{b}}^{0}+G_{p_{b}}^{2}+\frac{1}{\sqrt{2}}G_{p_{b}}^{1}B_{p_{t}}^{1} \tag{27}\] as shown in Fig. 16 (a). Note that \(G_{p_{b}}^{s}\) and \(B_{p_{t}}^{s}\) act on the first part of the principal edges (the part that comes from horizontal planes). After these controlled gates, the projector on the control edge \(|0\rangle\langle 0|\) (the first part) gets mapped to \[\begin{split}\left(|0\rangle\langle 0|\right)_{\rm ct}& \mapsto\sum_{ss^{\prime}}\frac{d_{s}d_{s^{\prime}}}{D}\left(|s \rangle\langle s^{\prime}|\right)_{\rm ct}\\ &\mapsto B_{p_{b}}^{0}+B_{p_{b}}^{2}+B_{p_{b}}^{1}B_{p_{t}}^{1}, \end{split} \tag{28}\] where in deriving the last line, we used the fact that the top face is part of the original cage-net wave-function Figure 16: Growing a cage structure in an added cube. (a) First, using an edge from the bottom face (colored green) as control, add loops to the bottom and top faces, (b) then use the edges on the side faces (colored green) as control to add loops to the side face. Figure 14: Insertion of an \(xy\)-plane bisects a cube in the original cage-net lattice into two cubes. Each intersection point between the \(xy\)-plane and the \(z\)-principal edges is expanded into an octahedron to preserve the trivalent structure in the \(xy\), \(yz\) and \(zx\) planes. Figure 15: ‘Copying’ the states on the bisected \(z\)-principal edges onto edges of the added octahedron to satisfy vertex rules in the \(xz\) and \(yz\) planes. The copying process can be performed by controlled gates of the form \(\sum_{xy}|xy\rangle\langle xy|\otimes|x\rangle\langle 0|\) and \(\sum_{xy}|xy\rangle\langle xy|\otimes|y\rangle\langle 0|\), indicated by the arrows pointing from the control to the target. and \(B^{0}_{p_{t}}=B^{2}_{p_{t}}=1\). Note that it might seem that the operator in Eq. (27) is not unitary as \(B^{1}_{p}\) is not. But since \(B^{1}_{p_{t}}{B^{1}_{p_{t}}}=B^{0}_{p_{t}}+B^{2}_{p_{t}}=2\), the action of the operator restricted to the ground space of the original cage-net model is indeed unitary. Next, we need to add loops to the side faces. To do this, we take the principal edges on the bottom face, which are now in the states \(\ket{s0}\) and send them to \(\ket{s\alpha_{s}}\), where \(\alpha_{s}\) comes from the \(xz\) or \(yz\) planes and \(\alpha_{s}=0\) if \(s\) is even, \(\alpha_{s}=1\) if \(s\) is odd. This brings the principal edges on the bottom face back to the five dimensional Hilbert space. 
Then map the \(\ket{\alpha_{s}}\) states to \[\ket{0}\mapsto\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{2}\right),\ \ket{1}\mapsto\ket{1} \tag{29}\] Use the \(\ket{\alpha_{s}}\) states as the control to draw loop on the side faces by applying \(\sum_{\alpha_{s}}G^{\alpha_{s}}_{p_{s}}\) as shown in Fig. 16 (b) to each side face. Let us see how the Hamiltonian terms in Eq. (28) transforms. We show the step by step calculation for the third term \(B^{1}_{p_{t}}B^{1}_{p_{t}}\). The \(B^{1}_{p_{t}}\) part is not affected by the transformation and will be omitted from the following equation. Let us focus on the transformation induced by on principal edge. We label the two three-dimensional DOFs on the principal edge as \(1\) and \(2\) respectively, where \(1\) comes from the bottom face whose state is labeled by \(s\) and \(2\) comes from the side face whose state is labeled by \(\alpha_{s}\). \[\begin{split}&\left[(P^{0}_{1}+P^{2}_{1})B^{1}_{p_{b}}P^{1}_{1}+P ^{1}_{1}B^{1}_{p_{b}}(P^{0}_{1}+P^{2}_{1})\right]\otimes\left(\ket{0}\bra{0} \right)_{2}\\ \mapsto&\frac{1}{\sqrt{2}}(P^{0}_{1}+P^{2}_{1})B^{1}_ {p_{b}}P^{1}_{1}\otimes\left(\ket{0}_{2}+\ket{2}\right)_{2}{}_{2}\langle 1 |\\ &+\frac{1}{\sqrt{2}}P^{1}_{1}B^{1}_{p_{b}}(P^{0}_{1}+P^{2}_{1}) \otimes\left|1\right\rangle_{2}\left({}_{2}\langle 0|+{}_{2}\langle 2|\right) \\ \mapsto&\frac{1}{\sqrt{2}}(P^{0}_{1}+P^{2}_{1})B^{1}_ {p_{b}}P^{1}_{1}\otimes\left(P^{0}_{2}+P^{2}_{2}\right)B^{1}_{p_{s}}P^{1}_{2} \\ &+\frac{1}{\sqrt{2}}P^{1}_{1}B^{1}_{p_{b}}(P^{0}_{1}+P^{2}_{1}) \otimes P^{1}_{2}B^{1}_{p_{s}}\left(P^{0}_{2}+P^{2}_{2}\right)\end{split} \tag{30}\] The result is the product of \(B^{1}_{p_{b}}\) and \(B^{1}_{p_{s}}\) projected onto the five dimensional subspace of the principal edge, as promised. This works for all side faces. Similar calculations can be carried out for the first two terms in Eq. (28). If we put everything together and omit the projection onto the five dimensional subspace of the principal edges, we see the Hamiltonian terms in Eq. (28) becomes \[\left(B^{0}_{p_{b}}+B^{2}_{p_{b}}\right)\prod_{p_{s}}\left(B^{0}_{p_{s}}+B^{2 }_{p_{s}}\right)+B^{1}_{p_{b}}B^{1}_{p_{t}}\prod_{p_{s}}B^{1}_{p_{s}}, \tag{31}\] which is a sum over the desired plaquette terms on the bottom and side faces as well as the cube term on the cube. In the RG circuit to be discussed in the next section, we need to grow cubes in the same row at the same time. This works in a similar way as growing a single cube and we describe the procedure here. First, as shown in Fig. 17 which illustrates the situation with two cubes in the row, a new plane is added which bisects the row of cubes into two. Octahedrons are added to the intersection points to preserve the trivalent structure in the coupled \(xy\), \(yz\) and \(zx\) planes. The 'copying' process illustrated in Fig. 15 is then used to restore vertex rules at the vertices of the octahedrons and then the square plaquettes in the octahedrons are added to the cage-net wave-function. Figure 17: Adding a row of cubes to the cage-net state, step \(1\): the inserted \(xy\)-plane bisects the cubes into two; octahedrons are added at the intersection point. Figure 18: Adding a row of cubes to the cage-net state, step \(2\): (a) first, we simultaneously add loops to the bottom and the top faces of all cubes in the row; (b), use the edges on the side face (colored green) as control to add loops to all the side faces at the same time. The next step is illustrated in Fig. 
18, which adds cage structures to a whole row of cubes at the same time. This is done by first picking the principal edges in, for example, the \(x\) direction and using them as controls to add loops in the bottom and top faces, as described above, for each cube in the row (Fig. 18 (a)). The operations in each cube commute with those in another cube, and hence they can be done all at the same time. Next, loops are added to the side faces using the principal edges on the bottom face as control, as shown in Fig. 18 (b). Again, the operations on each side face commute with each other, so they can be done at the same time. As a result of this process, all the cubes in the row are now added to the cage-net wavefunction. Note that the process illustrated in Fig. 18 applies to the first row in the added plane. When we try to add subsequent rows, some of the side faces would have been added to the cage-net state already. Those side faces can be treated in the same way as the top face. That is, apply \(B_{p_{s}}^{1}\) in the step of Fig. 18 (a) when the \(x\)-principal edge is in the state \(\ket{10}\), instead of applying \(\sum_{\alpha_{s}}G_{p_{s}}^{\alpha_{s}}\) controlled by the bottom principal edge of the side face in the state \(\ket{s\alpha_{s}}\). A similar procedure applies to the cubes in the last row of the added plane as well, which have to be added one by one.

#### VI.2.2 RG circuit - Ising cage-net

The processes for adding single cubes and a row of cubes are building blocks for the full RG circuit that adds a full plane to the cage-net state. Similar to the case of the doubled-Ising, we first need to initialize the added plane into proper eigenstates of the non-local logical operators before adding the local structures of cubic cages (plaquettes in the case of doubled-Ising). A commuting set of logical operators of the Ising cage-net ground space can be chosen to be generated by the string-operators of \(\psi,\bar{\psi}\) planons in each \(\mu\nu\) plane along the \(\mu\) and \(\nu\) directions respectively. We can choose the original cage-net state (before adding the plane) to be an eigenstate of all such logical operators. The added \(xy\) plane can be initialized into an eigenstate of \(\psi^{x}\), \(\psi^{y}\), \(\bar{\psi}^{x}\) and \(\bar{\psi}^{y}\) on that plane. The circuit described in the last section on how to add cubic cages and plaquette terms to the wave-function does not affect these nonlocal logical operators. Therefore, the resulting cage-net state after the RG circuit remains an eigenstate of all the \(\psi,\bar{\psi}\) logical operators. But the choice of the eigenvalue for the \(\psi,\bar{\psi}\) logical operators is not arbitrary, as the operators are related to each other and hence their eigenvalues are constrained. In Ref. [19], we study carefully the relations among these operators, which allowed us to derive the ground state degeneracy of the Ising cage-net model. The relations are listed below. For derivation, see the discussion in section VII of Ref. [19].
For \(\{\mu,\nu,\lambda\}=\{x,y,z\}\) \[\prod_{i}\left(\psi\bar{\psi}\right)_{\mu\lambda}^{\mu}(\nu=i) \prod_{j}\left(\psi\bar{\psi}\right)_{\nu\lambda}^{\nu}(\mu=i)=1\] \[r_{\mu\nu}(\lambda=i)\bar{r}_{\mu\nu}(\lambda=i)=1,\forall i, \forall\{\mu,\nu\}\] \[r_{\mu\nu}(\lambda=i)r_{\mu\nu}(\lambda=i+1)=1,\forall i,\forall \{\mu,\nu\} \tag{32}\] where \(r_{\mu\nu}=\frac{1}{2}\left(1+\psi_{\mu\nu}^{\mu}+\psi_{\mu\nu}^{\nu}-\psi_{ \mu\nu}^{\mu}\psi_{\mu\nu}^{\nu}\right)\), \(\bar{r}_{\mu\nu}=\frac{1}{2}\left(1+\bar{\psi}_{\mu\nu}^{\mu}+\bar{\psi}_{\mu \nu}^{\nu}-\bar{\psi}_{\mu\nu}^{\mu}\bar{\psi}_{\mu\nu}^{\nu}\right)\). As we started from a ground state of the cage-net model, the original set of \(\psi,\bar{\psi}\) operators satisfy the relations in Eq. (32). When we add a new \(xy\)-plane, we need to make sure that after the new \(\psi_{xy}^{x}\), \(\psi_{xy}^{y}\), \(\bar{\psi}_{xy}^{x}\), \(\bar{\psi}_{xy}^{y}\) operators are added to the original set, the total set still satisfy the relations in Eq. (32). This can be guaranteed when the added string-operators satisfy \[\psi_{xy}^{x}\bar{\psi}_{xy}^{x}=1,\ \psi_{xy}^{y}\bar{\psi}_{xy}^{y}=1 \tag{33}\] \[r_{xy}=\bar{r}_{xy}=\pm 1 \tag{34}\] The choice of \(\pm 1\) in the last relation depends on whether \(r_{xy}(z=i)=1\) or \(-1\) in the original set. Compared to the eigenstates listed in Appendix C.1, \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{1}\), \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{5}\), \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{9}\) satisfy the relations in Eq. (33) and \(r_{xy}=1\) while \(|\psi\bar{\psi}_{\rm min}^{\rm D.I.}\rangle\) satisfies the relations in Eq. (33) and \(r_{xy}=-1\). Therefore, we can initialize the added layer into one of these states. In particular, consider the added \(xy\)-plane in Fig. 19. Each red ball represents an octahedron. The added DOF are initially set to be either in state \(\ket{0}\) (on edges of the octahedron) or \(\ket{00}\) (on principal edges). Now initialize the trivalent lattice in the \(xy\)-plane into one of \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{1}\), \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{5}\), \(|\Psi_{\rm min}^{\rm D.I.}\rangle_{9}\) and \(|\psi\bar{\psi}_{\rm min}^{\rm D.I.}\rangle\) following the procedure described in Fig. 12. This linear depth process set up the stage for the next step of the RG circuit: adding cage structures to the cubes. Figure 19: Inserting an \(xy\)-plane into the original cage-net lattice. Each red ball represents an octahedron. The new principal edges are shown in blue. Now we can use the procedure described in the last section to add cage structures to the cubes. As shown in Fig. 20, on top of the minimal structure set up in the initialization step (red lines), cage structures are added to the cubes in the 1st row, the 2nd row,... the \((L_{y}-1)\)th row in each step. In the last row, cage structures are added to the cube in the 1st column, 2nd column,..., \((L_{x}-1)\)th column in each step. No action is required in the last cube. This process has depth \(\sim(L_{x}+L_{y})\) and completes the addition of a new layer into the cage-net wave-function. ## VII Relating condensation and linear-depth circuits via gapped boundaries ### General discussion In Sec. V, we discussed the RG process in terms of condensation of planons. In Sec. VI, we discussed the RG process in terms of a linear depth circuit. In this section, we show that these two are closely related to each other by understanding each in terms of gapped boundaries. 
We first consider a gapped boundary between a 2D topological order and vacuum. If an excitation moves from the bulk to the boundary, it may become trivial in the sense that it can be destroyed by a local operator on the boundary. This phenomenon is referred to as condensation at the boundary. On the other hand, some excitations remain non-trivial as they approach the boundary. These phenomena can be characterized precisely in a category-theoretic language [32; 33; 34; 35]; in the abelian case, this amounts to specifying a maximal subset of bosons that can simultaneously condense at the boundary [36; 37; 38; 39]. It is believed the universality class of a gapped boundary is fully determined by its category-theoretic characterization. The above discussion allows us to _define_ distinct types of anyon condensation (to vacuum) in a precise way, as distinct types of gapped boundaries (to vacuum). Such a definition is natural if we view the vacuum as a condensate of certain anyons in the 2D topological order. For instance, creating a puddle of anyon condensate within the bulk 2D topological order amounts to creating a puddle of trivial state (vacuum) separated from the bulk by a gapped boundary. This discussion, and the definition of anyon condensation in terms of gapped boundaries, can be generalized to gapped boundaries between arbitrary 2D topological orders. In the context of generalized foliated RG, we consider condensation of planons. Condensation of a single planon can similarly be associated with - and defined in terms of - certain gapped boundaries between two fracton orders, with the property that the boundary should be transparent to mobile excitations away from the selected plane where the condensation occurs. It will be an interesting problem for future work to fully characterize those boundaries between fracton phases that correspond to planon condensation. We note that there has been some related prior work discussing gapped boundaries of fracton models in terms of condensation [40; 41]. It turns out that the kind of linear-depth circuits considered here can also be associated with a type of gapped boundary. A linear depth circuit has the general form \(\mathcal{U}=\prod_{\ell=1}^{K}U_{\ell}\) where each layer \(U_{\ell}\) consists of a number of local unitary gates with non-overlapping support, and the number of layers \(K\) is proportional to the linear system size \(L\). In general, \(U_{\ell}\) can contain gates acting across the entire system. However, for the circuits we employed for RG, each layer \(U_{\ell}\) only contains gates acting in a lower dimensional subsystem of the entire system, such as the rows in Figs. 13 and 20. Such circuits are much more restrictive than generic dense linear-depth circuits, particularly because they preserve the area law when acting on a state. We call this class of circuits _sequential circuits_. Again we first focus on the 2D case, where as we have discussed, sequential circuits can be used to generate topologically ordered ground states from an initial product state (the topological "vacuum"). In order to avoid complications associated with periodic boundary conditions, we make a simplification as compared to the circuits discussed in Sec. VI; namely, we work with an infinite system and consider circuits that generate a disc Figure 20: Adding cage structures to the cubes in step 2 of the RG circuit for the cage-net state. The red lines indicate the minimal lattice state determined by the initialization step. 
Cage structures are added to the cubes in the 1st row, the 2nd row,... the \((L_{y}-1)\)th row in each step. In the last row, cage structures are added to the cube in the 1st column, 2nd column,..., \((L_{x}-1)\)th column in each step. No action is required in the last cube. of 2D topological order from vacuum. If desired, the size of the disc can later be taken to infinity. This allows us to drop the initialization step, whose role is to take care of the non-trivial ground state degeneracy on a 2-torus. We can also drop the final linear-depth sequence of gates needed to stitch two gapped boundaries together in a manner consistent with periodic boundary conditions. With these simplifications, the circuits operate in the following way. We slice the 2D space into 1D concentric circles surrounding the center of the disc, and order these subspaces according to their radial coordinate. The \(\ell\)th layer of the circuit is assumed to be supported near (but not entirely within) the \(\ell\)th circle. After applying some number of layers of the circuit, one is left with a disc of topological order which has a gapped boundary to the vacuum region which has not yet been acted on by the circuit. Then, the next layer in the circuit acts only within the vicinity of the one-dimensional gapped boundary between the topological order and the vacuum. The action of the unitary in this layer is to "grow" the topological order by a small amount, pushing the gapped boundary further into the vacuum region. Continuing in this way allows one to grow the topologically ordered region arbitrarily.

Based on the above, given a sequential circuit, we can associate the universality class of the gapped boundary to vacuum which emerges when the circuit is truncated at some radius. This association is well-defined in the following sense. We can define a truncation of the circuit \(\bar{\mathcal{U}}=\prod_{\ell=1}^{K_{0}}U_{\ell}\) where \(K_{0}<K\). This will create a disc of topological order with a particular gapped boundary to vacuum. Now, consider a different truncation \(\bar{\mathcal{U}}^{\prime}=\prod_{\ell=1}^{K_{0}}V_{\ell}\) where each \(V_{\ell}\) again consists of non-overlapping gates such that \(V_{\ell}=U_{\ell}\) for \(\ell\) sufficiently less than \(K_{0}\), but the layers near the boundary may differ. By definition, the two truncated circuits differ only by a finite-depth circuit near the boundary. But a 1D finite depth circuit cannot change the universality class of the gapped boundary, _i.e._ it cannot change the set of anyons which can condense on the boundary. So the gapped boundary type is independent of how the sequential circuit is truncated. We note this conclusion only holds for truncations that are compatible with the 1D layer structure of concentric circles; the key property is that the truncation only cuts through a finite number of 1D layers, which is bounded above as the size of the disc increases. We emphasize that this discussion can be generalized to gapped boundaries between two different 2D topological orders. That is, given two topological orders referred to as A and B that admit a gapped boundary, an A-ground-state can be converted into a B-ground-state by applying a sequential circuit. Or, if we apply a truncated version of the same sequential circuit, we can create a puddle of B within the bulk topological order A, separated by a gapped boundary whose universality class does not depend on how the circuit is truncated.
In formulating the generalized foliated RG in terms of quantum circuits, we apply sequential circuits within 2D layers of a 3D fracton model. Truncating such a sequential circuit (along its 1D layer structure) results in a gapped boundary between two different fracton orders, where some of the mobile excitations may condense along the layer where the circuit is applied. This is how we described planon condensation above, and thus we propose that planon condensation and applying 2D sequential circuits are different ways to realize the same operation in generalized foliated RG. ### Condensation in the Ising cage-net circuit In accordance with the above discussion, we now identify the type of gapped boundary that is associated with the sequential circuits used to create Ising cage-net model. To accomplish this, we are going to apply the circuit only to a finite disc-shaped region within a plane; we will not take the limit that the size of the disc goes to infinity. Inside the region, we get the fracton order as expected. Outside of the region, the added degrees of freedom remain unentangled. There is a gapped boundary between the two sides. We show that the gapped boundary and the region outside can be obtained by condensing bosonic plonons starting from a complete fractonic state. First, let's see how a similar relation works in the doubled-Ising string-net state. We imagine a very large disc of string-net state, and we ignore the curvature of the disc's boundary to simplify the following discussion. Recall that in the RG circuit, the plaquettes are added row by row. Suppose that we stop the process at row \(i\). The boundary between row \(i\) and row \(i+1\) is a smooth boundary on the lattice. As the Hamiltonian terms remain commuting throughout the process, the boundary is gapped. The gapped boundary can be induced by the condensation of 'fluxon excitations'[22]\(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) on the boundary and beyond. To see that, consider a string-operator of the form shown in Fig. 21, which consists of a string segment above the lattice, a parallel segment under the lattice and the two are connected by segments that vertically go through the lattice plane. Note that, while embedded in the 3D space, the string-operator is a closed loop, from the 2D perspective, it ends at the locations where the string goes through the lattice plane and can create excitations at those points. In particular, such string-operators in general violate the plaquette term at their ends, as the plaquette terms correspond to a loop operator that links with the string-operator and the linking generates nontrivial action. Therefore, in the bulk of the string-net state, the string-operator generates 'fluxon excitations' at its ends. In the doubled-Ising model, there are two string-operators of this type, corresponding respectively to a loop of string type 1 and a loop of string type 2. The two string-operators generate the \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) excitations, respectively. If the string-operator ends (goes vertically through the lattice plane) outside of the smooth boundary (Fig. 21), there are no more plaquette terms to violate and the string-operator does not generate any excitations. Detailed calculations can be found in Appendix C.2. Therefore, the \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) excitations condense on the boundary and beyond, thus demonstrating the connection between anyon condensation and the linear depth circuit for the doubled-Ising string-net state. 
The situation is very similar in the Ising cage-net model. The RG circuit is again implemented row by row in a sequential manner. Suppose that we stop the process at row \(i\), there will be a gapped boundary between row \(i\) and row \(i+1\). As shown in Fig. 22, like for the string-nets, a vertical loop operator that goes through the lattice plane at two points generates planon excitations \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) in the bulk of the cage-net state (in rows \(j\leq i\)). Beyond row \(i\), however, it does not generate any excitations and hence the \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) are condensed. This agrees with the RG procedure driven by condensation described in Sec. V. Therefore, the process of sequential application in the linear depth circuit can be interpreted as moving the boundary between the cage-net state and the condensed state, hence enlarging or shrinking the fracton order in the plane. ## VIII Summary and discussion In this paper, we studied the renormalization group transformation for the Ising cage-net model and found that the system size of the Ising cage-net model can be decreased / increased by condensing / uncondensing planon excitations near a 2D plane, or correspondingly through a so-called sequential circuit which preserves the area law and whose depth scales with the linear size of the plane. We argued that these two ways of carrying out the RG are closely related through gapped boundaries. We call this procedure the generalized foliated RG, because the previously defined foliated RG, under which the X-cube and related models are fixed points,[9] fits into this new definition as a special case. On the one hand, the system size of the X-cube can be decreased / increased by condensing / uncondensing a lineon dipole or fracton dipole on a given plane (both these excitations are planons). Or, the RG procedure can be carried out with a linear depth circuit in the same plane. One way to construct the linear depth circuit is to use the finite depth circuit discussed for the original foliation scheme[9] to decouple a layer of toric code out of the X-cube model, and then disentangled the toric code into product state Figure 21: Condensation of the \(\psi\bar{\psi}\) and the \(\sigma\bar{\sigma}\) fluxons on the smooth boundary of the doubled-Ising model. The vertex details are omitted. The dashed lines represent the unentangled edges. An open ended fluxon string-operator is constructed from a loop of \(s\)-string that passes through the lattice plane vertically at a plaquette. If the plaquette (for example, the one labeled \(p\)) lies within the doubled-Ising region, it creates a fluxon excitation. If the plaquette (for example, the one labeled \(p^{\prime}\)) falls outside the string-net region, then no excitation is generated. Thus, all fluxons condense on the smooth boundary. For computational details on the condensation, see Appendix C.2. Figure 22: Condensation of the \(\psi\bar{\psi}\) and the \(\sigma\bar{\sigma}\) fluxon excitations in the half \(xy\)-plane (shown in blue) in the Ising cage-net. If the end of the the fluxon string operator falls within the Ising cage-net region (for example at the plaquette \(p\)), a fluxon excitation is created. If the end falls outside of the Ising cage-net region (for example at the plaquette \(p^{\prime}\)), then no excitation is generated. Therefore, both \(\psi\bar{\psi}\) and \(\sigma\bar{\sigma}\) planons condense on the boundary. with a linear depth circuit. 
Altogether this is a linear depth circuit. Alternatively, we can use a circuit similar to that discussed in Sec. VI to remove cage structures in a plane row by row and hence removing a plane from the X-cube model. On the other hand, the generalized foliated RG allows a broader class of RG operations. Indeed, the Ising cage-net model is not a fixed point of the original foliated RG as can be seen from its ground state degeneracy calculation [19]. We recall that the original foliated RG led to an associated notion of foliated fracton phases (see Appendix A for a definition), with the key property that two systems related by a foliated RG operation lie within the same foliated fracton phase. Similarly, we expect that there exists a notion of _generalized foliated fracton phase_ (GFF phase), again with the key property that two systems related by a generalized foliated RG operation lie in the same GFF phase. GFF phases should be a coarser equivalence relation on quantum systems than foliated fracton phases, because a broader class of RG operations are allowed. We do not currently know how to give a definition of GFF phases along the lines of those in Appendix A; however, one possibility is to give a definition based on circuit equivalence of ground states, where one allows certain linear depth circuits supported on planes. In Sec. IV, we pointed out that the original foliated RG contains certain unnatural restrictions, while the generalized foliated RG seems to be more natural. Therefore, we expect that GFF phases are correspondingly a more natural concept than foliated fracton phases as originally defined, so it will be important to revisit what we have learned about foliated fracton phases. In particular, several invariants have been devised for foliated fracton phases as originally defined, including those based on fractional excitations and entanglement entropy [15; 10]. Now, with a new notion of GFF phases, we need to reconsider the question of what quantities remain invariant under the new equivalence relation, and which models belong to the same GFF phase and which do not. For example, we can ask whether the twisted foliated fracton model proposed in Ref. [13] is still in a different phase than the X-cube model or not under the new definition. Finally, we want to comment that the generalized foliation defined in this paper makes the discussion of type I fracton models more in-line with that of Subsystem Symmetry Protected Topological (SSPT) phases with planar symmetry in e.g. Ref. [42; 43; 44]. In the definition of'strong SSPT' in these papers, when a decoupled layer with planar symmetry is added to the bulk of the system, the planar symmetry can be combined with an existing planar symmetry in the system, which corresponds to the condensation of the composite of the symmetry charges from the decoupled plane and a planar symmetry charge in the bulk of the system. The'strong SSPT' orders discussed in these papers hence may become nontrivial (twisted) foliated fracton orders when the planar symmetries are gauged. ###### Acknowledgements. We are indebted to inspiring discussions with Dave Aasen, Kevin Slagle, Nathan Seiberg, and Dominic Williamson, and helpful correspondence with Fiona Burnell and Michael Levin. Z.W., X.M. and X.C. are supported by the National Science Foundation under award number DMR-1654340, the Simons Investigator Award (award ID 828078) and the Institute for Quantum Information and Matter at Caltech. X.C. 
is also supported by the Walter Burke Institute for Theoretical Physics at Caltech. The research of MH is supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES) under Award number DE-SC0014415. This work is also partly supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651438, XC and ZW; 651440, MH and DTS). The work of MH on general aspects of the generalized foliated RG (Sections IV, V and VII) was supported by the DOE BES project, while his work on the RG in the Ising cage-net model (Sec. VI) was supported by the Simons Foundation. X.C. wants to thank the Institute for Advanced Study at Tsinghua University for hospitality when the paper was written.
2309.05881
On the pre- and post-positional semi-random graph processes
We study the semi-random graph process, and a variant process recently suggested by Nick Wormald. We show that these two processes are asymptotically equally fast in constructing a semi-random graph $G$ that has property ${\mathcal P}$, for the following examples of ${\mathcal P}$: - ${\mathcal P}$ is the set of graphs containing a $d$-degenerate subgraph, where $d\ge 1$ is fixed; - ${\mathcal P}$ is the set of $k$-connected graphs, where $k\ge 1$ is fixed. In particular, our result of the $k$-connectedness above settles the open case $k=2$ of the original semi-random graph process. We also prove that there exist properties ${\mathcal P}$ where the two semi-random graph processes do not construct a graph in ${\mathcal P}$ asymptotically equally fast. We further propose some conjectures on ${\mathcal P}$ for which the two processes perform differently.
Pu Gao, Hidde Koerts
2023-09-12T00:04:30Z
http://arxiv.org/abs/2309.05881v1
# On the pre- and post-positional semi-random graph processes

###### Abstract

We study the semi-random graph process, and a variant process recently suggested by Nick Wormald. We show that these two processes are asymptotically equally fast in constructing a semi-random graph \(G\) that has property \(\mathcal{P}\), for the following examples of \(\mathcal{P}\):

* \(\mathcal{P}\) is the set of graphs containing a \(d\)-degenerate subgraph, where \(d\geq 1\) is fixed;
* \(\mathcal{P}\) is the set of \(k\)-connected graphs, where \(k\geq 1\) is fixed.

In particular, our result of the \(k\)-connectedness above settles the open case \(k=2\) of the original semi-random graph process. We also prove that there exist properties \(\mathcal{P}\) where the two semi-random graph processes do not construct a graph in \(\mathcal{P}\) asymptotically equally fast. We further propose some conjectures on \(\mathcal{P}\) for which the two processes perform differently.

## 1 Introduction

The semi-random graph process is a single player game initially suggested by Peleg Michaeli, and formally introduced by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4]. In this game, a graph is iteratively constructed from an empty graph on \(n\) vertices, denoted by \([n]=\{1,2,\ldots,n\}\). Every round, one edge is added to the graph. The first end-vertex of the edge, \(u\), is chosen uniformly at random (u.a.r.) from all the vertices in \([n]\). Given the choice of \(u\), the other end-vertex \(v\) is chosen strategically by the player (either deterministically, or by some random strategy). The semi-random graph process is part of a larger category of random processes where a player has limited power of choice among a set of random options. This category of combinatorial random processes traces its origins to the work of Azar, Broder, Karlin and Upfal [1] on placing \(n\) balls into \(n\) bins. They showed that if the player can choose from two u.a.r. selected bins rather than just one, there exists a strategy to decrease the expected number of balls in the fullest bin by an exponential factor. Similar load-balancing schemes have been investigated by Mitzenmacher [16]. Another well-known example of such random processes is the so-called Achlioptas graph process, suggested by Dimitris Achlioptas during a Fields Institute workshop. Instead of adding a single edge picked u.a.r. every round as in the classical Erdos-Renyi random graph process [6], he suggested that every round the player is offered \(k\geq 2\) such edges, and one of the \(k\) edges can be chosen and added to the graph. The Achlioptas graph process was first investigated by Bohman and Frieze [5], who showed that allowing the player to choose from \(k\geq 2\) edges enables the player to delay the appearance of a giant component. In the seminal paper on the semi-random graph process, Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] provided asymptotic upper and lower bounds on the number of rounds needed to achieve certain objectives a.a.s. (asymptotically almost surely, see Section 2 for a precise definition), including having minimum degree at least \(k\geq 1\), having clique number \(k\geq 1\), and being \(k\)-connected. Additionally, they demonstrated how the semi-random graph process can be used to model other random graph models. Specifically, they established how to couple the semi-random process to the Erdos-Renyi random graph model, the \(k\)-out model, and the min-degree process.
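To fix ideas, the round structure of the game described above can be sketched in a few lines of code. This is only an illustrative toy simulation (the function and strategy names are ours, not from the literature); the player's strategy is modelled as a function that sees the current graph and the randomly offered vertex.

```python
import random

def semi_random_process(n, strategy, rounds):
    """Toy simulation of the semi-random graph process on [n] = {0,...,n-1}.

    Each round, a vertex u is drawn uniformly at random; the player's
    strategy then returns the second endpoint v.  Loops and multi-edges
    are allowed, but we also track the underlying simple graph.
    """
    edges = []
    adj = [set() for _ in range(n)]  # underlying simple graph
    for _ in range(rounds):
        u = random.randrange(n)      # offered u.a.r. to the player
        v = strategy(adj, u)         # chosen strategically
        edges.append((u, v))
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    return edges, adj

# Example strategy: always aim the second endpoint at a vertex of minimum degree.
def aim_at_min_degree(adj, u):
    return min(range(len(adj)), key=lambda w: len(adj[w]))
```

Strategies analysed later in the paper fit this interface; the pre-positional variant discussed below corresponds to strategies that ignore the offered vertex when choosing their endpoint.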
Further research by Behague, Marbach, Pralat and Rucinski [2] gave tight asymptotic bounds for the minimum number of rounds needed to construct a graph that contains a subgraph isomorphic to a fixed graph \(H\) based on the degeneracy of \(H\). Moreover, they generalised the semi-random graph process to hypergraphs, and similarly showed tight bounds for constructing a fixed \(s\)-uniform hypergraph. In terms of spanning subgraphs, Ben-Eliezer, Gishboliner, Hefetz and Krivelevich [3] showed that one can construct any fixed bounded-degree spanning subgraph a.a.s. in linear time. Moreover, MacRury, Pralat and the first author [10, 11, 12] obtained bounds on the minimum number of rounds needed to construct a graph with a perfect matching or a Hamilton cycle. The upper bound on the minimum number of rounds required for the construction of a Hamiltonian graph was further improved by Frieze and Sorkin [7]. Recently, Gamarnik, Kang and Pralat [9] have found bounds for the number of rounds needed to force the appearance of cliques and independent sets, and to ensure the graph has at least a given chromatic number. Pralat and Singh [18] have recently also considered the properties of minimum degree, the existence of a perfect matching and the existence of a Hamilton cycle in a generalisation of the semi-random graph process, where each round the player is presented with \(k\) random vertices. ### Two semi-random processes Recently, Nick Wormald proposed (via personal contact) an alternative version of the semi-random graph process. Instead of the first vertex being chosen u.a.r. in each round and the second vertex being chosen according to some strategy, he proposed switching this order. That is, the first vertex in each round is chosen strategically by the player, whereas the second vertex is chosen u.a.r. We refer to this new model as the _pre-positional semi-random graph process_, and the original model by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] as the _post-positional semi-random graph process_. By a simple coupling argument, it is easy to see that the post-positional process can construct a graph in \(\mathcal{P}\) at least as fast as the pre-positional process, for any graph property \(\mathcal{P}\) (See Lemma 3.1 in Section 3). The interesting question arising from comparing these two processes is, whether the post-positional process significantly outperforms the pre-positional process in constructing a member of \(\mathcal{P}\). Perhaps a little surprisingly, for quite many properties \(\mathcal{P}\), these two processes perform equally well. However, we also give an example of \(\mathcal{P}\) where the post-positional process construct a graph in \(\mathcal{P}\) significantly faster. ### Main results Our first main result concerns the minimum number of rounds required to construct a \(k\)-connected graph. **Theorem 1.1**.: _Let \(k\geq 1\) be fixed. For every \(\epsilon>0\), a.a.s. there exists a real number \(\alpha_{k}\) such that the following hold:_ 1. _no strategy in a post-positional or pre-positional semi-random graph process can construct a_ \(k\)_-connected graph in at most_ \((\alpha_{k}-\epsilon)n\) _rounds;_ 2. _there exist strategies in post-positional and pre-positional semi-random graph processes that construct a_ \(k\)_-connected graph in at most_ \((\alpha_{k}+\epsilon)n\) _rounds._ **Remark 1.2**.: 1. 
_Theorem_ 1.1 _was proved to be true for the post-positional semi-random graph process for_ \(k\geq 3\) _by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman, and Stojakovic_ _[_4_]__. We settled the open case_ \(k=2\)_._ 2. _The theorem confirms that the two variants of the semi-random graph process perform asymptotically equally well on constructing_ \(k\)_-connected graphs._ 3. _The constant_ \(\alpha_{1}\) _is set to be 1. The real numbers_ \(\alpha_{k}\) _for_ \(k\geq 2\) _are derived from a system of differential equations and follow from applying Wormald's differential equation method (See [20]). The first few values are_ \[\alpha_{2} =\ln 2+\ln(1+\ln 2),\] \[\alpha_{3} =\ln((\ln 2)^{2}+2(1+\ln 2)(1+\ln(1+\ln 2))),\] _as calculated in [14]._ Next, we show that the two processes perform equally well in constructing graphs with a given small subgraph. A graph \(H\) is said to be \(d\)-degenerate if each subgraph of \(H\) contains a vertex of degree at most \(d\). In their seminal paper, Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman, and Stojakovic [4] considered the number of rounds needed to construct a fixed size \(d\)-degenerate graph as a subgraph. They showed the following upper bound in the post-positional process. **Theorem 1.3** ([3, Theorem 1.10]).: _Let \(H\) be a fixed \(d\)-degenerate graph, and let \(f:\mathbb{N}\to\mathbb{R}\) be a function such that \(\lim_{n\to\infty}f(n)=\infty\). Then there exists a strategy in the post-positional process such that the resulting graph \(G\) contains a subgraph isomorphic to \(H\) in a.a.s. \(f(n)\cdot n^{(d-1)/d}\) rounds._ They conjectured that this bound is tight if \(d\geq 2\), which was subsequently shown by Behague, Marbach, Pralat, and Rucinski [2]. We show that the same bounds hold for the pre-positional process. **Theorem 1.4**.: _Let \(H\) be a fixed \(d\)-degenerate graph, and let \(f:\mathbb{N}\to\mathbb{R}\) be a function such that \(\lim_{n\to\infty}f(n)=\infty\). Then a.a.s. the following hold:_ 1. _If_ \(d\geq 2\)_, no strategy in a post-positional or pre-positional semi-random graph process can construct a graph containing a copy of_ \(H\) _in at most_ \(n^{(d-1)/d}/f(n)\) _rounds;_ 2. _there exist strategies in post-positional and pre-positional semi-random graph processes that construct a graph containing a copy of_ \(H\) _in at most_ \(f(n)\cdot n^{(d-1)/d}\) _rounds._ Theorem 1.4 immediately gives the following corollary. **Corollary 1.5**.: _Let \(H\) be a fixed graph containing a cycle, and \(f:\mathbb{N}\to\mathbb{N}\) a function such that \(\lim_{n\to\infty}f(n)=\infty\). Then a.a.s. the following hold:_ 1. _no strategy in a post-positional or pre-positional semi-random graph process can construct a graph containing an_ \(H\)_-minor in at most_ \(n^{1/2}/f(n)\) _rounds;_ 2. _there exist strategies in post-positional and pre-positional semi-random graph processes that construct a graph containing an_ \(H\)_-minor in at most_ \(f(n)\cdot n^{1/2}\) _rounds._ Proof.: For (a), it suffices to show that a.a.s. \(G_{t}\) is acyclic for \(t\leq n^{1/2}/f(n)\) in any post-positional process. Suppose \(G_{t}\) has a cycle. Considering only the edges (each of which joins a square and a circle) that make up the cycle, there must exist a square which lands on a vertex that has already received either a square or a circle earlier. 
For every \(t\leq n^{1/2}/f(n)\), the probability that \(u_{t}\) lands on a vertex that has already received a square or a circle (there are \(2(t-1)\) of them) is \(O(t/n)\) and hence, the probability that \(G_{t}\) contains a cycle for some \(t\leq n^{1/2}/f(n)\) is bounded by \(\sum_{t\leq n^{1/2}/f(n)}O(t/n)=o(1)\). Part (b) follows from considering the \(1\)-subdivision of \(H\), that is the graph obtained from \(H\) by subdividing each edge in \(E(H)\) exactly once, and noting that this subdivision is \(2\)-degenerate. The bound then directly follows from Theorem 1.4. Next, we investigate the performance of the two processes in constructing a graph containing a large bipartite subgraph. **Theorem 1.6**.: _Suppose \(m=o(n^{2})\). Let \(\mathcal{P}\) be the set of graphs on \([n]\) that contain a bipartite subgraph with \(m\) edges. Then, the minimum number of rounds required to construct a graph in \(\mathcal{P}\) is a.a.s. \((1+o(1))m\) in both the pre-positional and the post-positional processes._ While the proof for Theorem 1.6 is straightforward (See Section 6), it is interesting to see if the theorem fails to hold when \(m=\Theta(n^{2})\). We think containing a bipartite subgraph with \(\Omega(n^{2})\) edges might be an increasing property (see its definition in Section 2) for which the post-positional process outperforms the pre-positional process. However, proving it seems not an easy task. We make the following conjecture. **Conjecture 1.7**.: _Suppose \(m\geq cn^{2}\) for some fixed \(c>0\). Let \(\mathcal{P}\) be the set of graphs on \([n]\) that contain a bipartite subgraph with \(m\) edges. There exists \(\delta_{c},\eta_{c}>0\) such that a.a.s. there is a strategy in a post-positional process that construct a graph in \(\mathcal{P}\) in less than \(\eta_{c}n^{2}\) rounds, whereas no strategy in a pre-positional process can construct a graph in \(\mathcal{P}\) within \((\eta_{c}+\delta_{c})n^{2}\) rounds._ Finally, we give an example of non-increasing \(\mathcal{P}\) where the two processes perform very differently. **Theorem 1.8**.: _Let \(\mathcal{P}\) be the set of multigraphs on \([n]\) that contains an induced simple \((n-1)\)-cycle. Then, a.a.s. no pre-positional process can produce a multigraph in \(\mathcal{P}\), whereas, a.a.s. a post-positional process can construct a multigraph in \(\mathcal{P}\) in \(O(n\log n)\) rounds._ ## 2 Notation For a graph \(G\), we denote its vertex and edge sets by \(V(G)\) and \(E(G)\) respectively. We denote the degree of a vertex \(v\in V(G)\) in graph \(G\) by \(\deg_{G}(v)\). We use \(\delta(G)\) and \(\Delta(G)\) to denote the minimum and maximum degrees of a graph respectively. For a set \(S\subseteq V(G)\) of vertices, we use \(G[S]\) for the subgraph induced by set \(S\) in graph \(G\). The open and closed neighbourhoods of a vertex \(v\in V(G)\) in graph \(G\) will be denoted by \(N_{G}(v)\) and \(N_{G}[v]\) respectively. Both variants of the semi-random graph processes are a single-player game in which a multi-graph is iteratively constructed in a sequence of rounds. Because all the graph properties we consider are invariant under adding multi-edges and loops, we generally consider the underlying simple graph. Notably, we define the degree of a vertex in the multi-graph to be the number of distinct neighbours, not including itself. That is, \(\deg_{G}(v)=|N_{G}(v)\setminus\{v\}|\) for each vertex \(v\in V(G)\). 
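Before moving on, note that the \(d\)-degeneracy appearing in Theorem 1.4 can be computed directly in this notation by the standard peeling procedure: repeatedly delete a vertex of minimum degree and record the largest degree seen at deletion time. A short sketch (our own helper code, not from the paper):

```python
def degeneracy(adj):
    """Return the degeneracy of a simple graph given as {vertex: set of neighbours}.

    A graph H is d-degenerate exactly when degeneracy(H) <= d, i.e. every
    subgraph of H contains a vertex of degree at most d.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    best = 0
    while adj:
        v = min(adj, key=lambda x: len(adj[x]))      # a vertex of minimum degree
        best = max(best, len(adj[v]))
        for w in adj[v]:
            adj[w].discard(v)
        del adj[v]
    return best

# Example: a 4-cycle is 2-degenerate but not 1-degenerate.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
assert degeneracy(c4) == 2
```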
Moreover, we will use the previously introduced notation for simple graphs for the graphs generated by the process as well. In each round of the semi-random graph process (either variant), a single edge is added to the graph. We will denote the graph obtained after \(\ell\) rounds by \(G_{\ell}\). The initial graph, \(G_{0}\), is an empty graph with vertex set \([n]\). In the \(t^{\text{th}}\) round, we construct graph \(G_{t}\) from graph \(G_{t-1}\) as follows. Let \(u_{t}\) be a vertex picked u.a.r. from \([n]\). We say that vertex \(u_{t}\) is hit in round \(t\). We choose a vertex \(v_{t}\in[n]\) according to some strategy, and add edge \(u_{t}v_{t}\) to graph \(G_{t-1}\) to obtain graph \(G_{t}\). The strategy can be a function of \(u_{t}\) in the post-positional variant, and must be independent of \(u_{t}\) in the pre-positional variant. Note that if \(u_{t}=v_{t}\) the new edge is a loop, and if \(G_{t-1}\) already contained \(u_{t}v_{t}\) the new edge is a multi-edge. Thus, \(V(G_{t})=V(G_{t-1})=[n]\), and \(E(G_{t})=E(G_{t-1})\cup\{u_{t}v_{t}\}\). Additionally, we refer to \(u_{t}\) as a square, and \(v_{t}\) as a circle in round \(t\), as introduced by Gao, MacRury and Pralat [11]. Each edge in graph \(G_{t}\) then connects a square and a circle in the round that it is added. We denote a graph \(G\) having property \(\mathcal{P}\) by \(G\in\mathcal{P}\). We say that a graph property \(\mathcal{P}\) is _increasing_ if for every \(G\in\mathcal{P}\), \(H\in\mathcal{P}\) provided that \(G\subseteq H\). Note that by this definition, if \(G_{t}\in\mathcal{P}\) for some \(t>0\) and a monotone graph property \(\mathcal{P}\), it follows that \(G_{t^{\prime}}\in\mathcal{P}\) for all \(t^{\prime}\geq t\) as well. Except for the example in Theorem 1.8, all properties investigated in this paper are increasing properties. If \(\mathcal{P}\) is increasing, it is sufficient to construct a graph \(G_{t}\) which has a subgraph \(G^{\prime}\) in \(\mathcal{P}\). In some rounds, given \(G_{t-1}\) (and vertex \(u_{t}\) in the post-positional process), we may choose vertex \(v_{t}\) arbitrarily and not use the edge \(u_{t}v_{t}\) for the construction of \(G^{\prime}\). We will consider such a round a _failure round_. Allowing failure rounds in some cases leads to algorithms that are easier to analyse. In Section 7 where a non-increasing property is studied, we cannot simply ignore "undesirable" edges to make use of failure rounds. We say an event \(A=A_{n}\) occurs asymptotically almost surely (a.a.s.) in \(G_{t}\) if \(\mathbb{P}(A_{n})\to 1\) as \(n\to\infty\). Unless specified otherwise, all asymptotic notation relates to \(n\), i.e. \(o(1)\) implies a function that tends to \(0\) as \(n\to\infty\). ## 3 Pre- and post-positional processes In this section we prove that the post-positional process can construct a graph in \(\mathcal{P}\) at least as fast as the pre-positional process, for any graph property \(\mathcal{P}\). **Lemma 3.1**.: _Let \(n\geq 2\) and \(\mathcal{P}\subseteq 2^{\binom{[n]}{2}}\). For every \(t\geq 0\), the probability that there exists a pre-positional strategy to construct \(G_{s}\in\mathcal{P}\) for some \(s\leq t\) is at most the probability that there exists a post-positional strategy to construct \(G_{s}\in\mathcal{P}\) for some \(s\leq t\)._ Proof.: We can couple the two processes so that no matter which strategy the pre-positional process uses, the post-positional process can copy the moves and can stop the process at the same time as the pre-positional one. 
Let \((u_{i})_{i\geq 0}\) be a sequence of i.i.d. copies of \(u\) chosen u.a.r. from \([n]\). Present \(u_{i}\) as the \(i\)-th square to both processes. For each \(t\geq 1\), let \(v_{t}\) be the choice of the \(t\)-th circle by the pre-positional process. Note that the choice of \(v_{t}\) depends only on \(\{u_{i},v_{i},1\leq i\leq t-1\}\). The post-positional process simply copies the choices of \(v_{t}\) for every \(t\), which are valid moves given its definition. Thus, the two processes always terminate at the same time. 

Thanks to Lemma 3.1, it suffices to prove Theorem 1.1(a) and 1.4(a) for post-positional processes, and prove Theorem 1.1(b), 1.4(b) and Theorem 1.6 for pre-positional processes. We prove Theorem 1.1 in Section 4, Theorem 1.4 in Section 5, Theorem 1.6 in Section 6, and Theorem 1.8 in Section 7.

## 4 \(k\)-Connectivity: proof of Theorem 1.1

A connected graph \(G\) is said to be \(k\)-connected if it remains connected when removing fewer than \(k\) vertices. In their seminal paper, Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] provided tight asymptotic bounds for the minimum number of rounds needed in the post-positional process to produce a \(k\)-connected graph for all \(k\geq 3\). Their lower bounds follow directly by coupling with a well-known random graph process called the \(k\)-min process. By Lemma 3.1, these lower bounds are valid for the pre-positional process as well. As a warm-up, we will go through their argument and show how it also works directly in the pre-positional setting.

### Min-degree process: proof of Theorem 1.1(a)

The min-degree process is a variant of the classical random graph process and was introduced and first studied by Wormald [20]. In the min-degree process, \(G_{0}\) is an edgeless graph on \([n]\). Given \(G_{t}\), choose a vertex \(u\) of minimum degree in \(G_{t}\) u.a.r., and subsequently choose a vertex \(v\) not adjacent to vertex \(u\) in graph \(G_{t}\) u.a.r. Graph \(G_{t+1}\) is then constructed by adding edge \(uv\) to graph \(G_{t}\). Recall \(\alpha_{k}\) in Theorem 1.1. Wormald used his differential equation method to prove that the minimum \(t\) where \(G_{t}\) has minimum degree at least \(k\) is a.a.s. \((1+o(1))\alpha_{k}n\), for each \(k\geq 2\). We denote the graph property of having minimum degree \(k\) by \(\mathcal{D}_{k}\). Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] have since studied adapted versions of the min-degree process as modelled by the semi-random graph process. By choosing \(v_{t}\) u.a.r. from all vertices of minimum degree not adjacent to \(u_{t}\) in graph \(G_{t}\), the resulting semi-random graph process is contiguous to the min-degree process. That is, asymptotically the two processes are equivalent. We refer to this strategy as \(\mathcal{S}_{\min}\). Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic additionally considered strategies without the restrictions that \(v_{t}\) and \(u_{t}\) be non-adjacent in \(G_{t}\) and that \(u_{t}\) and \(v_{t}\) be distinct. They showed that each of these strategies is optimal in ensuring that graph \(G_{t}\) has minimum degree \(k\) in as few rounds as possible when taking \(n\) to infinity, and each asymptotically requires \(\alpha_{k}n\) rounds (\(k\geq 2\)). Each of these strategies thus obtains a graph in \(\mathcal{D}_{k}\) in asymptotically the same number of rounds as the min-degree process. We first provide a formal definition of strategy \(\mathcal{S}_{\min}\).
For each round \(t\), distribution function \(f_{t}\) is defined as follows. Let \(Y_{t,\min}=\{v\in[n]\,|\,\deg_{G_{t-1}}(v)=\delta(G_{t-1})\}\). Then, given \(u_{t}\) chosen u.a.r. from \([n]\), if \(Y_{t,\,\min}\setminus N_{G_{t-1}}[u_{t}]=\emptyset\), the round is considered a failure round. Otherwise, vertex \(v_{t}\) is chosen u.a.r. from \(Y_{t,\,\min}\setminus N_{G_{t-1}}[u_{t}]\). By this formulation, strategy \(\mathcal{S}_{\min}\) does not create loops nor multi-edges. Let \(G_{\min}(n,m)\) be the graph on \(n\) vertices with \(m\) edges generated by the min-degree process. To show that strategy \(\mathcal{S}_{\min}\) can be used to model the min-degree process for \(m=o(n^{2})\) with a.a.s. \(o(m)\) failure rounds, Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] look at an auxiliary strategy where \(v_{t}\) is chosen u.a.r. from all minimum degree vertices. This strategy thus does not take the neighbourhood of \(u_{t}\) into account. They then show that the number of multi-edges and loops is asymptotically bounded by \(o(m)\), which directly bounds the failure rounds of strategy \(\mathcal{S}_{\min}\) as well. We note that this auxiliary strategy is also valid in the pre-positional process, where the first vertex is chosen u.a.r. from the vertices of minimum degree. Hence, the pre-positional process can also model the min-degree process with a.a.s. \(o(m)\) failure rounds. Since having minimum degree \(k\) is a prerequisite for being \(k\)-connected for \(n>k\), this immediately implies Theorem 1.1(a) for \(k\geq 2\). The case \(k=1\) is trivial, as a connected graph has at least \(n-1\) edges and thus no strategy can build a connected graph in at most \((1-\epsilon)n\) rounds. ### Proof of Theorem 1.1(b) We consider the set of \(k\)-connected graphs on \([n]\), which is an increasing property. It is convenient to define some notation to assist the proof of Theorem 1.1(b). For an increasing property \(\mathcal{P}\), a strategy \(\mathcal{S}\) (\(\mathcal{S}\) may be a pre-positional or a post-positional strategy), and a real number \(0<q<1\), let \(\tau_{\mathcal{P}}(\mathcal{S},q,n)\) be the minimum value \(t\geq 0\) such that \(\mathbb{P}\left[G_{t}\in\mathcal{P}\right]\geq q\); recalling that \(n\) is the number of vertices in \(G_{t}\). If no such value \(t\) exists, we say that \(\tau_{\mathcal{P}}(\mathcal{S},q,n)=\infty\). Let \(\tau_{\mathcal{P}}(q,n)\) denote the minimum value of \(\tau_{\mathcal{P}}(\mathcal{S},q,n)\) over all possible strategies \(\mathcal{S}\). We are interested in the asymptotic value of \(\tau_{\mathcal{P}}(q,n)\) when probability \(q\) approaches \(1\). Therefore, we define \[\tau_{\mathcal{P}}:=\lim_{q\,\uparrow\,1}\limsup_{n\to\infty}\frac{\tau_{ \mathcal{P}}(q,n)}{n},\] where the limit exists since \(\mathcal{P}\) is increasing. This definition is useful for studying linear-time strategies (strategies that a.a.s. builds a graph in \(\mathcal{P}\) in \(\Theta(n)\) rounds), which is the case for \[\mathcal{P}=\mathcal{C}_{k}:=\{G\subseteq\binom{[n]}{2}:\text{ $G$ is $k$-connected}\}.\] To show Theorem 1.1(b), it suffices to prove that in the pre-positional process, \[\tau_{\mathcal{C}_{k}}\leq\alpha_{k}\quad\text{for every fixed $k\geq 1$}.\] Let \(k\)_-min process_ be the process of applying strategy \(\mathcal{S}_{\min}\) until obtaining a graph with minimum degree at least \(k\). 
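As a concrete illustration of the strategy \(\mathcal{S}_{\min}\) underlying the \(k\)-min process just defined, a single move can be sketched as follows. This is our own illustrative code, not an implementation from [4]; it can be plugged into a simulator like the toy one sketched in the Introduction (treating a `None` return as a failure round).

```python
import random

def s_min(adj, u):
    """One move of the strategy S_min in the post-positional process.

    `adj` is the current simple graph (vertex -> set of neighbours) and u is
    the square offered this round.  Return the circle v: a uniformly random
    vertex of minimum degree outside the closed neighbourhood N[u], or None
    to signal a failure round when Y_min minus N[u] is empty.
    """
    delta = min(len(adj[w]) for w in range(len(adj)))             # delta(G_{t-1})
    y_min = [w for w in range(len(adj)) if len(adj[w]) == delta]  # Y_{t,min}
    candidates = [w for w in y_min if w != u and w not in adj[u]]
    return random.choice(candidates) if candidates else None
```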
Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] proved Theorem 1.1 for the case \(k\geq 3\) in the post-positional process. Their proof is based on a slightly modified variant of the \(k\)-min process tailored for multigraphs and builds on a proof by Kang, Koh, Ree and Luczak [14]. The strategy \(\mathcal{S}_{\min}^{*}\) underlying their modified process is identical to the strategy for the \(k\)-min process as long as the graph is simple, and simplifies the analysis in the semi-random graph process. The proof shows that the graph resulting from the modified \(k\)-min process is a.a.s. \(k\)-connected for all \(k\geq 3\). This proof cannot be directly extended to \(k<3\), as Kang, Koh, Ree and Luczak [14] showed that the graph resulting from the \(k\)-min process is only a.a.s. connected for \(k\geq 3\). Strategy \(\mathcal{S}_{\min}^{*}\) chooses vertex \(v_{t}\) u.a.r. from all vertices of \(V(G_{t-1})\setminus\{u_{t}\}\) that have the smallest number of distinct neighbours. This strategy can be modelled in the pre-positional process by the following strategy: we choose \(v_{t}\) u.a.r. from all vertices that have the smallest number of distinct neighbours (before \(u_{t}\) is revealed), and consider the round a failure if \(u_{t}=v_{t}\). The probability of any given round being a failure round is thus \(1/n\). Hence, the number of additional rounds needed to cover the additional failure rounds is a.a.s. \(o(n)\). This immediately gives the following lemma. **Lemma 4.1**.: _In both the pre-positional and the post-positional process, \(\tau_{\mathcal{C}_{k}}=\alpha_{k}\) for all fixed \(k\geq 3\)._ Moreover, note that the case \(k=1\) is trivial in the post-positional process. Namely, we observe that one can build a forest containing \(m\leq n-1\) edges in exactly \(m\) rounds. In each round, we simply choose \(v_{t}\) that lies in a different component of \(G_{t-1}\) from \(u_{t}\). Hence, we can build a spanning tree in \(n-1\) rounds, which is obviously optimal. The following lemma shows that the pre-positional process requires asymptotically the same number of rounds to construct a connected graph. **Lemma 4.2**.: \(\tau_{\mathcal{C}_{1}}=1\) _in the pre-positional process._ Proof.: It is obvious that \(\tau_{\mathcal{C}_{1}}\geq 1\), since a connected graph on \([n]\) has at least \(n-1\) edges. Recall that \(u_{t}\) is the vertex uniformly chosen from \([n]\), and \(v_{t}\) is the vertex strategically chosen by the player. For the upper bound, we consider a strategy \(\mathcal{S}\) which chooses \(v_{t}\) u.a.r. from the smallest component (if there is a tie, pick an arbitrary smallest component). If \(u_{t}\) lands in a different component, we add edge \(u_{t}v_{t}\). Otherwise we consider the round a failure round. Each successfully added edge then decreases the number of components in the graph by one. We analyse the process with this strategy in a number of phases. Let phase \(i\) be defined as the rounds in which the number of components in the graph decreases from \(\frac{n}{2^{i-1}}\) to \(\frac{n}{2^{i}}\). Thus, there are \(\log_{2}n\) such phases. We note that phase \(i\) consists of \(\frac{n}{2^{i-1}}-\frac{n}{2^{i}}=\frac{n}{2^{i}}\) non-failure rounds, and a number of failure rounds. Let \(T_{i}\) be the total number of rounds in phase \(i\), and let \(f_{i}\) be the number of failure rounds in phase \(i\). Thus, \(T_{i}=\frac{n}{2^{i}}+f_{i}\). Next, we observe that the smallest component in any round in phase \(i\) contains at most \(2^{i}\) vertices.
The probability that a round is a failure round in phase \(i\) is thus at most \(2^{i}/n\). Couple the process with the following experiment: consider a sequence of i.i.d. Bernoulli random variables with success probability \(1-2^{i}/n\). We terminate the sequence once we have observed \(n/2^{i}\) successes. Let \(\mathcal{T}_{i}\) denote the random variable corresponding to the number of Bernoulli random variables in the sequence before it terminates. We observe that \(T_{i}\) is stochastically dominated by \(\mathcal{T}_{i}\). By the negative binomial distribution, it follows that \(\mathbb{E}[\mathcal{T}_{i}]=(n/2^{i})/(1-2^{i}/n)\), and hence \(\mathbb{E}[T_{i}]\leq(n/2^{i})/(1-2^{i}/n)\). Then, as \(T_{i}=\frac{n}{2^{i}}+f_{i}\), we find that for all \(i\leq\log_{2}(n)-1\): \[\mathbb{E}[f_{i}]\leq\frac{\frac{n}{2^{i}}}{1-\frac{2^{i}}{n}}-\frac{n}{2^{i}}=1+O\left(\frac{2^{i}}{n}\right)=O(1).\] For the last phase (i.e. \(\log_{2}(n)-1<i\leq\log_{2}n\)) only a single successful round is needed, and as the failure probability is at most \(1/2\), it follows that for all \(i\) it holds that \(\mathbb{E}[f_{i}]=O(1)\). Therefore, \(\mathbb{E}[\sum_{i\leq\log_{2}n}f_{i}]=O(\log_{2}n)\), and thus, by Markov's inequality, a.a.s. \(\sum_{i\leq\log_{2}n}f_{i}=O(\log^{2}n)\). Hence, the total number of rounds needed to ensure that the graph is connected is a.a.s. at most \[\sum_{i>0}T_{i}=n-1+\sum_{i=1}^{\log_{2}n}f_{i}=(1+o(1))n.\] Therefore, \(\tau_{\mathcal{C}_{1}}\leq 1\) in the pre-positional process, as desired. Thus, asymptotically, the number of required rounds to ensure connectivity is equal between the pre- and post-positional processes. In the remainder of this section we prove tight asymptotic bounds for the final open case in both the pre- and post-positional processes, \(k=2\). The best bound previously known for the post-positional process, as observed by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4], is the tight upper bound for \(k=3\). That is, \(\tau_{\mathcal{C}_{2}}\leq\tau_{\mathcal{C}_{3}}\). They also gave a lower bound, based on the 2-min process. The 2-min process aims to ensure that each vertex has degree at least two as fast as possible, a prerequisite for 2-connectedness. Using a known result by Wormald [19, 20] on the min-degree process, they showed that the 2-min process a.a.s. takes \((\ln 2+\ln(\ln 2+1)+o(1))n\) rounds to complete. Hence, \(\tau_{\mathcal{C}_{2}}\geq\ln 2+\ln(\ln 2+1)\) in the post-positional process. Note that as the 2-min process can be modelled by the pre-positional process as well, it similarly holds that \(\tau_{\mathcal{C}_{2}}\geq\ln 2+\ln(\ln 2+1)\) in the pre-positional process. We now show a novel upper bound for the pre-positional process, which asymptotically matches the known lower bound. Note that by Lemma 3.1, this directly gives an asymptotically tight upper bound for the post-positional process as well. **Lemma 4.3**.: \(\tau_{\mathcal{C}_{2}}=\ln 2+\ln(\ln 2+1)\) _in both the pre- and post-positional processes._ That is, the minimum number of rounds required for a semi-random process to build a 2-connected graph on \(n\) vertices is asymptotic to \((\ln 2+\ln(\ln 2+1))n\) in both processes. As a result of Lemma 4.3, and the previous analysis of existing proofs for bounds on \(\tau_{\mathcal{C}_{k}}\) for \(k\geq 1\), it follows that the property of \(k\)-connectedness requires asymptotically the same number of rounds in the pre- and post-positional processes.
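The smallest-component strategy from the proof of Lemma 4.2 above is also easy to simulate; the sketch below is a minimal, unoptimised illustration (names ours; union-find is used only for bookkeeping, and the representative vertex of the smallest component stands in for a uniformly chosen one, which does not affect the failure count).

```python
import random

def connect_smallest_component(n, seed=0):
    """Sketch of the connectivity strategy from the proof of Lemma 4.2.

    v_t is a vertex of a smallest component; the round fails when the
    uniform square u_t lands in the same component.
    Returns (rounds, failures).
    """
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]          # path halving
            x = parent[x]
        return x

    components = n
    rounds = failures = 0
    while components > 1:
        rounds += 1
        roots = {find(x) for x in range(n)}        # O(n) per round, clarity only
        v = min(roots, key=lambda r: size[r])      # a smallest component
        u = rng.randrange(n)                       # the square
        ru = find(u)
        if ru == v:
            failures += 1                          # u landed in v's component
            continue
        parent[ru] = v
        size[v] += size[ru]
        components -= 1
    return rounds, failures
```

The argument above shows that the failure count is a.a.s. \(O(\log^{2}n)\), so the total number of rounds is \((1+o(1))n\).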
#### 4.2.1 Overview For the upper bound, our approach differs significantly from the strategy used by Ben-Eliezer, Hefetz, Kronenberg, Parczyk, Shikhelman and Stojakovic [4] to prove the tight upper bounds for \(k\)-connectedness for \(k\geq 3\). Namely, while their approach is predominantly probabilistic, we use a more structural approach. Our strategy is based on analysing the structure of the maximal \(2\)-connected components of the graph resulting from the \(2\)-min process. In the first phase, we use the \(2\)-min process to obtain a graph in which each vertex has degree at least \(2\). We show that a.a.s. most of the vertices in this graph will be contained in relatively large \(2\)-connected subgraphs. This moreover allows us to conclude that the graph contains \(o(n)\) maximal \(2\)-connected subgraphs. In the second phase, the aim is to ensure that the graph becomes connected. We bound the number of components by the number of maximal \(2\)-connected subgraphs, recalling that the graph has \(o(n)\) such subgraphs after the first phase. As such, by adding edges between components, we can quickly ensure the graph becomes connected. In the third phase, we then want to make the graph \(2\)-connected. We achieve this by considering a tree structure on the maximal \(2\)-connected subgraphs, and showing that by balancing this tree, we can efficiently eliminate cut-vertices. We show that the second and third phases both take \(o(n)\) steps a.a.s. Therefore, the first phase, consisting of the \(2\)-min process, dominates the total number of rounds in the process of building a \(2\)-connected graph on \([n]\). In Section 4.2.2, we first introduce the purely structural definitions and results we will use. Section 4.2.3 then builds upon these structural results to analyse the random process given by our strategy. #### 4.2.2 Supporting structural results In this section we restate the conventional definitions of blocks and block graphs (see for instance [15]). **Definition 4.1** (Block).: Let \(B\subseteq V(G)\) be a maximal set of vertices such that for any two vertices \(x,y\in B\) with \(xy\not\in E(G)\), in order to separate vertex \(x\) from vertex \(y\), it is necessary to remove at least \(2\) vertices from \(G\). Then \(B\) is called a block. Note that by this definition, each block in a graph either induces a maximal \(2\)-connected subgraph, an edge, or an isolated vertex. Moreover, when considering connected graphs on at least \(2\) vertices, each block thus induces a maximal \(2\)-connected subgraph or an edge. Based on this definition, we can then decompose a graph \(G\) into such blocks. **Definition 4.2** (Block decomposition).: Let \(\mathcal{B}(G)\subseteq\mathcal{P}(V(G))\) denote the set of all blocks of graph \(G\). Then \(\mathcal{B}(G)\) is called the block decomposition of graph \(G\). We observe that by the definition of blocks, for each edge \(uv\in E(G)\) in a graph \(G\) there exists a unique block \(B\in\mathcal{B}(G)\) such that \(u,v\in B\). Moreover, by the maximality of the blocks, the block decomposition \(\mathcal{B}(G)\) is unique. Note that \(\mathcal{B}(G)\) is generally not a partition of \(V(G)\). However, each pair of blocks shares at most one vertex, as given in the following proposition. **Proposition 4.4** (Konig, [15, Theorem XIV.7]).: _Let \(G\) be a graph. Then, for each pair of blocks \(B_{1},B_{2}\in\mathcal{B}(G)\), it holds that \(|B_{1}\cap B_{2}|\leq 1\)._ **Definition 4.3** (Block graph).: Let \(G\) be a graph. 
Then let \(G_{\mathcal{B}}\) be the graph defined by \(V(G_{\mathcal{B}})=\mathcal{B}(G)\) and \(E(G_{\mathcal{B}})=\{B_{1}B_{2}\,|\,B_{1}\cap B_{2}\neq\emptyset\}\). Then graph \(G_{\mathcal{B}}\) is called the block graph of graph \(G\). For a graph \(G\) to be 2-connected, it must hold that \(\mathcal{B}(G)=\{V(G)\}\). We aim to use the blocks and their relative structure in a graph to identify moves in a semi-random process which join multiple blocks together into a single larger block. If a semi-random edge \(u_{t}v_{t}\) joins two blocks then we call the addition of such an edge an _augmentation_. A natural augmentation to consider is to join two blocks \(B_{i}\) and \(B_{j}\) where there is a path between \(B_{i}\) and \(B_{j}\) in \(G_{\mathcal{B}}\). If \(u_{t}\) and \(v_{t}\) are not themselves cut-vertices, this augmentation will immediately join all blocks along the path into a single block. To that purpose, we want to consider a tree structure on the blocks. The traditional such structure, called the block-cut tree of a graph, was originally introduced independently by Gallai [8], and Harary and Prins [13]. **Definition 4.4** (Block-cut tree).: Let \(G\) be a connected graph, and let \(S\) be the set of cut vertices of graph \(G\). Then, the graph \(T\), given by \(V(T)=\mathcal{B}(G)\cup S\) and \(E(T)=\{vB\,|\,v\in S,B\in\mathcal{B}(G),v\in B\}\), is a tree and called the block-cut tree of graph \(G\). We consider a structure similar to the block-cut tree, based on the block graph. Instead of including the cut-vertices in the tree, we take a spanning tree on the block graph. This ensures that we only have to work with blocks, while still providing the desired tree structure. To that aim, we introduce the following definition. **Definition 4.5** (Reduced block tree).: Let \(G_{\mathcal{B}}\) be the block graph of a connected graph \(G\). Then, a spanning tree \(T_{\mathcal{B}}\) of graph \(G_{\mathcal{B}}\) is called a reduced block tree of graph \(G\). A reduced block tree can equivalently be constructed recursively. Let \(v\in V(G)\) be a cut-vertex in a connected graph \(G\), and let \(G_{1}\) and \(G_{2}\) be the induced subgraphs of graph \(G\) such that \(V(G_{1})\cup V(G_{2})=V(G)\), \(E(G_{1})\cup E(G_{2})=E(G)\), and \(V(G_{1})\cap V(G_{2})=\{v\}\). We note that as vertex \(v\) is a cut-vertex, each block \(B\in\mathcal{B}(G)\) is contained in either graph \(G_{1}\) or graph \(G_{2}\). Therefore, \(\mathcal{B}(G_{1})\cup\mathcal{B}(G_{2})=\mathcal{B}(G)\). Let \(T_{\mathcal{B}_{1}}\) and \(T_{\mathcal{B}_{2}}\) be reduced block trees for graphs \(G_{1}\) and \(G_{2}\) respectively. Then, we can construct a reduced block tree for graph \(G\) with block decomposition \(\mathcal{B}(G)\) by joining trees \(T_{\mathcal{B}_{1}}\) and \(T_{\mathcal{B}_{2}}\) with a single edge from a vertex in \(T_{\mathcal{B}_{1}}\) representing a block containing vertex \(v\) to a vertex in \(T_{\mathcal{B}_{2}}\) also representing a block containing vertex \(v\). We observe that by Definition 4.5, the reduced block tree of a graph is generally not unique. This occurs when a vertex is contained in at least three blocks, and the block graph thus contains a clique of size at least 3. **Proposition 4.5**.: _Let \(T_{\mathcal{B}}\) be a reduced block tree of a connected graph \(G\). For \(v\in V(G)\), the set \(\{B\in V(T_{\mathcal{B}})\,|\,v\in B\}\) induces a (connected) subtree in \(T_{\mathcal{B}}\)._ Proof.: Suppose not. 
Let \(S\subseteq V(T_{\mathcal{B}})\) be the set of all blocks \(B\in V(T_{\mathcal{B}})\) such that \(v\in B\). Then the set \(S\) induces a disconnected subgraph in tree \(T_{\mathcal{B}}\). Let \(C_{1}\) and \(C_{2}\) be two components of this induced subgraph \(T_{\mathcal{B}}[S]\). Moreover, let \(P\) be a shortest path between sets \(V(C_{1})\) and \(V(C_{2})\) in \(T_{\mathcal{B}}\), and let blocks \(B_{1},B_{2}\in S\) be the endpoints of this path \(P\) such that \(B_{1}\in V(C_{1})\) and \(B_{2}\in V(C_{2})\). We note that \(P\) has length at least 2. Then, as \(P\) is a shortest such path, none of the internal vertices of \(P\) are contained in \(S\). Hence, the corresponding blocks do not contain vertex \(v\). Let \(G_{P}\) be the subgraph of \(T_{\mathcal{B}}\) induced by the internal vertices of path \(P\). Additionally, let \(S_{P}\subseteq V(G)\) be the set of all vertices of graph \(G\) contained in at least one of the blocks in \(G_{P}\). We observe that by the definition of path \(P\), subgraph \(G_{P}\) contains blocks adjacent to blocks \(B_{1}\) and \(B_{2}\), respectively, in the tree \(T_{\mathcal{B}}\). Therefore, \(B_{1}\cap S_{P},B_{2}\cap S_{P}\neq\emptyset\). Moreover, by Proposition 4.4 we find that \(B_{1}\cap B_{2}=\{v\}\). Therefore, as \(v\not\in S_{P}\), there exist vertices \(v_{1}\in B_{1}\cap S_{P}\) and \(v_{2}\in B_{2}\cap S_{P}\). Then, because blocks \(B_{1}\) and \(B_{2}\) are by definition connected, there exists a \(v-v_{1}\) path \(P_{1}\) in block \(B_{1}\) and a \(v-v_{2}\) path \(P_{2}\) in block \(B_{2}\). Similarly, the set \(S_{P}\) induces a connected subgraph in \(G\), and thus contains a \(v_{1}-v_{2}\) path \(P^{\prime}\). We note that the union of the paths \(P_{1}\), \(P_{2}\) and \(P^{\prime}\) gives a subgraph of \(G\) containing a cycle \(C\) containing vertex \(v\). We note that the cycle \(C\) is \(2\)-connected and hence is contained in a block \(B_{C}\). Moreover, as this cycle contains at least \(2\) vertices of block \(B_{1}\), by Proposition 4.4, we find that \(B_{1}=B_{C}\). Analogously, it follows that \(B_{2}=B_{C}\). However, this implies that \(B_{1}=B_{2}\), contradicting these blocks being in different components \(C_{1}\) and \(C_{2}\). By this contradiction, we conclude that the proposition holds. **Proposition 4.6**.: _Let \(T_{\mathcal{B}}\) be a reduced block tree of a connected graph \(G\) with \(\delta(G)\geq 2\). Let \(B\in\mathcal{B}\) be a block such that \(B=\{u,v\}\). Then there exist distinct blocks \(B_{u},B_{v}\in\mathcal{B}\) adjacent to \(B\) in \(T_{\mathcal{B}}\) such that \(u\in B_{u}\) and \(v\in B_{v}\)._ Proof.: Because \(\delta(G)\geq 2\), there exists another edge \(uw\in E(G)\). Hence, as each edge is contained in a block, there exists a block \(B^{\prime}\in\mathcal{B}\) such that \(u\in B^{\prime}\) and \(B^{\prime}\neq B\). It then follows from Proposition 4.5 that there exists a block \(B_{u}\in\mathcal{B}\) such that \(u\in B_{u}\) and \(B_{u}\) adjacent to \(B\) in \(T_{\mathcal{B}}\). Analogously, there exists a block \(B_{v}\in\mathcal{B}\) adjacent to \(B\) in \(T_{\mathcal{B}}\) such that \(v\in B_{v}\). By the maximality of block \(B\), it follows that \(v\not\in B_{u}\) and \(u\not\in B_{v}\). Hence, \(B_{u}\neq B_{v}\), as desired. **Corollary 4.7**.: _Let \(T_{\mathcal{B}}\) be a reduced block tree of a connected graph \(G\) with \(\delta(G)\geq 2\). 
Then each leaf in \(T_{\mathcal{B}}\) corresponds to a \(2\)-connected block in graph \(G\) of at least \(3\) vertices._ Proof.: By Proposition 4.6, blocks of size \(2\) cannot be leaves in \(T_{\mathcal{B}}\). Then, by the definition of blocks, the result follows. **Proposition 4.8**.: _Let \(G\) be a connected graph such that \(|B|<n/4\) for all blocks \(B\in\mathcal{B}(G)\), and let \(T_{\mathcal{B}}\) be a corresponding reduced block tree. Then there exists a vertex \(B^{*}\in V(T_{\mathcal{B}})\) and a colouring \(\phi:V(T_{\mathcal{B}})\setminus\{B^{*}\}\to\{\text{red},\,\text{blue}\}\) such that all components of \(T_{\mathcal{B}}-B^{*}\) are monochromatic and that for \(S_{\text{red}}=\{v\in B\setminus B^{*}\,|\,B\in\mathcal{B},\phi(B)=\text{red}\}\) and \(S_{\text{blue}}=\{v\in B\setminus B^{*}\,|\,B\in\mathcal{B},\phi(B)=\text{ blue}\}\) it holds that \(|S_{\text{blue}}|\leq|S_{\text{red}}|\leq 3|S_{\text{blue}}|\)._ Proof.: Firstly, we note by Proposition 4.5 that \(S_{\text{red}}\cap S_{\text{blue}}=\emptyset\) and hence \(V(G)\) is partitioned by the sets \(B^{*}\), \(S_{\text{red}}\), and \(S_{\text{blue}}\). Therefore, \(|B^{*}|+|S_{\text{red}}|+|S_{\text{blue}}|=n\). Assume that the proposition does not hold. Then, let \(B^{*}\in V(T_{\mathcal{B}})\) and \(\phi:V(T_{\mathcal{B}})\to\{\text{red},\text{blue}\}\) be a vertex and a colouring respectively such that all components of \(T_{\mathcal{B}}-B^{*}\) are monochromatic, \(|S_{\text{red}}|\geq|S_{\text{blue}}|\), subject to which \(|S_{\text{red}}|\) is minimised. We note that as it concerns a counterexample, we must have \(|S_{\text{red}}|>3|S_{\text{blue}}|\). We observe that as \(|B^{*}|<n/4\), \(T_{\mathcal{B}}-B^{*}\) is non-empty. Therefore, due to \(|S_{\text{red}}|\geq|S_{\text{blue}}|\), \(T_{\mathcal{B}}-B^{*}\) contains at least one red component. Suppose that \(T_{\mathcal{B}}-B^{*}\) contains exactly one red component. Then, because \(T_{\mathcal{B}}\) is a tree, vertex \(B^{*}\) has exactly one red neighbour \(B^{\prime}\in V(T_{\mathcal{B}})\) in \(T_{\mathcal{B}}\). Then consider using vertex \(B^{\prime}\) instead of vertex \(B^{*}\), uncolouring \(B^{\prime}\) and colouring \(B^{*}\) blue. Let \(\phi^{\prime}\) denote the resulting new colouring, and let \(S^{\prime}_{\text{red}}\) and \(S^{\prime}_{\text{blue}}\) be the sets of vertices in \(G\) corresponding to \(\phi^{\prime}\). We note that as blocks \(B^{*}\) and \(B^{\prime}\) both contain less than \(n/4\) vertices, it holds that \(|S^{\prime}_{\text{red}}|>|S_{\text{red}}|-n/4\) and \(|S^{\prime}_{\text{blue}}|<|S_{\text{blue}}|+n/4\). Moreover, we note that by the maximality of blocks, \(B^{*}\setminus B^{\prime},B^{\prime}\setminus B^{*}\neq\emptyset\), and hence \(|S^{\prime}_{\text{red}}|<|S_{\text{red}}|\) and \(|S^{\prime}_{\text{blue}}|>|S_{\text{blue}}|\). If \(|S^{\prime}_{\text{red}}|>|S^{\prime}_{\text{blue}}|\), the new colouring \(\phi^{\prime}\) is more balanced, and thus contradicts the minimality of \(|S_{\text{red}}|\). Therefore, it holds that \(|S^{\prime}_{\text{red}}|<|S^{\prime}_{\text{blue}}|\). Because we assumed that \(|S_{\text{red}}|>3|S_{\text{blue}}|\), and as \(|B^{*}|+|S_{\text{red}}|+|S_{\text{blue}}|=n\), it follows that \(|S_{\text{blue}}|\leq n/4\). Thus, \(|S^{\prime}_{\text{blue}}|<|S_{\text{blue}}|+n/4\leq n/2\). 
But then, as \(|B^{\prime}|+|S^{\prime}_{\text{red}}|+|S^{\prime}_{\text{blue}}|=n\), it follows that \[|S^{\prime}_{\text{red}}| =n-|S^{\prime}_{\text{blue}}|-|B^{\prime}|\] \[>n-\frac{n}{2}-\frac{n}{4}\] \[=\frac{n}{4}.\] Then, inverting the colours red and blue results in a colouring satisfying all the conditions of the proposition, contradicting \(T_{\mathcal{B}}\) being a counterexample. Hence, we may assume that forest \(T_{\mathcal{B}}-B^{*}\) contains at least \(2\) red components. Then let \(C_{1},C_{2},\ldots,C_{\ell}\) be the red components of \(T_{\mathcal{B}}-B^{*}\), and let \(S_{1},S_{2},\ldots,S_{\ell}\) be defined by \(S_{i}=\{v\in B\setminus B^{*}\,|\,B\in C_{i}\}\) for \(i\in[\ell]\). Then, by Proposition 4.5, the sets \(S_{1},S_{2},\ldots,S_{\ell}\) partition set \(S_{\text{red}}\). Suppose that there exists an index \(i\in[\ell]\) such that \(|S_{i}|>|S_{\text{blue}}|\). Then, recolouring all blue components red, and recolouring component \(C_{i}\) blue leads to sets \(S^{\prime}_{\text{red}}\) and \(S^{\prime}_{\text{blue}}\) such that, as \(\ell\geq 2\), \(\min(|S^{\prime}_{\text{red}}|,|S^{\prime}_{\text{blue}}|)>\min(|S_{\text{ red}}|,|S_{\text{blue}}|)\). Thus, as \(|S^{\prime}_{\text{red}}|+|S^{\prime}_{\text{blue}}|=|S_{\text{red}}|+|S_{ \text{blue}}|\), by possibly inverting the colours, we find a more minimal counterexample. Hence, we may assume that \(|S_{i}|\leq|S_{\text{blue}}|\) for all \(i\in[\ell]\). Then, as \(|S_{\text{red}}|=\sum_{i=1}^{\ell}|S_{i}|\), we find that \(|S_{\text{red}}|\leq\ell|S_{\text{blue}}|\). Therefore, as \(|S_{\text{red}}|>3|S_{\text{blue}}|\), it holds that \(\ell>3\). Similarly, suppose that there exists an index \(i\in[\ell]\) such that \(|S_{i}|<(|S_{\text{red}}|-|S_{\text{blue}}|)/2\). Then clearly recolouring component \(C_{i}\) blue contradicts the minimality of \(|S_{\text{red}}|\). Hence, we may assume that \(|S_{i}|\geq(|S_{\text{red}}|-|S_{\text{blue}}|)/2\) for all \(i\in[\ell]\). Then, as \(|S_{\text{red}}|=\sum_{i=1}^{\ell}|S_{i}|\), we find that \(|S_{\text{red}}|\geq\ell\cdot(|S_{\text{red}}|-|S_{\text{blue}}|)/2\). It then follows that, because \(\ell>3\), \(|S_{\text{red}}|\leq\frac{\ell}{\ell-2}|S_{\text{blue}}|\). But then, as \(\frac{\ell}{\ell-2}<3\) for \(\ell>3\), we conclude that vertex \(B^{*}\) and colouring \(\phi\) do not form a counterexample. Thus, we conclude that the proposition holds. #### 4.2.3 Building \(2\)-connected semi-random graphs In this section, we describe our strategy and analyse the corresponding process for building a \(2\)-connected semi-random graph, and obtain the tight upper bound of \(\tau_{\mathcal{C}_{2}}\) in the pre-positional process as in Lemma 4.3. Our strategy consists of three phases. In the first phase, we use the \(2\)-min process as described in Section 4.1. The following proposition shows useful properties of the resulting graph. **Proposition 4.9**.: _Let \(G\) be the semi-random graph resulting from the \(2\)-min process. Then, a.a.s., \(G\) contains \(o(n)\) vertices that are contained in \(2\)-connected induced subgraphs of order at most \(\sqrt{\ln n}\) in graph \(G\)._ Proof.: Let \(X\) be the number of vertices contained in \(2\)-connected induced subgraphs of order at most \(\sqrt{\ln n}\). We note that it suffices to show that \(\mathbb{E}[X]=o(n)\). Moreover, let \(Y_{\ell}\) denote the number of \(2\)-connected induced subgraphs of order \(\ell\) for \(1\leq\ell\leq\sqrt{\ln n}\). 
Thus, by linearity of expectation, \(\mathbb{E}[X]\leq\sum_{\ell=1}^{\sqrt{\ln n}}\ell\mathbb{E}[Y_{\ell}]\). For \(1\leq\ell\leq\sqrt{\ln n}\), let \(Z_{\ell}\) denote the number of induced subgraphs of order \(\ell\) with at least \(\ell\) edges. Because each \(2\)-connected graph contains at least as many edges as vertices, it follows immediately that \(Y_{\ell}\leq Z_{\ell}\), and thus, \(\mathbb{E}[X]\leq\sum_{\ell=1}^{\sqrt{\ln n}}\mathbb{E}[\ell Z_{\ell}]\). Hence it suffices to show that \(\sum_{\ell=1}^{\sqrt{\ln n}}\mathbb{E}[\ell Z_{\ell}]=o(n)\). Let \(1\leq\ell\leq\sqrt{\ln n}\), and fix \(S\subseteq[n]\) such that \(|S|=\ell\). Let \(p_{S}\) be the probability that \(G[S]\) contains at least \(\ell\) edges. Note that \(\mathbb{E}[Z_{\ell}]=\sum_{S\in\binom{[n]}{\ell}}p_{S}\). Next, we estimate \(p_{S}\). We first split the \(2\)-min process into two phases. The first phase ends after the step where the last isolated vertex becomes incident with an edge, and thus the second phase starts with a graph with minimum degree one. We further split each phase into subphases for analysis. Specifically, for the first phase we define subphases \(\alpha_{1},\alpha_{2},\ldots\) such that \(\alpha_{i}\) consists of the steps where \(\frac{n}{2^{i}}<|\{v\in V(G)\,|\,\deg(v)=0\}|\leq\frac{n}{2^{i-1}}\) for \(i\in\{1,2,\ldots\}\). We note that these subphases are well defined, as by the definition of the first phase of the \(2\)-min process, the number of isolated vertices is strictly decreasing. We then define subphases \(\beta_{1},\beta_{2},\ldots\) of the second phase of the \(2\)-min process such that subphase \(\beta_{i}\) consists of the steps where \(\frac{n}{2^{i}}<|\{v\in V(G)\,|\,\deg(v)=1\}|\leq\frac{n}{2^{i-1}}\) for \(i\in\{1,2,\ldots\}\). Note that some of the subphases might be empty, e.g. subphase \(\beta_{1}\) is empty if the number of vertices with degree \(1\) at the beginning of the second phase is already smaller than \(n/2\). We observe that there are at most \(\log_{2}n\) subphases in each of the two phases of the \(2\)-min process. To bound \(p_{S}\), we first choose a set \(T\) of \(\ell\) edges from \(\binom{S}{2}\). There are thus \(\binom{\binom{\ell}{2}}{\ell}\leq\binom{\ell^{2}}{\ell}\) choices for set \(T\). Then we determine an ordering for the edges in \(T\). There are \(\ell!\) ways to fix such an ordering. Fixing an ordering \(e_{1},\ldots,e_{\ell}\), we bound the probability that these edges are added to \(G\) in this order. The probability that a specific edge \(xy\in T\) is added in a specified step in subphase \(\alpha_{i}\) (and \(\beta_{i}\)) is at most \(2\cdot\frac{2^{i-1}}{n}\cdot\frac{1}{n}=\frac{2^{i}}{n^{2}}\), since the first vertex of the edge is chosen u.a.r. from the isolated vertices, of which there are at most \(n/2^{i-1}\), and the second vertex is chosen u.a.r. from all vertices. The factor \(2\) accounts for whether \(x\) or \(y\) is the square or the circle of the edge (note that due to the structure of the \(2\)-min process, sometimes only one of the two may be relevant). Let \(\ell_{\alpha_{i}}\) and \(\ell_{\beta_{i}}\) be the number of edges of \(e_{1},\ldots,e_{\ell}\) that are added in subphases \(\alpha_{i}\) and \(\beta_{i}\) respectively. Let \(\boldsymbol{\ell}_{\alpha}=(\ell_{\alpha_{i}})_{i\geq 0}\) and \(\boldsymbol{\ell}_{\beta}=(\ell_{\beta_{i}})_{i\geq 0}\). Note that the number of isolated vertices decreases by at least \(1\) in each step of the first phase of the \(2\)-min process.
Thus the number of steps in subphase \(\alpha_{i}\) is at most \(\frac{n}{2^{i-1}}-\frac{n}{2^{i}}=\frac{n}{2^{i}}\). Thus, given \(\boldsymbol{\ell}_{\alpha}\) and \(\boldsymbol{\ell}_{\beta}\), there are at most \(\prod_{i}\binom{n/2^{i}}{\ell_{\alpha_{i}}}\binom{n/2^{i}}{\ell_{\beta_{i}}}\) ways to specify steps in the \(2\)-min process where edges in \(T\) are added. Combining all, we have the following bound on \(p_{S}\): \[p_{S}\leq\binom{\ell^{2}}{\ell}\ell!\sum_{\boldsymbol{\ell}_{\alpha},\boldsymbol{\ell}_{\beta}}\left(\prod_{i=1}^{\log_{2}n}\binom{n/2^{i}}{\ell_{\alpha_{i}}}\binom{n/2^{i}}{\ell_{\beta_{i}}}\left(\frac{2^{i}}{n^{2}}\right)^{\ell_{\alpha_{i}}+\ell_{\beta_{i}}}\right),\] where the first summation is over all choices for \(\boldsymbol{\ell}_{\alpha}\) and \(\boldsymbol{\ell}_{\beta}\) such that \(\sum_{i=1}^{\log_{2}n}\left(\ell_{\alpha_{i}}+\ell_{\beta_{i}}\right)=\ell\). Using \(\binom{n/2^{i}}{\ell_{\alpha_{i}}}\leq(n/2^{i})^{\ell_{\alpha_{i}}}\) and \(\binom{n/2^{i}}{\ell_{\beta_{i}}}\leq(n/2^{i})^{\ell_{\beta_{i}}}\), we then obtain \[p_{S}\leq\binom{\ell^{2}}{\ell}\ell!n^{-\ell}\sum_{\boldsymbol{\ell}_{\alpha},\boldsymbol{\ell}_{\beta}}1.\] The set \(\{(\boldsymbol{\ell}_{\alpha},\boldsymbol{\ell}_{\beta})\,|\,\sum_{i=1}^{\log_{2}n}\left(\ell_{\alpha_{i}}+\ell_{\beta_{i}}\right)=\ell\}\) corresponds to the set of weak integer compositions of \(\ell\) into \(2\log_{2}n\) non-negative integer parts, and thus has cardinality \(\binom{\ell+2\log_{2}n-1}{2\log_{2}n-1}\leq\binom{\ell+2\log_{2}n}{\ell}\). Hence, it follows that \[\mathbb{E}[X] \leq\sum_{\ell=1}^{\sqrt{\ln n}}\mathbb{E}[\ell Z_{\ell}]\] \[=\sum_{\ell=1}^{\sqrt{\ln n}}\left(\ell\cdot\sum_{S\in\binom{[n]}{\ell}}p_{S}\right)\] \[\leq\sum_{\ell=1}^{\sqrt{\ln n}}\ell\binom{n}{\ell}\binom{\ell^{2}}{\ell}\ell!n^{-\ell}\binom{\ell+2\log_{2}n}{\ell}.\] Using \(\binom{n}{\ell}\leq n^{\ell}/\ell!\), \(\binom{\ell^{2}}{\ell}\leq(e\ell)^{\ell}\) and \(\binom{\ell+2\log_{2}n}{\ell}\leq(e(\ell+2\log_{2}n)/\ell)^{\ell}\leq(10\log_{2}n/\ell)^{\ell}\) (as \(\ell\leq\sqrt{\ln n}\)), we then obtain \[\mathbb{E}[X]\leq\sum_{\ell=1}^{\sqrt{\ln n}}\ell(10e\log_{2}n)^{\ell}=\exp\left(\sqrt{\ln n}\ln\log_{2}n+O(\sqrt{\ln n})\right)=o(n),\] as desired. **Corollary 4.10**.: _Let \(G\) be the semi-random graph resulting from the \(2\)-min process. Then, a.a.s., \(G\) contains \(o(n)\) maximal \(2\)-connected induced subgraphs._ Proof.: Consider the set \(\mathcal{T}:=\{(v,B):v\in B,B\in\mathcal{B}(G)\}\). Using the block-cut tree structure (Definition 4.4), it follows that \(|\mathcal{T}|\leq n+|\mathcal{B}|-1\). Moreover, the sets \(\mathcal{T}_{B}:=\{(v^{\prime},B^{\prime})\in\mathcal{T}:B^{\prime}=B\}\) for \(B\in\mathcal{B}(G)\) partition \(\mathcal{T}\). Let \(B_{1},\ldots,B_{\ell}\) be the set of blocks of size at least \(\sqrt{\ln n}\). Then, by Proposition 4.9, \(\sum_{1\leq i\leq\ell}|\mathcal{T}_{B_{i}}|\leq|\mathcal{T}|\leq n+\ell+o(n)\). However, \(|\mathcal{T}_{B_{i}}|\geq\sqrt{\ln n}\) for every \(i\), and thus \(\ell\sqrt{\ln n}\leq n+\ell+o(n)\). It follows that \(\ell=o(n)\), as desired. The resulting graph thus contains \(o(n)\) blocks of size at least \(3\). Because we have not bounded the number of blocks consisting of \(2\) vertices, we will use Corollary 4.7 and the other structural results in Section 4.2.2 to ensure the graph becomes \(2\)-connected. Let \(G_{1}\) be the graph obtained after the first phase, i.e. the graph resulting from the \(2\)-min process.
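As a computational aside, the block decomposition and reduced block tree that drive the remaining two phases can be computed with standard graph tooling for a connected graph of minimum degree at least two, as in this setting. The sketch below is a minimal illustration assuming the networkx library is available (the function name is ours and the quadratic pairing loop is kept only for clarity); it is not part of the argument.

```python
import networkx as nx

def reduced_block_tree(G):
    """Return a reduced block tree (Definition 4.5) of a connected graph G.

    Blocks are taken to be the biconnected components of G; two blocks are
    adjacent in the block graph when they share a vertex, and any spanning
    tree of that block graph is a reduced block tree.
    """
    blocks = [frozenset(B) for B in nx.biconnected_components(G)]
    block_graph = nx.Graph()
    block_graph.add_nodes_from(blocks)
    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            if blocks[i] & blocks[j]:              # blocks sharing a (cut) vertex
                block_graph.add_edge(blocks[i], blocks[j])
    return nx.minimum_spanning_tree(block_graph)   # any spanning tree works
```

Leaves of the returned tree correspond to \(2\)-connected blocks (Corollary 4.7); these are exactly the blocks targeted by the semi-random edges in the next two phases.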
In the second phase, we add semi-random edges to make \(G_{1}\) connected. The following proposition shows that we can achieve this a.a.s. with \(o(n)\) additional semi-random edges. **Proposition 4.11**.: _A.a.s. \(G_{1}\) can be made connected by the addition of \(o(n)\) semi-random edges._ Proof.: By Corollary 4.10, \(G_{1}\) contains \(o(n)\) maximal \(2\)-connected induced subgraphs. We claim that each vertex not contained in a \(2\)-connected induced subgraph is contained in a component that contains a \(2\)-connected induced subgraph. Suppose not. Then \(G_{1}\) must contain a tree component, contradicting the fact that the minimum degree of \(G_{1}\) is at least two. Hence the number of components of graph \(G_{1}\) is bounded from above by the number of maximal \(2\)-connected induced subgraphs, and therefore is \(o(n)\). By choosing \(v_{t}\) to be one of the vertices in the smallest component, each round has probability at least \(1/2\) of decreasing the number of components. Hence, by standard concentration arguments, \(G_{1}\) can a.a.s. be made connected in \(o(n)\) additional rounds. Let \(G_{2}\) be the graph obtained after the second phase. In the third phase, we ensure that \(G_{2}\) becomes \(2\)-connected by adding \(o(n)\) semi-random edges. **Proposition 4.12**.: _A.a.s. \(G_{2}\) can be made \(2\)-connected by the addition of \(o(n)\) semi-random edges._ Proof.: Let \(\mathcal{B}\) be the block decomposition of \(G_{2}\) and \(T_{\mathcal{B}}\) be a reduced block tree of \(G_{2}\). By Corollary 4.7, each leaf in \(T_{\mathcal{B}}\) is a \(2\)-connected block. Thus, by Corollary 4.10, \(T_{\mathcal{B}}\) a.a.s. contains \(o(n)\) leaves. First consider the case that \(\mathcal{B}\) contains a block \(B^{*}\) such that \(|B^{*}|\geq n/4\). We consider the following strategy. Take an arbitrary enumeration \(B_{1},\ldots,B_{h}\) of all leaf blocks of \(T_{\mathcal{B}}\). For each \(1\leq j\leq h\), we will add a semi-random edge between \(B_{j}\) and \(B^{*}\) in increasing order of \(j\). Suppose these semi-random edges have already been added between \(B_{i}\) and \(B^{*}\) for all \(i<j\). Let \(B_{j}B_{1}^{\prime}B_{2}^{\prime}\ldots B_{\ell}^{\prime}B^{*}\) be the unique path from \(B_{j}\) to \(B^{*}\) in \(T_{\mathcal{B}}\). Moreover, let \(x\) be the unique vertex in \(B_{j}\cap B_{1}^{\prime}\), and \(y\) the unique vertex in \(B_{\ell}^{\prime}\cap B^{*}\). Note that possibly \(x=y\). Then, in each subsequent round \(t\), we choose \(v_{t}\) to be an arbitrary vertex in \(B_{j}\setminus\{x\}\). If \(u_{t}\) is contained in \(B^{*}\setminus\{y\}\), we add the edge \(u_{t}v_{t}\). If instead square \(u_{t}\) is not contained in \(B^{*}\setminus\{y\}\), we consider the round a failure. Note that in each round, the probability of the second vertex landing in \(B^{*}\setminus\{y\}\) is \((|B^{*}|-1)/n\geq 1/4-o(1)\), and as a.a.s. \(T_{\mathcal{B}}\) contains \(o(n)\) leaves, the number of rounds required to add semi-random edges between \(B^{*}\) and all \(B_{1},\ldots,B_{h}\) is \(o(n)\) in expectation. Let \(G_{2}^{\prime}\) be the graph resulting from the addition of the \(h\) semi-random edges as described above. Then, for each leaf block \(B\), \(G_{2}^{\prime}\) contains two vertex-disjoint paths from \(B\) to \(B^{*}\). Namely, one path via the blocks on the path between \(B\) and \(B^{*}\) in \(T_{\mathcal{B}}\), and the other being the edge that was added between \(B\) and \(B^{*}\).
Because this holds for all leaves, using Proposition 4.6, the resulting graph is 2-edge-connected. Moreover, as each block is on a cycle with \(B^{*}\) and a leaf, and as the blocks of size at least 3 are 2-connected, for each cut-vertex \(v\) it follows that graph \(G_{2}^{\prime}-v\) contains one large component containing \(B^{*}\setminus\{v\}\), and all other components are of the form \(B\setminus\{v\}\) where \(B\in\mathcal{B}\) is a block of size at least 3. We note that these blocks \(B\) such that \(B\setminus\{v\}\) is a component for some cut-vertex \(v\in[n]\) correspond exactly to the blocks that are leaves in the block-cut tree (Definition 4.4), but not in \(T_{\mathcal{B}}\). By argumentation analogous to that used in the proof of Corollary 4.7, all such blocks \(B\) are 2-connected. Hence, by Corollary 4.10, there are \(o(n)\) such blocks. Moreover, each such block contains at most one cut-vertex. We then use the following strategy to absorb these cut-vertices. We iteratively consider pairs \((B,v)\) where \(v\in B\) is a cut-vertex and \(B\in\mathcal{B}\) a block such that \(B\setminus\{v\}\) is a component of the graph obtained by removing \(v\). As noted earlier, there are \(o(n)\) such pairs. If \(|B\setminus\{v\}|\leq n/2\), we choose \(v_{t}\in B\setminus\{v\}\) arbitrarily. With probability at least \(1/2\), \(u_{t}\in[n]\setminus B\). Similarly, if \(|B\setminus\{v\}|>n/2\), we choose \(v_{t}\in[n]\setminus B\), and with probability at least \(1/2-o(1)\), \(u_{t}\in B\setminus\{v\}\). In either case, \(v\) no longer separates block \(B\) from the rest of the graph. Note that as this describes the only configuration of remaining cut-vertices in the graph, eliminating all such pairs eliminates all cut-vertices. Since there are \(o(n)\) such pairs, the total number of rounds needed to absorb all such cut-vertices is \(o(n)\) in expectation. It thus takes at most \(o(n)\) rounds in total in expectation to ensure that the graph becomes 2-connected. Standard concentration inequalities such as Chernoff bounds then immediately imply that also a.a.s. it takes \(o(n)\) rounds to extend \(G_{2}\) to a 2-connected graph. Hence we may assume that each block \(B\) in \(\mathcal{B}\) is of size strictly smaller than \(n/4\). We use a different strategy in this case. Instead of adding edges from leaves to a single block, we will consider balancing the tree into two subforests. We will then add edges between the leaves in one forest and vertices in the other forest, and vice versa. Let vertex \(B^{*}\in V(T_{\mathcal{B}})\), colouring \(\phi:V(T_{\mathcal{B}})\setminus\{B^{*}\}\to\{\text{red, blue}\}\), and sets \(S_{\text{red}}\) and \(S_{\text{blue}}\) be as given by Proposition 4.8. For each \(v\in B^{*}\) let \(T_{\mathcal{B},v}\) denote the components of \(T_{\mathcal{B}}-B^{*}\) that contain a block containing \(v\). Thus, \(T_{\mathcal{B},v}\) denotes the blocks \(B\) where \(v\) is the last cut-vertex on the path from \(B\) to \(B^{*}\) in \(T_{\mathcal{B}}\). We refer to \(T_{\mathcal{B},v}\) as the branch rooted at \(v\). Moreover, let \(S_{\mathcal{B},v}\) denote \(\bigcup_{B\in V(T_{\mathcal{B},v})}B\setminus B^{*}\). That is, \(S_{\mathcal{B},v}\) is the set of all vertices contained in blocks in \(T_{\mathcal{B},v}\) except for vertex \(v\) itself. If \(|S_{\mathcal{B},v}|\leq n/8\), we say branch \(T_{\mathcal{B},v}\) is small. Otherwise we say \(T_{\mathcal{B},v}\) is big.
Finally, for all leaf blocks \(B\), let \(v_{B}\) denote the vertex that block \(B\) has in common with the next block on the path from \(B\) to \(B^{*}\) in \(T_{\mathcal{B}}\). We first consider the leaves of \(T_{\mathcal{B}}\) contained in small branches. Take two arbitrary enumerations \(B_{1},B_{2},\ldots,B_{h_{1}}\) and \(R_{1},R_{2},\ldots,R_{h_{2}}\) of all blue and red leaf blocks of \(T_{\mathcal{B}}\) contained in small branches respectively. We will iteratively add edges between \(B_{j}\) and \(S_{\text{red}}\) in increasing order of \(j\), and analogously between \(R_{j}\) and \(S_{\text{blue}}\). Suppose that semi-random edges have already been added between \(B_{i}\) and \(S_{\text{red}}\) for all \(i<j\). Let \(T_{\mathcal{B},v}\) be the branch containing leaf \(B_{j}\). We then choose \(v_{t}\) to be an arbitrary vertex in \(B_{j}\setminus\{v_{B_{j}}\}\). Because \(|B_{j}|\geq 2\), such a choice for \(v_{t}\) always exists. Then, if \(u_{t}\) lands in \(S_{\text{red}}\setminus S_{\mathcal{B},v}\), we add edge \(u_{t}v_{t}\). Otherwise we consider the round a failure. Analogously, for \(R_{j}\) the red leaf in a small branch \(T_{\mathcal{B},v}\) with the lowest index that has not previously received a circle, we choose \(v_{t}\) in \(R_{j}\setminus\{v_{R_{j}}\}\). If \(u_{t}\) is contained in \(S_{\text{blue}}\setminus S_{\mathcal{B},v}\), we add the edge \(u_{t}v_{t}\), and otherwise we consider the round a failure. Then, as tree \(T_{\mathcal{B}}\) has \(o(n)\) leaves, there are \(o(n)\) blue and \(o(n)\) red leaves. Moreover, by Proposition 4.8, \(|S_{\text{blue}}|\leq|S_{\text{red}}|\leq 3|S_{\text{blue}}|\), and \(|S_{\text{red}}|+|S_{\text{blue}}|\geq 3n/4\). Thus, the probability that a vertex from \(S_{\text{red}}\setminus S_{\mathcal{B},v}\) is chosen u.a.r., where \(T_{\mathcal{B},v}\) is a small branch, is at least \((3n/8-n/8)/n=1/4\). Similarly, the probability that a vertex from \(S_{\text{blue}}\setminus S_{\mathcal{B},v}\) is chosen u.a.r., where \(T_{\mathcal{B},v}\) is a small branch, is at least \((3n/16-n/8)/n=1/16\). Hence, the expected number of rounds needed to add edges to all leaf blocks in small branches is \(o(n)\). Next, we consider the leaf blocks in big branches. We first note that there are at most 7 big branches. We use a similar strategy as for the small branches, but drop the requirement that \(u_{t}\) and \(v_{t}\) must be in distinct branches. Again take two arbitrary enumerations \(B_{1},B_{2},\ldots,B_{h_{3}}\) and \(R_{1},R_{2},\ldots,R_{h_{4}}\) of all blue and red leaf blocks of \(T_{\mathcal{B}}\) contained in big branches respectively. Suppose that semi-random edges have already been added between \(B_{i}\) and \(S_{\text{red}}\) for all \(i<j\). We then choose \(v_{t}\) to be an arbitrary vertex in \(B_{j}\setminus\{v_{B_{j}}\}\). Because \(|B_{j}|\geq 2\), such a choice for \(v_{t}\) always exists. If \(u_{t}\) lands in \(S_{\text{red}}\), we add edge \(u_{t}v_{t}\). Otherwise, we consider the round a failure. The strategy for red leaf blocks is analogous. Because the probability that a vertex from \(S_{\text{red}}\) is chosen u.a.r. is at least \(3/8\), and the probability that a vertex from \(S_{\text{blue}}\) is chosen u.a.r. is at least \(3/16\), it also takes \(o(n)\) rounds in expectation to add edges to all leaf blocks in big branches. After all leaves in both small and big branches have received an edge, there exist two internally vertex-disjoint paths from each leaf block \(B\) to \(B^{*}\).
Namely, as all of the edges we added have one red and one blue endpoint, each blue leaf has a path which only contains blue vertices and a vertex in \(B^{*}\), and a path that starts with the added edge, and then only contains red vertices and one vertex in \(B^{*}\). Analogously, there exist two such paths from each red leaf. As these two paths do not share their endpoint in leaf \(B\), and as each leaf is 2-connected by Corollary 4.7, set \(B\setminus\{v_{B}\}\) does not contain any cut-vertices. We note that again the resulting graph is 2-edge-connected. We then use the same strategy as in the case where there exists a block of size at least \(n/4\) to eliminate all the cut-vertices that separate individual blocks from the rest of the graph. Recall that this strategy a.a.s. takes \(o(n)\) rounds. Let \(G^{\prime\prime}_{2}\) be the resulting graph. We then observe that no vertex in \([n]\setminus B^{*}\) is a cut-vertex in graph \(G^{\prime\prime}_{2}\). Hence, we consider a cut-vertex \(v\in B^{*}\). First suppose that the branch rooted at \(v\) is empty. We observe that by Proposition 4.6 it then holds that \(|B^{*}|\geq 3\). But then, \(B^{*}\) is 2-connected, contradicting \(v\) being a cut-vertex. Next suppose that the branch rooted at \(v\) is small. We note that for each vertex in \(S_{\mathcal{B},v}\) there exists a path to a leaf of branch \(T_{\mathcal{B},v}\) contained in \(S_{\mathcal{B},v}\). As each such leaf has an edge to another branch, and as \(B^{*}\) is either 2-connected or isomorphic to \(K_{2}\), it follows that \(G^{\prime\prime}_{2}-v\) is connected. Hence, \(v\) is not a cut-vertex. Finally, suppose that the branch rooted at \(v\) is big. In this case \(v\) may indeed be a cut-vertex. Namely, if \(T_{\mathcal{B},v}\) contains multiple components of different colours, each of the edges added to the leaves in the branch could have both endpoints within the branch. To deal with such cut-vertices, we use a two-step strategy. In the first step, we want to ensure that the subgraph induced by \(S_{\mathcal{B},v}\) becomes connected. We achieve this using the standard strategy of choosing \(v_{t}\) in the smallest component in the subgraph induced by \(S_{\mathcal{B},v}\). If \(u_{t}\) lands in a different component of this subgraph, we add edge \(u_{t}v_{t}\), otherwise we consider the round a failure. We note that as each component of \(T_{\mathcal{B},v}\) contains at least one leaf of \(T_{\mathcal{B}}\), by Corollary 4.10, \(T_{\mathcal{B},v}\) contains \(o(n)\) components. As \(|S_{\mathcal{B},v}|>n/8\), the probability of successfully adding an edge in this first step is at least \(1/16\). Then, by standard concentration inequalities, this step a.a.s. takes \(o(n)\) rounds as well. We note that in the resulting graph, if \(v\) is still a cut-vertex, it separates exactly two components, given by vertex sets \(S_{\mathcal{B},v}\) and \([n]\setminus(S_{\mathcal{B},v}\cup\{v\})\). In the second step of the strategy, we connect these two components by a single edge. We do so by again choosing \(v_{t}\) in the smaller of the two components, and considering the round a failure if \(u_{t}\) does not land in the other component. As the probability of a failure round is thus at most \(1/2\), the number of rounds in this step is \(O(1)\) in expectation. We then note that there are at most \(7\) vertices \(v\in B^{*}\) such that \(T_{\mathcal{B},v}\) is big. Hence, the total number of rounds to ensure that each of these cut-vertices is absorbed is a.a.s. \(o(n)\).
Because the resulting graph thus no longer contains any cut-vertices, it is \(2\)-connected. Thus, in any case, we can ensure that graph \(G_{2}\) becomes \(2\)-connected in a.a.s. \(o(n)\) rounds, as desired. Combining the analysis of these individual phases then results in the following lemma. **Lemma 4.13**.: \(\tau_{\mathcal{C}_{2}}\leq\ln 2+\ln(\ln 2+1)\) _in the pre-positional process._ Proof.: The lemma follows directly from Propositions 4.11 and 4.12, and the fact that the \(2\)-min process requires \((\ln 2+\ln(\ln 2+1)+o(1))n\) rounds. Lemma 4.3 then follows directly from Lemmas 3.1 and 4.13, and the known lower bound given by the \(2\)-min process. This then completes the proof of Theorem 1.1; it follows directly from Lemmas 4.1, 4.2, and 4.3. ## 5 Degenerate subgraphs: proof of Theorem 1.4 Part (a) follows by [2, Theorem 1.2] and Lemma 3.1. For part (b), we consider the pre-positional process. Our proof is similar to the proof of Theorem 1.3, but requires slightly more careful analysis due to the difference in power between the pre- and post-positional processes. Let \(g(n)\) be a function such that \(g(n)\to\infty\) as \(n\to\infty\). We prove that there exists a pre-positional strategy which constructs an \(H\)-subgraph a.a.s. in at most \(g(n)2^{|V(H)|}\cdot n^{(d-1)/d}\) rounds. Note that this immediately implies part (b) as \(|V(H)|\) is fixed, and we may take \(g(n)=f(n)2^{-|V(H)|}\). We proceed by induction on \(|V(H)|\). We note that the statement holds directly if \(|V(H)|=1\). Suppose \(H\) is a \(d\)-degenerate graph with \(m\geq 2\) vertices, and assume that the statement holds for all fixed \(d\)-degenerate graphs \(H^{\prime}\) such that \(|V(H^{\prime})|<m\). Let \(v\in V(H)\) such that \(\deg_{H}(v)\leq d\). Consider the graph \(H^{\prime}:=H-v\). Then, by the inductive hypothesis, there exists a pre-positional strategy which a.a.s. constructs a graph \(G^{\prime}\) containing a copy of \(H^{\prime}\) in at most \(T:=g(n)2^{m-1}\cdot n^{(d-1)/d}\) rounds. Let \(C^{\prime}\) be the copy of \(H^{\prime}\) constructed in \(G^{\prime}\). For each vertex \(u\in N_{H}(v)\), let \(u^{\prime}\in[n]\) be the corresponding vertex in \(C^{\prime}\), and let \(N^{\prime}:=\{u^{\prime}\,:\,u\in N_{H}(v)\}\). The strategy is then to grow a star from each vertex in \(N^{\prime}\). We do the following successively for each \(u^{\prime}\in N^{\prime}\). Given \(u^{\prime}\in N^{\prime}\), choose \(v_{t}\) to be \(u^{\prime}\) for \(g(n)2^{m}/(2d)\cdot n^{(d-1)/d}\) subsequent rounds. Let \(S_{u^{\prime}}\) be the set of vertices \(w\in[n]\setminus V(C^{\prime})\) such that \(w=u_{t}\) for at least one of the \(g(n)2^{m}/(2d)\cdot n^{(d-1)/d}\) rounds. Then, by standard concentration arguments, and as \(|V(C^{\prime})|\) is fixed, a.a.s. \(|S_{u^{\prime}}|\) is at least \(g(n)2^{m}/(4d)\cdot n^{(d-1)/d}\). Let \(G\) be the graph resulting from growing such stars for all \(u^{\prime}\in N^{\prime}\). We then consider the probability that a vertex \(w\in[n]\setminus V(C^{\prime})\) is contained in all such sets, that is \(\mathbb{P}\left(w\in\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\right)\).
As the construction of \(\{S_{u^{\prime}}\}_{u^{\prime}\in N^{\prime}}\) is mutually independent, \[\mathbb{P}\left(w\in\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\right) \geq\Pi_{u^{\prime}\in N^{\prime}}\mathbb{P}\left(w\in S_{u^{\prime}}\right)\] \[=\Pi_{u^{\prime}\in N^{\prime}}\left(\frac{|S_{u^{\prime}}|}{n-|V(C^{\prime})|}\right)\] \[>\Pi_{u^{\prime}\in N^{\prime}}\left(\frac{|S_{u^{\prime}}|}{n}\right)\] \[\geq\Pi_{u^{\prime}\in N^{\prime}}\left(\frac{g(n)2^{m}}{4d}\cdot\frac{n^{(d-1)/d}}{n}\right)\] \[=\Pi_{u^{\prime}\in N^{\prime}}\left(\frac{g(n)2^{m}}{4d}\cdot\frac{1}{n^{1/d}}\right)\] \[\geq\left(\frac{g(n)2^{m}}{4d}\cdot\frac{1}{n^{1/d}}\right)^{d}\] \[=\left(\frac{g(n)2^{m}}{4d}\right)^{d}\cdot\frac{1}{n}.\] Let \(X:=\left|\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\right|\) be a random variable. Then, \(\mathbb{E}[X]\geq(g(n)2^{m}/4d)^{d}\cdot(n-|V(C^{\prime})|)/n\), and hence, by standard concentration arguments, as \(\lim_{n\to\infty}g(n)=\infty\), a.a.s. \(\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\) is non-empty. Let \(z\in\bigcap_{u^{\prime}\in N^{\prime}}S_{u^{\prime}}\). Consider the subgraph of \(G\) given by extending \(C^{\prime}\) with vertex \(z\) and the edges between \(z\) and \(N^{\prime}\); this subgraph is isomorphic to \(H\), as desired. Moreover, the number of rounds to construct \(H\) is bounded by up to \(g(n)2^{m-1}\cdot n^{(d-1)/d}\) rounds to construct \(H^{\prime}\), together with up to \(g(n)2^{m}/(2d)\cdot n^{(d-1)/d}\) rounds to grow each of the stars. Thus, in total the construction of \(H\) requires a.a.s. up to \[g(n)2^{m-1}\cdot n^{(d-1)/d}+|N^{\prime}|\cdot\frac{g(n)2^{m}}{2d}\cdot n^{(d-1)/d} \leq\frac{g(n)2^{m}}{2}\cdot n^{(d-1)/d}+d\cdot\frac{g(n)2^{m}}{2d}\cdot n^{(d-1)/d}\] \[=g(n)2^{m}\cdot n^{(d-1)/d}\] rounds as desired. ## 6 Dense bipartite subgraphs: proof of Theorem 1.6 The lower bound is trivial, as constructing any such subgraph requires \(m\) edges. For the upper bound, by Lemma 3.1 it suffices to consider the pre-positional process. Let \(A:=\left[\lceil\sqrt{m}\rceil\right]\) and \(B=[n]\setminus A\). We construct a simple bipartite subgraph with bipartition \((A,B)\) with at least \(m\) edges. For a vertex \(v\in A\), let \(\deg_{G_{t}}(v,B)\) denote the number of distinct neighbours of \(v\) in \(B\). Our strategy consists of \(\lceil\sqrt{m}\rceil\) phases. In the \(i^{\text{th}}\) phase, we consistently choose \(v_{t}\) to be vertex \(i\). The phase terminates once \(\deg_{G_{t}}(i,B)=\lceil\sqrt{m}\rceil\). We consider a round a failure if \(u_{t}\in A\cup N_{G_{t-1}}(i)\). We observe that the probability of such a failure round is at most \(2\lceil\sqrt{m}\rceil/n\). Moreover, because \(m=o(n^{2})\), we observe that this probability is \(o(1)\). Once all phases have terminated, we note that the bipartition \((A,B)\) forms a bipartite subgraph with at least \(\lceil\sqrt{m}\rceil\cdot\lceil\sqrt{m}\rceil\geq m\) non-parallel edges, as desired. Since each round has a probability of being a failure round of \(o(1)\), and as there are \(\lceil\sqrt{m}\rceil\) phases, each of which requires \(\lceil\sqrt{m}\rceil\) successful rounds, the total number of rounds needed is a.a.s. \((1+o(1))\cdot\lceil\sqrt{m}\rceil\cdot\lceil\sqrt{m}\rceil=(1+o(1))m\), following a standard concentration argument. ## 7 Large induced cycles: proof of Theorem 1.8 Recall that the semi-random graph processes allow the creation of multi-edges. This will be useful to construct an induced \((n-1)\)-cycle in the post-positional process.
**Lemma 7.1**.: _There exists a post-positional strategy that constructs an induced \((n-1)\)-cycle a.a.s. in \(O(n\ln n)\) rounds._ Proof.: Our strategy aims to construct an induced cycle on the vertices \(\{1,2,\ldots,n-1\}\). We designate vertex \(n\) as the _sink vertex_. That is, if we cannot add a useful edge given \(u_{t}\), we choose \(v_{t}\) to be \(n\). Hence, by removing vertex \(n\), we obtain a graph which only contains desired edges. The first time that \(u_{t}\) lands on a vertex \(v\in[n-1]\), we choose \(v_{t}\) to be \(v+1\) (unless \(v=n-1\), in which case we choose \(v_{t}\) to be \(1\)). Any subsequent time \(u_{t}\) lands on \(v\), we choose \(v_{t}\) to be \(n\). Note that once we have landed at least once on each vertex in \([n-1]\), we have constructed an induced spanning cycle on the set \([n-1]\), as desired. Hence, an induced \((n-1)\)-cycle is constructed once each vertex in \([n-1]\) is hit at least once, and this takes a.a.s. \(O(n\log n)\) steps by the coupon collector's problem (see [17, Theorem 5.13]). We complete the proof of Theorem 1.8 by showing that a.a.s. no pre-positional strategy can construct an induced \((n-1)\)-cycle. To obtain an induced \((n-1)\)-cycle, we first need to construct an induced path on \(n-1\) vertices. Suppose that one has constructed such an induced path \(P\), which includes all vertices other than \(w\in[n]\), after step \(t_{0}\). By the definition of an induced path, \([n]-w\) induces exactly \(n-2\) edges, which form an \((n-1)\)-path. **Claim 7.1**.: A.a.s. \(w\) has \(\Theta(n)\) distinct neighbours in \([n]-w\). Proof.: \(G_{t_{0}}\) contains at least \(n-2\) edges and thus \(t_{0}\geq n-2\). The distribution of the \(n-2\) squares in the first \(n-2\) steps is the same as that of uniformly throwing \(n-2\) balls into \(n\) bins. By the standard Poisson approximation argument, the number of vertices receiving at least three squares is a.a.s. \(\Theta(n)\). These vertices must all be adjacent to \(w\) since \([n]-w\) induces an \((n-1)\)-path. It follows immediately that a.a.s. the only possible induced \((n-1)\)-cycle that can be constructed is on \([n]-w\). Observe that the only way to construct an induced \((n-1)\)-cycle on \([n]-w\) is that (a) \(\{u_{t},v_{t}\}=\{u,v\}\) for some \(t\geq t_{0}+1\), where \(u\) and \(v\) are the two ends of \(P\); (b) for all \(t_{0}<s<t\), \(\{u_{s},v_{s}\}\neq\{u,v\}\); and (c) for all \(t_{0}<s<t\), if \(v_{s}\neq w\) then \(u_{s}\) must be \(w\). Consider the first step \(t\) after \(t_{0}\) such that \(v_{t}\neq w\). If \(v_{t}\notin\{u,v\}\) then by (c), \(u_{t}\) must be \(w\), which occurs with probability \(1/n\). If \(v_{t}\in\{u,v\}\) then by (a,b), \(u_{t}\) must be either the vertex in \(\{u,v\}\setminus\{v_{t}\}\) or \(w\). The probability of this is \(2/n\). Hence, the probability that \(P\) can be completed into an induced \((n-1)\)-cycle is \(O(1/n)\). Therefore, there does not exist a strategy that a.a.s. constructs an induced \((n-1)\)-cycle in the pre-positional process, as desired.
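As an illustration of the sink-vertex strategy in the proof of Lemma 7.1, the following minimal sketch simulates the post-positional construction (0-indexed, with vertex \(n-1\) playing the role of the sink; the names are ours, and the handling of a square landing on the sink itself is our own convention, harmless because the sink is deleted at the end).

```python
import random

def build_induced_cycle(n, seed=0):
    """Sketch of the sink-vertex strategy from the proof of Lemma 7.1.

    Vertices 0..n-2 form the intended cycle; vertex n-1 is the sink.  The
    first square landing on v in {0,...,n-2} is answered with the next
    cycle vertex; every other square is answered with the sink.
    Returns (rounds, edges).
    """
    rng = random.Random(seed)
    hit = [False] * (n - 1)
    edges = []                                     # multi-edges are allowed
    rounds = 0
    while not all(hit):
        rounds += 1
        u = rng.randrange(n)                       # the square
        if u < n - 1 and not hit[u]:
            hit[u] = True
            edges.append((u, (u + 1) % (n - 1)))   # cycle edge
        else:
            # already-hit vertex, or the sink itself: route to the sink
            edges.append((u, n - 1))
    return rounds, edges
```

By the coupon collector's problem the loop runs for \(O(n\ln n)\) rounds a.a.s., and deleting the sink vertex leaves exactly the induced \((n-1)\)-cycle.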
2310.20592
Strongly Magnetized Tidal Disruption Event Disks via Stream Injection in GRMHD
Magnetically arrested accretion disks (MADs) around a rapidly rotating black hole (BH) have been proposed as a model for jetted tidal disruption events (TDEs). However, the dynamics of strongly magnetized disks in a more realistic simulation which can mimic the chaotic dynamics during a TDE have previously been unexplored. Here we employ global GRMHD simulations of a pre-existing MAD disk interacting with an injected TDE stream with impact parameter $\beta\equiv R_t/R_p=4-7$ to investigate how strongly magnetized TDEs differ from the standard MAD picture. We demonstrate for the first time that a MAD or semi-MAD state can be sustained and jets powered by the BH spin are produced in a TDE. We also demonstrate that the strength of the self-intersection shock depends on how dense the disk is relative to the stream, or the density contrast $f_\rho=\rho_d/\rho_s$. The jet or funnel can become significantly tilted (by $10-30^\circ$) due to the self-intersection outflow when $f_\rho \leq 0.1$. In models with a powerful jet and $f_\rho\leq 0.01$, the tilted jet interacts with and ultimately tilts the disk by as much as 23 degrees from the incoming stream. We illustrate that as $f_\rho$ increases, the tilt of the jet and disk is expected to realign with the BH spin once $f_\rho \geq 0.1$. We illustrate how the tilt can rapidly realign if $f_\rho$ increases rapidly and apply this to TDEs which have shown X-ray evolution on timescales of days-weeks.
Brandon Curd, Richard Anantua, Hayley West, Joaquin Duran
2023-10-31T16:30:02Z
http://arxiv.org/abs/2310.20592v1
# Strongly Magnetized Tidal Disruption Event Disks via Stream Injection in GRMHD ###### Abstract Magnetically arrested accretion disks (MADs) around a rapidly rotating black hole (BH) have been proposed as a model for jetted tidal disruption events (TDEs). However, the dynamics of strongly magnetized disks in a more realistic simulation which can mimic the chaotic dynamics during a TDE have previously been unexplored. Here we employ global GRMHD simulations of a pre-existing MAD disk interacting with an injected TDE stream with impact parameter \(\beta\equiv R_{t}/R_{p}=4-7\) to investigate how strongly magnetized TDEs differ from the standard MAD picture. We demonstrate for the first time that a MAD or semi-MAD state can be sustained and jets powered by the BH spin are produced in a TDE. We also demonstrate that the strength of the self-intersection shock depends on how dense the disk is relative to the stream, or the density contrast \(f_{\rho}=\rho_{d}/\rho_{s}\). The jet or funnel can become significantly tilted (by \(10-30^{\circ}\)) due to the self-intersection outflow when \(f_{\rho}\leq 0.1\). In models with a powerful jet and \(f_{\rho}\leq 0.01\), the tilted jet interacts with and ultimately tilts the disk by as much as 23 degrees from the incoming stream. We illustrate that as \(f_{\rho}\) increases, the tilt of the jet and disk is expected to realign with the BH spin once \(f_{\rho}\geq 0.1\). We illustrate how the tilt can rapidly realign if \(f_{\rho}\) increases rapidly and apply this to TDEs which have shown X-ray evolution on timescales of days-weeks. keywords: accretion, accretion discs - black hole physics - MHD - gamma-rays: galaxies - X-rays: galaxies ## 1 Introduction When a star wanders too close to its central black hole (BH), the tidal forces from the BH exceed the self gravity of the star and the star is subsequently disrupted into a stream of stellar material (Hills, 1975; Rees, 1988; Phinney, 1989; Evans & Kochanek, 1989). The bound portion of the stream ultimately returns to the BH, delivering mass to the pericenter radius at the fall back rate (\(\dot{M}_{\rm fb}\)) which falls off as \((t/t_{\rm fb})^{-5/3}\), where \(t_{\rm fb}\) is the orbital period of the most bound portion of the stream (or the fall back time). This leads to emission which also drops off as \((t/t_{\rm fb})^{-5/3}\) since the energy available for dissipation is provided by the kinetic energy of the stream. The transient, which is known as a tidal disruption event (TDE), is typically detectable for months-years. The dynamics governing the properties of the stream and subsequent emission depend on the stellar mass, eccentricity, pericenter radius, and compressibility of the star. The tidal radius of the star is given by, \[R_{t}/r_{g}=47m_{6}^{-2/3}m_{\star}^{-1/3}r_{\star}, \tag{1}\] where \(m_{6}=M_{\rm BH}/10^{6}\,M_{\odot}\) is the mass of the SMBH, \(m_{\star}=M_{\star}/M_{\odot}\) is the mass of the disrupted star, and \(r_{\star}=R_{\star}/R_{\odot}\) is its radius. For the typical TDE, the orbit is parabolic (\(e=1\)). For zero age main sequence stars the radius for complete disruption depends on the compressibility and occurs at \(\sim 0.9R_{t}\) for \(\gamma=5/3\) and at \(\gtrsim 2R_{t}\) for \(\gamma=4/3\)(Guillochon & Ramirez-Ruiz, 2013; Mainetti et al., 2017), though it is larger for evolved stars (Golightly et al., 2019). 
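As a quick numerical illustration of Eq. (1), the sketch below (the function name is ours; taking \(r_{\star}=1\) for a solar-mass star is an assumption) evaluates the tidal radius for the fiducial parameters used later in this work, \(m_{6}=m_{\star}=1\), giving \(R_{t}=47\,r_{g}\).

```python
def tidal_radius_in_rg(m6=1.0, m_star=1.0, r_star=1.0):
    """Evaluate Eq. (1): R_t / r_g = 47 * m6**(-2/3) * m_star**(-1/3) * r_star."""
    return 47.0 * m6 ** (-2.0 / 3.0) * m_star ** (-1.0 / 3.0) * r_star

# Fiducial setup discussed in this paper: a 10^6 Msun BH and a
# solar-type star (m6 = m_star = r_star = 1), giving R_t = 47 r_g.
print(tidal_radius_in_rg())   # 47.0
```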
Several works have addressed the initial disruption of the star and evolution of the stream over a broad parameter space (Carter & Luminet, 1982; Evans & Kochanek, 1989; Kochanek, 1994; Lodato et al., 2009; Brassart & Luminet, 2010; Stone et al., 2013; Coughlin & Nixon, 2015; Coughlin et al., 2016; Steinberg et al., 2019; Ryu et al., 2020). TDEs have been discovered in the X-ray, optical/UV, and radio (see Komossa, 2015; Alexander et al., 2020; Gezari, 2021 for a review). While disk formation is expected in TDEs, what powers the emission is still unclear; for instance, either turbulent accretion or shocks could explain the emission at different stages in the evolution. The presence of outflows, possibly launched by an accretion disk (Strubbe & Quataert, 2009; Coughlin & Begelman, 2014; Metzger & Stone, 2016), has been inferred in many cases due to radio emission (Alexander et al., 2016, 2017) and TDEs have also been observed to launch jets (Bloom et al., 2011; Burrows et al., 2011; Zauderer et al., 2011; Cenko et al., 2012; Brown et al., 2015). More recently, a handful of TDEs have been observed during the rise to peak (Holoien et al., 2019, 2020; Hinkle et al., 2021; Hammerstein et al., 2023). This bounty of observations is expected to grow significantly once the Large Synoptic Survey Telescope (LSST, Ivezic et al., 2019; Bricman and Gomboc, 2020) comes online, but theory has yet to fully describe the range of observational properties exhibited by TDEs. Jetted TDEs have observational properties that present a particularly complicated puzzle. For instance, _Swift_ J1644+57 showed rapid variability following the turn on with quasi-periodic oscillations (QPOs) at \(\sim 200\) s (Reis et al., 2012), long-period variability at \(\sim 10^{6}\) s with the period increasing over the course of the transient (Saxton et al., 2012), and a rapid drop in the X-ray flux at \(\sim 500\) days after the initial trigger (Zauderer et al., 2013). A similar drop off in the X-ray flux was seen in _Swift_ J2058+05 after several months (Pasham et al., 2015). Magnetically arrested accretion disks (MADs, Narayan et al., 2003) are thought to provide a physical explanation for both the presence of relativistic jets and variability in jetted TDEs. The large magnetic flux required for a MAD is thought to be sourced by either poloidal field lines in a fossil disk (Kelley et al., 2014; Tchekhovskoy et al., 2014; Teboul and Metzger, 2023) or conversion of toroidal field lines to poloidal through a dynamo effect (Liska et al., 2020). However, general relativistic radiation magnetohydrodynamics (GRRMHD) simulations of thin MADs have not shown complete jet turn off, potentially due to magnetic pressure support of the disk at low accretion rates (Avara et al., 2016; Curd and Narayan, 2023; Liska et al., 2022). Thus, the rapid shut off in X-ray flux is difficult to explain in a MAD state unless simulations are unable to capture magnetic diffusion due to their relatively short duration (typically several days). Disk formation in TDEs may result in a different disk structure than the standard advection dominated accretion disk (ADAF, Abramowicz et al., 1988; Narayan and Yi, 1995), which has been assumed in some studies (Dai et al., 2018; Curd and Narayan, 2019). 
Several numerical studies of disk formation have demonstrated the presence of shocks and outflows as well as long lasting asymmetric structure (Ramirez-Ruiz and Rosswog, 2009; Guillochon and Ramirez-Ruiz, 2013; Shiokawa et al., 2015; Bonnerot et al., 2016; Sadowski et al., 2016; Hayasaki et al., 2016; Bonnerot and Lu, 2020; Bonnerot et al., 2021; Curd and Narayan, 2022; Steinberg and Stone, 2022; Ryu et al., 2023). Furthermore, the eccentricity of material sourced from the stream is difficult to dissipate which, in the majority of studies, leads to an eccentric accretion disk and spiral shocks. For instance, the most realistic smoothed particle hydrodynamics simulations to date found imperfect circularization as the final disk remains mildly eccentric with \(e\approx 0.3\) (Bonnerot and Lu, 2020; Bonnerot et al., 2021). A long duration simulation (\(2t_{\rm fb}\)) by Ryu et al. (2023) demonstrated that shocks may dominate the energy budget of the TDE and the disk may remain highly eccentric with \(e\sim 0.5-0.6\). However, recent RHD simulations with adaptive mesh refinement find that the inner disk was able to reach \(e<0.2\) after more than 30 days (Steinberg and Stone, 2022), which is substantially longer than disk formation simulations with similar parameters. It is worth noting that GRMHD and GRRMHD simulations were unable to reach the magnetic flux that is required for the MAD state due to the weak magnetic flux provided by the stream as well as the chaotic disk formation (Sadowski et al., 2016; Curd, 2021). As there are no current simulations of eccentric MADs nor TDE disk formation simulations which result in a MAD, it is unclear how MADs in TDEs may differ from the standard thick accretion disk. The primary question we address in this work is whether or not TDE disks can maintain the magnetic flux required for the MAD state. Although Kelley et al. (2014) demonstrated that the stream can trap some magnetic flux, how much magnetic flux ultimately threads the BH could not be determined from their local simulations. Global simulations are needed in order to observe field lines advecting onto the BH horizon. Furthermore, the self-intersection outflow is quasi-spherical, so the force that it applies to the inner disk and jet is not symmetric (e.g. Jiang et al., 2016). This suggests that the jet, during strong self-intersection, will experience an asymmetric lateral force about the jet axis. One might expect strong perturbation of the jet, and potentially the disk due to its interaction with the jet. In this work, we investigate MAD or strongly magnetized TDE disks in GRMHD using a novel approach to overcome the computational difficulties in simulating the large and small scale structures, as well as long time scales, required to study TDE disks in a global simulation. We assume a BH mass of \(10^{6}M_{\odot}\) and stellar mass of \(1M_{\odot}\) in each simulation. We also study the effects of spin and use \(a_{\star}=0\) and \(a_{\star}=0.9\). We skip the initial disk formation process and assume it resulted in the existence of a circularized, small scale MAD disk, which we use as the initial conditions for each simulation. We then inject a magnetized stream with a fall back rate appropriate for a given time in the TDE evolution. We set the pericenter radius of the stream such that the self-intersection radius is on the order of \(50r_{g}\), where \(r_{g}\) is the gravitational radius (defined in Section 4). 
Since GRMHD simulations are scale free, the most important parameter in our simulations is the ratio between the density of the pre-existing disk and injected stream (or the density contrast, which we define in Section 2). We evolve each simulation for \(\sim 0.87-4\) days and study the disk and jet properties during the interaction between the disk and stream. This paper is organized as follows. In Section 2, we discuss how the density contrast evolves in a simplified model of the TDE stream and accretion disk and illustrate potential consequences on the dynamics. In Section 3, we describe the numerical methods used to perform the GRMHD simulations. In Section 4, we define calculations used to analyze the simulations. In Section 5, we discuss core results and provide visualizations of each simulation. We discuss how our results can describe jetted TDEs in Section 6 and we conclude in Section 7. ## 2 Density contrast in TDEs Following Stone et al. (2013), we define the fallback time as \[t_{\rm fb}=3.5\times 10^{6}{\rm sec}\ m_{6}^{1/2}m_{\star}^{-1}r_{\star}^{3/2}. \tag{2}\] Following the rise to peak, the mass fallback rate follows a power law \[\dot{M}_{\rm fb}\sim\dot{M}_{\rm peak}\left(\frac{t}{t_{\rm fb}}\right)^{-5/3}, \tag{3}\] where \[\frac{\dot{M}_{\rm peak}}{\dot{M}_{\rm Edd}}\sim 133m_{6}^{-3/2}m_{*}^{2}r_{*}^{-3 /2} \tag{4}\] is the peak mass fallback rate in units of the Eddington mass accretion rate (defined later in Equation 16). Note we set \(\eta=0.1\), \(k=1\), and \(n=0\) in each of the expressions for simplicity such that there is no dependence on \(\beta\). The simulations presented in this work demonstrate that the density contrast, \[f_{\rho}(t,r)=\frac{\rho_{d}(t,r)}{\rho_{s}(t,r)}, \tag{5}\] leads to different dynamics in a TDE, where \(\rho_{d}\) is the mass density of the pre-existing disk and \(\rho_{s}\) that of the injected stream. Namely, the self-intersection outflow can be diminished if the stream's orbit is changed during its interaction with the disk. At the start of the TDE evolution, \(f_{\rho}<1\). Even in the simulation presented by Steinberg and Stone (2022), the circularized disk clearly remains less dense than the stream by roughly an order of magnitude. Depending on how the disk mass, scale, and geometry evolve, the quantity \(f_{\rho}\) could conceivably exceed unity at late times. Here we discuss how evolution of \(f_{\rho}\) could be relevant in TDEs. To describe the stream, we assume that its density is related to the fallback rate by the expression \[\rho_{s}(t,r)=\frac{\dot{M}_{\rm fb}(t)}{\pi H_{s}(r)^{2}v_{s}(r)}, \tag{6}\] where \(H_{s}\) is the stream height and \(v_{s}\approx\sqrt{2GM_{\rm BH}/r}\) is the free-fall velocity, which is roughly the speed of the incoming stream. For simplicity, we assume the stream height takes the form \(H_{s}=(r/R_{p})R_{*}/R_{p}\). To approximate the evolution of the disk, we assume that \(t\geq t_{\rm fb}\) such that the initial disk mass is \(M_{d}(t=t_{\rm fb})=0.1M_{*}\). We then approximate the disk mass by accounting for mass accreted by the BH over time \[\dot{M}_{d}(t)=\dot{M}_{\rm fb}(t)-\dot{M}_{\rm BH}(t). \tag{7}\] Here we assume \(\dot{M}_{\rm BH}=f_{\rm acc}\dot{M}_{\rm fb}\), and use a fiducial value of \(f_{\rm acc}=0.1\). This assumption is motivated by Curd (2021), which found a mass accretion rate of \(\sim 10\%\) of the fallback rate. This assumption may not hold for long term evolution as the disk mass builds up (e.g. Metzger, 2022). 
The disk mass then evolves as \[M_{d}(t)=M_{d,0}+(1-f_{\rm acc})\int_{t_{\rm fb}}^{t}\dot{M}_{\rm fb}(t)dt \tag{8}\] We assume that the gas density follows a power-law with radius of \(\rho_{d}(r,t)=\rho_{d,0}(t){(r/r_{H})}^{-1}\), where \(r_{H}\) is the horizon radius and \(\rho_{d,0}(t)\) is the maximum density of the disk at time \(t\). This profile is appropriate for a MAD disk (Chatterjee and Narayan, 2022), but is also similar to that of the TDE disk in Andalman et al. (2022). The density for a disk of outer radius \(R_{d}\) is obtained by \[\rho_{d}(t,r)=\frac{M_{d}(t)}{2\pi r(R_{d}^{2}-r_{H}^{2})} \tag{9}\] Here we assume a spherical distribution at all mass accretion rates. At low accretion rates, the disk may collapse into a disk geometry with scale-height \(h_{d}\) which may have radial dependence. We set \(\rho_{d}(r)=0\) for \(r<r_{H}\) and \(r>R_{d}\). Although we have performed simulations in which the accretion disk is geometrically thick, in part because we cannot sufficiently resolve small scale-height disks, our simulations do demonstrate the impact that the density contrast has on the stream dynamics. We believe that this effect should be similar in a thin system. Furthermore, the incoming stream is expected to be aligned with the disk since the disk tends to remain roughly aligned with the initial angular momentum of the star and does not precess (Andalman et al., 2022). We show an example of \(f_{\rho}\) over time using our assumed disk and stream evolution in Figure 1. In a scenario where a circularized accretion disk forms, there is not a cleared path for the stream to flow along towards pericenter. Instead, the circularized disk will exert ram pressure on the stream with an acceleration \(a_{\rm ram}\propto f_{\rho}\), effectively braking it. At low \(f_{\rho}\), the stream will be effectively unperturbed. However, as \(f_{\rho}\) approaches unity, the ram pressure may completely prevent the stream from reaching pericenter. Instead, the stream may mix with the disk as it rapidly dissipates orbital energy similar to Steinberg and Stone (2022). As we show in this work, the self intersection becomes weaker as \(f_{\rho}\) increases, which leads to dynamic changes in the disk and jet/corona. Such evolution could be responsible for state transitions and delayed outflows, which have occurred in several TDEs. Here we have ignored the possibility of disk collapse, but we discuss how this may change TDE evolution in the context of \(f_{\rho}\) in Section 6. We note that the evolution and size of the disk is a vital component of our asserted scenario. For instance, we have neglected the possibility of an extended envelope existing beyond \(R_{\rm circ}\) as in Metzger (2022). In addition, we assume that \(\dot{M}_{\rm BH}\) is proportional to \(\dot{M}_{\rm fb}\) at all times. While this is based on simulation results, global simulations have yet to cover the full range of TDE evolution. In models such as Metzger (2022), bound material within the disk will also drain into the BH after an accretion time. See Metzger (2022) for a description. Figure 1: Here we illustrate how \(f_{\rho}\) evolves in our simple TDE disk model with a BH mass \(m_{6}=1\) and stellar mass \(m_{*}=1\). Note that we set \(f_{\rm acc}=0.1\) and \(\beta=1\) for simplicity. We show the initial \(f_{\rho}\) for each \(a_{*}=0.9\) simulation in Table 1 based on \(\dot{M}_{\rm inj}\) (horizontal dashed lines). 
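To make the toy model above concrete, here is a minimal Python sketch (ours, not the paper's code) that evaluates Equations 2-9 and the resulting \(f_{\rho}\). The disk outer radius (taken as \(2R_{p}\)), the reading of the Equation 6 stream-height prescription as \(H_{s}=(r/R_{p})R_{*}\), and \(\eta=0.1\) in the Eddington rate are our assumptions; we use \(m_{6}=m_{*}=r_{*}=1\), \(\beta=1\), and \(f_{\rm acc}=0.1\) as in Figure 1.

```python
import numpy as np

# Minimal sketch (ours) of the Section 2 toy model for f_rho = rho_d / rho_s.
# Units: G = c = 1, lengths in r_g, times in t_g, masses in M_sun.
m6, m_star, r_star = 1.0, 1.0, 1.0
f_acc, beta = 0.1, 1.0

t_g_sec = 4.93 * m6                                          # seconds per t_g for 10^6 Msun
t_fb    = 3.5e6 * np.sqrt(m6) / m_star * r_star**1.5 / t_g_sec   # Eq. 2, in t_g
R_t     = 47.0 * m6**(-2/3) * m_star**(-1/3) * r_star            # Eq. 1, in r_g
R_p, r_H, R_d = R_t / beta, 2.0, 2.0 * R_t / beta            # R_d = 2 R_p is an assumption
R_star  = 0.47 * r_star / m6                                 # R_sun in r_g for m6 = 1

mdot_edd  = 3.4e-9 * m6**2                                   # Mdot_Edd in Msun per t_g (eta = 0.1)
mdot_peak = 133.0 * m6**-1.5 * m_star**2 * r_star**-1.5 * mdot_edd   # Eq. 4

def mdot_fb(t):                                   # Eq. 3, in Msun per t_g
    return mdot_peak * (t / t_fb)**(-5.0 / 3.0)

def rho_s(t, r):                                  # Eq. 6; H_s read as (r/R_p) R_* (our reading)
    H_s, v_s = (r / R_p) * R_star, np.sqrt(2.0 / r)
    return mdot_fb(t) / (np.pi * H_s**2 * v_s)

def M_d(t):                                       # Eqs. 7-8 with M_d,0 = 0.1 M_star
    ts = np.linspace(t_fb, t, 4000)
    return 0.1 * m_star + (1.0 - f_acc) * np.trapz(mdot_fb(ts), ts)

def rho_d(t, r):                                  # Eq. 9
    return M_d(t) / (2.0 * np.pi * r * (R_d**2 - r_H**2))

for t in (1.5 * t_fb, 3.0 * t_fb, 10.0 * t_fb):
    print(t / t_fb, rho_d(t, R_p) / rho_s(t, R_p))   # f_rho climbs as the fallback rate declines
```

Evaluated at \(r=R_{p}\), \(f_{\rho}\) climbs from well below unity shortly after peak toward order unity at late times, which is the qualitative behavior illustrated in Figure 1.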
As \(f_{\rho}\) increases, the stream will dissipate more of its orbital energy in its interaction with the disk. As we describe in Section 5, the self intersection shock weakens as a result. ## 3 Numerical methods We present a suite of 3D numerical simulations of MAD TDE disks carried out with the GRRMHD code, koral(Sadowski et al., 2013, 2014, 2015; Sadowski & Narayan, 2015). Using a mesh-based, finite-difference method in a stationary Kerr space-time, koral solves the conservation equations of GRMHD: \[(\rho u^{\mu})_{;\mu} =0, \tag{10}\] \[(T^{u}_{\ \nu})_{;\mu} =0, \tag{11}\] where \(\rho\) is the gas density in the comoving fluid frame, \(u^{\mu}\) are the components of the gas four-velocity as measured in the "lab frame", \(T^{\mu}_{\ \nu}\) is the MHD stress-energy tensor in the "lab frame": \[T^{\mu}_{\ \nu}=(\rho+u_{g}+p_{g}+b^{2})u^{\mu}u_{\nu}+(p_{g}+\frac{1}{2}b^{2}) \delta^{\mu}_{\ \nu}-b^{\mu}b_{\nu}. \tag{12}\] Here \(u_{g}\) and \(p_{g}=(\gamma_{g}-1)u_{g}\) are the internal energy and pressure of the gas in the comoving frame, and \(b^{\mu}\) is the magnetic field four-vector which is evolved following the ideal MHD induction equation (Gammie et al., 2003). We adopt \(\gamma=5/3\) in this work. The code can handle radiation as well, but we choose to study pure GRMHD in this work to lower computational costs. We evolve the fluid in modified Kerr-Schild coordinates with the inner radius of the simulation domain inside of the BH horizon. The radial grid cells are spaced logarithmically, and we choose inner and outer radial bounds \(R_{\rm min}<r_{H}\) (with 4 cells within the horizon) and \(R_{\rm max}=5\times 10^{4}\,r_{g}\). We also use a full \(2\pi\) in azimuth and set \(\varphi_{\rm min}=-\pi\) and \(\varphi_{\rm max}=\pi\). We choose outflow boundary conditions at both the inner and outer radial bounds, reflective boundary conditions at the top and bottom polar boundaries, and periodic boundary conditions in \(\varphi\). In each simulation, we employ a resolution \(N_{r}\times N_{\theta}\times N_{\varphi}=256\times 144\times 144\). Specifics of the grid are given in Appendix A. In order to study a strongly magnetized disk which resembles a TDE disk, we first initialize and evolve a MAD disk before introducing the TDE stream. Similar to the fossil disk scenario proposed by Tchekhovskoy et al. (2014) and Kelley et al. (2014), this setup relies on the pre-existing disk to obtain the poloidal field required by a MAD. Our setup differs in that we skip the rise to peak and the interaction between the stream and fossil disk. Instead, we assume that the TDE has already obtained magnetic flux from the fossil disk and formed a circularized MAD accreting at a super-Eddington rate. We then inject a TDE stream into the simulation domain as in Curd (2021), allow the stream and pre-existing MAD to interact, and study how the presence of a TDE stream changes the dynamics compared to a typical MAD system. We note that our methods are similar to that of Chan et al. (2019), but they study systems where the disk and stream are misaligned initially and the disk is geometrically thin. The BH mass is set to \(10^{6}M_{\odot}\), though this only sets the units since GRMHD is scale free. We start with a torus of gas in hydrostatic equilibrium threaded by a large-scale poloidal magnetic field and its angular momentum aligned with the BH spin axis (or \(z\)-axis). 
From the torus initial conditions, the magnetorotational instability naturally develops and drives accretion onto the BH, which ultimately drags in magnetic field which saturates at the MAD state. We perform two such initial simulations (one for each BH spin) and evolve this initial stage for \(15,000t_{g}\), which is long enough for the magnetic field to saturate. We give additional details of the initial torus and time evolution of our initial setup in Appendix B. The simulation state for each BH spin after the initial evolution before stream injection is shown in Figure 2. Figure 2: Initial simulation state for each BH spin. We show the gas density (colors), velocity (streamlines), and jet boundary (\(\sigma=1\), pink line). To inject the stream, we assume the stream resulted from the disruption of a \(1M_{\odot}\) star on a parabolic trajectory (eccentricity \(e=1\)) around a \(10^{6}M_{\odot}\) BH and follow the injection methodology described in Curd (2021) with a few modifications. We reproduce relevant expressions from Curd (2021) below for completeness. We describe the disruption in terms of the impact parameter, \(\beta\), which is defined as the ratio between the tidal radius and pericenter separation such that \(\beta\equiv R_{t}/R_{p}\). We choose \(\beta=4\) for BH spin \(a_{*}=0\) models and \(\beta=7\) for \(a_{*}=0.9\). This gives a self-intersection radius (ignoring interaction between the stream and disk) of \(\sim 50\,r_{g}\) for all models. We apply the 'frozen in' approximation to estimate the spread in binding energy (Stone et al., 2013): \[\Delta\epsilon\approx 4.3\times 10^{-4}\frac{m_{6}^{1/3}m_{*}^{2/3}}{r_{*}}c^{2}. \tag{13}\] We set the binding energy of the stream to that of the most bound component, \(\epsilon_{\rm inj}=\epsilon_{\rm mb}=\epsilon_{*}-\Delta\epsilon/2\). Here \(\epsilon_{*}\) is the initial orbital binding energy of the star, which is zero since we assume a parabolic orbit. We note that this is not accurate for late times in a TDE and \(\epsilon\) of incoming material will slowly approach zero, but we maintain this assumed binding energy for all simulations for simplicity. The orbit of the disrupted star is assumed to be aligned with the equatorial plane of the BH spin vector. For each simulation we fix \(\dot{M}_{\rm inj}\) (and correspondingly \(\rho_{\rm inj}\)) to be constant since the simulation time is much shorter than the fallback time. We set the gas temperature \(T_{\rm inj}=10^{5}\) K, gas pressure \(p_{\rm inj}=k_{B}T_{\rm inj}/\mu_{\rm gas}m_{p}\), and injection radius \(R_{\rm inj}=250\,r_{g}\). Footnote 1: Here \(k_{B}\) is the Boltzmann constant, \(m_{p}\) is the mass of a proton, and \(\mu_{\rm gas}\) is the mean molecular weight assuming Solar metallicity. Due to resolution limitations, we assume \((H/R)_{\rm inj}=0.05\), which subtends only 6 cells in \(\vartheta\) and 2 cells in \(\varphi\) in our grid. The angular momentum is fixed to the value corresponding to the pericenter radius of the TDE stream \(l=\sqrt{2R_{\rm p}}\), from which we obtain the \(\varphi\) velocity \(v^{\varphi}=l/R_{\rm inj}\). The total velocity is then set by \[v_{\rm inj}=\sqrt{\frac{2}{R_{\rm inj}}+2\epsilon_{\rm inj}}\;, \tag{14}\] from which we obtain the radial velocity, \(v^{r}=-\sqrt{(v_{\rm inj})^{2}-(v^{\varphi})^{2}}\). 
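The injection kinematics just described can be summarized in a short sketch (ours); the sign convention that bound material carries \(\epsilon_{\rm inj}<0\) in Equation 14 (as reconstructed above) is our reading of the text.

```python
import numpy as np

# Minimal sketch (ours) of the stream-injection kinematics of Section 3 for the
# a_* = 0.9 models (beta = 7), with G = c = 1 and lengths in r_g.
m6, m_star, r_star = 1.0, 1.0, 1.0
beta, R_inj = 7.0, 250.0

R_t   = 47.0 * m6**(-2/3) * m_star**(-1/3) * r_star     # Eq. 1
R_p   = R_t / beta                                       # pericenter radius
d_eps = 4.3e-4 * m6**(1/3) * m_star**(2/3) / r_star      # Eq. 13, in units of c^2
eps_inj = 0.0 - d_eps / 2.0          # most-bound material; eps_* = 0, eps_inj < 0 (our convention)

l     = np.sqrt(2.0 * R_p)           # specific angular momentum of the stream
v_phi = l / R_inj                    # azimuthal velocity at the injection radius
v_inj = np.sqrt(2.0 / R_inj + 2.0 * eps_inj)             # Eq. 14 (as reconstructed)
v_r   = -np.sqrt(v_inj**2 - v_phi**2)                    # inward radial velocity

print(R_p, v_phi, v_inj, v_r)
```

For \(\beta=7\) this gives roughly \(R_{p}\approx 6.7\,r_{g}\), \(v^{\varphi}\approx 0.015\), \(v_{\rm inj}\approx 0.087\), and \(v^{r}\approx-0.086\) in units of \(c\).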
We inject a weak toroidal magnetic field with the stream by setting \[B_{\rm inj}^{r}=\frac{p_{\rm inj}\beta_{\rm inj}}{\sqrt{g^{rr}}}\cos\left( \frac{|\vartheta-\pi/2|}{(H/R)_{\rm inj}}\pi\right), \tag{15}\] where \(\beta_{\rm inj}=10^{-3}\) is the ratio of magnetic to gas pressure in the injection cells. The other field components are set to \(B_{\rm inj}^{\vartheta}=B_{\rm inj}^{\varphi}=0\). ## 4 Definitions In this section, we discuss the units adopted throughout the text and provide brief descriptions of quantities used to study the KORAL simulation data. Throughout this work, we use gravitational units to describe physical parameters. For distance we use the gravitational radius \(r_{g}\equiv GM_{\rm BH}/c^{2}\) and for time we use the gravitational time \(t_{g}\equiv GM_{\rm BH}/c^{3}\), where \(M_{\rm BH}\) is the mass of the BH. Often, we set \(G=c=1\), so the above relations would be equivalent to \(r_{g}=t_{g}=M_{\rm BH}\). Occasionally, we restore \(G\) and \(c\) when we feel it helps to keep track of physical units. We adopt the following definition for the Eddington mass accretion rate: \[\dot{M}_{\rm Edd}=\frac{L_{\rm Edd}}{\eta_{\rm NT}c^{2}}, \tag{16}\] where \(L_{\rm Edd}=1.25\times 10^{38}\,(M_{\rm BH}/M_{\odot})\,{\rm erg\,s^{-1}}\) is the Eddington luminosity, \(\eta_{\rm NT}\) is the radiative efficiency of a thin disk around a BH with spin parameter \(a_{*}\) (which is often referred to as the Novikov-Thorne efficiency): \[\eta_{\rm NT}=1-\sqrt{1-\frac{2}{3r_{\rm ISCO}}}, \tag{17}\] and \(r_{\rm ISCO}=3+Z_{2}-\sqrt{(3-Z_{1})(3+Z_{1}+2Z_{2})}\) is the radius of the Innermost Stable Circular Orbit (ISCO, Novikov & Thorne, 1973) in the Kerr metric, where \(Z_{1}=1+(1-a_{*}^{2})^{1/3}\left((1+a_{*})^{1/3}+(1-a_{*})^{1/3}\right)\) and \(Z_{2}=\sqrt{3a_{*}^{2}+Z_{1}^{2}}\). For \(a_{*}=0\) and \(0.9\), the efficiency is \(\eta_{\rm NT}=5.72\%\) and \(15.58\%\), respectively. We compute the net mass inflow rate as \[\dot{M}(r)=-\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}\rho\,u^{r}d\vartheta d\varphi. \tag{18}\] The magnetic flux is computed as \[\Phi(r)=\frac{1}{2}\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}|B^{r}(r)|d\vartheta d\varphi, \tag{19}\] where \(B^{r}\) is the radial component of the magnetic field. The total energy flux (excluding the rest mass flux) is computed as \[L(r)=-\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}(T^{r}_{\,t}+\rho u^{r})d\vartheta d\varphi. \tag{20}\] We track the time evolution of the mass accretion rate, magnetic flux, and jet power through unitless quantities evaluated at the BH horizon. We track the accretion of mass onto the BH in each simulation in Eddington units \[\dot{m}=\frac{\dot{M}(r_{H})}{\dot{M}_{\rm Edd}}. \tag{21}\] We quantify the magnetic field strength at the BH horizon through the normalized magnetic flux parameter (Tchekhovskoy et al., 2011) \[\phi=\frac{\Phi(r_{H})}{\sqrt{\dot{M}(r_{H})}}. \tag{22}\] For geometrically thick disks the MAD state is achieved once \(\phi_{\rm BH}\sim 40-50\) (see e.g. Tchekhovskoy et al., 2011, 2012). Since the majority of the escaping energy leaves the system through the jet in MAD disks, we quantify the jet power via the total efficiency at the BH horizon \[\eta=\frac{L(r_{H})}{\dot{M}(r_{H})}. \tag{23}\] 
To determine the driving factor for angular momentum transport, we measure the effective viscosity \[\alpha_{\rm eff}=\frac{u^{r}u^{\varphi}}{c_{s}^{2}}, \tag{24}\] Reynolds viscosity \[\alpha_{\rm Rey}=\frac{\widehat{T}_{\rm Rey}^{\hat{r}\hat{\varphi}}}{p_{b}+p_{g}}, \tag{25}\] and Maxwell viscosity \[\alpha_{\rm Max}=\frac{\widehat{T}_{\rm Max}^{\hat{r}\hat{\varphi}}}{p_{b}+p_{g}}. \tag{26}\] Here \(\widehat{T}^{\hat{r}\hat{\varphi}}\) is the average orthonormal \(r,\,\varphi\) component of the stress-energy tensor measured in the fluid frame, \(c_{s}\) is the sound speed, and \(p_{b}=b^{2}/2\) is the magnetic pressure. Note that we have taken advantage of the fact that the stress-energy tensor can be broken into gas (Reynolds) and magnetic (Maxwell) components. That is, we write Equation 12 strictly in terms of either the gas or the magnetic components. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & \(a_{*}\) & \(\beta\) & \(\dot{M}_{\rm inj}\) & \(f_{\rho 0}\) & \(t_{\rm start}\) & \(t_{\rm end}\) \\ & & & (\(\dot{M}_{\rm Edd}\)) & & (\(10^{4}t_{g}\)) & (\(10^{4}t_{g}\)) \\ \hline m00f0.3b4 & 0 & 4 & 1 & 0.3 & 0 & 2 \\ m00f0.003b4 & 0 & 4 & 100 & 0.003 & 0 & 2 \\ m09f1b7A & 0.9 & 7 & 1 & 1 & 0 & 2 \\ m09f0.1b7A & 0.9 & 7 & 10 & 0.1 & 0 & 3.5 \\ m09f0.01b7 & 0.9 & 7 & 100 & 0.01 & 0 & 3.5 \\ m09f1b7B & 0.9 & 7 & 1 & 1 & 2 & 3.5 \\ m09f0.1b7B & 0.9 & 7 & 10 & 0.1 & 2 & 7 \\ \hline \hline \end{tabular} \end{table} Table 1: Here we describe the relevant parameters of each model presented in this work. Models m09f1b7B and m09f0.1b7B are restarts of m09f0.01b7 from 20,000 \(t_{g}\) with the injection rate lowered to increase the initial density contrast \(f_{\rho 0}\) to study how an evolved system changes once self-intersection is weakened. We compute the eccentricity at each grid point via \[e=\sqrt{1+2\epsilon l^{2}}, \tag{27}\] where \(\epsilon=-(u_{t}+1)\) is the binding energy and \(l=u_{\varphi}\) is the angular momentum. To quantify the orientation of the disk and jet (or corona/funnel), we first use the magnetization to divide the fluid into 'disk' (\(\sigma<1\)) and 'jet' (\(\sigma\geq 1\)). In simulations where there is no spin, this is not a true jet since there is no mechanism to accelerate the gas to relativistic speeds. Nevertheless, this region is likely to be low optical depth and represents where X-rays are likely to escape. Note that we transform quantities from spherical polar to Cartesian coordinates \(x^{i}=(x,y,z)\) to describe the position and angular momentum of the fluid in the following paragraphs. The angular momentum of the BH is aligned with the \(z\)-axis, so \[J^{i}_{\rm BH}=(0,0,a_{\rm BH}M). \tag{28}\] Since this term cancels when computing the tilt and precession and is meaningless for a Schwarzschild BH, we only show it here for completeness. We derive the angular momentum of each cell in the disk using the stress energy tensor transformed to Cartesian coordinates \[S^{i}=[i\,j\,k]x^{j}T^{0k}_{\rm Cart}, \tag{29}\] where the brackets denote the antisymmetric Levi-Civita tensor. We then find the shell integrated, density weighted angular momentum components \[J^{i}=\frac{\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}\,w_{\rm disk}(\sigma)\rho \,S^{i}d\vartheta d\varphi}{\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{-g}\,w_{\rm disk }(\sigma)\rho d\vartheta d\varphi}. \tag{30}\] In the above expression, the term \[w_{\rm disk}(\sigma)=\begin{cases}1,&\quad\sigma<1\\ 0,&\quad\sigma\geq 1\end{cases} \tag{31}\] is used to only include the disk in integration. 
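Before moving to the tilt diagnostics, the ISCO radii and Novikov-Thorne efficiencies quoted below Equation 17 can be checked with a few lines; a minimal sketch (ours, not part of koral):

```python
# Minimal sketch (ours): prograde ISCO radius and Novikov-Thorne efficiency (Eq. 17).
from math import sqrt

def r_isco(a):
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = sqrt(3 * a**2 + z1**2)
    return 3 + z2 - sqrt((3 - z1) * (3 + z1 + 2 * z2))

def eta_nt(a):
    return 1 - sqrt(1 - 2 / (3 * r_isco(a)))

for a in (0.0, 0.9):
    print(a, r_isco(a), eta_nt(a))
```

This returns \(r_{\rm ISCO}\simeq 6\,r_{g}\), \(\eta_{\rm NT}\simeq 5.7\%\) for \(a_{*}=0\) and \(r_{\rm ISCO}\simeq 2.32\,r_{g}\), \(\eta_{\rm NT}\simeq 15.6\%\) for \(a_{*}=0.9\), matching the values used in Section 4.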
We then define the tilt angle relative to the BH spin (or z-axis in the zero spin case) as a function of radius \[\mathcal{T}_{\rm disk}(r)=\arccos\Bigg{[}\frac{J^{z}}{\sqrt{(J^{x})^{2}+(J^{y})^{2}+(J^{z})^{2}}}\Bigg{]}. \tag{32}\] We also obtain the precession angle relative to the \(y\)-axis \[\mathcal{P}_{\rm disk}(r)=\arccos\Bigg{[}\frac{J^{y}}{\sqrt{(J^{x})^{2}+(J^{y})^{2}}}\Bigg{]}. \tag{33}\] In aligned systems, the precession angle is not a useful quantity, but once tilt sets in it can show whether the disk and jet precess together. For the jet, we derive a position-based angle. We start by finding the \(\sigma\) weighted mean position for the top and bottom jet at each radius \[x^{i}_{\rm jet,top}=\frac{\int_{0}^{2\pi}\int_{0}^{\pi/2}\sqrt{-g}\,w_{\rm jet }(\sigma)\sigma\ x^{i}d\vartheta d\varphi}{\int_{0}^{2\pi}\int_{0}^{\pi/2} \sqrt{-g}\,w_{\rm jet}(\sigma)\sigma d\vartheta d\varphi}, \tag{34}\] \[x^{i}_{\rm jet,bot}=\frac{\int_{0}^{2\pi}\int_{\pi/2}^{\pi}\sqrt{-g}\,w_{\rm jet }(\sigma)\sigma\ x^{i}d\vartheta d\varphi}{\int_{0}^{2\pi}\int_{\pi/2}^{\pi} \sqrt{-g}\,w_{\rm jet}(\sigma)\sigma d\vartheta d\varphi}. \tag{35}\] In both expressions, the term \[w_{\rm jet}(\sigma)=\begin{cases}0,&\quad\sigma<1\\ 1,&\quad\sigma\geq 1\end{cases} \tag{36}\] is used to explicitly exclude the disk from calculations. We then calculate a tilt and precession angle based on the mean position. For example, the top jet's tilt and precession are calculated as \[\mathcal{T}_{\rm jet,top}(r)=\arccos\Bigg{[}\frac{z_{\rm jet,top}}{\sqrt{(x_{\rm jet,top})^{2}+(y_{\rm jet,top})^{2}+(z_{\rm jet,top})^{2}}}\Bigg{]}, \tag{37}\] and \[\mathcal{P}_{\rm jet,top}(r)=\arccos\Bigg{[}\frac{y_{\rm jet,top}}{\sqrt{(x_{\rm jet,top})^{2}+(y_{\rm jet,top})^{2}}}\Bigg{]}. \tag{38}\] The same expressions are used for the bottom jet except with the mean coordinates \(x^{i}_{\rm jet,bot}\). For both the disk and jet, we report the average tilt and precession angles over \(10\leq r/r_{g}\leq 100\). We quantify the jet opening angle by computing the solid angle it subtends in a flat spacetime: \[\Omega_{\rm jet,top}(r)=\int_{0}^{2\pi}\int_{0}^{\pi/2}\,w_{\rm jet}(\sigma) \sin(\vartheta)\cos(\vartheta)d\vartheta d\varphi \tag{39}\] \[\Omega_{\rm jet,bot}(r)=-\int_{0}^{2\pi}\int_{\pi/2}^{\pi}\,w_{\rm jet}( \sigma)\sin(\vartheta)\cos(\vartheta)d\vartheta d\varphi. \tag{40}\] Note the minus sign in Equation 40 is to account for the negative introduced by \(\cos(\vartheta)\). We compute an average solid angle \[\Delta\Omega(r)=\frac{\Omega_{\rm jet,top}(r)+\Omega_{\rm jet,bot}(r)}{2}. \tag{41}\] We relate this to the mean jet width under the assumption of a conical geometry \[\mathcal{W}(r)=r\sin\biggl{(}\arccos[1-\Delta\Omega(r)/2\pi]\biggr{)}. \tag{42}\] ## 5 Results ### Stream/Disk Dynamics We show the large scale structure of models with \(f_{\rho}=0.01,0.1,1\) and \(a_{*}=0.9\) in Figure 3 (m09f1b7A, m09f0.1b7A, m09f0.01b7). When \(f_{\rho}=0.01\), the ram pressure from the disk is negligible, and the system evolves much like disk formation simulations initialized with no initial disk (Sadowski et al., 2016; Curd, 2021). The stream dissipates a negligible amount of orbital energy on its way to pericenter, where it goes through a nozzle shock due to vertical compression and self-intersects at roughly the self-intersection radius (See bottom left panel in Figure 3 and bottom right panel in Figure 4). Similar to Curd (2021), the nozzle shock is poorly resolved, so we do not discuss it throughout this work. 
Bound and unbound gas is produced by the self-intersection shock, some of which falls in and makes an accretion disk while the rest flows out and interacts with the jet and outer medium. The material which forms the accretion disk maintains a high eccentricity (See bottom right panel in Figure 5). Despite the low magnetic field strength injected with the stream, the forming disk maintains a strong magnetic field due to the pre-existing field being anchored to smaller radii by inflowing material (See bottom right panel in Figure 3). Similar to the magnetized disk formation simulations in Curd (2021), the magnetic field in material which has gone through the self-intersection shock becomes highly disordered and turbulent. However, as we discuss later, the poloidal magnetic flux inside the self-intersection radius remains trapped and the field in the inner accretion disk remains ordered. The outflowing part is launched with velocity \(\sim 0.1c\) and produces an asymmetrical ram pressure on the jet since it is quasi-spherical. This results in a force in the \(-x\) direction. We describe how this affects the disk and jet evolution in Section 5.4.1. With \(f_{\rho}=0.1\), we observe significant slowing of the stream on its way to pericenter, but it is not completely stopped by the disk (See middle left panel in Figure 3 and bottom left panel in Figure 4). As a consequence, the pericenter radius is shifted significantly outward and the self-intersection has far less kinetic energy available for dissipation. No quasi-spherical outflow is produced as a result. This may be due to the shock weakening due to poorer resolution at larger radii. However, this result is not unreasonable since the energy and velocity of the self-intersection outflow is expected to rapidly drop off with increasing radius since the stream self-intersects at roughly the orbital velocity. We again find a highly eccentric accretion disk forms, but we note a slight decrease in eccentricity compared with the \(f_{\rho}=0.01\) model due to the dissipation of orbital energy as the stream interacts with the disk (See bottom left panel in Figure 5). Since there is no self-intersection outflow, the magnetic field in the outer accretion disk is less turbulent. We again find anchoring of poloidal magnetic field to the BH by the inflowing material. With \(f_{\rho}=1\), the ram pressure exerted on the stream by the disk is large enough to halt the stream before it reaches pericenter. Instead, the stream is observed to mix with the accretion disk (See top panel in Figure 3). This can clearly be seen in the velocity which closely resembles the initial MAD disk (See top panels in Figure 4). Interestingly, the stream does add eccentricity to the disk as the inflowing material reaches \(e>0.7\). The field structure closely resembles a standard MAD accretion disk (e.g. bottom panel in Figure 2) since the stream has little effect on the disk. The dynamics for a given \(f_{\rho}\) are similar in the \(a_{*}=0\) models. Videos of each simulation can be seen in our YouTube playlist. ### TDE Disks Maintain Magnetic Flux and Jets We show the accretion rate, normalized magnetic flux, and efficiency at the BH horizon in Figure 6. In all models save m09f0.01b7, the accretion rate drops from about 10 to 1 Eddington. This is due to a drop in density around the BH as the disk spreads viscously and mass is consumed by the BH. 
Surprisingly, there is little difference in accretion history as we vary \(f_{\rho}\) except in m09f0.01b7 which shows elevated accretion once the disk tilts, an effect we describe in the next section. In all models, a MAD or semi-MAD state is maintained. Despite the high eccentricity, magnetic field is successfully contained and does not rapidly diffuse from the horizon. This is a genuinely new result and is a bit of a surprise since Curd (2021) found negligible poloidal flux accumulation when the field comes from the stream even with a favorable field configuration. Our results indicate that once poloidal flux reaches the BH, regardless of how it was obtained (i.e. fossil disk or a dynamo effect), the chaotic and eccentric disk can anchor it to the BH. We note that while m09f0.01b7 showed a decrease in normalized magnetic flux, the total magnetic flux given by Equation 19 remains roughly the same. The decrease in normalized magnetic flux is due to additional accretion driven by strong shocks once the tilt sets in. See discussion in Section 5.4.1. We treat the efficiency as measured at the horizon as a proxy for the outgoing jet power. In all models with \(a_{*}=0.9\) we find \(\eta\approx 100-400\%\) while the magnetic flux remains MAD (\(\phi\gtrsim 50\)). Ultimately, the jet power at larger radii may decrease especially in cases where the self-intersection outflow is strong, and the jet may interact with the disk and outflow. In addition, instabilities in the jet-disk interface can lead to additional dissipation of jet power (Chatterjee et al., 2019). For models with spin \(a_{*}=0\), the efficiency remains much lower at \(\sim 2-6\%\) since there is no jet. ### Magnetic Stresses are Subdominant To quantify the contribution to angular momentum transport from hydrodynamic and magnetic processes, we compute a radius-weighted average of \(\alpha_{\rm eff},\alpha_{\rm Rey}\), and \(\alpha_{\rm Max}\) in the disk (\(\sigma<1\)) from \(r_{H}<r<100r_{g}\) at \(t=t_{\rm end}\). We employ radius-weighting instead of density-weighting to incorporate part of the outer disk where shocks are present into the calculation. Footnote 2: We have verified that the viscosity behaves the same across time and the qualitative properties shown in Figure 7 are not affected by our choice of time to perform the measurement. We show the average viscosity in Figure 7 as a function of \(f_{\rho}\). We find that the effective and Reynolds viscosity both decline as a function of \(f_{\rho}\). Meanwhile, the Maxwell viscosity is similar across all values of \(f_{\rho}\) with \(\alpha_{\rm Max}\lesssim 10^{-3}\). At all values of \(f_{\rho}\), the effective viscosity and the Reynolds viscosity are larger than the Maxwell viscosity. At \(f_{\rho}\lesssim 0.01\), the effective viscosity is more than an order of magnitude larger than the Reynolds viscosity which suggests shocks dominate transport at this stage of a TDE. We observe that at \(f_{\rho}\gtrsim 0.1\), the effective and Reynolds viscosity are of roughly the same magnitude which suggests a transition to turbulent transport. Our findings at \(f_{\rho}\lesssim 0.1\) are similar to Sadowski et al. (2016) who found that even after a disk formed, the Maxwell viscosity remained subdominant by at least an order of magnitude. At \(f_{\rho}\gtrsim 1\), the viscosity resembles some of the MAD disks in McKinney et al. (2012) which also showed a larger Reynolds viscosity than Maxwell viscosity in spite of the powerful poloidal magnetic fields. 
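For concreteness, the masked averaging used above can be sketched as follows (our illustration; the array names and shapes are placeholders for gridded output, and we read "radius-weighted" as an unweighted average over radial shells):

```python
import numpy as np

# Minimal sketch (ours) of the Eq. 24-26 viscosity proxies and the Section 5.3 average.
# Inputs are arrays of shape (Nr, Ntheta, Nphi); T_rey and T_max stand for the
# fluid-frame orthonormal r-phi stress components.
def average_alphas(r, sigma, u_r, u_phi, c_s, T_rey, T_max, p_gas, p_mag, r_H):
    disk = sigma < 1.0                                      # Eq. 31-style disk mask
    a_eff = np.where(disk, u_r * u_phi / c_s**2, np.nan)    # Eq. 24
    a_rey = np.where(disk, T_rey / (p_mag + p_gas), np.nan) # Eq. 25
    a_max = np.where(disk, T_max / (p_mag + p_gas), np.nan) # Eq. 26
    shell = (r[:, 0, 0] > r_H) & (r[:, 0, 0] < 100.0)       # radial selection, in r_g
    out = []
    for a in (a_eff, a_rey, a_max):
        profile = np.nanmean(a, axis=(1, 2))                # average over each shell
        out.append(np.nanmean(profile[shell]))              # then average the shells
    return out
```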
Figure 4: Here we show the velocity (colors) and velocity field vector (stream lines) for an equatorial slice of each of the \(a_{*}=0.9\) models for \(f_{\rho}=0.01\) (bottom right), \(0.1\) (bottom left), \(1\) (top right). We also show the velocity field for the initial conditions on the top left for comparison. Each panel shows a \(120r_{g}\times 120r_{g}\) region centered around the BH. See Section 5.1 for a description of the figures. Figure 5: Here we show the eccentricity (colors) for an equatorial slice of each of the \(a_{*}=0.9\) models for \(f_{\rho}=0.01\) (bottom right), \(0.1\) (bottom left), \(1\) (top right). We also show the eccentricity for the initial conditions on the top left for comparison. Each figure spans a region similar to Figure 4. See Section 5.1 for a description of the figures. Figure 3: Here we show the gas density (colors, left panels), velocity field (stream lines, left panels), magnetic field strength (colors, right panels), and magnetic field (stream lines, right panels) for an equatorial slice of each of the \(a_{*}=0.9\) models for \(f_{\rho}=0.01\) (bottom row), \(0.1\) (middle row), \(1\) (top row). Each figure spans a region of \(480r_{g}\times 480r_{g}\) centered around the BH. We describe the figure in Section 5.1. ### Disk and Jet Tilt Evolution #### 5.4.1 Low Density Contrast Jetted Model: \(f_{\rho}=0.01\) At the onset of stream injection, since the stream is substantially denser than the pre-existing MAD disk with \(f_{\rho}=0.01\), the stream is largely unperturbed by the disk material on its path towards pericenter. Subsequently, the stream precesses and violently self-intersects at the self-intersection radius. Between \(t=0-0.7\times 10^{4}t_{g}\), the self-intersection outflow begins to tilt the jet and we measure tilt angles for both the top and bottom jet of \(\sim 10^{\circ}-20^{\circ}\). During this initial stage, the disk remains aligned with the BH spin. Between \(t=0.7-1.2\times 10^{4}t_{g}\), the disk tilt begins to increase until it roughly equals the tilt angle of the top and bottom jets. During this stage, the precession angle oscillates wildly, in part due to the initial tilt angle of zero. For \(t>1.2\times 10^{4}t_{g}\), the tilt of the top jet and disk continue to grow until \(\mathcal{T}_{\rm jet,top}\sim 30^{\circ}\) and \(\mathcal{T}_{\rm disk}\sim 23^{\circ}\). In a typical tilted MAD disk system, the jet acts to align the inner accretion disk with the BH spin. However, once the disk tilt begins to grow in m09f0.01b7, it is unable to realign with the BH spin due to already tilted disk material adding angular momentum at the self-intersection radius. This sets up a tilted system which is shown to be stable for at least the duration of the simulation. Interestingly, the jet precession angle does not show strong variability after the disk tilts. Instead, the top and bottom jets show nearly constant precession angles that are roughly \(180^{\circ}\) out of phase at \(t>2.3\times 10^{4}t_{g}\). Figure 8: Volume renderings of a \(200r_{g}\times 200r_{g}\) region of model m09f0.01b7 showing the stream/disk (red), outer disk/outflow (blue), and jet (yellow) viewed edge on (top panel) and viewed down the jet axis (bottom panel). We show times \(t=0t_{g}\) (left), \(t=10,000t_{g}\) (middle), \(t=35,000t_{g}\) (right). The outflow pushes on the jet laterally and begins to tilt the jet. This ultimately leads to a tilted disk and jet in the final snapshot. 
Figure 6: We show the mass accretion rate (top row), normalized magnetic flux at the BH horizon (middle row), and efficiency (bottom row) for each of the \(a_{\star}=0\) (left column) and \(a_{\star}=0.9\) (right column) models. Each model shows an initial decrease in the mass accretion rate as the injected stream interacts with the disk. As we discuss in Section 5.2, this is due to the density in the disk decreasing as a result of viscous spreading and mass accretion. In each model we find \(\phi>20\), which confirms that TDE disks can maintain a strong poloidal field. For the models where no tilt instability sets in, a MAD flux of \(\phi>50\) is maintained and a powerful jet with \(\eta\approx 100-400\%\) is launched when \(a_{\star}=0.9\). As expected, no jet is launched when \(a_{\star}=0\) and we find similar \(\eta\) for both of the \(a_{\star}=0\) models. Figure 7: Here we show the radius-weighted viscosity as computed in Section 5.3 as a function of \(f_{\rho}\). We indicate \(a_{\star}=0\) models as squares and \(a_{\star}=0.9\) models as circles. Volume renderings of the evolution are shown in Figure 8. Equatorial and poloidal slices as well as the full time evolution of the tilt and precession angles are shown in Figure 9. #### 5.4.2 Medium Density Contrast Jetted Model: \(f_{\rho}=0.1\) Since this model has an intermediate density contrast, the stream is still able to flow towards the BH. However, it is significantly perturbed and the pericenter radius is shifted slightly outward, which also increases the self-intersection radius. This leads to a substantially weakened self-intersection and self-intersection outflow. As a result, the jet is only slightly perturbed by the outflow and we find that the jet remains stable with \(\mathcal{T}\lesssim 10^{\circ}\) and the disk remains aligned with the BH spin throughout the entire evolution. The precession angle is not meaningful here due to the near perfect alignment. See Figure 10 for visualizations and the time evolution. #### 5.4.3 High Density Contrast Jetted Model: \(f_{\rho}=1\) In this model, the density contrast is large enough that the stream experiences extreme ram pressure from the accretion disk and is halted at \(r\sim 50-100r_{g}\). The stream material never reaches pericenter and instead mixes in with the pre-existing disk. Consequently, the system resembles a standard MAD ADAF and neither the jet nor disk show large changes in their tilt. Again, the precession angle is not meaningful here due to the near perfect alignment. This evolution is depicted in Figure 11. #### 5.4.4 Restarts of m09f0.01b7 with Higher Density Contrast For model m09f1b7B, we perform a restart of m09f0.01b7 at \(t=2\times 10^{4}t_{g}\) with \(f_{\rho}\) instantaneously increased from 0.01 to 1. Figure 10: _Top row:_ Here we show the same quantities as the top three rows in Figure 9, but for model m09f0.1b7A. As we discuss in Section 5.4.2, the stream loses orbital energy on its path to pericenter and the self-intersection outflow is significantly weakened, which leads to a weaker perturbation on the jet. We note that the jet profile is less smooth than in the initial state (top panel in Figure 9) due to asymmetry in the disk structure induced by the interaction with the stream. _Bottom two rows:_ The weak perturbation on the jet leads to a non-zero tilt measurement. 
However, both the disk and jet maintain low tilts with \(\mathcal{T}<10^{\circ}\), which confirms that strong self-intersection is needed to induce strong interaction between the jet and disk. The top and bottom jet maintain precession angles which are roughly in-phase and oscillate over time, which is typical of spin-aligned MAD disks. Figure 9: _Top two rows:_ Gas density (colors), velocity (streamlines), and jet boundary (\(\sigma=1\), pink line) for m09f0.01b7. We show an equatorial slice (left) and vertical slice (right) spanning a region of \(120r_{g}\times 120r_{g}\) centered on the BH. Snapshots are shown during the initial self-intersection (\(t=10^{4}t_{g}\), first row), and at the end of the simulation after the tilt has set in (second row). _Bottom two rows:_ We show the tilt and precession angle for the disk and top/bottom jet over the evolution of the simulation. As the stream flows in, a quasi-spherical outflow begins to push on the jet and we see the jet tilt increase initially. At around \(t=0.6\times 10^{4}t_{g}\), the jet begins to perturb the disk and we observe a steady increase in the disk tilt until it roughly aligns with the jet, after which the tilt in both the disk and jet increases until they settle around a rough equilibrium state at \(t=2.5\times 10^{4}t_{g}\). Once the disk tilts, a feedback cycle begins due to self-intersection and magneto-spin alignment cannot realign the inner disk. The precession angle prior to the tilt setting in is not a meaningful quantity since the system is initially aligned with the BH spin. Once the system tilts, the disk and top jet share the same precession angle and we do not observe much variability in the precession. The bottom jet points in the opposite direction and is roughly \(180^{\circ}\) out of phase with the top jet. The self-intersection is rapidly halted due to the increased density contrast and the jet subsequently realigns with the BH spin. The tilt of the disk remains slightly elevated above the tilt of the jet. This is due to the density weighting applied in Equation 30, which gives larger weighting to higher density remnants of the tilted gas which is still in the simulation domain. However, as can be seen in Figure 12, the inner disk is able to realign with the BH spin by the end of the simulation. We expect that in a physical scenario the system will have time to adjust and the disk tilt should completely realign with the BH spin similar to the jet. For model m09f0.1b7B, we also perform a restart of model m09f0.01b7 at \(t=2\times 10^{4}t_{g}\), but with \(f_{\rho}\) instantaneously increased from 0.01 to 0.1. Similar to m09f0.1b7A, the stream is only perturbed from its orbit and the self-intersection still persists, but is weakened as a result. With weaker ram pressure acting on the jet, the jet and disk begin to realign with the BH spin. However, this process is much slower than in model m09f1b7B, and we find that the disk and jet tilt are highly variable before finally decreasing and settling at \(\mathcal{T}\sim 10^{\circ}\) by the end of the simulation (see Figure 13). The total run time of the simulation (see Table 1) corresponds to only roughly three days for a \(10^{6}M_{\odot}\) BH which suggests, assuming rapid transitions in the density contrast, that the tilt can evolve rapidly enough to explain features such as jet shut-off as we discuss later in this work. Figure 11: The same as Figure 10, but for model m09f1b7A. 
Since the stream is halted by the pre-existing disk, no self-intersection outflow occurs. Subsequently, the jet and disk are approximately aligned with the BH spin throughout the entire simulation. Interestingly, the added turbulence to the system during the interaction with the stream appears to perturb the jet boundary compared to the initial state. Note the precession angle of the disk is not a useful quantity since the disk is aligned with the BH. Figure 12: The same as Figure 10, but for model m09f1b7B which is a restart of m09f0.01b7 at \(t=2\times 10^{4}t_{g}\). \(f_{\rho}\) is instantaneously increased from 0.01 to 1 at the start of the simulation. Because the stream is halted by the disk due to the change in density contrast, the self-intersection ceases shortly after we start the simulation. Without the added perturbation from a self-intersection outflow, the jet realigns with the z-axis and magneto-spin alignment rapidly realigns the disk with the BH spin. Interestingly, the top and bottom jet remain approximately \(180^{\circ}\) out of phase even after self-intersection ceases. Figure 13: The same as Figure 10, but for model m09f0.1b7B which is a restart of m09f0.01b7 at \(t=2\times 10^{4}t_{g}\). \(f_{\rho}\) is instantaneously increased from 0.01 to 0.1 at the start of the simulation. Since the change in density contrast is milder than m09f1b7B, the stream manages to penetrate the disk, but loses a substantial amount of orbital energy similar to model m09f0.1b7A. As a result, the self-intersection outflow persists, but is much weaker. The tilt of the jet and disk slowly decreases over the course of the simulation until it was observed to reach a rough equilibrium of about \(10^{\circ}\). Although magneto-spin alignment is able to realign much of the inner system, filaments of tilted material linger in the disk which may contribute to the residual tilt in the system as well as the wild precession observed in the jet at late times. #### 5.4.5 Non-Jetted Models For the low density contrast model (m00f0.003b4), the initial evolution of the stream is similar to that of model m09f0.01b7. The self-intersection and self-intersection outflow result in a ram pressure which tilts the jet region. However, as there is no true jet since \(a_{*}=0\), the jet region that we measure may be thought of as a corona. As shown in Figure 14, the corona becomes substantially tilted with \(\mathcal{T}\sim 20^{\circ}-40^{\circ}\). The disk remains perfectly aligned with the BH spin throughout the entire evolution. This demonstrates that a powerful jet is responsible for the tilt instability that we observe in m09f0.01b7. For the higher density contrast model (m00f0.3b4), the stream is perturbed due to its interaction with the pre-existing disk, similar to m09f1b7A. However, we find that the disk tilt increases slightly over the course of the simulation (\(\mathcal{T}\lesssim 10^{\circ}\), see Figure 15). The corona attains a tilt of \(\mathcal{T}\sim 20^{\circ}\). This is due to asymmetry introduced to the system as the stream interacts with the disk. Since the magnetic field is strong and stream material cannot steadily feed aligned material to the inner disk, the tilted corona is capable of tilting the disk. Figure 16: Here we show snapshots of violent self-intersection events in model m09f0.01b7 (first and second row). Colors indicate gas density and stream lines indicate gas velocity. We also show the mass accretion rate (third row), magnetic flux threading the BH (fourth row), and jet efficiency (fifth row). 
Vertical gray lines correspond to the same times as the snapshots shown in the first and second rows, \(2,800\), \(4,300\), \(5,700\), \(7,400\), \(9,200\), and \(13,500\,t_{g}\), respectively. The violent self-intersections are accompanied by a drop in magnetic flux and jet power. We also note a small increase in mass accretion rate, which is less dramatic than the change in magnetic flux and jet efficiency. Figure 14: The same as Figure 10, but for model m00f0.003b4. Note we show the initial state in the top row and the final state of the simulation in the bottom row. Since there is no jet, the corona is observed to tilt by \(\mathcal{T}>20^{\circ}\) due to the self-intersection outflow. However, the disk tilt remains approximately aligned with the BH spin. This confirms that a jet is necessary to induce a tilt instability in MAD TDE disks. Figure 15: The same as Figure 10, but for model m00f0.3b4. Due to the higher density contrast, the stream loses orbital energy on its way to pericenter, and the self-intersection outflow is negligible. Surprisingly, we measure a nonzero tilt for the corona and disk. We believe this is due to asymmetry introduced to the system by the stream in the absence of magneto-spin alignment and a strong jet. Unlike model m09f1b7A, magneto-spin alignment does not counteract any induced tilt. Tilt induction in a MAD around a non-spinning BH was also demonstrated by Ressler et al. (2020) in the context of a stellar wind fed model of Sagittarius A\({}^{*}\), suggesting tilt induction may be common in MAD disks around non-spinning BHs that are fueled by asymmetrical inflows. ### Violent Self-Intersections and Variability For the first \(15,000\,t_{g}\) of model m09f0.01b7, we identify six complete stream disruptions at times \((2,800,\,4,300,\,5,700,\,7,400,\,9,200,\,13,500)t_{g}\) as shown in Figure 16. These correspond to a temporal separation of \((1500,\,1400,\,1700,\,1800,\,3300)t_{g}\). Assuming a Keplerian orbit, this corresponds to an orbital radius of \((38,\,37,\,42,\,43,\,65)r_{g}\). These are similar to the self-intersection radius of \(\sim 50r_{g}\), which is to be expected in the case of a feedback loop caused by angular momentum transfer during self-intersection (Sadowski et al., 2016; Curd, 2021). Here we find that not only does the mass accretion rate vary during these events, but the magnetic flux threading the BH drops from \(\phi_{\rm BH}\sim 60\) to \(\sim 40\). Since the disk is MAD and the BH is rapidly rotating, this will inevitably lead to flaring behaviour. Indeed, we see the total efficiency drop from \(\sim 100\%\) at the peaks to \(10-50\%\) at the minima. We discuss how our model can be applied to the variability in jetted TDEs like _Swift_ J1644+57 in Section 6. ### Jet Collimation We measure the mean jet width at \(r=10r_{g}\) (\(\mathcal{W}_{10}\)) and \(r=100r_{g}\) (\(\mathcal{W}_{100}\)) as a function of time following Equation 42 in Figure 17. The jet width shows oscillations as a function of time due to the highly variable magnetic flux. This is typical of a MAD disk, but here we are focused on the average behavior of the jet. For model m09f0.01b7, the self-intersection outflow causes substantial collimation. The velocity stream lines in the right middle panel of Figure 9 show high density material sometimes flowing completely in the \(-x\) direction which will provide substantial ram pressure on the jet. 
We see a decrease of roughly \(10r_{g}\) in the jet width measured at \(100r_{g}\) compared to the initial jet. For models m09f1b7A and m09f0.1b7A, the jet width at \(r=10r_{g}\) is similar to that of the initial jet prior to injection due to the weakening of the self-intersection outflow. However, we do observe slightly more collimation at \(r=100r_{g}\) compared to the initial jet, perhaps due to changes in the outflow properties when the stream interacts with the disk. For instance, the velocity stream lines in Figure 11 and Figure 10 show flows towards the jet axis, which are not present in the initial jet (see top panel of Figure 9). This may lead to more collimation in TDE jets as they propagate outwards compared to a standard MAD; however, we limit ourselves to measuring the jet profile for \(r\leq 100r_{g}\) due to poor angular resolution of the jet at larger radii. For model m09f1b7B, once the self-intersection ceases due to the increased \(f_{\rho}\), the jet width returns to near the initial value within \(\sim 5000t_{g}\). However, model m09f0.1b7B shows a much narrower jet when compared with model m09f0.1b7A. This is not due to the self-intersection outflow, but the magnetic flux dropping off towards the end of the simulation. We also time average the jet width from \(t_{\rm start}+5000t_{g}\) to \(t_{\rm end}\) (see bottom panel in Figure 17). We find similar jet profiles for all models with weak self-intersection outflows (m09f1b7A, m09f0.1b7A, m09f1b7B). Model m09f0.1b7B is similar to model m09f0.01b7, but again this is due to a decrease in magnetic flux and not a result of the self-intersection outflow. We compare our results with the jet profile for the \(a_{*}=0.9\) model from Narayan et al. (2022). We find that our initial conditions result in a slightly narrower jet, but the profile appears to be quite similar for the models with weak self-intersection. Figure 17: We show the mean jet width at \(r=10r_{g}\) (\(\mathcal{W}_{10}\), top panel) and \(r=100r_{g}\) (\(\mathcal{W}_{100}\), middle panel) as a function of time for each model. In the bottom panel we show the mean jet width as a function of \(z\) and time averaged over \(t_{\rm start}+5000t_{g}\) to \(t_{\rm end}\). We also show the jet profile for the \(a_{*}=0.9\) model from Narayan et al. (2022) (dashed black line). We describe the figures in Section 5.6. ### Gas Temperature We estimate the gas temperature in the disk by accounting for radiation under the assumption that the disk is optically thick. We split the temperature into gas and radiation by solving \[p_{g}=\frac{\rho kT}{\overline{m}}+\frac{1}{3}aT^{4}, \tag{43}\] where \(\overline{m}\) is the mass per particle and \(T\) is the temperature. The gas temperature in the \(\sigma>1\) region is uncertain due to both numerical floors and the use of entropy inversion when energy conserving inversion fails in highly magnetized zones in GRMHD. As a result, we mask the gas temperature in the jet/corona, but we generally expect it to be substantially hotter than the disk (Curd & Narayan, 2019). We show the gas temperature for each model at \(t=t_{\rm end}\) in Figure 18. In the accretion disk, since the gas and radiation pressure are split evenly, the gas temperature of the accretion disk reaches \(T\sim 10^{5-6}\) K, which approximately agrees with Curd & Narayan (2019). Nozzle and self-intersection shocks also contribute to heating the gas and drive the temperature up to \(\sim 10^{6}\) K at radii up to \(50-100r_{g}\). 
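Equation 43 is a quartic in \(T\) and is easily solved numerically; the following is a minimal sketch (ours, with illustrative cgs values and an assumed mean molecular weight \(\mu=0.6\)), not the koral implementation:

```python
from scipy.optimize import brentq

# Minimal sketch (ours): split the total pressure into gas and radiation parts by
# solving Eq. 43 for T.  Constants in cgs; mu = 0.6 is an assumption.
k_B, m_p, a_rad = 1.381e-16, 1.673e-24, 7.566e-15
mbar = 0.6 * m_p

def temperature(p_tot, rho):
    """Root of p_tot = rho k T / mbar + a T^4 / 3."""
    f = lambda T: rho * k_B * T / mbar + a_rad * T**4 / 3.0 - p_tot
    return brentq(f, 1.0, 1e12)

# Illustrative disk-zone values (not taken from the simulations)
rho, p_tot = 1e-11, 1e6               # g/cm^3, erg/cm^3
print(temperature(p_tot, rho))        # ~1e5 K, in the T ~ 10^{5-6} K range quoted above
```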
In models with a prominent jet, the gas temperature may exceed \(10^{6}\) K where \(\sigma>1\), which is in the range for X-ray photon production (Curd & Narayan, 2019). Since the jet is able to prevent polar inflows, the poles will remain optically thin even at the peak of the fallback rate, allowing jet emission to emerge. Comptonization within this region is expected to produce a hard spectrum which shines even in the \(\gamma\)-ray band. The non-jetted models on the other hand may have their X-ray emission largely absorbed if the photosphere is roughly spherical early on. Only after the funnel can form (or the photosphere recedes) can X-rays emerge.

Figure 18: Here we show the gas temperature (colors) and \(\sigma=1\) boundary (pink line) for each model at the final snapshot. We mask the gas temperature in regions where \(\sigma>1\) for numerical reasons.

## 6 Discussion

### Variability Driven by Violent Self-Intersection

_Swift_ J1644+57 showed variability on a range of timescales with both short period QPOs at \(\sim 200\)s (Reis et al., 2012) and long period dips in the light curve on time scales of \(\sim 10^{6}\)s (Saxton et al., 2012). The short period QPOs are thought to originate from short term variability on the horizon scale due to orbits or resonances in the inner accretion disk. The long period variability has been suggested to arise from wobbling of the jet (Tchekhovskoy et al., 2014), periodic violent stream self-intersection (Andalman et al., 2022), or magnetic flux eruption events (Curd & Narayan, 2023). Previous global simulations of forming TDE disks have identified complete disruptions of the incoming stream in cases where \(\beta=3-7\) (Curd, 2021; Andalman et al., 2022). The disruptions are temporally spaced by roughly the orbital period at the self-intersection radius. The fact that such a periodic dynamical effect took place was viewed as an attractive explanation for the variability in J1644+57. However, with no magnetic fields or radiative transfer calculations available, Andalman et al. (2022) hypothesized that this interaction could drive flaring through changes in the mass accretion rate at the horizon. As shown in Section 5.5, we directly relate the complete disruptions during self-intersection with jet variability. Since the total efficiency in Figure 16 correlates directly to jet power in MAD accretion disks, this can account for the large drops in the flux seen in J1644+57, which had minima as low as \(\lesssim 50\%\) of the maximum. This is solid confirmation of the idea proposed by Andalman et al. (2022); however, we suggest that flaring is not caused by changes in the mass accretion rate directly. Rather, it is the fact that the stream acts to keep magnetic flux anchored to the BH. The magnetic flux threading the BH is at the saturation value before the stream disrupts itself during self-intersection. When the feeding from incoming stream material is temporarily halted, magnetic flux eruptions shed flux until \(\phi_{\rm BH}\) settles to a lower value. The disk injection simulations presented in Curd & Narayan (2023) found that after flux eruption events the magnetic flux took roughly the orbital period at the injection radius to recover. This is dynamically similar to the effects seen in this work; however, here the period is directly related to the self-intersection radius rather than the gas injection radius.
Given the relationship between the variability period and the self-intersection radius, this suggests that X-ray variability can be related to the orbital properties of the disrupted star in a jetted TDE. For instance, assuming \(M_{\rm BH}=10^{6}M_{\odot}\) for J1644+57, the roughly \(10^{6}\) second variability corresponds to a self-intersection radius on the order of \(10^{3}r_{g}\). For an \(a_{*}=0\) BH, this corresponds to a \(\beta\sim 1.5\) TDE. The steady increase in the variability period may be due to an increase in the self-intersection radius as the disk builds up over time as illustrated by Ryu et al. (2023). We will explore the properties of magnetized TDE disks and magnetic flux saturation in more detail in a future report. ### Could Disk Collapse Still Be Dynamically Important in a MAD? Simulations of thin MADs presently negate the idea that TDEs will rapidly shed magnetic flux and resemble a standard thin disk as the accretion rate goes below Eddington. The powerful fields in a MAD provide support against runaway thermal collapse. This may only apply to the inner disk. Here we treat the thermal instability of the entire disk and examine how changes in \(f_{\rho}\) may lead to tilt evolution of the disk/jet without it becoming non-MAD. Since the mass fallback rate in TDEs evolves from super- to sub-Eddington, it is thought that the mass accretion rate in the disk will evolve similarly assuming \(\dot{M}\sim\dot{M}_{\rm fb}\). We can apply standard accretion theory to predict the geometry of the accretion disk over time. In an accretion disk, angular momentum transport is driven by viscosity and this drives accretion onto the BH but also heats the disk (See Abramowicz et al., 1988; Abramowicz & Fragile, 2013 for an introductory discussion). In order for the disk to remain stable, it cools through advection and radiation. In super-Eddington disks, the disk is optically thick and cooling is dominated by advective cooling, \(Q^{-}_{\rm adv}\). Dynamically, this means radiation within the disk is advected with the inflow and eventually crosses the BH horizon. In thin disks, energy generated by viscous heating is radiated locally and cooling is dominated by radiation, \(Q^{-}_{\rm rad}\). Since radiation pressure dominates in super-Eddington systems, the accretion disk puffs up to a large scale-height \(h_{d}\equiv H_{d}/R\gtrsim 0.3\) when radiation cannot escape directly, or \(Q^{-}_{\rm adv}\gg Q^{-}_{\rm rad}\). If the system is in a steady state, meaning that the mass accretion rate is constant with radius, advective and radiative cooling vary with radius. We assert that a steady state is a reasonable assumption even in the chaotic environment of a TDE since \(\dot{M}\) was found to be roughly constant with radius in Curd (2021). Following Sadowski (2011), we can write the ratio between advective and radiative cooling \[\frac{Q^{-}_{\rm adv}}{Q^{-}_{\rm rad}}\approx\dot{M}\frac{\kappa_{\rm es}h_{d }}{\pi Rc}, \tag{44}\] where \(h_{d}\) is the disk scale-height and \(\kappa_{\rm es}\) is the electron scattering opacity. From this expression, it is clear that as the accretion rate declines radiative cooling begins to become more significant until a critical accretion rate (which is around Eddington) where it becomes dominant. 
Assuming \(\dot{M}\) and \(h_{d}\) are constant and setting \(\kappa_{\rm es}=0.2(1+X){\rm cm^{2}\,g^{-1}}\) where \(X=X_{\odot}=0.7381\) is the solar hydrogen mass fraction, we can approximate the transition radius where advective and radiative cooling terms balance (or \(Q^{-}_{\rm adv}=Q^{-}_{\rm rad}\)) \[R_{\rm tr}=\dot{M}\frac{\kappa_{\rm es}h_{d}}{\pi c}. \tag{45}\] From the above expression, we can conclude that (i) \(R_{\rm tr}\) scales linearly with mass accretion rate and thus shrinks over time in a TDE, (ii) we expect the system to become thermally unstable at \(r>R_{\rm tr}\). Assuming the disk is heated purely by viscosity, collapse occurs on the thermal timescale \(t_{\rm th}\sim(\alpha\Omega)^{-1}\), where \(\Omega\) is the angular velocity and \(\alpha\) is the unitless viscosity parameter. We note that we have ignored other sources of heating such as dissipative heating due to the shocks for simplicity in our calculation of \(R_{\rm tr}\). If heating generated by shocks is not radiated locally, regions of the disk which have become thermally unstable by the condition \(Q_{\rm adv}^{-}<Q_{\rm rad}^{-}\) may remain stable and geometrically thick. The first shock we consider is the self-intersection shock which can release a large amount of energy, especially for relativistic TDEs. We account for heating by the self-intersection shock by first approximating the self-intersection radius. We adopt a similar method to Dai et al. (2015) to quantify apsidal precession. For material making its first pericenter passage, the precession angle may be approximated by \[\Delta\phi=\frac{6\pi}{a(1-e^{2})}. \tag{46}\] Here \(e\) is the eccentricity of the incoming stream and \(a\) is the semi-major axis. Note that we have expressed \(\Delta\phi\) using gravitational units so the semi-major axis \(a\) is given in gravitational radii. Treating the orbits of the incoming stream that has yet to pass through pericenter and the already precessed stream as ellipses, the self-intersection between the incoming material and material that has precessed occurs at the radius \[R_{\rm SI}=\frac{(1+e)R_{t}}{\beta(1-e\cos(\Delta\phi/2))}. \tag{47}\] The self-intersection shock releases energy at a rate of roughly \[L_{\rm SI}(t)\approx\frac{1}{2}\dot{M}_{\rm fb}(t)v_{\rm SI}^{2}, \tag{48}\] where the velocity at which the streams collide, \(v_{\rm SI}\), is on the order of the free-fall velocity. As the velocity of the stream elements is greater at smaller radii, the rate of dissipation will also be greater for closer orbits. We note that our definition of \(R_{\rm SI}\) assumes \(a_{\rm BH}=0\); however, \(a_{\rm BH}>0\) BHs can cause smaller \(R_{\rm SI}\) at lower \(\beta\) for retrograde TDEs due to frame dragging effects. Shocks are present in the disk throughout the evolution and are also sites of dissipative heating (Shiokawa et al., 2015; Sadowski et al., 2016; Liptai et al., 2019; Ryu et al., 2023). The \(\beta=1\) model in Liptai et al. (2019) showed dissipation from shocks exceed Eddington at up to ten times \(t_{\rm fb}\). Ryu et al. (2023) estimate the total mechanical energy output and find that it exceeds Eddington even after \(2t_{\rm fb}\), though they do not isolate energy from shocks. Since the spiral shocks are spread over the majority of the disk, energy generated from the shocks is expected to not be localized. Energy released from shocks may delay thermal collapse assuming it is instantaneously spread evenly in the disk. 
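A minimal sketch of Eqs. (46)-(48) is given below. It assumes the orbital elements \(e\) and \(a\) of the incoming stream are supplied externally (e.g., for the most-bound debris), and approximates \(v_{\rm SI}\) by the free-fall speed as stated above; any example inputs would be placeholders rather than values from the simulations.

```python
import numpy as np

def precession_angle(a_rg, e):
    """Eq. (46): apsidal precession per pericenter passage (semi-major axis a in r_g)."""
    return 6.0 * np.pi / (a_rg * (1.0 - e**2))

def self_intersection_radius(R_t_rg, beta, a_rg, e):
    """Eq. (47): radius (in r_g) where the precessed stream meets the incoming stream."""
    dphi = precession_angle(a_rg, e)
    return (1.0 + e) * R_t_rg / (beta * (1.0 - e * np.cos(dphi / 2.0)))

def self_intersection_luminosity(mdot_fb, r_si_rg):
    """Eq. (48), approximating v_SI by the free-fall speed c*sqrt(2/r); mdot_fb in g/s."""
    c = 2.998e10
    v_si = c * np.sqrt(2.0 / r_si_rg)
    return 0.5 * mdot_fb * v_si**2
```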
If the disk radiates at \(L_{\rm Edd}\), elements of the outer disk which are already thermally unstable by the condition \(Q_{\rm adv}^{-}\leq Q_{\rm rad}^{-}\) cannot collapse until the dissipation rate from shocks is less than Eddington. We define the time when the dissipation rate from shocks is less than \(L_{\rm Edd}\) as \(t_{\rm Edd}\). To illustrate how thermal collapse occurs, it is instructive to compute the time at which the disk component at \(R_{\rm tr}\) will collapse, \[t_{\rm collapse}=\begin{cases}t+t_{\rm th},&t\geq t_{\rm Edd}\\ t_{\rm Edd},&t<t_{\rm Edd},\end{cases} \tag{49}\] where \(t\) is the time since the initial disruption. We show examples of \(t_{\rm collapse}\) versus \(R_{\rm tr}\) in Figure 19 for a \(m_{6}=1\), \(m_{*}=1\), and \(\beta=1\) TDE. We assume a Keplerian profile in both models (since the disk is circularized) and \(\alpha=0.1\) such that \(t_{\rm th}\propto R^{3/2}\). In a standard accretion disk, the outer disk will collapse first and \(R_{\rm tr}\) slowly decreases over several hundred days. As a result, the ram pressure acting on the stream will also slowly increase since the bulk of the disk will still be geometrically thick until \(R_{\rm tr}\sim r_{H}\). Thus, a model where the transition radius depends only on mass accretion cannot explain rapid state transitions since \(R_{\rm tr}\propto\dot{M}\propto t^{-5/3}\). For the collapsing disk model, we assume \(t_{\rm Edd}=515\) days to be similar to _Swift_ J1644+57. Since the energy injected into the disk is assumed to exceed the radiated energy of the system until \(t>t_{\rm Edd}\), the outer disk will remain geometrically thick until it collapses (the vertical part of the curve in Figure 19) on the thermal time scale, which is much smaller than \(t\). Once \(t>t_{\rm Edd}\), the inner disk follows the standard accretion curve. Here we have ignored the possibility of magnetic pressure support for simplicity.

Figure 19: In the top panel, we illustrate a delayed disk collapse model where the accretion disk remains geometrically thick all the way to the transition radius (\(R_{\rm tr}\)) until \(t\geq t_{\rm Edd}\). In the bottom panel, we show the collapse time (\(t_{\rm collapse}\)) as a function of radius in the disk for a \(m_{6}=1\), \(m_{*}=1\), and \(\beta=1\) TDE with (dashed line) and without (solid line) delayed thermal collapse. In our delayed collapse model, larger radii are prevented from cooling early on and a large portion of the disk has the same collapse time (vertical portion of dashed line).

For an assumed state transition at several hundred days, only the delayed collapse model will have an instantaneous change in \(f_{\rho}\) over most of the disk. This will lead to the desired dynamical consequences on the jet and disk. That is, the density contrast of the collapsed region of the disk will rapidly increase by more than an order of magnitude at \(t_{\rm collapse}\) since the disk density in Equation 9 for \(r>R_{\rm tr}\) will be multiplied by a factor of \(h_{d}^{-1}\). This will lead to the self-intersection outflow rapidly ceasing as in our simulations, in which case the disk and jet will rapidly realign with the BH spin due to the disk being MAD. We also note that the possibility of radial contraction of the disk (or a decrease in \(R_{d}\)) as in Metzger (2022) would only enhance the rise in \(f_{\rho}\).
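The delayed-collapse bookkeeping of Eqs. (45) and (49) can be summarized as below. The sketch assumes a Keplerian thermal time \(t_{\rm th}=1/(\alpha\Omega)=r^{3/2}/\alpha\) (in units of \(t_{g}\)) and cgs inputs; the default \(t_{g}=4.93\) s corresponds to a \(10^{6}M_{\odot}\) BH and is an assumption chosen for illustration.

```python
import math

def transition_radius(mdot, h_d=0.3, X=0.7381):
    """Eq. (45): radius (cm) where advective and radiative cooling balance,
    for a mass accretion rate mdot in g/s."""
    kappa_es = 0.2 * (1.0 + X)          # electron-scattering opacity, cm^2/g
    c = 2.998e10                        # speed of light, cm/s
    return mdot * kappa_es * h_d / (math.pi * c)

def collapse_time(t, r_rg, t_edd, alpha=0.1, t_g=4.93):
    """Eq. (49): collapse time (s) of the disk component at radius r (in r_g),
    with thermal time t_th = r^{3/2}/alpha expressed in units of t_g."""
    t_th = (r_rg ** 1.5 / alpha) * t_g
    return t + t_th if t >= t_edd else t_edd
```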
As such, we expect that the effects of disk collapse will play a role dynamically; however, our analysis favors relativistic TDEs with \(\beta>1\) (or retrograde TDEs) if self-intersection is the assumed method of delaying disk collapse. ### Tilt Evolution in Jetted TDEs Our simulations illustrate that even an aligned TDE can undergo strong tilt excitation when a jet is present. The fact that the tilt decreases when the density contrast increases, which is due to the self-intersection shock and outflow being weakened, suggests that X-ray shut-off in TDE jets may be possible even without the disk exiting the MAD state. We produce a toy model of a relativistic jet where the tilt and flux depend on \(f_{\rho}\). The tilt is assumed to be \[\mathcal{T}_{\rm jet}(f_{\rho})=\mathcal{T}_{\rm jet,0}\begin{cases}1,&f_{ \rho}<f_{\rho,\rm min}\\ \left(1-\frac{f_{\rho}-f_{\rho,\rm min}}{f_{\rho,\rm max}-f_{\rho,\rm min}} \right),&f_{\rho,\rm min}\leq f_{\rho}\leq f_{\rho,\rm max}\\ 0,&f_{\rho}>f_{\rho,\rm max}.\end{cases} \tag{50}\] Here we have assumed that the jet angle is constant when the stream is dense enough for the self-intersection shock to occur. It then linearly decreases from \(\mathcal{T}_{\rm jet,0}\) to \(0\) as \(f_{\rho}\) increases from \(f_{\rho,\rm min}\) to \(f_{\rho,\rm max}\). Here \(f_{\rho,\rm min}\) is the critical density contrast where self-intersection is weak enough for the jet to begin to realign and \(f_{\rho,\rm max}\) is where the jet is completely unperturbed. The X-ray variability in _Swift_ J1644+57 indicates that X-rays originate from near the BH, so we use a simple top-hat jet model and incorporate beaming effects to predict the time evolution of the flux. We adopt a model similar to Beniamini et al. (2023) where the off-axis jet flux is proportional to the on-axis jet flux through a simple beaming correction factor \[a=\frac{1-\beta_{\rm jet}}{1-\beta_{\rm jet}\cos(\mathcal{T}_{\rm obs}- \mathcal{T}_{\rm jet})} \tag{51}\] is the beaming correction, where \(\beta_{\rm jet}\) is the jet velocity \(\mathcal{T}_{\rm obs}\) is the angle of the observer relative to the \(z\)-axis. The flux is approximated as \[F(\mathcal{T}_{\rm jet})=F_{\rm on,jet}(t)\begin{cases}1,&\Delta\theta<\theta_{ \rm jet}\\ 0.5a^{2},&\theta_{\rm jet}<\Delta\theta<2\theta_{\rm jet}\\ 0.5a^{3},&\Delta\theta>2\theta_{\rm jet}.\end{cases} \tag{52}\] Here \(\Delta\theta\equiv\mathcal{T}_{\rm obs}-\mathcal{T}_{\rm jet}\) and \(\theta_{\rm jet}=\gamma_{\rm jet}^{-1}\) is the angle that the jet is beamed into. The factor of \(0.5\) is a geometrical correction. We assume that the jet flux is directly correlated to the mass accretion rate, which we assume to be a fraction of the fallback rate \(F_{\rm on,jet}(t)\propto\dot{M}_{\rm fb}(t)\). We divide the flux by \(F_{\rm on,jet}(t=0)=F_{\rm peak}\) for simplicity. We apply our toy model to a range of TDEs in Figure 20. For the disk/stream interaction, we set \(f_{\rho,\rm min}=0.01\) and \(f_{\rho,\rm max}=0.1\). For the jet, we assume \(\gamma_{\rm jet}=10\), \(\mathcal{T}_{\rm jet,0}=20^{\circ}\). The observer is assumed to be aligned with the jet initially with \(\mathcal{T}_{\rm obs,0}=20^{\circ}\). In addition to smoothly varying \(f_{\rho}\) models, we also analyse a collapsing disk model, motivated by the discussion in Section 6.2, with \(m_{\rm\theta}=5,\ m_{\star}=1,\ \beta=1\) which instantaneously collapses to \(h_{d}=0.05\) at \(t=t_{\rm Edd}=515\) days. 
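A compact implementation of the toy model in Eqs. (50)-(52) is sketched below, using the parameter choices quoted above (\(\gamma_{\rm jet}=10\), \(\mathcal{T}_{\rm jet,0}=\mathcal{T}_{\rm obs}=20^{\circ}\), \(f_{\rho,\rm min}=0.01\), \(f_{\rho,\rm max}=0.1\)); the on-axis normalization \(F_{\rm on,jet}(t)\propto\dot{M}_{\rm fb}(t)\) is left to the caller.

```python
import numpy as np

def jet_tilt(f_rho, tilt0=20.0, f_min=0.01, f_max=0.1):
    """Eq. (50): jet tilt (degrees) as a function of the density contrast f_rho."""
    if f_rho < f_min:
        return tilt0
    if f_rho > f_max:
        return 0.0
    return tilt0 * (1.0 - (f_rho - f_min) / (f_max - f_min))

def beamed_flux(f_on, tilt_jet, tilt_obs=20.0, gamma_jet=10.0):
    """Eqs. (51)-(52): top-hat jet flux seen by an observer at angle tilt_obs (degrees)."""
    beta_jet = np.sqrt(1.0 - 1.0 / gamma_jet**2)
    dtheta = np.radians(abs(tilt_obs - tilt_jet))
    theta_jet = 1.0 / gamma_jet                     # beaming cone half-angle, radians
    a = (1.0 - beta_jet) / (1.0 - beta_jet * np.cos(dtheta))
    if dtheta < theta_jet:
        return f_on
    if dtheta < 2.0 * theta_jet:
        return 0.5 * f_on * a**2
    return 0.5 * f_on * a**3
```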
Our model illustrates that if the band of \(f_{\rho}\) where the self-intersection weakens is large, the jet cannot shift by tens of degrees in 1-2 weeks. A rapid shutoff may instead be related to the collapse of the outer disk which causes a rapid spike in \(f_{\rho}\) and a subsequent rapid realignment of the jet, as illustrated by the 'collapsing disk' model. Due to relativistic beaming, this can account for the more than 2 orders of magnitude drop in X-rays at \(\sim 500\) days in less than 15 days in jetted TDEs like _Swift_ J1644+57 with the appropriate TDE parameters. Note that we only require that the X-ray emission decrease by at least two orders of magnitude within \(\sim 2\) weeks in order to explain the behaviour of _Swift_ J1644+57. The X-rays after the decline could be disk emission which becomes dominant when the jet is out of the line of sight. This is a more attractive explanation since the jet emission follows a \(t^{-5/3}\) power law even after tilting, but the late time X-rays are approximately constant.

Figure 20: In the top panel, we illustrate how \(f_{\rho}\) evolves in our simple disk model described in Section 2 with a range of BH mass \(m_{6}=M_{\rm BH}/10^{6}M_{\odot}\) and stellar mass \(m_{\star}=M_{\star}/M_{\odot}\). Note that we set \(f_{\rm acc}=0.1\) in each profile for simplicity. We show the initial \(f_{\rho}\) for each simulation with \(a_{\star}=0.9\) models in Table 1 based on \(\dot{M}_{\rm inj}\) (horizontal dashed lines). We also show a case where we assume disk collapse at \(t=t_{\rm Edd}\) for \(m_{6}=5\) and \(m_{\star}=1\) (brown line). In the middle panel, we show the jet tilt from Equation 50 assuming \(\mathcal{T}_{\rm jet,0}=20^{\circ}\), \(f_{\rho,\rm min}=0.01\), and \(f_{\rho,\rm max}=0.1\). In the bottom panel, we show the beamed jet flux computed from Equation 52 assuming \(\mathcal{T}_{\rm obs}=\mathcal{T}_{\rm jet,0}\). We compare each model with the normalized X-ray flux from _Swift_ J1644+57 taken from Zauderer et al. (2013); Eftekhari et al. (2018); Cendes et al. (2021) (black circles). Models without collapse show a steady decrease in jet flux as the jet angle changes. Only the model which assumes disk collapse reasonably explains the \(>2\) order of magnitude decrease in X-ray flux observed in _Swift_ J1644+57.

### Coronal Evolution in Non-Jetted TDEs

Tilt effects are unlikely to lead to substantial X-ray changes in non-jetted TDEs since the emitting region is non-relativistic and we only saw changes of up to \(\sim 10^{\circ}\) in our models. However, our \(a_{*}=0\) simulations demonstrate that a coronal region can be sustained even during stream self-intersection provided enough magnetic flux threads the disk/BH. Curd (2021) found no funnel/corona region during the stream injection phase due to a substantially lower magnetic flux than our MAD disks, but this may only apply to the TDE evolution near the peak fallback rate as their simulations covered only a few days of evolution. Assuming magnetic flux increases as a function of time, which appears to occur in TDE disks (Sadowski et al., 2016), our \(a_{*}=0\) simulations may be interpreted as the limiting state for a TDE at a given \(f_{\rho}\) around a Schwarzschild BH since they are MAD. Increases in X-ray emission in non-jetted TDEs may then be related to both a hot, magnetized corona forming as \(\phi\) increases and a decrease in optical depth as the fallback rate declines.
The X-rays during this phase would exhibit a slow rise as the photosphere radius drops. The X-ray emission in AT2021ehb steadily turned on until it reached a maximum of \(\sim 5\times 10^{43}\)erg s\({}^{-1}\) before promptly declining by an order of magnitude at \(\sim 270\) days. The rise phase and spectral hardening of AT2021ehb could be explained by the coronal evolution scenario outlined in the previous paragraph while the rapid decrease in X-ray flux could conceivably be due to the delayed disk collapse we discuss in Section 6.2. While the coronal evolution in our non-jetted models is expected to be similar to a non-MAD case, whether or not thermal TDEs are also MAD is unclear and simulations which evolve the magnetic field suggest they should not be. This leads to important dynamical differences when considering the evolution of the disk. While a MAD disk may remain magnetic pressure supported, non-MAD accretion disks are expected to become thermally unstable once pressure support from shocks is lost. ### Future Prospects The discovery of tilt instability in TDE disks could have profound consequences on the emission properties beyond the X-ray emission from the jet or corona. It is conceivable that the polarization signal of the disk and jet will be impacted by changes in the tilt of the system. Although we found some evidence of enhanced magnetic flux accumulation in model m09f0.01b7, the turbulent dynamics near the horizon may have impeded this effect. The onset of the disk tilt also seems to correspond with a decrease of the magnetic flux at the horizon. Simulations with the self intersection radius move further away from the horizon may allow higher magnetic flux to be sustained. This may lead to a magnetic flux much higher than expected by Tchekhovskoy et al. (2014). Curd et al. (2022, 2023) investigated the morphology and radio spectra of jets from SANE super-Eddington accretion disks. Such an analysis could similarly be carried out on MAD TDE disks and would provide useful insight into how the dynamics of the system effect the ultimate jet properties. We plan to investigate this in a future work. ## 7 Conclusions * All of our simulations maintained a significant magnetic flux threading the horizon even after interacting with the TDE stream. Each simulation reached a MAD or semi-MAD state. Powerful jets were launched for \(a_{*}=0.9\) models. This is strong validation of the idea that TDEs can become MAD and launch spin-powered jets. * We found that the Maxwell stress is subdominant to hydrodynamic sources of viscosity at all values of \(f_{\rho}\) investigated in this work. Instead, shocks and hydrodynamic viscosity drive angular momentum transport. * The strength of the self-intersection outflow depends on the ratio between the stream and the disk. As the stream becomes less dense, ram pressure from the disk can effectively brake the stream and it eventually joins with the disk with either a weak self-intersection or no self-intersection at all. * During the early stages of a TDE, the stream is much denser than the disk with \(f_{\rho}<0.01\) since most of the mass has yet to fallback. The stream is essentially unperturbed by the disk at this stage and has a strong self-intersection shock since it maintains its orbital energy. The self-intersection outflow pushes on the jet/corona region. This tilts the jet/corona by \(10-40^{\circ}\) in our simulations. As \(f_{\rho}\) increases, the self-intersection shock weakens and powerful jets remain aligned with the BH spin. 
* In jetted TDEs, because the jet is tilted by the self-intersection outflow, the jet can transfer momentum to the disk, which tilts the disk to \(\sim 20-30^{\circ}\) in less than \(10,000t_{g}\). This configuration is stable due to the self-intersection of tilted material within \(R_{\rm SI}\) with un-tilted material being brought in from the stream. This effect does not occur when there is no self-intersection outflow (the stream is not dense enough) or there is no jet (as shown by our \(a_{*}=0\) models). * When we lowered the stream density in a restart of the model m09f0.01b7 after the disk/jet was tilted, we found that a MAD or semi-MAD state leads to alignment of the disk/jet similar to GRMHD simulations of tilted disks. We propose that this is due to the weakening/absence of the self-intersection, which acts to maintain the tilt once it sets in. * We demonstrate that rapid changes in \(f_{\rho}\), which may occur due to delayed disk collapse, will lead to a rapid X-ray shutoff in jetted TDEs. Jet realignment with the BH spin in models m09f1b7B and m09f0.1b7B represents a change of \(\sim 20-30^{\circ}\) in the jet angle in less than three days. We propose that this is an alternative method of rapidly dropping the X-ray flux in _Swift_ J1644+57 in \(\sim 15\) days without also requiring that the system no longer be MAD. ## 8 Acknowledgements We thank Enrico Ramirez-Ruiz for useful suggestions during preparation of the manuscript. We thank Aviyel Ahiyya for assistance with obtaining observational data. This work was supported by the Simons Collaboration on Extreme Electrodynamics of Compact Sources. Richard Jude Anantua was supported by the Oak Ridge Associated Universities Powe Award for Junior Faculty Enhancement. Computational support was provided via ACCESS resources (grant PHY230006). ## 9 Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2309.13985
Physics-Driven ML-Based Modelling for Correcting Inverse Estimation
When deploying machine learning estimators in science and engineering (SAE) domains, it is critical to avoid failed estimations that can have disastrous consequences, e.g., in aero engine design. This work focuses on detecting and correcting failed state estimations before adopting them in SAE inverse problems, by utilizing simulations and performance metrics guided by physical laws. We suggest to flag a machine learning estimation when its physical model error exceeds a feasible threshold, and propose a novel approach, GEESE, to correct it through optimization, aiming at delivering both low error and high efficiency. The key designs of GEESE include (1) a hybrid surrogate error model to provide fast error estimations to reduce simulation cost and to enable gradient based backpropagation of error feedback, and (2) two generative models to approximate the probability distributions of the candidate states for simulating the exploitation and exploration behaviours. All three models are constructed as neural networks. GEESE is tested on three real-world SAE inverse problems and compared to a number of state-of-the-art optimization/search approaches. Results show that it fails the least number of times in terms of finding a feasible state correction, and requires physical evaluations less frequently in general.
Ruiyuan Kang, Tingting Mu, Panos Liatsis, Dimitrios C. Kyritsis
2023-09-25T09:37:19Z
http://arxiv.org/abs/2309.13985v2
# Physics-Driven ML-Based Modelling for Correcting Inverse Estimation + ###### Abstract When deploying machine learning estimators in science and engineering (SAE) domains, it is critical to avoid failed estimations that can have disastrous consequences, e.g., in aero engine design. This work focuses on detecting and correcting failed state estimations before adopting them in SAE inverse problems, by utilizing simulations and performance metrics guided by physical laws. We suggest to flag a machine learning estimation when its physical model error exceeds a feasible threshold, and propose a novel approach, GEESE, to correct it through optimization, aiming at delivering both low error and high efficiency. The key designs of GEESE include (1) a hybrid surrogate error model to provide fast error estimations to reduce simulation cost and to enable gradient based backpropagation of error feedback, and (2) two generative models to approximate the probability distributions of the candidate states for simulating the exploitation and exploration behaviours. All three models are constructed as neural networks. GEESE is tested on three real-world SAE inverse problems and compared to a number of state-of-the-art optimization/search approaches. Results show that it fails the least number of times in terms of finding a feasible state correction, and requires physical evaluations less frequently in general. Physics-Driven Machine Learning Correction Optimization ## 1 Introduction Many estimation problems in science and engineering (SAE) are fundamentally inverse problem, where the goal is to estimate the state \(\mathbf{x}\in\mathcal{X}\) of a system from its observation \(\mathbf{y}\in\mathcal{Y}\). Examples include estimating the temperature state from the observed spectrum in combustion diagnostics [1], and discovering design parameters (state) of aero engine according to a group of performance parameters (observation) [2]. Traditional physics-driven inverse solvers are supported by rigorous physical laws, which vary depending on the application, e.g., the two-colour method for spectrum estimation [3], and cycle analysis for aero engine design [4]. Recent advances take advantage of machine learning (ML) techniques, constructing mapping functions \(F\) to directly estimate the state from the observation, i.e., \(\hat{\mathbf{x}}=F(\mathbf{y})\)[5, 6, 7]. Such ML solutions are more straightforward to develop, moreover, efficient and easy to use. However, ML-based state estimates can sometimes be erroneous, while SAE applications have very low error tolerance. One can imagine the disastrous consequences of providing unqualified aero engine design parameters. Therefore, it is critical to detect and correct failed ML estimations before adopting them. This leads to a special SAE requirement of evaluating the estimation correctness in the deployment process of an ML estimator. Since the ground truth state is unknown at this stage, indirect evaluation has to be performed. Such evaluations can be based on physical forward models and performance metrics [8; 9]. A common practice is to combine multiple evaluations to obtain an accumulated physical error, enforcing quality control from different aspects. When the physical error exceeds a feasibility threshold, one has to remediate the concerned ML estimation. One practice for finding a better estimation is to directly minimize the physical error in state space [10]. 
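As a rough sketch of this detect-then-correct workflow, the deployment-time check can be summarized as below; the function handles (`ml_estimator`, `error_fns`, `corrector`) are hypothetical placeholders rather than part of any specific library, and the weighted error follows the accumulated physical error introduced in Section 3.

```python
def flag_and_correct(y, ml_estimator, error_fns, weights, eps, corrector):
    """Deployment-time check: keep the ML estimate only if its accumulated
    physical error is within the feasibility threshold; otherwise correct it."""
    x_hat = ml_estimator(y)
    e = sum(w * E(x_hat, y) for w, E in zip(weights, error_fns))
    if e <= eps:
        return x_hat
    return corrector(x_hat, y)   # e.g., an optimization-based search for a feasible state
```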
Directly minimizing the physical error over the state space requires solving a black-box optimization problem for which finding the global optimum is challenging, so iterative approaches are used to find a near-optimal solution [11; 12]. In each iteration, a set of states is selected to collect their physical errors, then error feedback is used to generate better state(s) until a near-optimal state is found. Physical error collection involves time-consuming simulations [13; 14], e.g., a spectrum simulation which, despite taking just several minutes for each run [15], can become costly if queried many times. Consequently, the optimization process becomes time-consuming. Therefore, in addition to searching for a satisfactory state with as small a physical error as possible, it is also vital to reduce the number of queries to the physical evaluation. Our work herein is focused on developing an efficient algorithm for remediating the concerned ML estimation in deployment. We propose a novel correction algorithm, **G**enerative **E**xploitation and **E**xploration guided by hybrid **S**urrogate **E**rror (GEESE), building upon black-box optimization. It aims at finding a qualified state within an error tolerance threshold after querying the physical evaluations as few times as possible. The key design elements of GEESE include: (1) a hybrid surrogate error model, which comprises an ensemble of multiple base neural networks, to provide fast estimation of the physical error and to enable informative gradient-based backpropagation of error feedback in model training; and (2) a generative twin state selection approach, which consists of two generative neural networks for characterizing the distributions of candidate states, to effectively simulate the exploitation and exploration behaviours. We conduct thorough experiments to test the proposed algorithm and compare it with a series of state-of-the-art optimization/search techniques, based on three real-world inverse problems. Results show that, among the compared methods, GEESE is able to find a qualified state while failing the least number of times and querying the physical evaluations fewer times.

## 2 Related Work

**Optimization in SAE:** Development of SAE solutions often requires formulating and solving optimization problems [16; 17; 18]. These are often black-box optimization problems due to the nature of SAE. For instance, when the objective function is characterized through physical evaluations and solving partial differential equations (PDEs) [19], it is not given in a closed form. Typical black-box optimization techniques include Bayesian Optimization [20], Genetic Algorithm (GA) [21], and Particle Swarm Optimization (PSO) [22], etc. They often require a massive number of queries to the objective function in order to infer search directions for finding a near-optimal solution, which is time-consuming and expensive in SAE applications. Alternatively, differentiable objective functions are constructed, and the problem is reduced to standard optimization, referred to as white-box optimization in contrast with the black-box setting. A rich set of well-established solvers has been developed for this, e.g., utilizing first- and second-order gradient information [23]. Some recent developments use neural networks to optimize differentiable physical model evaluations, e.g., Optnet [24] and iterative neural networks [25].
However, physics-driven objective functions cannot always be formulated in a differential form, e.g., errors evaluated by the physical forward model in aero engine simulation, which is a mixture of database data, map information and PDEs [26]. A grey-box setting is thus more suitable in practice, where one does not overwrap the evaluations as a black box or oversimplify them as a white box, but a mixture of both. **Surrogate Model in Black-box Optimization:** To reduce the cost of querying objective function values in black-box optimization, recent approaches construct surrogate models to obtain efficient and cheap estimation of the objective function. This practice has been by and large used in SAE optimization, where the objective functions are mostly based on physical evaluations. The most popular technique for constructing surrogate models is ML, including neural networks and Gaussian process models [27; 28; 29]. The associated surrogate model is then incorporated within an optimization process, guided by, for instance, GA and Bayesian optimization, which generate states and interact with it [30; 29], or neural networks that work with differentiable surrogate models [31; 32; 12]. To avoid overfitting, recent effort has been invested to develop surrogate models consistent with some pre-collected data, aiming at obtaining more reliable near-optimal solutions [33; 34; 35; 36; 37]. Nevertheless, there is no guarantee that a surrogate model can well approximate a physical model consistently. Indeed, this is the motivation for the proposed method, where surrogate models are used to speed up the querying process, while the decision in regards to the suitability of the solution is based on the actual physical evaluation. **Reinforcement Learning for Inverse Problems:** In addition to black-box optimization based approaches, Reinforcement Learning (RL) [38; 39] serves as an alternative framework for solving inverse problems [40; 41; 42]. In an RL-based solution framework, physical evaluations are wrapped as a black-box environment outputting scalar reward, and the actions are the states to estimate according to the observation. The behaviour of the environment is simulated by training a world/critic model [43; 44], which is equivalent to a surrogate model of the physical evaluations. Different from black-box optimization based approaches, RL does not intend to search a feasible state estimation for the given observation, but to learn an authoritative agent/policy model [45; 46] to provide state estimations, while the policy training is guided by optimizing an accumulated scalar reward or error [47; 48]. Because of the desire of training a powerful policy model and the statistical nature of the reward, RL often requires many physical evaluations to collect diverse samples and validate training performance [49; 50]. This can be time-consuming when there is limited computing resource. ## 3 Proposed Method We firstly explain the notation convention: Ordinary letters, such as \(x\) or \(X\), represent scalars or functions with scalar output. Bold letters, such as \(\mathbf{x}\) or \(\mathbf{X}\), represent vectors or functions with vector output. The \(i\)-th element of \(\mathbf{x}\) is denoted by \(x_{i}\), while the first \(k\) elements of \(\mathbf{x}\) by \(x_{1:k}\). We use \(|\mathbf{x}|\), \(\|\mathbf{x}\|_{1}\) and \(\|\mathbf{x}\|_{2}\) to denote the dimension, \(l_{1}\)-norm and \(l_{2}\)-norm of the vector \(\mathbf{x}\). An integer set is defined by \([n]=\{1,2\ldots n\}\). 
Without loss of generality, an estimated state \(\hat{\mathbf{x}}\) is assessed by multiple physical models and/or metrics \(\{P_{i}\}_{i=1}^{h}\), resulting in an \(h\)-dimensional error vector, denoted by \[\mathbf{e}(\hat{\mathbf{x}},\mathbf{y})=\left[E_{P_{1}}(\hat{\mathbf{x}},\mathbf{y}),E_{P_{2}}(\hat{\mathbf{x}},\mathbf{y}),\ldots,E_{P_{h}}(\hat{\mathbf{x}},\mathbf{y})\right]. \tag{1}\] Each concerned ML estimation obtained from an observation \(\mathbf{y}\) is remediated independently, so \(\mathbf{y}\) acts as a constant in the algorithm, which enables simplifying the error notation to \(\mathbf{e}(\hat{\mathbf{x}})\) and \(E_{P_{i}}(\hat{\mathbf{x}})\). A better state estimation is sought by minimizing the following accumulated physical error: \[\min_{\hat{\mathbf{x}}\in\mathcal{X}}e(\hat{\mathbf{x}})=\sum_{i=1}^{h}w_{i}E_{P_{i}}(\hat{\mathbf{x}}), \tag{2}\] where the error weights are specified in advance by domain experts according to the targeted SAE application. For our problem of interest, the goal is to find a state correction that is within a desired error tolerance, e.g., \(e(\hat{\mathbf{x}})\leq\epsilon\) where \(\epsilon>0\) is a feasibility threshold determined by domain experts. Thus, it is not necessary to find a globally optimal solution; a feasible solution suffices. To achieve this, we adapt a typical iterative framework for black-box optimization: (1) Exploitation: search corrected states \(\left\{\hat{\mathbf{x}}_{i}^{(t)}\right\}_{i=1}^{n_{\text{tr}}}\) under the guidance of the surrogate model, and assess them by the error function \(e\); (2) Exploration: collect more data pairs \(\left\{(\hat{\mathbf{x}}_{i},\mathbf{e}_{i})\right\}_{i=1}^{n_{\text{tr}}}\) for the purpose of updating the surrogate model; (3) Estimation: train the surrogate error model with the online collected data. This process is terminated once one of the corrected states satisfies \(e(\hat{\mathbf{x}}_{\text{tr}})<\epsilon\). The objective is to find a feasible state \(\hat{\mathbf{x}}^{*}\) by querying the physical errors as few times as possible, because it is time-consuming to collect the errors. Therefore, we challenge the difficult setting of choosing only two states to query at each iteration, where one is for exploitation and the other for exploration. A novel _twin state selection_ approach is proposed for this, which selects a potentially near-optimal state for exploitation and a potentially informative state for exploration at each iteration. Subsequently, this requires performing error analysis for a large set of candidate states, which involves both the errors and their gradients. To ease and enable such computation, we develop a differentiable surrogate error model to rapidly approximate those error elements that are expensive to evaluate or in need of gradient calculation, and also to provide informative gradient guidance with the assistance of the error structure. A sketch of GEESE is shown in Algorithm 1. Below, we first explain the process of constructing the surrogate model for error approximation, followed by the twin state selection for characterizing the probability distributions of the candidate states and collecting errors, and finally, the implementation of the complete algorithm.

### Hybrid Neural Surrogate Error Models

We start from an informal definition of implicit and explicit errors. Among the set of \(h\) error elements in Eq.
(1), those that are expensive to collect or to perform gradient calculation for are referred to as _implicit errors_. This includes cases where the system is so complicated that computing the gradient takes much more time than a network backpropagation, or where the system is non-differentiable, such as the physical models of spectroscopy [15] and aero engines [26] that contain databases or maps. The remaining error elements are referred to as _explicit errors_. We order these error elements so that the first \(k\) elements \(\left\{E_{P_{i}}(\hat{\mathbf{x}})\right\}_{i=1}^{k}\) are implicit while the remaining \(\left\{E_{P_{i}}(\hat{\mathbf{x}})\right\}_{i=k+1}^{h}\) are explicit. Our strategy is to develop a surrogate for each implicit error element, while directly calculating each explicit error. Taking advantage of the robustness of ensemble learning [51, 52], we propose to estimate the implicit errors by an ensemble of multiple base neural networks. Each base neural network is fully connected with a mapping function \(\boldsymbol{\phi}(\mathbf{x},\mathbf{w}):\mathcal{R}^{D}\times\mathcal{R}^{|\mathbf{w}|}\rightarrow\mathcal{R}^{k}\), taking the \(D\)-dimensional state space \(\mathcal{R}^{D}\) as its input space, while returning the approximation of the \(k\) implicit errors by its \(k\) output neurons. For predicting temperature and concentration from spectroscopy, the state space dimension is \(D=2\) (one dimension each for temperature and concentration), while for aero engine design it equals the number of design parameters, which is eleven in our experiments (Section 4). The network weights are stored in the vector \(\mathbf{w}\). We train \(L\) individual base networks sharing the same architecture, and obtain the final prediction using an average combiner. As a result, given a state estimation \(\hat{\mathbf{x}}\), the estimate of the implicit error vector is computed by \[\hat{\mathbf{e}}_{\text{im}}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}\right\}_{i=1}^{L}\right)=\frac{1}{L}\sum_{i=1}^{L}\boldsymbol{\phi}\left(\hat{\mathbf{x}},\mathbf{w}_{i}\right), \tag{3}\] and thus, the accumulated physical error is approximated by \[\hat{e}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}\right\}_{i=1}^{L}\right)=\underbrace{\sum_{j=1}^{k}w_{j}\left(\frac{1}{L}\sum_{i=1}^{L}\phi_{j}\left(\hat{\mathbf{x}},\mathbf{w}_{i}\right)\right)}_{\text{approximated implicit error}}+\underbrace{\sum_{j=k+1}^{h}w_{j}E_{P_{j}}(\hat{\mathbf{x}})}_{\text{true explicit error}}. \tag{4}\] We refer to Eq. (4) as a hybrid surrogate error model including both approximated and true error evaluation. The weights of the base neural networks \(\left\{\mathbf{w}_{i}\right\}_{i=1}^{L}\) are trained using a set of collected state-error pairs, e.g., \(D=\left\{(\hat{\mathbf{x}}_{i},\mathbf{e}_{i})\right\}_{i=1}^{N}\). In our implementation, bootstrap sampling [53] is adopted to train each base neural network independently, by minimizing a distance loss between the estimated and collected implicit errors, as \[\min_{\mathbf{w}_{i}}\mathbb{E}_{(\hat{\mathbf{x}},\mathbf{e})\sim D}\left[\text{dist}\left(\boldsymbol{\phi}\left(\hat{\mathbf{x}},\mathbf{w}_{i}\right),\mathbf{e}_{1:k}\right)\right]. \tag{5}\] A typical example of the distance function is \(\text{dist}(\hat{\mathbf{e}},\mathbf{e})=\|\hat{\mathbf{e}}-\mathbf{e}\|_{2}^{2}\).
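A minimal PyTorch sketch of the ensemble surrogate in Eqs. (3)-(5) is given below. The hidden-layer sizes approximate those reported in Section 4, the bootstrap training loop is simplified, and the explicit error terms are assumed to be supplied as differentiable Python callables.

```python
import torch
import torch.nn as nn

class ImplicitErrorEnsemble(nn.Module):
    """Ensemble surrogate for the k implicit error elements (Eq. 3)."""
    def __init__(self, state_dim, k, n_models=4, hidden=(1024, 2048, 1024)):
        super().__init__()
        def make_base():
            layers, d = [], state_dim
            for h in hidden:
                layers += [nn.Linear(d, h), nn.ReLU()]
                d = h
            layers.append(nn.Linear(d, k))
            return nn.Sequential(*layers)
        self.bases = nn.ModuleList([make_base() for _ in range(n_models)])

    def forward(self, x):
        preds = torch.stack([base(x) for base in self.bases], dim=0)  # (L, batch, k)
        return preds.mean(dim=0), preds.std(dim=0)                    # mean (Eq. 3) and per-element disagreement

def accumulated_error(x, ensemble, explicit_fns, weights):
    """Hybrid surrogate of Eq. (4): surrogate implicit errors plus true explicit errors."""
    e_im, _ = ensemble(x)                                             # (batch, k)
    k = e_im.shape[1]
    total = (e_im * weights[:k]).sum(dim=1)
    for j, fn in enumerate(explicit_fns):                             # explicit errors must be differentiable here
        total = total + weights[k + j] * fn(x)
    return total

def fit_base(base, states, implicit_errors, epochs=40, lr=1e-4):
    """Eq. (5): train one base network on a bootstrap resample with an L2 loss."""
    idx = torch.randint(0, states.shape[0], (states.shape[0],))       # bootstrap sampling
    x, e = states[idx], implicit_errors[idx]
    opt = torch.optim.Adam(base.parameters(), lr=lr)
    for _ in range(epochs):
        loss = ((base(x) - e) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
```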
Notably, each element of the implicit error vector is estimated, rather than scalar value of the weighted error sum, as the structural information of the error vector can directly contribute in training, through the associated gradient information. When estimating the weighted sum directly, it is in a way to restrict the training loss to a form loosely like \((\hat{e}\left(\mathbf{w}\right)-\|\mathbf{e}\|_{1})^{2}\), which negatively affects the information content of the gradient information. We have observed empirically that, the proposed individual error estimation leads to improvements in training the exploitation generator, compared to using the weighted error sum, see ablation study (1) in Table 2. ### Twin State Selection A selection strategy, i.e., twin state selection (TSS), for querying two individual states at each iteration is proposed, one for exploration and one for exploitation, respectively. The objective of TSS is to substantially reduce the cost associated with physical error collection. In turn, this translates to the formidable challenge of designing a selection process, which maximizes the informativeness of the associated physical error collection subject to minimizing query times. It is obviously impractical and inaccurate to adopt the naive approach of choosing directly one state by searching the whole space. Instead, we target at a two-folded task, researching (1) which candidate set of states to select from and (2) how to select. By taking advantage of developments in generative AI, we construct generative neural networks to sample the candidate states. Specifically, we employ a latent variable \(\mathbf{z}\in\mathcal{R}^{d}\), which follows a simple distribution, e.g., uniform distribution \(\mathbf{z}\sim U\left([-a,a]^{d}\right)\), and a neural network \(\mathbf{G}(\mathbf{z},\boldsymbol{\theta}):\mathcal{R}^{d}\times\mathcal{R}^ {|\boldsymbol{\theta}|}\rightarrow\mathcal{R}^{D}\). The transformed distribution \(p\left(\mathbf{G}(\mathbf{z},\boldsymbol{\theta})\right)\) is then used to model the distribution of a candidate set. Thus, the task of candidate selection is transformed into determining the neural network weights \(\boldsymbol{\theta}\) for the generator \(\mathbf{G}\). In general, exploitation attempts to select states close to the optimal one, whereas exploration attempts to select more informative states to enhance the error estimation. There are various ways to simulate the exploitation and exploration behaviours. For instance, in conventional black-box optimization, e.g., Bayesian optimization and GA, exploitation and exploration are integrated within a single state selection process [54], while in reinforcement learning, a balance trade-off approach is pursued [55; 39]. Our method treats them as two separate tasks with distinct strategies for constructing generators and selecting states. 
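The candidate-state generators described here amount to pushing a simple latent distribution through a small network. A minimal sketch is shown below; the hidden sizes follow the generator configuration reported for problem 3 in Section 4, and the latent range \(a=5\) matches the setting used there.

```python
import torch
import torch.nn as nn

class StateGenerator(nn.Module):
    """Candidate-state generator G(z, theta): latent z -> state space."""
    def __init__(self, latent_dim, state_dim, hidden=(256, 512, 256)):
        super().__init__()
        layers, d = [], latent_dim
        for h in hidden:
            layers += [nn.Linear(d, h), nn.ReLU()]
            d = h
        layers.append(nn.Linear(d, state_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

def sample_candidates(gen, n, latent_dim, a=5.0):
    # z ~ U([-a, a]^d); the pushforward p(G(z, theta)) models the candidate-state distribution
    z = (torch.rand(n, latent_dim) * 2.0 - 1.0) * a
    return gen(z)
```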
**ExploITation:** To simulate the exploitation behaviour, the exploitation generator \(\mathbf{G}_{\text{IT}}\) is trained at each iteration by minimizing the expectation of the physical error estimate, using the hybrid surrogate error model \[\boldsymbol{\theta}_{\text{G}_{\text{IT}}}^{(t)}=\arg\min_{\boldsymbol{\theta}\in\mathcal{R}^{d}}\mathbb{E}_{\mathbf{z}\sim U([-a,a]^{d})}\left[\hat{e}\left(\mathbf{G}_{\text{IT}}(\mathbf{z},\boldsymbol{\theta}),\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right)\right], \tag{6}\] where the base networks from the last iteration are used and we add the subscript \(t-1\) to the weights of the error network for emphasis. Finally, among the candidates generated by \(\mathbf{G}_{\text{IT}}\) with its trained weights \(\boldsymbol{\theta}_{\text{G}_{\text{IT}}}^{(t)}\), we select the following state \[\hat{\mathbf{x}}_{\text{IT}}^{(t)}=\arg\min_{\hat{\mathbf{x}}\sim p\left(\hat{\mathbf{x}}|\boldsymbol{\theta}_{\text{G}_{\text{IT}}}^{(t)}\right)}\hat{e}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right), \tag{7}\] to query its physical error by Eq. (1), resulting in the state-error pair \(\left(\hat{\mathbf{x}}_{\text{IT}}^{(t)},\mathbf{e}_{\text{IT}}^{(t)}\right)\). If the queried error is less than the feasibility threshold, i.e., \(e(\hat{\mathbf{x}}_{\text{IT}}^{(t)})\leq\epsilon\), this selected state is considered acceptable and the iteration is terminated. Otherwise, it is used to keep improving the training of the surrogate error model in the next iteration. **ExploRation:** To simulate the exploration behaviour, a state that does not appear optimal but has the potential to complement the surrogate error model should be selected. We use an exploration generator \(\mathbf{G}_{\text{R}}\) to generate candidates. To encourage diversity so as to facilitate exploration, we assign the generator random weights sampled from a simple distribution, e.g., \[\boldsymbol{\theta}_{\text{G}_{\text{R}}}^{(t)}\sim N\left(0,\mathcal{I}^{|\theta_{\text{G}_{\text{R}}}|}\right). \tag{8}\] We do not intend to train the exploration generator \(\mathbf{G}_{\text{R}}\), because any training loss that encourages exploration and diversity can overly drive the base networks to shift focus in the state space and cause instability in the integrated algorithm. Such an instability phenomenon, caused by training \(\mathbf{G}_{\text{R}}\), is demonstrated in the ablation study (2) in Table 2. By adopting the idea of active exploration via disagreement [56; 57], we consider a state for which the base networks are the least confident in estimating the implicit errors as more informative. Since we use an ensemble of base neural networks to estimate the error, the standard deviations of the base network predictions serve as natural confidence measures [56], which are stored in a \(k\)-dimensional vector: \[\boldsymbol{\sigma}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right)=\left[\sigma_{1}\left(\left\{\boldsymbol{\phi}_{1}\left(\hat{\mathbf{x}},\mathbf{w}_{i}\right)\right\}_{i=1}^{L}\right),\ldots,\sigma_{k}\left(\left\{\boldsymbol{\phi}_{k}\left(\hat{\mathbf{x}},\mathbf{w}_{i}\right)\right\}_{i=1}^{L}\right)\right].
\tag{9}\] The state maximizing disagreement, i.e., an accumulated standard deviation, between the base networks, is selected given as \[\hat{\mathbf{x}}_{\text{R}}^{(t)}=\arg\max_{\hat{\mathbf{x}}\sim p\left(\hat{ \mathbf{x}}|\boldsymbol{\theta}_{\text{G}_{\text{R}}}^{(t)}\right)}\boldsymbol{ \sigma}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L} \right)\mathbf{w}_{k}^{T}, \tag{10}\] where the row vector \(\mathbf{w}_{k}=[w_{1},w_{2},\dots,w_{k}]\) stores the implicit error weights. The state-error pair \(\left(\hat{\mathbf{x}}_{\text{R}}^{(t)},\mathbf{e}_{\text{R}}^{(t)}\right)\) is obtained after error collection. **Surrogate Model Update:** To initialize the algorithm, we priorly collect a set of state-error pairs \(D_{0}=\{\mathbf{x}_{i},\mathbf{e}_{i}\}_{i=1}^{N}\) for randomly selected states. Next, at each iteration \(t\), two new states are selected and their physical errors are calculated, thus resulting to two new training examples to update the surrogate error model, and an expanded training set \(D_{t}=D_{t-1}\cup\left(\hat{\mathbf{x}}_{\text{IT}}^{(t)},\mathbf{e}_{\text{IT }}^{(t)}\right)\cup\left(\hat{\mathbf{x}}_{\text{R}}^{(t)},\mathbf{e}_{\text{ R}}^{(t)}\right)\). In our implementation, the base neural network weights \(\mathbf{w}_{i}^{(t-1)}\) obtained from the previous iteration are further fine tuned using the two added examples \(\left(\hat{\mathbf{x}}_{\text{IT}}^{(t)},\mathbf{e}_{\text{IT}}^{(t)}\right)\) and \(\left(\hat{\mathbf{x}}_{\text{R}}^{(t)},\mathbf{e}_{\text{R}}^{(t)}\right)\), as well as \(N\) examples sampled from the previous training set \(D_{t-1}\). ### Remediation System and Implementation Given an ML estimation \(\hat{\mathbf{x}}\), the remediation system collects its physical error vector as in Eq. (1), then calculates the accumulated error from the objective function of Eq. (2) and compares it to the feasibility threshold \(\epsilon>0\). When the error exceeds the threshold, the GEESE algorithm is activated to search a feasible estimation \(\hat{\mathbf{x}}^{*}\) such that \(e\left(\hat{\mathbf{x}}^{*}\right)\leq\epsilon\) by querying the physical error as few times as possible. Algorithm 2 outlines the pseudocode of GEESE3, while Fig.1 illustrates its system architecture. Our key implementation practice is summarized below. Footnote 3: We will release an implementation of GEESE after the paper is accepted and insert the link here. **Empirical Estimation:** Eqs. (6), (7) and (10) require operations performed over probability distributions. In practice, we approximate these by Monte Carlo sampling. For Eq. (6), we minimize instead the average over the sampled latent variables \(Z_{\text{IT}}=\{\mathbf{z}_{i}\}_{i=1}^{N_{\text{IT}}}\) with \(\mathbf{z}_{i}\sim U\left([-a_{\text{IT}},a_{\text{IT}}]^{d}\right)\), and this is fixed in all iterations. The search space of Eq. (7) is approximated by a state set computed from \(Z_{\text{IT}}\) using the trained generator, i.e., \(X_{\text{IT}}^{(t)}=\left\{\mathbf{G}_{\text{IT}}\left(\mathbf{z}_{i},\mathbf{ \theta}_{\text{G}_{\text{IT}}}^{(t)}\right)\right\}_{i=1}^{N_{\text{IT}}}\). Similarly, the search space of Eq. (10) is approximated by a state sample \(X_{\text{R}}^{(t)}=\left\{\mathbf{G}_{\text{R}}\left(\mathbf{z}_{i},\mathbf{ \theta}_{\text{G}_{\text{R}}}^{(t)}\right)\right\}_{i=1}^{N_{\text{IT}}}\) where \(\mathbf{z}_{i}\sim U\left([-a_{\text{R}},a_{\text{R}}]^{d}\right)\). 
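Putting the two selections together, one iteration of twin state selection can be sketched as below, reusing the ensemble and generator sketches above. Here `gen_r_factory` is a hypothetical helper returning a freshly, randomly initialized exploration generator (standing in for Eq. (8)), and the fixed latent sample plays the role of \(Z_{\text{IT}}\).

```python
import torch

def select_exploitation_state(gen_it, ensemble, explicit_fns, weights,
                              latent_dim, n=256, a=5.0, steps=50, lr=1e-2):
    """Eq. (6): fit G_IT against the hybrid surrogate error, then Eq. (7): pick the argmin candidate."""
    z = (torch.rand(n, latent_dim) * 2.0 - 1.0) * a        # fixed Monte Carlo sample standing in for Z_IT
    opt = torch.optim.Adam(gen_it.parameters(), lr=lr)
    for _ in range(steps):
        loss = accumulated_error(gen_it(z), ensemble, explicit_fns, weights).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        cand = gen_it(z)
        err = accumulated_error(cand, ensemble, explicit_fns, weights)
    return cand[err.argmin()]

def select_exploration_state(gen_r_factory, ensemble, implicit_weights,
                             latent_dim, n=256, a=5.0):
    """Eq. (8): draw a randomly weighted G_R, then Eq. (10): pick the max-disagreement candidate."""
    gen_r = gen_r_factory()                                 # fresh random weights for G_R
    with torch.no_grad():
        z = (torch.rand(n, latent_dim) * 2.0 - 1.0) * a
        cand = gen_r(z)
        _, std = ensemble(cand)                             # per-element std across base networks (Eq. 9)
        score = (std * implicit_weights).sum(dim=1)         # weighted disagreement
    return cand[score.argmax()]
```

Both selected states would then be passed to the physical evaluation and appended to the training set, as described in the surrogate model update above.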
**Early Stopping:** When training the base neural networks for implicit error estimation, in addition to the maximum iteration number \(T_{e}\), early stopping of the training is enforced when the training loss in Eq. (5) is smaller than a preidentified threshold \(\epsilon_{e}\). As a result, a higher number \(n_{e}\) of early stopped base neural networks indicates a potentially more accurate error estimation. This strengthens the confidence in training the generator \(\mathbf{G}_{\text{IT}}\) by Eq. (6) that uses the trained base neural network from the previous iteration. In other words, when the base neural network are not sufficiently well trained, it is not recommended to put much effort in training the generator, which relies on the estimation quality. Therefore, we set the maximum iteration number \(T_{G}\) for training \(\mathbf{G}_{\text{IT}}\) in proportional to \(n_{e}\), i.e., \(T_{G}=\delta_{G}[\frac{2n_{e}}{L}+1]\), where \(\delta_{G}\) is training frequency coefficient. **Failed Exploitation Exclusion:** The state selection motivated by exploitation aims at choosing an \(\hat{\mathbf{x}}_{\text{IT}}^{(t)}\) with comparatively low physical error. To encourage this, a focus coefficient \(c\) is introduced, which, together with the feasibility error threshold \(\epsilon>0\), is used to exclude a potentially failed state with a high estimated error, i.e., \(\hat{e}\left(\hat{\mathbf{x}},\{\mathbf{w}_{i}\}_{i=1}^{L}\right)>c\epsilon\), to avoid an unnecessary query. ## 4 Experiments and Results We test the proposed approach GEESE on three real-world engineering inverse problems, including aero engine design [42], electro-mechanical actuator design [58] and pulse-width modulation of 13-level inverters [59]. The first problem is to find eleven design parameters (state) of an aero engine to satisfy the thrust and fuel consumption requirement (observation), the second problem is to find 20 design parameters (state) of an electro-mechanical actuator to satisfy requirements for overall cost and safety factor (observation), and the third problem is to find a group of 30 control parameters (state) of a 13-level inverter to satisfy the requirements for distortion factor and nonlinear factor (observation). It is notable that these three problems are not computationally expense, which is only used for the convenience of demonstrating the proposed GEESE algorithm. Details of these problems along with their physical models and metrics for evaluation are explained in supplementary material (Section A). We compare it with a set of classical and state-of-the-art black-box optimization techniques, including Bayesian Optimization with Gaussian Process (BOGP), GA [21], PSO [22], CMAES [60], ISRES [61], NSGA2 [62], and UNSGA3 [63], as well as the recently proposed work SVPEN [42], which employs RL in solving SAE inverse problems. These techniques are chosen because they are effective at seeking solutions with the assist of actual physical evaluations. Driven by the research goal of finding a feasible state estimation by querying as few times as possible the physical evaluations, we adopt two metrics to compare performance. First, we set a maximum budget of \(T=1,000\) query Figure 1: The workflow of whole system: Existed ML model gives first estimation, which is assessed by physical evaluations \(E_{P}\). If failed, GEESE is activated. The error estimated by hybrid surrogate error model is used to train exploitation generator \(\mathbf{G}_{\text{IT}}\). 
First, we set a maximum budget of \(T=1,000\) query times for all studied problems and compared methods, and test each method on each problem individually with 100 experimental cases, where each case corresponds to one ML state estimation of concern. The setup of the experimental cases is described in Appendix A of the supplementary material. We measure the number of experiments out of 100 in which a method fails to correct the concerned estimation before reaching the maximum query budget, and refer to it as the failure times \(N_{\text{failure}}\). Also, the average number of queries that a method requires before finding a feasible state, computed over the 100 experiments, is reported and referred to as the average query times \(N_{\text{query}}\). A more competitive algorithm is expected to have smaller \(N_{\text{failure}}\) and \(N_{\text{query}}\). We report the adopted hyperparameter and model settings for GEESE. The common hyperparameter settings shared by all three studied problems include \(T_{e}=40\), \(\epsilon_{e}=1e^{-4}\) and \(N=64\), and learning rates of \(1e^{-2}\) and \(1e^{-4}\) for training the exploitation generator and the base neural networks, respectively. Different focus coefficients of \(c=1.5,2\) and \(5\) (set in an increasing fashion) are used for problems 1, 2 and 3, respectively, due to the increasing problem complexity associated with their increasing state-space dimensions. Similarly, an increasing training frequency coefficient \(\delta_{G}=1,1\) and \(7\) is used for problems 1, 2 and 3, respectively, because a problem requires more training iterations when it involves more complex patterns from a higher-dimensional state space. The ensemble surrogate model for estimating the implicit errors is constructed as an average of 4 multi-layer perceptrons (MLPs), each with three hidden layers consisting of 1024, 2028 and 1024 hidden neurons. The exploration generator \(\mathbf{G_{R}}\) is constructed as a single-layer perceptron (SLP) and its one-dimensional input is sampled from \(U\left([-5,5]\right)\). For problems 1 and 2, which are relatively less complex from an engineering point of view, we design a simplified exploitation generator by setting \(\mathcal{Z}=\mathcal{X}\). Then, we directly sample \(Z_{\text{IT}}\) as the initial state set \(X_{\text{IT}}^{(0)}\); this initial state set is iterated via the following equations, modified from Eqs. (6) and (7), to obtain the state set \(X_{\text{IT}}^{(t)}\): \[X_{\text{IT}}^{(t)}=\arg\min_{X_{\text{IT}}^{(t)}\in\mathcal{X}}\mathbb{E}_{\hat{\mathbf{x}}\sim X_{\text{IT}}^{(t-1)}}\left[\hat{e}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right)\right], \tag{11}\] \[\hat{\mathbf{x}}_{\text{IT}}^{(t)}=\arg\min_{\hat{\mathbf{x}}\in X_{\text{IT}}^{(t)}}\hat{e}\left(\hat{\mathbf{x}},\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right). \tag{12}\]
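For these two lower-dimensional problems, Eqs. (11) and (12) amount to adjusting the candidate states directly under the ensemble-estimated error and then keeping the most promising one. The sketch below shows one plausible realization with a differentiable surrogate; the paper does not prescribe a particular optimizer, so the use of Adam and the step count here are assumptions, and `surrogate_error` stands for the ensemble prediction \(\hat{e}(\cdot,\{\mathbf{w}_{i}^{(t-1)}\}_{i=1}^{L})\).

```python
import torch

def refine_exploitation_states(states, surrogate_error, steps=50, lr=1e-2):
    """Sketch of Eqs. (11)-(12) when Z = X: candidate states are updated directly to
    lower the ensemble-estimated error, and the best candidate is then selected.

    states          : tensor of shape (n, d), the current state set X_IT^(t-1)
    surrogate_error : differentiable callable mapping (n, d) states to (n,) estimated
                      accumulated errors (built from the L base networks)
    """
    x = states.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):                    # Eq. (11): lower the average estimated error
        opt.zero_grad()
        surrogate_error(x).mean().backward()
        opt.step()
    with torch.no_grad():                     # Eq. (12): pick the most promising state
        best = torch.argmin(surrogate_error(x))
        return x.detach(), x[best].clone()
```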
Problem 3 involves a special state pattern, requiring state values that increase over the dimensions, i.e., \(\mathbf{x}_{i}-\mathbf{x}_{i+1}<0\). To enable the latent variables to capture this, we construct the exploitation generator \(\mathbf{G_{\text{IT}}}\) as an MLP with three hidden layers consisting of 256, 512 and 256 hidden neurons. Also, to avoid generation collapse [64] in problem 3, a regularization term has been added to the training loss in Eq. (6), resulting in the following revised training objective that encourages state diversity: \[\boldsymbol{\theta}_{\text{G}_{\text{IT}}}^{(t)}=\arg\min_{\boldsymbol{\theta}\in\mathbb{R}^{30}}\mathbb{E}_{\mathbf{z}\sim U\left([-5,5]^{30}\right)}\left[\hat{e}\left(\mathbf{G_{\text{IT}}}(\mathbf{z},\boldsymbol{\theta}),\left\{\mathbf{w}_{i}^{(t-1)}\right\}_{i=1}^{L}\right)+\max\left(0.0288-\sigma_{1}(\mathbf{z},\boldsymbol{\theta}),0\right)\right], \tag{13}\] where \(\sigma_{1}(\mathbf{z},\boldsymbol{\theta})\) denotes the standard deviation of the first state element generated by \(\mathbf{G_{\text{IT}}}\). We encourage it to shift away from the collapsed point without spreading overly, by bounding \(\sigma_{1}\) with a portion of the standard deviation of a uniform distribution, e.g., \(0.288\); the portion \(\frac{0.288}{10}=0.0288\) is observed to be empirically effective. The spread control is only needed for the first state element, as the remaining elements follow from \(\mathbf{x}_{i}-\mathbf{x}_{i+1}<0\). Configurations of the competing methods and extra information on GEESE are provided in Appendix B of the supplementary material. ### Results and Comparative Analysis Table 1 summarizes the results of the compared methods for the three problems, obtained with a feasibility threshold of \(\epsilon=0.075\), which reflects a challenging setting with low error tolerance. It can be observed that GEESE has the fewest failure times \(N_{\text{failure}}\) on all three problems. In problem 3, especially, GEESE succeeds with no failure while most other methods have more than 10 failures. This is a highly desired characteristic for a remediation system with low error tolerance. In addition, GEESE also has the fewest query times \(N_{\text{query}}\) in all three problems, indicating the best efficiency. We report additional results in Appendix C of the supplementary material obtained by varying the feasibility threshold \(\epsilon\) and the initial sample size \(N\), where GEESE also achieves satisfactory performance in general, while outperforming the other methods in handling higher-dimensional problems with lower error tolerance. SVPEN [42] cannot return a feasible correction within 1000 queries in any experiment, as the RL at its core requires many more queries than the other optimization-based techniques. ### Ablation Studies and Sensitivity Analysis To examine the effectiveness of the key design elements of GEESE, we perform a set of ablation studies and report the results in Table 2 using problem 1 with a small feasibility threshold of \(\epsilon=0.05\), indicating low error tolerance. The studies include the following altered designs: (1) Directly estimate the implicit error sum using an MLP with the same hidden layers but a single output neuron. (2) Train the exploration generator \(\mathbf{G_{R}}\) using an approach suggested by [57]. (3) Remove the early stopping design. (4) Remove the focus coefficient. Results show that estimating the implicit error sum worsens the performance. This is because the structural information in the gradient is lost in error sum estimation, which can cause ambiguous updates when training \(\mathbf{G_{IT}}\) and consequently requires GEESE to make more error queries.
Training \(\mathbf{G_{R}}\) also worsens the performance compared to simply assigning random network weights to \(\mathbf{G_{R}}\) without training. As previously explained in Section 3.2, this is because training \(\mathbf{G_{R}}\) can frequently shift the focus of the surrogate error model and thus impact the stability of the optimization process. Both the early stopping and the focus coefficient play an important role in GEESE, where the former prevents GEESE from overfitting and the latter helps avoid unnecessary queries. Additional results on hyperparameter sensitivity analysis for GEESE are provided in Appendix D of the supplementary material. The results show that GEESE is not very sensitive to hyperparameter changes and allows a wide range of values with satisfactory performance, which makes GEESE easy to tune and use in practice. ## 5 Discussion and Conclusion We have proposed a novel physics-driven optimization algorithm, GEESE, to correct ML estimation failures in SAE inverse problems. To query the expensive physical evaluations less frequently, GEESE uses a cheaper hybrid surrogate error model, mixing an ensemble of base neural networks for implicit error approximation with analytical expressions of the exact explicit errors. To effectively model the probability distribution of candidate states, two generative neural networks are constructed to simulate the exploration and exploitation behaviours. In each iteration, the exploitation generator is trained to find the most promising state with the smallest error, while the exploration generator is randomly sampled to find the most informative state to improve the surrogate error model. These two types of selection are guided, respectively, by the error approximated by the ensemble and by the disagreement between its base neural networks. The element-wise error approximation promotes a more effective interaction between the surrogate error model and the two generators. Tested on three real-world engineering inverse problems, GEESE outperforms all the compared methods, finding a feasible state with the fewest queries and no failure under the low-error-tolerance setup. In future work, there are still challenges to address, particularly for very high-dimensional inverse problems. Such problems require larger and more complex model architectures to accommodate their more complex underlying patterns, and thus impose challenges on training time and data requirements. Computational expense should account not only for the query cost of physical evaluations but also for the learning cost of such models.
Flexible neural network architectures that allow for embedding domain/induced knowledge in addition to simulation data and its training, as well \begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{3}{c}{(1): Individual vs Sum Error Estimation} \\ \hline Surrogate Error Model & Query times & Standard deviation \\ \hline Estimate error elements & **20.20** & **16.37** \\ Estimate error sum & 23.26 & 21.18 \\ \hline \hline \multicolumn{3}{c}{(3): Effect of Early stopping} \\ \hline Schedule & Query times & Standard deviation \\ \hline with earlystop & **20.20** & **16.37** \\ w/o earlystop & 32.80 & 17.84 \\ \hline \hline \end{tabular} \begin{tabular}{c c c} \hline \hline \multicolumn{3}{c}{(2): Effect of Exploration Training} \\ \hline Exploration style & Query times & Standard deviation \\ \hline w/o training & **32.64** & **22.82** \\ with training & 41.32 & 97.15 \\ \hline \hline \multicolumn{3}{c}{(4): Effect of Focus Coefficient} \\ \hline Schedule & Query times & Standard deviation \\ \hline with focus coefficient & **20.20** & **16.37** \\ w/o focus coefficient & 27.19 & 19.36 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of ablation studies reported on problem 1, where a better performance is highlighted in **bold**. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Algorithm**} & \multicolumn{2}{c|}{**Problem 1**} & \multicolumn{2}{c|}{**Problem 2**} & \multicolumn{2}{c}{**Problem 3**} \\ & \multicolumn{2}{c|}{**State Dimension:11**} & \multicolumn{2}{c|}{**State Dimension:20**} & \multicolumn{2}{c}{**State Dimension:30**} \\ & Failure times & Query times & Failure times & Query times & Failure times & Query times \\ \hline BOGP & 0 & 3.29 ±1.51 & 97 & 97.73.76 ±144.28 & 4 & 112.66 ±229.98 \\ GA & 0 & 64.00 ±0.00 & 0 & 130.56 ±63.31 & 13 & 231.76 ±339.71 \\ PSO & 0 & 64.00 ±0.00 & 0 & 64.00 ±0.00 & 12 & 244.16 ±343.71 \\ CMAES & 0 & 55.67 ±3.28 & 0 & 119.44 ±41.80 & 12 & 227.42 ±312.17 \\ ISRES & 0 & 65.00 ±0.00 & 0 & 177.64 ±80.51 & 16 & 250.05 ±350.16 \\ NSGA2 & 0 & 64.00 ±0.00 & 0 & 139.52 ±68.56 & 13 & 232.40 ±359.94 \\ UNSGA3 & 0 & 64.00 ±0.00 & 0 & 140.80 ±79.94 & 12 & 227.52 ±330.07 \\ SVPEN & 100 & 1000.00 ±0.00 & 100 & 1000.00 ±0.00 & 100 & 1000.00 ±0.00 \\ GEESE (Ours) & 0 & **3.18 ±1.98** & 0 & **51.65 ±33.01** & **0** & **43.56 ±65.28** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of the compared methods, where the best is shown in **bold**, while the second best is underlined as its interaction with the main solution model, e.g., an ML estimator for inverse problems, are interesting directions to pursue.
2309.07728
Predictions of the Strange partner of $T_{cc}$ in the quark delocalization color screening model
Inspired by the detection of the $T_{cc}$ tetraquark state by the LHCb Collaboration, we perform a systematic investigation of the low-lying doubly charmed tetraquark states with strangeness in the quark delocalization color screening model in the present work. Two kinds of configurations, the meson-meson configuration and the diquark-antidiquark configuration, are considered in the calculation. Our estimations indicate that the coupled channel effects play an important role in the multiquark system, and a bound state with $J^{P}=1^{+}$ and a resonance state with $J^{P}=0^{+}$ have been predicted. The mass of the bound state is evaluated to be $(3971\sim3975)$ MeV, while the mass and width of the resonance are determined to be $(4113\sim4114)$ MeV and $(14.3\sim 16.1)$ MeV, respectively.
Xuejie Liu, Dianyong Chen, Hongxia Huang, Jialun Ping
2023-09-14T14:04:01Z
http://arxiv.org/abs/2309.07728v1
# Predictions of the Strange partner of \(T_{cc}\) in the quark delocalization color screening model ###### Abstract Inspired by the detection of the \(T_{cc}\) tetraquark state by the LHCb Collaboration, we perform a systematic investigation of the low-lying doubly charmed tetraquark states with strangeness in the quark delocalization color screening model in the present work. Two kinds of configurations, the meson-meson configuration and the diquark-antidiquark configuration, are considered in the calculation. Our estimations indicate that the coupled channel effects play an important role in the multiquark system, and a bound state with \(J^{P}=1^{+}\) and a resonance state with \(J^{P}=0^{+}\) have been predicted. The mass of the bound state is evaluated to be \((3971\sim 3975)\) MeV, while the mass and width of the resonance are determined to be \((4113\sim 4114)\) MeV and \((14.3\sim 16.1)\) MeV, respectively. pacs: 13.75.Cs, 12.39.Pn, 12.39.Jh + Footnote †: Corresponding author ## I Introduction In the recent two decades, an increasing number of charmonium-like states have been observed experimentally, which provides a good opportunity for searching for multiquark states. As the first confirmed charmonium-like state, \(Z_{c}(3900)\) was observed in 2013 by the BESIII [1] and Belle [2] Collaborations in the \(\pi^{+}J/\psi\) invariant mass spectrum of the process \(e^{+}e^{-}\to\pi^{+}\pi^{-}J/\psi\) at a center-of-mass energy of 4.26 GeV, and then the authors of Ref. [3] further confirmed the existence of \(Z_{c}(3900)\) by using the data sample collected by the CLEO-c detector in the same process but at \(\sqrt{s}=4.170\) GeV. The partial wave analysis of the process \(e^{+}e^{-}\to\pi^{+}\pi^{-}J/\psi\) with the data samples accumulated at \(\sqrt{s}=4.23\) and 4.26 GeV indicated that the spin and parity of the \(Z_{c}(3900)^{+}\) state are \(1^{+}\)[4]. These observations indicate that such a new particle cannot be simply interpreted within the conventional quark-antiquark and three-quark schemes. Thus, some exotic interpretations, such as the tetraquark state [5; 6; 7; 8] and the hadronic molecular state [9; 10; 11; 12; 13; 14; 15; 16; 17], have been proposed. Besides the resonance interpretations, \(Z_{c}(3900)\) has also been attributed to kinematic effects [18; 19; 20; 21; 22; 23], which would indicate that \(Z_{c}(3900)\) is not a genuine resonance. In the resonance framework, the quark components of \(Z_{c}(3900)\) are \(c\bar{c}q\bar{q}\). The flavor independence of the strong interactions naturally indicates the possible existence of the strange partner of \(Z_{c}(3900)\), whose quark components are \(c\bar{c}s\bar{q}\). Such charmonium-like states with strangeness have been predicted theoretically in various models, such as the tetraquark scenario [24; 25], the hadronic molecular model [26; 27], the hadro-quarkonium model [25] and the initial single chiral particle emission mechanism [28]. In the year of 2020, the BESIII Collaboration observed a new state named \(Z_{cs}(3985)\) in the \(K^{+}\) recoil mass distributions of the process \(e^{+}e^{-}\to K^{+}D_{s}^{-}D^{*0}/K^{+}D_{s}^{*-}D^{0}\)[29]. Later on, the LHCb Collaboration reported their observation of two exotic structures, \(Z_{cs}(4000)\) and \(Z_{cs}(4220)\), in the \(J/\psi K^{+}\) invariant mass spectrum of the \(B^{+}\to J/\psi\phi K^{+}\) decay in 2021 [30].
Since the observed masses of \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) are similar, these two states may be considered to be the same state (hereinafter, we use \(Z_{cs}(3985)\) to refer to this state). It is interesting to note that \(Z_{c}(3900)\) is located in the vicinity of the \(D^{*}\bar{D}\) threshold, while \(Z_{cs}(3985)\) is close to the \(D_{s}^{*}\bar{D}\) threshold; thus one can consider \(Z_{cs}(3985)\) as a strange partner of \(Z_{c}(3900)\). Consequently, the hadronic molecular [31; 32; 33; 34; 35; 36; 37; 38; 39; 40], compact tetraquark [41; 42; 43] and hadro-quarkonium [25] scenarios have been proposed to decode the nature of \(Z_{cs}(3985)\). In the naive multiquark scenario, if there are multiquark states composed of \(c\bar{c}q\bar{q}\), the states composed of \(cc\bar{q}\bar{q}\) are also expected to exist, and they have been considered to be molecular \(D^{*+}D^{0}\) states [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60] or compact states [61; 62; 63]. Recently, the LHCb Collaboration reported the observation of the first doubly charmed tetraquark state \(T_{cc}^{+}(3875)\) in the \(D^{0}D^{0}\pi^{+}\) mass spectrum just below the \(D^{*+}D^{0}\) mass threshold [64; 65] with \(I(J^{P})=0(1^{+})\). As indicated in Fig. 1, the quark components of \(T_{cc}(3875)\) are \(cc\bar{q}\bar{q}\), which indicates that \(T_{cc}(3875)\) could be a good candidate for a compact tetraquark state. Figure 1: (Color online). The similarity of the hidden charm and doubly charmed states. Hereinafter, \(T_{cc\bar{s}}\) is used to refer to the doubly charmed state with strangeness. In Refs. [61; 62], the authors investigated the mass spectrum of the \(S-\)wave doubly heavy tetraquark states \(QQ\bar{q}\bar{q}\) based on the improved chromomagnetic interaction model and found a stable \(cc\bar{u}\bar{d}\) tetraquark state with \(I(J^{P})=0(1^{+})\) below the \(D^{*+}D^{0}\) threshold, which is well consistent with the observed \(T_{cc}^{+}(3875)\). Moreover, the QCD sum rule estimation in Ref. [63] also supported the compact tetraquark interpretation. In addition, the observed mass of \(T_{cc}^{+}(3875)\) is only several hundred keV below the threshold of \(D^{0}D^{*+}\), which implies that \(T_{cc}^{+}(3875)\) could be interpreted as a shallow molecular state composed of \(D^{0}D^{*+}+h.c.\). Further estimations using quark models [44; 45; 46; 47; 48; 57; 58; 59], QCD sum rules [49; 50; 51], heavy quark symmetry [52; 53; 54; 60] and Bethe-Salpeter equations [55; 56] indicated that \(T_{cc}^{+}(3875)\) could be a good candidate for a \(D^{0}D^{*+}+h.c.\) molecular state. Similar to the relation between \(Z_{cs}(3985)\) and \(Z_{c}(3900)\), one can expect the existence of the strange partner of \(T_{cc}(3875)\), i.e., tetraquark states composed of \(cc\bar{q}\bar{s}\). Actually, before the observation of \(T_{cc}^{+}(3875)\), the lattice QCD estimations in Ref. [66] predicted that the \(T_{cc\bar{s}}\) state with \(J^{P}=1^{+}\) lies about 10 MeV below the threshold of \(D^{+}D_{s}^{-}\), while the estimations using heavy quark symmetry in Ref. [67] found its mass to be about 180 MeV above the corresponding threshold. In Ref. [68], the predicted \(T_{cc\bar{s}}\) tetraquark state with \(J^{P}=1^{+}\) was below the threshold of \(D^{+}D_{s}^{-}\), while those with \(J^{P}=0^{+}\) and \(2^{+}\) were both above the corresponding thresholds. After the observation of \(T_{cc}^{+}\), the authors in Ref.
[60] took advantage of the experimental information on the binding energy of \(T_{cc}^{+}\) to fix the cutoff regulator of the loops in the Bethe-Salpeter equation, and a \(D_{s}^{*}D^{*}\) bound state with \(J^{P}=1^{+}\) was predicted. Besides, the color-magnetic model estimations in Ref. [69] implied that both the \(T_{cc}^{+}\) and \(T_{cc\bar{s}}^{+}\) systems could be stable against the strong interactions. However, the \(T_{cc\bar{s}}^{+}\) state was not found in the quark model, although it may be obtained if the mixing of the \(S\) and \(D\) waves is taken into account [59]. As mentioned above, theorists have not reached an agreement on the existence of \(T_{cc\bar{s}}\) tetraquark states. In the present work, we perform a systematic estimation of the \(T_{cc\bar{s}}\) system by using the quark delocalization color screening model (QDCSM), in an attempt to further explore the existence of possible bound and resonant states in the \(T_{cc\bar{s}}\) system. This paper is organized as follows. After the introduction, the details of the QDCSM and the resonating group method (RGM) are presented in Section II. Our numerical results and the related discussions for the \(T_{cc\bar{s}}\) system are given in Section III, and the last section is devoted to a short summary. ## II Quark delocalization color screening model and the resonating group method ### Quark delocalization color screening model The QDCSM is an extension of the naive quark cluster model [70; 71; 72; 73] and is developed with the aim of addressing multiquark systems. For the tetraquark system, the Hamiltonian reads, \[H=\sum_{i=1}^{4}\left(m_{i}+\frac{p_{i}^{2}}{2m_{i}}\right)-T_{CM}+\sum_{j>i=1}^{4}V(r_{ij}), \tag{1}\] where \(T_{CM}\) is the center-of-mass kinetic energy, which is usually subtracted without loss of generality since we mainly focus on the internal relative motions of the multiquark system. The interaction is a two-body potential, which includes the color-confining potential \(V_{\rm CON}\), the one-gluon-exchange potential \(V_{\rm OGE}\), and the potential resulting from Goldstone-boson exchange, \(V_{\chi}\), i.e., \[V(r_{ij})=V_{\rm CON}(r_{ij})+V_{\rm OGE}(r_{ij})+V_{\chi}(r_{ij}). \tag{2}\] In the present work, we focus on the low-lying \(S-\)wave \(T_{cc\bar{s}}\) tetraquark system with positive parity. In this case, the spin-orbit and tensor interactions vanish and the potential \(V_{\rm OGE}(r_{ij})\) becomes, \[V_{\rm OGE}(r_{ij})=\frac{1}{4}\alpha_{s}^{q_{i}q_{j}}\lambda_{i}^{c}\cdot\lambda_{j}^{c}\left[\frac{1}{r_{ij}}-\frac{\pi}{2}\delta(r_{ij})\left(\frac{1}{m_{i}^{2}}+\frac{1}{m_{j}^{2}}+\frac{4\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}}{3m_{i}m_{j}}\right)\right], \tag{3}\] where \(m_{i}\) is the quark mass, \(\mathbf{\sigma}_{i}\) and \(\mathbf{\lambda}_{i}^{c}\) are the Pauli matrices and the SU(3) color matrices, respectively. The \(\alpha_{s}^{q_{i}q_{j}}\) is the quark-gluon coupling constant, which offers a consistent description of mesons from the light- to the heavy-quark sector. The values of \(\alpha_{s}^{q_{i}q_{j}}\) are associated with the quark flavors, and in the present work they are fixed by reproducing the mass differences of the low-lying mesons with \(S=0\) and \(S=1\).
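To make the structure of Eq. (3) concrete, the following Python sketch evaluates the central one-gluon-exchange potential for given color and spin expectation values. It is only an illustrative numerical sketch: in the actual QDCSM/RGM calculation these matrix elements are evaluated analytically on the Gaussian basis, the contact \(\delta(r_{ij})\) term is shown here with an assumed Gaussian smearing, and all function names and example numbers are ours rather than from any released code.

```python
import numpy as np

def gaussian_delta(b):
    """An assumed Gaussian-smeared replacement for delta^3(r), normalized so that
    its integral over all space equals one; b plays the role of the oscillator size."""
    return lambda r: np.exp(-r**2 / b**2) / (np.pi**1.5 * b**3)

def v_oge(r, alpha_s, m_i, m_j, lam_dot, sig_dot, delta_reg):
    """Central one-gluon-exchange potential of Eq. (3) between quarks i and j.

    r        : inter-quark distance
    alpha_s  : coupling alpha_s^{q_i q_j} for this flavor pair
    m_i, m_j : constituent quark masses (natural units, hbar = c = 1)
    lam_dot  : expectation value of lambda_i^c . lambda_j^c in the color state
    sig_dot  : expectation value of sigma_i . sigma_j in the spin state
    """
    coulomb = 1.0 / r
    contact = -(np.pi / 2.0) * delta_reg(r) * (
        1.0 / m_i**2 + 1.0 / m_j**2 + 4.0 * sig_dot / (3.0 * m_i * m_j))
    return 0.25 * alpha_s * lam_dot * (coulomb + contact)

# Example (illustrative numbers only): a color-singlet quark-antiquark pair has
# <lambda_i . lambda_j> = -16/3, and a spin-triplet pair has <sigma_i . sigma_j> = 1
v = v_oge(r=1.0, alpha_s=0.3, m_i=1.6, m_j=2.7, lam_dot=-16.0/3.0,
          sig_dot=1.0, delta_reg=gaussian_delta(b=0.3))
```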
The confining potential \(V_{\rm CON}(r_{ij})\) is, \[V_{\rm CON}(r_{ij})=-a_{c}\mathbf{\lambda}_{i}^{c}\cdot\mathbf{\lambda}_{j}^{c}\left[f( r_{ij})+V_{0_{q_{ij}}}\right], \tag{4}\] where the \(V_{0_{q_{ij}}}\) is determined by the mass differences of the theoretical estimations and experimental measurement of each kind of meson, which is also quark flavor related parameter. In the QDCSM, the function \(f(r_{ij})\) is defined as, \[f(r_{ij})=\left\{\begin{array}{ll}r_{ij}^{2}&\mbox{if $i,j$ occur in the same cluster,}\\ \frac{1-e^{-\mu_{ij}r_{ij}^{2}}}{\mu_{ij}}&\mbox{if $i,j$ occur in different cluster,} \end{array}\right. \tag{5}\] where the color screening parameter \(\mu_{ij}\) relevant to the light quarks can be determined by fitting the deuteron properties, \(NN\) and \(NY\) scattering phase shifts [74; 75; 76], which are \(\mu_{qq}=0.45\), \(\mu_{qs}=0.19\) and \(\mu_{ss}=0.08\). The parameter \(\mu_{ij}\) satisfy the relation \(\mu_{qs}^{2}=\mu_{qq}\mu_{ss}\), where \(q\) represents \(u\) or \(d\) quark. When extending to the heavy-quark case, we found that the dependence of the parameter \(\mu_{cc}\) is rather weak in the calculation of the spectrum of \(P_{c}\) states by taking the value of \(\mu_{cc}\) from \(10^{-4}\) to \(10^{-2}\) fm\({}^{-2}\)[77]. Moreover, when \(\mu_{ij}\) is rather small, the exponential function can be approximated to be, \[e^{-\mu_{ij}r_{ij}^{2}} = 1-\mu_{ij}r_{ij}^{2}+\mathcal{O}(\mu_{ij}^{2}r_{ij}^{4}).\] in the small \(r\) region. Accordingly, the confinement potential between two clusters is approximated to be, \[V_{\rm CON}(r_{ij}) = -a_{c}\lambda_{i}^{c}\cdot\lambda_{j}^{c}\left(\frac{1-e^{-\mu_{ij} \mathbf{r}_{ij}^{2}}}{\mu_{ij}}+V_{0_{ij}}\right) \tag{7}\] \[\approx -a_{c}\lambda_{i}^{c}\cdot\lambda_{j}^{c}\left(r_{ij}^{2}+V_{0_{ ij}}\right)\!,\] which is the same with the expression of two quarks in the same cluster. Thus, when the value of the \(\mu_{ij}\) is very small, the screened confinement will return to the quadratic form, which is why the results are insensitive to the value of \(\mu_{cc}\). So in the present work, we take \(\mu_{cc}=0.01\) fm\({}^{-2}\). Then \(\mu_{sc}\) and \(\mu_{uc}\) are obtained by the relation \(\mu_{sc}^{2}=\mu_{ss}\mu_{cc}\) and \(\mu_{uc}^{2}=\mu_{uu}\mu_{cc}\), respectively. The Goldstone-boson exchange interactions between light quarks appear because the dynamical breaking of chiral symmetry. For the \(T_{cc\bar{s}}\) system, the \(\pi\) exchange interaction vanishes because there is no unfavor quark pair in the tetraquark state, and then the concrete form of the Goldstone-boson exchange potential becomes, \[V_{ij}^{\chi} = V_{K}(\mathbf{r}_{ij})\sum_{a=4}^{7}\lambda_{i}^{a}\cdot\lambda_{j}^ {a}+ \tag{8}\] \[V_{\eta}(\mathbf{r}_{ij})\left[\left(\lambda_{i}^{8}\cdot\lambda_{j }^{8}\right)\cos\theta_{P}-\left(\lambda_{i}^{0}\cdot\lambda_{j}^{0}\right) \sin\theta_{P}\right],\] with \[V_{\chi}(\mathbf{r}_{ij}) = \frac{g_{ch}^{2}}{4\pi}\,\frac{m_{x}^{2}}{12m_{i}m_{j}}\frac{ \Lambda_{\chi}^{2}}{\Lambda_{x}^{2}-m_{x}^{2}}m_{x} \tag{9}\] \[\left\{(\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j})\!\left[Y(m_{\chi}\;r _{ij})-\frac{\Lambda_{\chi}^{3}}{m_{\chi}^{3}}Y(\Lambda_{\chi}\;r_{ij}) \right]\right\},\] \[\chi=\{K,\eta\},\] where \(Y(x)=e^{-x}/x\) is the standard Yukawa function. The \(\mathbf{\lambda}^{a}\) is the SU(3) flavor Gell-Mann matrix. The mass of the \(K\) and \(\eta\) meson is taken from the experimental value [78]. 
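The distinction between the in-cluster and between-cluster confinement in Eqs. (4)-(7) is easy to visualize numerically. The short sketch below, a minimal illustration rather than part of the model code, evaluates the screened form of \(f(r_{ij})\) and confirms that for the small value \(\mu_{cc}=0.01\) fm\({}^{-2}\) adopted here it is numerically very close to the bare quadratic confinement at typical inter-quark distances.

```python
import numpy as np

def f_confinement(r, mu=None):
    """Eq. (5): quadratic confinement r^2 inside a cluster (mu=None), and the
    color-screened form (1 - exp(-mu * r^2)) / mu between different clusters."""
    r = np.asarray(r, dtype=float)
    if mu is None:
        return r**2
    return (1.0 - np.exp(-mu * r**2)) / mu

r = np.linspace(0.2, 2.0, 4)                 # fm
screened = f_confinement(r, mu=0.01)         # mu_cc = 0.01 fm^-2, the Eq. (7) regime
quadratic = f_confinement(r)                 # bare r^2
# The relative deviation from r^2 stays at the per-cent level over this range,
# which is why the results are insensitive to the precise value of mu_cc.
print(np.max(np.abs(screened - quadratic) / quadratic))
```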
The chiral coupling constant, \(g_{ch}\), is determined from the \(\pi NN\) coupling constant through, \[\frac{g_{ch}^{2}}{4\pi}=\left(\frac{3}{5}\right)^{2}\frac{g_{\pi NN}^{2}}{4 \pi}\frac{m_{u,d}^{2}}{m_{N}^{2}}, \tag{10}\] where the SU(3) flavor symmetry only broken by the different masses of the light quarks. All the other model parameters are the same as the ones in Ref. [79], where three different sets of parameters were used to study the \(\bar{c}\bar{c}s\bar{s}\) tetraquark system and some experimental discovered charmonium-like state, such as \(\chi_{c0}(3930)\), \(X(4350)\), \(X(4500)\), \(X(4700)\) and \(X(4274)\), could be well explained. For the singlet of completeness, we collect the relevant model parameters in Table 1. In the QDCSM, the single-particle orbital wave functions in the ordinary quark cluster model are the left and right centered single Gaussian functions, which are, \[\phi_{\alpha}(\mathbf{S}_{i})=\left(\frac{1}{\pi b^{2}}\right)^{\frac {1}{2}}e^{-\frac{(\mathbf{r}_{\alpha}+\mathbf{S}_{i}\mathbf{\gamma}^{2})}{2\beta^{2}}},\] \[\phi_{\beta}(-\mathbf{S}_{i})=\left(\frac{1}{\pi b^{2}}\right)^{\frac {1}{2}}e^{-\frac{(\mathbf{r}_{\beta}+\mathbf{S}_{i}\mathbf{\gamma}^{2})}{2\beta^{2}}}. \tag{11}\] The quark delocalization is realized by writing the single-particle orbital wave function as a linear combination of the left and right Gaussians, which are, \[\psi_{\alpha}(\mathbf{S}_{i},\mathbf{\epsilon}) = \left(\phi_{\alpha}(\mathbf{S}_{i})+\epsilon\phi_{\alpha}(-\mathbf{S}_{i} )\right)/N(\epsilon),\] \[\psi_{\beta}(-\mathbf{S}_{i},\mathbf{\epsilon}) = \left(\phi_{\beta}(-\mathbf{S}_{i})+\epsilon\phi_{\beta}(\mathbf{S}_{i} )\right)/N(\epsilon),\] \[N(\epsilon) = \sqrt{1+\epsilon^{2}+2\epsilon e^{-S_{i}^{2}/4b^{2}}}, \tag{12}\] where \(\mathbf{\epsilon}(\mathbf{S}_{i})\) is the delocalization parameter determined by the dynamics of the quark system rather than free parameters. In this way, the system can choose its most favorable configuration through its dynamics in a larger Hilbert space. \begin{table} \begin{tabular}{c c c c c} \hline & Parameters & QDCSM1 & QDCSM2 & QDCSM3 \\ \hline \multirow{4}{*}{Quark Mass} & \(m_{u}\)(MeV) & 313 & 313 & 313 \\ & \(m_{s}\)(MeV) & 536 & 536 & 536 \\ & \(m_{c}\)(MeV) & 1728 & 1728 & 1728 \\ \hline \multirow{4}{*}{Confinement} & b(fm) & 0.29 & 0.3 & 0.315 \\ & \(a_{c}\)(MeV \(fm^{-2}\)) & 101 & 101 & 101 \\ & \(V_{0_{\rm m}}\)(MeV) & -2.3928 & -2.2543 & -2.0689 \\ & \(V_{0_{\rm m}}\)(MeV) & -1.9137 & -1.7984 & -1.6429 \\ & \(V_{0_{\rm m}}\)(MeV) & -1.4175 & -1.3231 & -1.2052 \\ & \(V_{0_{\rm m}}\)(MeV) & -1.3448 & -1.2826 & -1.2745 \\ & \(V_{0_{\rm m}}\)(MeV) & -0.7642 & -0.6739 & -0.5452 \\ & \(V_{0_{\rm m}}\)(MeV) & 0.6063 & 0.7555 & 0.9829 \\ \hline \multirow{4}{*}{OGE} & \(\alpha_{s}^{\rm na}\) & 0.2292 & 0.2567 & 0.3019 \\ & \(\alpha_{s}^{\rm na}\) & 0.2655 & 0.2970 & 0.3484 \\ \cline{1-1} & \(\alpha_{s}^{\rm na}\) & 0.3437 & 0.3805 & 0.4405 \\ \cline{1-1} & \(\alpha_{s}^{\rm na}\) & 0.3856 & 0.3604 & 0.3360 \\ \cline{1-1} & \(\alpha_{s}^{\rm na}\) & 0.5969 & 0.6608 & 0.7649 \\ \cline{1-1} & \(\alpha_{s}^{\rm na}\) & 1.5101 & 1.6717 & 1.9353 \\ \hline \end{tabular} \end{table} Table 1: Three sets of model parameters involved in the present estimations. ### The resonating group method In the present work, the RGM is employed to carry out the dynamical calculation. 
When dealing with the two-cluster system in this method, one can only consider the relative motion between the clusters, while the two clusters are frozen inside [80]. So the wave function of the \(T_{cc\bar{s}}\) system can be constructed as, \[\psi_{4q} = \mathcal{A}\left[\left[\psi_{A}(\mathbf{\rho}_{A})\psi_{B}(\mathbf{\rho}_{ B})\right]^{[\sigma]JS}\otimes\chi_{L}(\mathbf{R})\right]^{\prime}, \tag{13}\] where the symbol \(\mathcal{A}\) is the antisymmetry operator, which is defined as \[\mathcal{A} = 1-P_{13}. \tag{14}\] where the \(P_{13}\) indicates the exchange of the particle positions with numbers 1 and 3 from the Fig. 2. \([\sigma]=[222]\) gives the total color symmetry. The symbols \(I\), \(S\), \(L\), and \(J\) represent flavor, spin, orbit angular momentum, and total angular momentum of \(T_{cc\bar{s}}\) system, respectively. \(\psi_{A}\) and \(\psi_{B}\) are the wave functions of the two-quark cluster, which are, \[\psi_{A} = \left(\frac{1}{2\pi b^{2}}\right)^{3/4}e^{-\rho_{A}^{2}/(4b^{2}) }\eta_{A}S_{A}\chi_{A}^{c},\] \[\psi_{B} = \left(\frac{1}{2\pi b^{2}}\right)^{3/4}e^{-\rho_{B}^{2}/(4b^{2}) }\eta_{I_{B}}S_{B}\chi_{B}^{c}, \tag{15}\] where \(\eta_{I}\), \(S\), and \(\chi\) represent the flavor, spin and internal color terms of the cluster wave functions, respectively. According to Fig. 2, we adopt different Jacobi coordinates for different diagrams. As for the meson-meson configuration in Fig. 2-(a), the Jacobi coordinates are defined as, \[\mathbf{\rho}_{A} = \mathbf{r}_{q_{1}}-\mathbf{r}_{\bar{q}_{2}},\quad\mathbf{\rho}_{B}=\mathbf{r}_{q_ {3}}-\mathbf{r}_{\bar{q}_{4}},\] \[\mathbf{R}_{A} = \frac{m_{1}\mathbf{r}_{q_{1}}+m_{2}\mathbf{r}_{\bar{q}_{2}}}{m_{1}+m_{2}},\] \[\mathbf{R}_{B} = \frac{m_{3}\mathbf{r}_{q_{3}}+m_{4}\mathbf{r}_{\bar{q}_{4}}}{m_{3}+m_{4}},\] \[\mathbf{R} = \mathbf{R}_{A}-\mathbf{R}_{B},\] \[\mathbf{R}_{\mathbf{c}} = \frac{m_{1}\mathbf{r}_{q_{1}}+m_{2}\mathbf{r}_{\bar{q}_{2}}+m_{3}\mathbf{r}_{ q_{3}}+m_{4}\mathbf{r}_{\bar{q}_{4}}}{m_{1}+m_{2}+m_{3}+m_{4}}. \tag{16}\] where the subscript \(q/\bar{q}\) indicates the quark or antiquark particle, while the number indicates the quark position in Fig. 2-(a). As for the diquark-antidiquark configuration as shown in Fig. 2-(b), the relevant Jacobi coordinates can be obtained by interchanging \(\mathbf{r}_{q_{3}}\) with \(\mathbf{r}_{\bar{q}_{2}}\) in Eq. (16). Form the variational principle, after variation with respect to the relative motion wave function \(\chi(\mathbf{R})=\sum_{L}\chi_{L}(\mathbf{R})\), one obtains the RGM equation, which is, \[\int H\left(\mathbf{R},\mathbf{R}^{\prime}\right)\chi\left(\mathbf{R}^{\prime }\right)d\mathbf{R}^{\prime}=E\] \[\int N\left(\mathbf{R},\mathbf{R}^{\prime}\right)\chi\left(\mathbf{R}^{ \prime}\right)d\mathbf{R}^{\prime}, \tag{17}\] where \(H(\mathbf{R},\mathbf{R}^{\prime})\) and \(N(\mathbf{R},\mathbf{R}^{\prime})\) are Hamiltonian and norm kernels, respectively. The eigenenergy \(E\) and the wave functions can be obtained by solving the RGM equation. In the present estimation, the function \(\chi(\mathbf{R})\) can be expanded by gaussian bases, which is \[\chi(\mathbf{R}) = \frac{1}{\sqrt{4\pi}}\sum_{L}\left(\frac{1}{\pi b^{2}}\right)^{3 /4}\sum_{i}^{n}C_{i,L} \tag{18}\] \[\times\int e^{-\frac{1}{2}\left(\mathbf{R}-S\right)^{2}/b^{2}}Y^{L} \left(\hat{\mathbf{S}}_{i}\right)d\hat{\mathbf{S}}_{i},\] where \(C_{i,L}\) is the expansion coefficient, and \(n\) is the number of gaussian bases, which is determined by the stability of the results. 
\(\mathbf{S}_{i}\) is the separation of two reference centers. \(\mathbf{R}\) is the dynamic coordinate defined in Eq. (16). After including the motion of the center of mass, i.e., \[\phi c(\mathbf{R_{c}})=\left(\frac{4}{\pi b^{2}}\right)^{3/4}\mathrm{e}^{\frac{-2 \pi b^{2}}{b^{2}}}, \tag{19}\] one can rewrite Eq. (13) as, \[\psi_{4q} = \mathcal{A}\sum_{i,L}C_{i,L}\int\frac{d\hat{\mathbf{S}}_{i}}{\sqrt{4 \pi}}\prod_{\alpha=1}^{2}\phi_{\alpha}\left(\mathbf{S}_{i}\right)\prod_{\alpha=3}^ {4}\phi_{\beta}\left(-\mathbf{S}_{i}\right) \tag{20}\] \[\times\left[\left[\eta_{I_{A}S_{\alpha}}\eta_{I_{B}S_{\alpha}} \right]^{JS}Y^{L}(\hat{\mathbf{S}}_{i})\right]^{J}\left[\chi_{A}^{c}\chi_{B}^{c} \right]^{[\sigma]},\] where \(\phi_{\alpha}(\mathbf{S}_{i})\) and \(\phi_{\beta}(-\mathbf{S}_{i})\) are the single-particle orbital wave functions with different reference centers, whose specific expressions have been presented in Eq. (11). With the reformulated ansatz as shown in Eq. (20), the RGM equation becomes an algebraic eigenvalue equation, which is, \[\sum_{j,L}C_{j,L}H_{i,j}^{L,L^{\prime}} = E\sum_{j}C_{j,L^{\prime}}N_{i,j}^{L^{\prime}}, \tag{21}\] where \(N_{i,j}^{L^{\prime}}\) and \(H_{i,j}^{L,L^{\prime}}\) are the overlap of the wave functions and the matrix elements of the Hamiltonian, respectively. By solving the generalized eigenvalue problem, we can obtain the energies of the tetraquark systems \(E\) and the corresponding expansion coefficients \(C_{j,L}\). Finally, the relative motion wave function between two clusters can be obtained by substituting the \(C_{j,L}\) into Eq. (18). As for the flavor, spin and color Figure 2: The meson-meson configuration (diagram (a)) and diquark-antidiquark configuration (diagram (b)) in the \(T_{cc\bar{s}}\) tetraquark system. wave functions of the tetraquark system, they are constructed in a two step way. One can first construct the wave functions for the two clusters, and then coupling the wave functions of two clusters to form the wave function of tetraquark system. The details of the flavor, spin and color wave functions of tetraquark system are collected in the Appendix A. ## III Numerical results and discussions In this work, only the low-lying \(S-\)wave \(T_{cc\bar{s}}\) tetraquark state are considered and the spin of the tetraquark system can be 0, 1, and 2. Thus, the spin parity of \(T_{cc\bar{s}}\) tetraquark states can be \(0^{+}\), \(1^{+}\) and \(2^{+}\), respectively. Moreover, in the present estimations, both the meson-meson and diquark-antidiquark configurations are considered. In general, there are two types of color structures for the meson-meson configuration, which are color singlet-singlet (\(\mathbf{1}_{c}\otimes\mathbf{1}_{c}\)) and the color octet-octet (\(\mathbf{8}_{c}\otimes\mathbf{8}_{c}\)). The later color structure have been taken into account by introducing the color screening effects in the model, thus, we only consider the color singlet-singlet structures in the present estimations. A for the diquark-antidiquark configuration, both the antitriplet-triplet (\(\mathbf{\hat{3}}_{c}\otimes\mathbf{3}_{c}\)) and sextet-antisextet (\(\mathbf{6}_{c}\otimes\mathbf{\bar{6}}_{c}\)) structure are taken into account. All the relevant channels for all possible quantum numbers are listed in Table 2, where \(F^{i};S^{j}_{s};\chi^{i}_{k}\) shows the necessary basis combinations in flavor (\(F^{i}\)), spin (\(S^{j}_{s}\)) and color (\(\chi^{c}_{k}\)) degrees of freedom. 
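Before presenting the bound-state results, it may help to note that once the Hamiltonian kernel \(H_{i,j}^{L,L^{\prime}}\) and the norm kernel \(N_{i,j}^{L^{\prime}}\) have been evaluated on the Gaussian basis of Eq. (18), solving Eq. (21) reduces to a standard generalized eigenvalue problem. The sketch below is a generic illustration of this step and not the code used in this work; the linear-dependence cutoff is an assumption of the sketch, introduced because Gaussians with nearby reference centers \(\mathbf{S}_{i}\) can make the norm kernel nearly singular.

```python
import numpy as np

def solve_rgm(H, N, cutoff=1e-8):
    """Solve the algebraic RGM equation of Eq. (21), H C = E N C, for the
    eigenenergies E and the expansion coefficients C_{j,L} of Eq. (18)."""
    H, N = np.asarray(H, float), np.asarray(N, float)
    # Diagonalize the norm kernel and drop nearly linearly dependent directions
    n_val, n_vec = np.linalg.eigh(N)
    keep = n_val > cutoff * n_val.max()
    T = n_vec[:, keep] / np.sqrt(n_val[keep])      # T^T N T = identity
    # Transform to the orthonormalized basis and diagonalize the Hamiltonian there
    energies, c_red = np.linalg.eigh(T.T @ H @ T)
    coefficients = T @ c_red                        # back to the Gaussian basis
    return energies, coefficients
```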
### Bound State With the above preparations, the low-lying \(S-\)wave \(T_{cc\bar{s}}\) tetraquark states are systematically explored herein. In Tables 3-5, we collect the estimated eigenenergies of the \(T_{cc\bar{s}}\) tetraquark states with different \(J^{P}\) quantum numbers. In those tables, the index in the first column labels each channel, and in the second and third columns we list all the involved channels and the corresponding theoretical thresholds, respectively. Moreover, \(E_{sc}\) is the eigenenergy obtained in the single channel estimations, while \(E_{cc}\) and \(E_{mix}\) are the eigenenergies estimated by considering the coupled channel effects within each kind of configuration and in both configurations, respectively. Additionally, we define the binding energy \(E_{b}\) of the \(T_{cc\bar{s}}\) tetraquark states as \(E_{b}=E_{i}-E_{4}(\infty)\) to identify whether or not the tetraquark states are stable against the strong interactions, where \(E_{4}(\infty)\) is the lowest possible threshold of the two-meson structure estimated in the QDCSM and \(i\) labels the different channel-coupling schemes. Such a subtraction procedure can greatly reduce the influence of the model parameters on the binding energies. If \(E_{b}>0\), the tetraquark system can fall apart into two mesons via the strong interactions. If \(E_{b}<0\), the strong decay into two mesons is kinematically forbidden and therefore the decay can only occur via either the weak or the electromagnetic interaction. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=0^{+}\), there are two channels in the meson-meson configuration and two channels in the diquark-antidiquark configuration. The estimated eigenenergies of the \(T_{cc\bar{s}}\) state with \(J^{P}=0^{+}\) are listed in Table 3. The theoretical thresholds of the meson-meson channels are also presented for comparison. With the parameters in QDCSM1, the single channel estimations in the meson-meson configuration find that the eigenenergies are all above the corresponding thresholds, which indicates that the single channel estimations do not support the existence of bound states. In addition, when considering the coupled channel effects in the meson-meson configuration, we find the estimated eigenenergy to be 3836.2 MeV, which is above the threshold of \(D^{0}D^{+}_{s}\). The lowest eigenenergy obtained by the coupled channel estimations in the meson-meson configuration is very close to that of the single channel estimation in the \(D^{0}D^{+}_{s}\) channel, which indicates that the coupled channel effect in the meson-meson configuration is rather weak. As for the diquark-antidiquark configuration, both the single channel estimations and the coupled channel estimations indicate that the eigenenergies are above the threshold of \(D^{0}D^{+}_{s}\). Different from the meson-meson configuration, we find that the eigenenergy obtained from the coupled channel estimation is at least 20 MeV below the lowest one of the single channel estimations, which indicates that the coupled channel effect in the diquark-antidiquark configuration is much stronger. Moreover, when we extend the channel coupling to both configurations, the eigenenergy is estimated to be 3836.2 MeV, which is still above the threshold of \(D^{0}D^{+}_{s}\). The results estimated with the parameters in QDCSM2 and QDCSM3 are very similar to those obtained with the parameters in QDCSM1, and no stable tetraquark state is found.
For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=1^{+}\), there are six channels, including three channels in the meson-meson configuration and three channels in the diquark-antidiquark configuration. From Table 4, the estimated results of three sets of model parameters are almost identical. When considering the single channel estimations in the meson-meson configuration, we find that the estimated eigenenergy of \(D^{0}D^{+}_{s}\) and \(D^{*}D^{+}_{s}\) channels are above the theoretical threshold of the corresponding physical channels, which indicates that these channels are scattering channels in single channel calculations. However, a bound state in the \(D^{*}D^{*+}_{s}\) channel with the bound energy about \(1\sim 10\) MeV is obtained with all three sets of model parameters. Besides, by the coupling channels with the meson-meson configuration, the estimated eigenenergy is slightly above the lowest theoretical threshold of the \(D^{*}D^{+}_{s}\), which show that the effect of couple channels in the meson-meson configuration is rather weak. For the diquark-antidiquark configuration, the estimated eigenenergies obtained for the single-channel and channel-coupled estimations are above the theoretical threshold of the lowest channel \(D^{*}D^{+}_{s}\). Nevertheless, when the channel coupling between the two configuration are taken into account, a shallow bound state is detected, although the magnitude of the bound energy is slightly different with different sets of the model parameters. In view of the above conclusions, we estimate the average values of each terms in the Hamiltonian to examine how a shallow \(D^{*}D^{+}_{s}\) bound state with \(J^{P}=1^{+}\) is created. In Table 6, we present the contributions of each interaction by considering the single channel and coupled channel calculations. In addition, the average values of each interaction of two conventional \(D^{*}\)and \(D_{s}^{+}\) mesons without interactions, i.e., the distance between the two mesons are large enough, are also listed in the table for comparison. From the Table, one finds that the magnitude of the average values of each terms for different sets of model parameter are very similar. Here, we define \(\Delta E_{xc}=E_{xc}-E_{M}\), \(\Delta E_{cc}=E_{cc}-E_{M}\) and \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Index} & \multirow{2}{*}{Channel} & \multirow{2}{*}{Threshold} & \multicolumn{3}{c}{QDCSM1} & \multicolumn{3}{c}{QDCSM2} & \multicolumn{3}{c}{QDCSM3} \\ \cline{3-13} & & & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) \\ \hline 1 & \(D^{0}D_{s}^{*+}\) & 3977 & 3978.2 & 3977.1 & 3971.1 & 3978.2 & 3977.7 & 3973.8 & 3978.2 & 3978.1 & 3974.8 \\ 2 & \(D^{*}D_{s}^{*}\) & 3975 & 3978.0 & & 3978.1 & & 3978.2 & & & \\ 3 & \(D^{*}D_{s}^{*+}\) & 4119 & 4110.8 & & 4117.2 & & 4118.1 & & & \\ 4 & \((cc)(\bar{q}\bar{s})\) & & 4544.2 & 4128.2 & & 4535.4 & 4127.2 & & 4518.9 & 4124.1 & \\ 5 & \((cc)(\bar{q}\bar{s})\) & & 4132.7 & & 4132.5 & & 4130.7 & & & \\ 6 & \((cc)(\bar{q}\bar{s})\) & & 4337.5 & & 4334.1 & & 4327.8 & & & \\ \hline \hline \end{tabular} \end{table} Table 4: The same as Table 3 but for the tetraquark states with \(J^{p}=1^{+}\). 
\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & \(J^{p}=0^{*}\) & & & \(J^{p}=1^{+}\) & & & \(J^{p}=2^{+}\) & & \\ \hline \multirow{2}{*}{index} & \(F^{i};S^{j}_{i};\chi^{c}_{k}\) & \multirow{2}{*}{channels} & \multirow{2}{*}{index} & \(F^{i};S^{j}_{i};\chi^{c}_{k}\) & & & \(F^{i};S^{j}_{i};\chi^{c}_{k}\) & & \\ & [i;j;k] & & & [i;j;k] & & & & [i;j;k] & & channels \\ \hline 1 & [1,1,1] & \(D^{0}D_{s}^{*}\) & 1 & [1,3,1] & \(D^{0}D_{s}^{**}\) & 1 & [1,6,1] & \(D^{*}D_{s}^{**}\) \\ 2 & [1,2,1] & \(D^{*}D_{s}^{**}\) & 2 & [1,4,1] & \(D^{*}D_{s}^{**}\) & 2 & [2,6,4] & \((cc)(\bar{q}\bar{s})\) \\ 3 & [2,1,3] & \((cc)(\bar{q}\bar{s})\) & 3 & [1,5,1] & \(D^{*}D_{s}^{**}\) & & & & \\ 4 & [2,2,4] & \((cc)(\bar{q}\bar{s})\) & 4 & [2,3,3] & \((cc)(\bar{q}\bar{s})\) & & & & \\ & & & 5 & [2,4,4] & \((cc)(\bar{q}\bar{s})\) & & & & \\ & & & 6 & [2,5,4] & \((cc)(\bar{q}\bar{s})\) & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: The relevant channels for all possible \(J^{p}\) quantum numbers. \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Index} & \multirow{2}{*}{Channel} & \multirow{2}{*}{Threshold} & \multicolumn{3}{c}{QDCSM1} & \multicolumn{3}{c}{QDCSM2} & \multicolumn{3}{c}{QDCSM3} \\ \cline{3-13} & & & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) & \(E_{xc}\) & \(E_{cc}\) & \(E_{mix}\) \\ \hline 1 & \(D^{0}D_{s}^{*}\) & 3833 & 3836.3 & 3836.2 & 3836.2 & 3836.3 & 3836.3 & 3836.2 & 3836.2 & 3836.2 & 3836.2 \\ 2 & \(D^{*}D_{s}^{**}\) & 4119 & 4119.7 & & & 4120.9 & & 4121.2 & & \\ 3 & \((cc)(\bar{q}\bar{s})\) & & 4589.3 & 4299.8 & & 4585.1 & 4291.8 & & 4574.7 & 4277.9 & \\ 4 & \((cc)(\bar{q}\bar{s})\) & & 4321.3 & & & 4316.5 & & 4308.0 & & \\ \hline \hline \end{tabular} \end{table} Table 3: The low-lying eigenenergies (in unit of MeV) of \(T_{cc}\) tetraquark states with \(J^{p}=0^{*}\). \(\Delta E_{mix}=E_{mix}-E_{M}\). From our estimations, we find the contributions of the confinement potential to \(\Delta E_{xc}\), \(\Delta E_{cc}\) and \(\Delta E_{mix}\) are positive, which indicate the confinement potential hinders the \(D^{*}\) and \(D_{s}^{*}\) subclusters from forming a bound states. For the kinetic energy term, with more physical channels taking into consideration, the properties of kinetic energy basically transforms gradually from repulsion towards very strong attraction, whereas the similar conclusions can be drawn for the one-gluon-exchange interaction. In addition, in the meson exchange interactions, the meson exchange potential contributions to \(\Delta E_{xc}\), \(\Delta E_{cc}\) and \(\Delta E_{mix}\) are negligible, in particularly, the contributions from \(\eta\) meson exchange potential are all less than 0.05 MeV, which are not listed in the table. According to the above analysis, one can find that the kinetic energy term and the one-gluon-exchange potential have deep attractions in the channel coupling calculations with both the meson-meson and diquark-antidiquark configurations, However, the confinement potential displays as repulsive interaction, which weaken the overall attraction. Such a phenomenon illustrates the very delicate competition between the kinetic energy and the interaction potential from various sources in the Hamiltonian. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\), only one physics channel in the meson-meson configuration and one channel in the diquark-antidiquark configuration exists. 
From Table 5, one can find the eigenenergies obtained from the single channel estimation is higher than the physical meson-meson channel. After considering the coupled channel effect between the meson-meson and diquark-antidiquark configurations, the estimated eigenenergy is still above the threshold of \(D^{*}D_{s}^{**+}\), which indicates that there is no bound state in the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\). ### Resonance States In the bound state estimations, we find one bound state with \(J^{P}=1^{+}\) while there is no bound state in the \(J^{P}=0^{+}\) and \(J^{P}=2^{+}\) systems. In the following, we will employ the real scaling method to explore the possible resonance states in the \(T_{cc\bar{s}}\) tetraquark system. To determine whether these resonance states could be detected by the open channels, we perform a channel coupling estimation by including all the meson-meson and diquark-antidiquark channels in the estimations. The real scaling method is developed to identify the genuine resonances from the states with discrete energies with finite volume [81]. In this method, a factor \(\mathbf{S}_{m}\), which is the distance between two clusters, is adopted to scale the finite volume. So with the increase of the distance between two clusters, the continuum state will fall off toward its thresh \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{QDCSM1} & \multicolumn{4}{c}{QDCSM2} & \multicolumn{4}{c}{QDCSM3} \\ \cline{2-13} & \(H_{T}\) & \(V_{\text{CON}}\) & \(V_{\text{OGE}}\) & \(V_{K}\) & \(H_{T}\) & \(V_{\text{CON}}\) & \(V_{\text{OGE}}\) & \(V_{K}\) & \(H_{T}\) & \(V_{\text{CON}}\) & \(V_{\text{OGE}}\) & \(V_{K}\) \\ \hline \(E_{xc}\) & 1081.3 & -901.7 & -506.6 & \(\sim 0.0\) & 1011.2 & -783.9 & -554.2 & \(\sim 0.0\) & 917.9 & -615.9 & -628.8 & \(\sim 0.0\) \\ \(E_{cc}\) & 1073.9 & -895.9 & -505.8 & -0.1 & 1008.8 & -782.5 & -553.5 & -0.1 & 917.1 & -615.5 & -628.5 & \(\sim 0.0\) \\ \(E_{mix}\) & 1049.0 & -820.4 & -558.1 & -4.4 & 998.4 & -752.4 & -573.7 & -3.5 & 915.3 & -609.8 & -635.4 & -0.3 \\ \(E_{M}\) & 1079.6 & -903.3 & -506.1 & \(\sim 0.0\) & 1008.7 & -784.7 & -553.8 & \(\sim 0.0\) & 915.0 & -616.3 & -628.5 & \(\sim 0.0\) \\ \hline \(\Delta E_{xc}\) & 1.7 & 1.6 & -0.5 & \(\sim 0.0\) & 2.5 & 0.8 & 0.4 & \(\sim 0.0\) & 2.9 & 0.4 & -0.3 & \(\sim 0.0\) \\ \(\Delta E_{cc}\) & -5.7 & 7.4 & 0.3 & -0.1 & 0.1 & 2.2 & -0.3 & -0.1 & 2.1 & 0.8 & 0.0 & \(\sim 0.0\) \\ \(\Delta E_{mix}\) & -30.6 & 82.9 & -52.0 & -4.4 & -10.3 & 32.3 & -19.9 & -3.5 & 0.3 & 5.5 & -7.2 & -0.3 \\ \hline \hline \end{tabular} \end{table} Table 6: Contributions of each terms in Hamiltonian to the energy of the \(D^{0}D_{s}^{**}\) bound state with \(J^{P}=1^{+}\) in unit of MeV. \(E_{M}\) stands for the sum of two mesons threshold. Our estimations indicate the contributions of \(\eta\) meson exchange potential are all less than 0.05 MeV in different sets of model parameters. Thus, the contributions from \(\eta\) meson exchange are not presented. Figure 3: A sketch diagram of the resonance shape in the real-scaling method. old, the energy of the bound state remains unchanged, while a resonance state will tend to be stable. If the energy of a scattering state is far away from the one of the resonance, the coupling between the resonance and the scattering states is rather weak, and the energy of the resonance is almost stable. 
When the energy of the scattering state approaches the one of the resonance due to the increasing of \(\mathbf{S}_{m}\), the coupling will become strong, and if \(\mathbf{S}_{m}\) increases further, the energy gap between the resonance and scattering states will increase and the coupling will become weak again. In this way, an avoided crossing structure appears. This is a general feature of two interacting energy levels. Because of the continuum nature of the scattering states, the avoided crossing structure will show up repeatedly with the increasing of \(\mathbf{S}_{m}\) as shown in Fig. 3 and the resonance line corresponds to the energy of the resonance state. In addition, from the slopes of resonance and scattering states, the decay width can be estimated by, \[\Gamma = 4|V_{\text{min}}(S)|\frac{\sqrt{|k_{r}||k_{c}|}}{|k_{r}-k_{c}|} \tag{22}\] where \(k_{r}\) and \(k_{c}\) are the slopes of the resonance and scattering states, respectively. While, \(V_{min}(S)\) is the minimal energy difference between the resonance and the scattering state at avoided crossing point. This method has been successfully applied to investigate the pentaquark [82; 83], the dibaryon [84], and the tetraquark systems [79; 85; 86]. In the present work, we expand the spacial wave function with a set of gaussians with differences \(\mathbf{S}_{m}\), \((m=1,2,3,\ldots,n)\) and the distance with the relative motion of two clusters can be scaled. So we calculate the energy eigenvalues of the \(T_{cc\bar{s}}\) tetraquark system by taking the value of the largest distance (\(S_{m}\)) between two clusters from 4.0 to 9.0 fm to check if there is any resonance state. Here, we take the results of the QDCSM1 as examples, which are shown in Fig. 4 with different \(J^{P}\) quantum numbers. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=0^{+}\) as shown in Fig. 4-(a), one can note that the lower black horizontal line corresponds to the physical threshold of \(D_{s}^{+}D^{0}\), while the upper blue horizontal line with the energy to be about 4114 MeV, locates below the threshold of \(D^{*}D_{s}^{**}\), which corresponds to a resonance state since the resonance behavior appearing in the Fig. 4-(a) as the finite space is constantly expanding. Moreover, the resonance state is estimated by considering the full channel coupling, and the present result indicates that its main ingredient is \(D^{*}D_{s}^{**}\). In other words, the effect of the channel coupling push the energy of the physical channel \(D^{*}D_{s}^{**}\) a bit below its threshold. In addition, the width of this resonance state is estimated to be about 14.3 MeV according to Eq. (22). For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=1^{+}\) as shown in Fig. 4-(b), it is obvious that the lowest red horizontal line locates at the energy of 3971 MeV, which is below the threshold of the \(D^{0}D_{s}^{**}\), and this represents the bound states of \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=1^{+}\). This conclusion is consistent with the estimations in the last subsection. Moreover, two additional horizontal lines are also presented, which stand for the threshold of \(D^{*}D_{s}^{+}\) and \(D^{*}D_{s}^{**}\), respectively. The present estimations indicate that there is no resonance state in the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=1^{+}\), and the bound state in the \(D^{*}D_{s}^{**}\) channel becomes the scattering state by the effect of the channel coupling. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\) as shown in Fig. 
4-(c), there is one horizontal line, which represents the threshold of \(D^{*}D_{s}^{**}\). It is clearly to conclude that there are no bound or resonant states in the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\). In addition, we perform the same estimations for the \(T_{cc\bar{s}}\) tetraquark system in the QDCSM2 and QDCSM3. The results are similar to those of QDCSM1. We summarize the results obtained from three sets of model parameters in Table 7. By taking the coupled channel effects into consideration, we find one resonance state with a mass \(4113\sim 4114\) MeV for the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=0^{+}\). The dominant component of the resonance state is \(D^{*}D_{s}^{**}\) with the percentage of this component to be about 80%. Moreover, the decay width of this resonance state is predicted to be \(14.3\sim 16.1\) MeV. For the \(J^{P}=1^{+}\) system, there is a bound state with energy range (\(3971.1\sim 3974.8\)) MeV and no resonance state is obtained. For the \(T_{cc\bar{s}}\) tetraquark system with \(J^{P}=2^{+}\), no resonance or bound state is obtained by the channel coupling estimations. ## IV Summary In the present work, the \(T_{cc\bar{s}}\) tetraquark system with the quantum number \(J^{P}=0^{+},1^{+},2^{+}\) are systemically investigated to search for the possible bound state and resonance state by using the RGM in the QDCSM framework. In the model, both meson-meson and diquark-antidiquark configurations are taken into account, and the single-channel and the coupled channel calculations are preformed to obtain the energy of the \(T_{cc\bar{s}}\) tetraquark system. In addition, a stabilization calculation is carried out to seek for possible resonance states. Furthermore, to check whether the estimated results are parameter dependent, three different sets of model parameters are employed in the calculation and we find the qualitative results of three sets of model parameters for the \(T_{cc\bar{s}}\) tetraquark system are very similar. From the present estimations, we find that the coupled channel effects plays important role in the \(T_{cc\bar{s}}\) tetraquark system. After taking the coupled channel effects into consideration, we predict one bound state with the energy to be \(3971.1\sim 3974.8\) MeV and \(J^{P}=1^{+}\). Moreover, one resonance state with \(J^{P}=0^{+}\) is also obtained, the resonance mass and width are estimated to be \(4113\sim 4114\) MeV and \(14.3\sim 16.1\) MeV, respectively. The predictions in the present work could be experimentally detected in the future by LHCb and Belle II. Additionally the theoretical and further experimental investigations for properties of the \(T_{cc\bar{s}}\) tetraquark could pave \begin{table} \begin{tabular}{c c c c c} \hline \multicolumn{2}{c}{State} & \multicolumn{3}{c}{Parameter Sets} \\ \hline & \(J^{P}\) & QDCSM1 & QDCSM2 & QDCSM3 \\ \hline Bound & \(1^{+}\) & \(3971.1\) & \(3973.8\) & \(3974.8\) \\ Resonance & \(0^{+}\) & \(4114/14.3\) & \(4144/15.8\) & \(4143/16.1\) \\ \hline \end{tabular} \end{table} Table 7: The energies and widths of the \(T_{cc\bar{s}}\) tetraquark states. the way for possible doubly and triply tetraquark states. ###### Acknowledgements. This work is supported partly by the National Natural Science Foundation of China under Contract No. 12175037, No. 12335001, No. 11775118 and No. 11535005. This work is also supported by china Postdoctoral Science Foundation funded project No. 2021M690626, and No. 1107020201. 
Figure 4: The stabilization plots of the energies of the \(T_{cc\bar{s}}\) tetraquark systems.

## Appendix A The wave function of the open heavy charm tetraquark with strangeness

### The color wave function

Many more color structures are available in multiquark systems than in conventional hadrons such as \(q\bar{q}\) mesons and \(qqq\) baryons. In this appendix, we present how to construct the colorless wave function for a tetraquark system. For the meson-meson configuration, the color wave functions of a \(q\bar{q}\) cluster are \[C^{1}_{[111]} = \sqrt{\frac{1}{3}}(r\bar{r}+g\bar{g}+b\bar{b}),\] \[C^{2}_{[21]} = r\bar{b},\qquad C^{3}_{[21]}=-r\bar{g},\] \[C^{4}_{[21]} = g\bar{b},\qquad C^{5}_{[21]}=-b\bar{g},\] \[C^{6}_{[21]} = g\bar{r},\qquad C^{7}_{[21]}=b\bar{r},\] \[C^{8}_{[21]} = \sqrt{\frac{1}{2}}(r\bar{r}-g\bar{g}),\] \[C^{9}_{[21]} = \sqrt{\frac{1}{6}}\Big{(}-r\bar{r}-g\bar{g}+2b\bar{b}\Big{)}, \tag{34}\] where the subscripts \([111]\) and \([21]\) stand for color-singlet (\(\mathbf{1}_{c}\)) and color-octet (\(\mathbf{8}_{c}\)), respectively. The SU(3)\({}_{\rm color}\) wave functions of the color-singlet (two color-singlet clusters, \(\mathbf{1}_{c}\otimes\mathbf{1}_{c}\)) and hidden-color (two color-octet clusters, \(\mathbf{8}_{c}\otimes\mathbf{8}_{c}\)) channels are then given, respectively, by \[\chi^{c}_{1} = C^{1}_{[111]}C^{1}_{[111]},\] \[\chi^{c}_{2} = \sqrt{\frac{1}{8}}\big{(}C^{2}_{[21]}C^{7}_{[21]}-C^{4}_{[21]}C^{5}_{[21]}-C^{3}_{[21]}C^{6}_{[21]}+C^{8}_{[21]}C^{8}_{[21]}-C^{6}_{[21]}C^{2}_{[21]}+C^{9}_{[21]}C^{9}_{[21]}-C^{8}_{[21]}C^{4}_{[21]}+C^{7}_{[21]}C^{2}_{[21]}\big{)}. \tag{35}\] For the diquark-antidiquark structure, the color wave functions of the diquark clusters are \[C^{1}_{[2]} = rr,\qquad C^{2}_{[2]}=\sqrt{\frac{1}{2}}\big{(}rg+gr\big{)},\] \[C^{3}_{[2]} = gg,\qquad C^{4}_{[2]}=\sqrt{\frac{1}{2}}\big{(}rb+br\big{)},\] \[C^{5}_{[2]} = \sqrt{\frac{1}{2}}\big{(}gb+bg\big{)},\qquad C^{6}_{[2]}=bb,\] \[C^{7}_{[11]} = \sqrt{\frac{1}{2}}\big{(}rg-gr\big{)},\qquad C^{8}_{[11]}=\sqrt{\frac{1}{2}}\big{(}rb-br\big{)},\] \[C^{9}_{[11]} = \sqrt{\frac{1}{2}}\big{(}gb-bg\big{)}, \tag{36}\] while the color wave functions of the antidiquark clusters can be written as \[C^{1}_{[22]} = \bar{r}\bar{r},\qquad C^{2}_{[22]}=-\sqrt{\frac{1}{2}}\big{(}\bar{r}\bar{g}+\bar{g}\bar{r}\big{)},\] \[C^{3}_{[22]} = \bar{g}\bar{g},\qquad C^{4}_{[22]}=\sqrt{\frac{1}{2}}\big{(}\bar{r}\bar{b}+\bar{b}\bar{r}\big{)},\] \[C^{5}_{[22]} = -\sqrt{\frac{1}{2}}\big{(}\bar{g}\bar{b}+\bar{b}\bar{g}\big{)},\qquad C^{6}_{[22]}=\bar{b}\bar{b},\] \[C^{7}_{[211]} = \sqrt{\frac{1}{2}}\big{(}\bar{r}\bar{g}-\bar{g}\bar{r}\big{)},\qquad C^{8}_{[211]}=-\sqrt{\frac{1}{2}}\big{(}\bar{r}\bar{b}-\bar{b}\bar{r}\big{)},\] \[C^{9}_{[211]} = \sqrt{\frac{1}{2}}\big{(}\bar{g}\bar{b}-\bar{b}\bar{g}\big{)}. \tag{10}\] The color-singlet wave function of the diquark-antidiquark configuration can be either the product of a color-sextet and an antisextet cluster (\(\mathbf{6}_{c}\otimes\bar{\mathbf{6}}_{c}\)) or the product of a color-triplet and an antitriplet cluster (\(\mathbf{3}_{c}\otimes\bar{\mathbf{3}}_{c}\)), which read \[\chi^{c}_{3} = \sqrt{\frac{1}{6}}\big{(}C^{1}_{[2]}C^{1}_{[22]}-C^{2}_{[2]}C^{2}_{[22]}+C^{3}_{[2]}C^{3}_{[22]}+C^{4}_{[2]}C^{4}_{[22]}-C^{5}_{[2]}C^{5}_{[22]}+C^{6}_{[2]}C^{6}_{[22]}\big{)},\] \[\chi^{c}_{4} = \sqrt{\frac{1}{3}}\big{(}C^{7}_{[11]}C^{7}_{[211]}-C^{8}_{[11]}C^{8}_{[211]}+C^{9}_{[11]}C^{9}_{[211]}\big{)}. \tag{11}\]
### The flavor wave function

For the flavor degree of freedom, different quark coupling orders generate different flavor wave functions. From Table 2, the \(T_{cc\bar{s}}\) tetraquark flavor wave functions can be categorized as \(F^{i}_{m}\) and \(F^{i}_{d}\), where the subscripts \(m\) and \(d\) refer to the meson-meson and the diquark-antidiquark configurations, respectively, and the distinct structures are obtained from the distinct quark coupling orders. For the meson-meson structure, the flavor wave function reads \[F^{1}_{m} = (c\bar{q})-(c\bar{s}), \tag{12}\] while for the diquark-antidiquark structure, the flavor wave function is written as \[F^{2}_{d} = (cc)-(\bar{q}\bar{s}). \tag{13}\]

### The spin wave function

The total spin \(S\) of the tetraquark states ranges from 0 to 2, and all of these values are considered. The spin wave functions of the two-body clusters are \[\chi_{11} = \alpha\alpha,\] \[\chi_{10} = \sqrt{\frac{1}{2}}\big{(}\alpha\beta+\beta\alpha\big{)},\] \[\chi_{1-1} = \beta\beta,\] \[\chi_{00} = \sqrt{\frac{1}{2}}\big{(}\alpha\beta-\beta\alpha\big{)}. \tag{14}\] Then, the total spin wave functions \(S^{i}_{\,j}\) are obtained by coupling the spin wave functions of the two subclusters with SU(2) algebra, and the total spin wave functions of the four-quark states read \[S^{1}_{\,0} = \chi_{00}\chi_{00},\] \[S^{2}_{\,0} = \sqrt{\frac{1}{3}}\big{(}\chi_{11}\chi_{1-1}-\chi_{10}\chi_{10}+\chi_{1-1}\chi_{11}\big{)},\] \[S^{3}_{\,1} = \chi_{00}\chi_{11},\] \[S^{4}_{\,1} = \chi_{11}\chi_{00},\] \[S^{5}_{\,1} = \sqrt{\frac{1}{2}}\big{(}\chi_{11}\chi_{10}-\chi_{10}\chi_{11}\big{)},\] \[S^{6}_{\,2} = \chi_{11}\chi_{11}. \tag{15}\]
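As a quick cross-check of the construction above (not part of the original derivation), the coefficients in \(S^{2}_{\,0}\) are simply the SU(2) Clebsch-Gordan coefficients for coupling two spin-1 clusters to total spin zero; the short sympy sketch below verifies them.

```python
# Cross-check of the coefficients in S^2_0: coupling two spin-1 clusters to
# total spin S=0 gives <1 m; 1 -m | 0 0> = +1/sqrt(3), -1/sqrt(3), +1/sqrt(3)
# for m = 1, 0, -1, matching sqrt(1/3)(chi11 chi1-1 - chi10 chi10 + chi1-1 chi11).
from sympy import S, sqrt, simplify
from sympy.physics.quantum.cg import CG

expected = [1/sqrt(3), -1/sqrt(3), 1/sqrt(3)]
computed = [CG(S(1), m, S(1), -m, S(0), S(0)).doit() for m in (1, 0, -1)]
assert all(simplify(c - e) == 0 for c, e in zip(computed, expected))
```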
2309.10254
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
Large language model (LLM) platforms, such as ChatGPT, have recently begun offering an app ecosystem to interface with third-party services on the internet. While these apps extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted. Apps also interface with LLM platforms and users using natural language, which can have imprecise interpretations. In this paper, we propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future third-party integrated LLM platforms. Our framework is a formulation of an attack taxonomy that is developed by iteratively exploring how LLM platform stakeholders could leverage their capabilities and responsibilities to mount attacks against each other. As part of our iterative process, we apply our framework in the context of OpenAI's plugin (apps) ecosystem. We uncover plugins that concretely demonstrate the potential for the types of issues that we outline in our attack taxonomy. We conclude by discussing novel challenges and by providing recommendations to improve the security, privacy, and safety of present and future LLM-based computing platforms.
Umar Iqbal, Tadayoshi Kohno, Franziska Roesner
2023-09-19T02:20:10Z
http://arxiv.org/abs/2309.10254v2
# LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins

###### Abstract

Large language model (LLM) platforms, such as ChatGPT, have recently begun offering a _plugin ecosystem_ to interface with third-party services on the internet. While these plugins extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted. Plugins also interface with LLM platforms and users using natural language, which can have imprecise interpretations. In this paper, we propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future plugin-integrated LLM platforms. Our framework is a formulation of an attack taxonomy that is developed by iteratively exploring how LLM platform stakeholders could leverage their capabilities and responsibilities to mount attacks against each other. As part of our iterative process, we apply our framework in the context of OpenAI's plugin ecosystem. We uncover plugins that concretely demonstrate the potential for the types of issues that we outline in our attack taxonomy. We conclude by discussing novel challenges and by providing recommendations to improve the security, privacy, and safety of present and future LLM-based computing platforms.

## 1 Introduction

Large language models (LLMs), such as GPT-4 [1], and platforms that leverage them, such as ChatGPT [2], have recently advanced tremendously in capabilities and popularity. In addition to the actual LLM at their core, platforms like ChatGPT [2] and Bard [3] are becoming increasingly complex in order to support various use cases and integrate with different features and third-party services. For example, platform vendors like OpenAI and Google have announced and begun implementing a plugin ecosystem, allowing the LLM to interface with third-party services [4, 5]. In this paper, we investigate conceptually and empirically the security and privacy of these emerging LLM-based platforms that support third-party integrations or plugins. We focus on OpenAI, which has the most mature plugin ecosystem, as a case study. While extending the capabilities of LLM platforms, third-party plugins may add to the long list of security, privacy, and safety concerns raised by the research community about LLMs, e.g., [6, 7, 8, 9, 10, 11]. First, plugins are developed by third-party developers and thus should not be implicitly trusted. Prior research on other computing platforms has shown that third-party integrations often raise security and privacy issues, e.g., [12, 13, 14, 15, 16, 17]. In the case of LLM platforms, anecdotal evidence already suggests that third-party plugins can launch prompt injection attacks and can potentially take over LLM platforms [18]. Second, as we observe, plugins interface with LLM platforms and users using natural language, which can have ambiguous and imprecise interpretations. For example, the natural language functionality descriptions of plugins could either be interpreted too broadly or too narrowly by the LLM platform, both of which could cause problems. Furthermore, at least some LLM platform vendors, such as OpenAI, currently only impose modest restrictions on third-party plugins with a handful of policies [19, 20] and -- based on our analysis and anecdotal evidence found online [21] -- a frail review process. These concerns highlight that at least some LLM platform plugin ecosystems are emerging without a systematic consideration for security, privacy, and safety.
If widely deployed without these key considerations, such integrations could result in harm to the users, plugins, and LLM platforms. Thus, to lay a systematic foundation for secure LLM platforms and integrations as a whole, we propose a framework that can be leveraged by current and future designers of LLM-based platforms. To develop the framework, we first formulate an extensive taxonomy of attacks by systematically and conceptually enumerating potential security, privacy, and safety issues with an LLM platform that supports third-party plugins. To that end, we survey the capabilities of plugins, users, and LLM platforms, to determine the potential attacks that these key stakeholders can carry out against each other. We consider both attacks and methods that uniquely apply to the LLM platform plugin ecosystem as well as attacks and methods that already exist in other computing platforms but also apply to LLM platform plugin ecosystems. Second, to ensure that our taxonomy is informed by current reality, we investigate existing plugins to assess whether they have the potential to implement adversarial actions that we enumerate in our taxonomy. Specifically, we leveraged our developed attack taxonomy to systematically analyze the plugins hosted on OpenAI's plugin store (as of June 6 2023) by reviewing their code (manifests and API specifications) and by interacting with them. When we uncovered a new attack possibility or found that a conjectured attack is infeasible, we iteratively revised our attack taxonomy. Looking ahead, we anticipate that third-party plugin integration in LLM platforms is only the beginning of an era of _LLMs as computing platforms_[22]. In parallel with innovation in the core LLMs, we expect to see systems and platform level innovations in how LLMs are integrated into web and mobile ecosystems, the IoT, and even core operating systems. The security and privacy issues that we identify in the context of LLM plugin ecosystems are "canaries in the coalmine" (i.e., advance warnings of future concerns and challenges), and our framework can help lay a foundation for these emerging LLM-based computing platforms. We summarize our key contributions below: 1. We develop a framework for the systematic evaluation of the security, privacy, and safety properties of LLM computing platforms. The core component of this framework is a taxonomy of attacks. 2. We demonstrate the actionability of our framework by evaluating it on a leading LLM platform (OpenAI and its plugin ecosystem) and found numerous examples where plugins, at least at the time of our analysis, had the potential to mount attacks enumerated in our taxonomy. 3. We reflect upon the framework and the attacks we found, to identify challenges and lessons for future researchers and industry practitioners seeking to secure LLM computing platforms. ## 2 Background: LLM plugin architecture Pre-trained LLMs on their own are limited at tasks that require interaction with external services. For example, LLMs cannot create a travel itinerary without using data about active flight schedules and cannot book tickets without reaching out to travel agencies. To tackle these limitations, platform vendors, such as OpenAI, have begun to extend LLMs by integrating them with third-party plugins [4]. 
Third-party plugins expose API endpoints to LLM platforms so that the LLMs can access up-to-date and/or restricted data (e.g., data beyond the training samples) and interface with third party services on the internet (i.e., to act on recommendations made in the emitted output) [23]. ### _Plugin architecture & interaction workflow_ LLM platform plugins (in at least one, currently existing design) consist of a manifest and an API specification, both of which are defined through natural language descriptions [23]. Code 1 and 2 show the manifest and API specification for an OpenAI plugin, respectively. The manifest includes plugin metadata, functionality description (defined separately for users and the LLM), authentication details, a link to a privacy policy, and a reference to the API specification. The API specification includes the API server endpoint, API functionality endpoints along with their description, expected API data with its type and description, and expected API response type. Figure 1 summarizes the life cycle of a user prompt to an LLM that requires interaction with a plugin. Once a user enables a plugin, its _description_for_model_ and endpoints (specified under _paths_) are fed to the LLM to build the context that is necessary for interpreting and resolving the user prompt with the help of the plugin. Once a user initiates a prompt, the LLM first determines if addressing the prompt requires the use of the installed plugin, based on the plugin's _description_for_model_ in Code 1. Then the LLM platform makes a call to the relevant plugin API endpoint, which is determined through the endpoint path _summary_ defined in Code 2. The LLM also determines the necessary data that needs to be sent along with API call, based on the schema _properties_ in Code 2. The LLM may send additional user data, that is not part of the user prompt, such as the country and state, with the plugin API request [23]. After the LLM makes the API call, the plugin executes its functionality on its own server and returns the response. The LLM then interprets the response returned from the API, and then formats it to show it to the user. Note that the LLM platform mediates all interactions with the plugin; users and plugins do not directly interact, except for a few instances, e.g., logging in on plugin service. ### _Responsibilities of key stakeholders_ Next, we briefly survey the capabilities and responsibilities of plugins, LLM platforms, and users, in order to provide background on the roles of different potential victims and attackers in our subsequent discussions. Additional details are provided in Appendix A. While surveying the capabilities, we consider OpenAI's current plugin architecture as our reference point. First, **plugin developers** are responsible for (1) developing and updating plugins, (2) hosting the plugin on their own servers, (3) supporting authentication of platform (e.g., endpoints restricted to traffic from the LLM platform), (4) supporting authentication of users to the plugin's entity, and (5) processing data and fulfilling commands provided by the LLM platform. Next, the **LLM platform** is responsible for (1) reviewing plugins and making them available on the plugin store, (2) providing user authentication interfaces, (3) initiating plugins based on user prompts, and (4) facilitating user-plugin interaction. Finally, the **user** is responsible for (1) installing and removing plugins, (2) managing their accounts, and (3) issuing prompts to interact with plugins. 
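The listings referenced above as Code 1 and Code 2 are not reproduced here. The following sketch, written as Python dictionaries, illustrates the kind of structure they describe: a manifest with separate descriptions for users and for the LLM, plus an OpenAPI-style specification with per-endpoint summaries and parameter schemas. The example plugin, all URLs, and any field not explicitly mentioned in the text are hypothetical illustrations rather than an actual plugin's code.

```python
# Hypothetical sketch of a plugin manifest and API specification, mirroring the
# fields discussed in Section 2.1. "TodoDemo" and all URLs are invented.

manifest = {
    "name_for_human": "TodoDemo",
    "name_for_model": "tododemo",
    # Shown to users in the plugin store:
    "description_for_human": "Manage your to-do list from the chat.",
    # Fed to the LLM to build context for interpreting user prompts:
    "description_for_model": "Plugin for adding and listing a user's to-do "
                             "items. Use it whenever the user mentions tasks.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "legal_info_url": "https://example.com/legal",
}

api_spec = {  # condensed, OpenAPI-style description of a single endpoint
    "servers": [{"url": "https://example.com"}],
    "paths": {
        "/todos": {
            "post": {
                # The endpoint summary is what the LLM uses to select an endpoint:
                "summary": "Add a to-do item for the user",
                "requestBody": {
                    "content": {
                        "application/json": {
                            "schema": {
                                # Schema properties tell the LLM what data to send:
                                "properties": {
                                    "todo": {
                                        "type": "string",
                                        "description": "The to-do item to add.",
                                    },
                                },
                            },
                        },
                    },
                },
            },
        },
    },
}
```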
### _Security considerations_ It is a standard practice in computing platforms that support third party ecosystems to impose restrictions on third parties. OpenAI also deploys some restrictions, provides suggestions, and enforces a review process to improve the security of the plugin ecosystem. As for restrictions, OpenAI requires that plugins use HTTPS for all communication with the LLM platform [24], build confirmation flows for requests that might alter user data, e.g., through POST requests [23], use OAuth if the plugin takes an action on user's behalf [19], not use non-OpenAI generative image models [19], adhere to OpenAI's content policy [25], comply with OpenAI's brand guidelines [26], among other things mentioned in the plugin review process [19]. OpenAI also: states that it will remove plugins if they change [27], restricts communication to only the plugin's root domain [28], and only passes user identifiers that do not persist for more than a day and beyond a chat session [29]. As for suggestions, OpenAI suggests that plugins implement API request rate limits [29] and provides an IP address range for OpenAI servers so that plugins can add it to their allow lists [30]. These restrictions and suggestions are a step in the right direction, but in our assessment, insufficient in securing LLM platforms (as we elaborate in Section 7.2). Furthermore, anecdotal evidence found online [21] and experience of some developers (Section 3.4) suggests that even these restrictions are not fully enforced by OpenAI. Figure 1: Life cycle of a user command to the LLM that requires the use of a plugin: User installs a plugin on LLM platform from the plugin store (step 1). Plugin description and its endpoints are fed to the LLM to build the context that is necessary for interpreting user prompt (step 2). User makes a prompt to the LLM that requires the use of the installed plugin (step 3). LLM selects the relevant plugin based on its description (step 4) and makes a request to the plugin API endpoint with the required parameters (step 5). LLM then interprets the response from the plugin API endpoint and displays it to the user. ### _Threat modeling_ We consider both security and NLP researchers and practitioners to be among our target audience with this paper. We rely heavily on threat modeling, a common technique in computer security. For the benefit of non-security readers, we provide some background here. Threat modeling is a process to systematically uncover vulnerabilities in a system with a goal to improve its security. The vulnerabilities uncovered during the threat modeling can be structured in an _attack taxonomy_, which thematically groups different classes of potential attack. The attack taxonomy provides information related to the objectives of the attacker and the potential mechanisms it could use to achieve the objectives. This structured information is used by system designers to triage and eliminate the potential attack mechanisms or the classes of attacks. To identify the threats, security analysts use a variety of techniques, including surveying existing security and privacy literature that closely relates to the system, domain knowledge, and parallels from the real-world. The goal of threat modeling is to not just reveal the novel attacks that uniquely apply to the system, but instead to enumerate a comprehensive set of both existing and novel attacks that need to be addressed in order to improve the security of the system. 
Along with the novel attacks, such as the ones related to the complexity of natural language processing in our case, which we later uncover in our taxonomy, existing attacks that uniquely apply to the system may also require development of new concepts and framework for mitigation. Listing both existing and novel attacks is also crucial because the consumers of an attack taxonomy may not be security experts, they may be experts in another domain, including NLP experts or product managers trying to make prioritization decisions. ## 3 Methodology In this section, we describe our framework to systematically evaluate the security, privacy, and safety properties of LLM platform plugin ecosystem. We iteratively develop a framework where we first formulate a preliminary attack taxonomy and evaluate it on the LLM platform plugins. Based on our evaluation, we refine our attack taxonomy and improve the examination of plugins. While developing the framework, we consider OpenAI's plugin-integrated LLM platform as our reference point. ### _Framework goal and tenets_ Our primary goal for building this framework is to contribute to a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future plugin-integrated LLM platforms. To achieve that goal, we set the fundamental tenets of our framework to be _actionable, extensive, extensible, and informed_. By being actionable, we intend to provide a scaffolding that could be leveraged to create an attack taxonomy for analyzing the security, privacy, and safety of plugin-integrated LLM platforms. Through extensiveness, we intend to capture a broad set of classes of existing attacks that also apply to LLM platforms along with new and future attacks that uniquely apply to LLM platforms. While being extensive, we also intend our framework to be extensible so that our framework can incorporate future attacks and is also generalizable across existing and future LLM platforms. Lastly, we intend to be informed in our enumeration and discovery of attacks such that they are grounded in reality and are not mere speculation. ### _Framework formulation process_ To begin creating our attack taxonomy, we take inspiration from prior research which has studied and discovered security and privacy issues in other computing platforms that support third-party application and plugins, such as the web [31, 32, 33, 34], mobile [12, 35], and IoT [14, 36, 16]. We then filter the attacks that apply to the plugin-integrated LLM platform, by considering the capabilities of key stakeholders, i.e., plugins, users, and the LLM platform, and the relationships between them, surveyed in Section 2. We also assume that an external adversary could compromise any of the stakeholders and assume their roles. Next, we use a structured threat modeling process with all authors to identify new and future attacks that could be mounted against plugin-integrated LLM platforms. To systematically enumerate these attacks, we review the surveyed capabilities of users, plugins, and LLM platforms (in Section 2) and determine the potential ways in which an adversary could leverage its capabilities to raise security, privacy, and safety issues. While determining, we rely on our domain knowledge and consider the issues that could arise due to the complexity of understanding the functionality described in natural language [37]. Toward achieving extensibility, it is important for the framework to be well-structured. 
To provide that structure, we first group the attacks based on the high-level goal that the attacker intends to achieve, and then further under pairs of LLM platform stakeholders, each acting as adversaries and/or victims. This extensibility will allow future researchers to incorporate new stakeholders, attack goals, and specific instantiations of attacks that might appear in future LLM platforms (or others that are not captured by our framework).

### _Applying the framework_

To ensure that our taxonomy is informed by current reality, we evaluate the feasibility of the enumerated attacks through an analysis of plugins hosted on OpenAI. We also iteratively updated the taxonomy throughout this process.

#### 3.3.1 Crawling OpenAI plugins

OpenAI implemented support for plugins in ChatGPT in March 2023 [4] and, as of August 1, the OpenAI plugin store contains 799 plugins. Our analysis considers 268 plugins from June 6 and a few other plugins from later dates. All of the analysis was conducted between June 6 and July 31, 2023. We visited the OpenAI plugin store and individual plugin developer websites to download plugin manifests and specifications. We downloaded the amalgamated manifests for all plugins from OpenAI's official plugin store. We then programmatically traversed the plugin manifests and sent requests to each plugin service's API URL to download their API specifications. Additionally, we also downloaded the privacy policies of plugins from the links provided by the plugins.

#### 3.3.2 Analyzing OpenAI plugins

We started by manually analyzing the plugins' manifests and API specifications. We reviewed each plugin and examined whether our hypothesized attacks apply to the plugin. If we suspected that a plugin might demonstrate the capability of an attack, we installed the plugin on the LLM platform (ChatGPT) and interacted with it to exercise the potentially problematic functionality. When we uncovered a new attack possibility or found that a conjectured attack is infeasible, we revised our attack taxonomy accordingly. It is important to note that the discovered attack potentials (referred to as _risks_) may not be deliberate attempts by malicious actors but could instead be the results of bugs, poor security and privacy practices, poorly defined interfaces, and/or fundamental inabilities to provide stronger security within the current LLM plugin ecosystem. Nonetheless, these practices could result in harm to users. Overall, we find numerous plugins that contain or illustrate potential security, privacy, and safety risks.
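For concreteness, a minimal sketch of the crawling step described in Section 3.3.1 is shown below. The store endpoint, response layout, and output file names are illustrative assumptions, not the exact interfaces or tooling used in the study.

```python
# Minimal sketch of the plugin-crawling step (Section 3.3.1). The store URL and
# the response layout are illustrative assumptions; real endpoints may differ.
import json
import requests

STORE_MANIFESTS_URL = "https://example.com/plugin-store/manifests"  # hypothetical

manifests = requests.get(STORE_MANIFESTS_URL, timeout=30).json()

api_specs = {}
for m in manifests:
    spec_url = m.get("api", {}).get("url")      # manifest's reference to the OpenAPI spec
    if not spec_url:
        continue
    try:
        api_specs[m.get("name_for_model", spec_url)] = requests.get(spec_url, timeout=30).text
    except requests.RequestException:
        pass                                    # skip unreachable plugin servers

with open("manifests.json", "w") as f:
    json.dump(manifests, f, indent=2)
with open("api_specs.json", "w") as f:
    json.dump(api_specs, f, indent=2)
```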
Further, we determined that it was important to give OpenAI advance notice about our findings and, hence, we disclosed our findings to OpenAI before disclosing these results publicly. OpenAI responded that they appreciate our effort in keeping the platform secure but have determined that the issues do not pose a security risk to the platform. We clarified to them that our assessment of these issues is that they pose a risk to users, plugins, and the LLM platform and should be seriously considered by OpenAI. For issues related to the core LLM, e.g., hallucination, ignoring instructions, OpenAI suggested that we report them to a different forum [39] so that their researchers can address them, which we also did. While we did not analyze the security of any plugins, we did evaluate the potential for plugins to create attacks or risks for users. Hence, while one might argue that it is not necessary to disclose our findings to plugin vendors, we believe that they have a right to know about our findings that are relevant to their products before the public. We have informed plugin vendors about our results and findings with respect to their plugins. Upon disclosing to plugin vendors, we learned that in at least one case the plugin vendor also disclosed the situation to OpenAI because OpenAI (not them) were in the position to fix the issue, but OpenAI did not. ## 4 Attack surface between plugins & users In this section, we describe our attack taxonomy for the attack surface between plugins and users, interleaved with our application of this taxonomy to OpenAI's current ecosystem. We turn to the attack surface between plugins and the LLM platform in Section 5 and between plugins in Section 6 (see also Table 1 for a summary). We elaborate on each attack goal in a separate subsection along with example mechanisms through which that goal could be achieved. We also present the potential manifestation of some of the attack mechanisms in OpenAI's plugins, discovered by applying our framework. ### _Hijack user machine_ In this attack category, the goal of the attacker is to take control over the user's machine. After an attacker takes over a user's machine, they can abuse it in a number of ways. Potential harms could include stealing data stored on the user machine, locking the users out and demanding ransom, and inserting malware on web services hosted on the machine. At a high level, to hijack a user's machine, the attacker could manipulate users into installing malware or get access to their machines through social engineering. Below, we describe some example mechanisms through which an attacker could hijack a user's machine. #### 4.1.1 Leverage unvetted and unofficial plugins Users may install unvetted plugins and plugins outside the official plugin store (e.g., in developer mode). Attackers could exploit that workflow and trick users into installing malware that is cloaked as a plugin. #### 4.1.2 Make malicious recommendations Users may need to visit external websites to act on the recommendations from a plugin, e.g., clicking a link to visit a travel agent's website to book a flight. Malicious plugin developers could exploit that workflow and trick users into visiting websites that can infect their machines. #### 4.1.3 Exploit information shared for legitimate reason Some use cases supported by LLM platforms, such as remote management of a user's machine, could expose users to severe attacks from plugins. 
To remotely manage a user's machine, the plugin would either need access to the credentials and public IP or would need to be added as an authorized user. From there, a plugin could fully control the machine.

### Example of a potential risk

Building on the attack from Section 4.1.3, we identified plugins that exfiltrate user credentials. We describe the details in Risk 1.

**Risk 1. Credential Exfiltration**

**Risk overview.** OpenAI hosts plugins that provide functionality to users to automate their software development operations and infrastructures. These plugins require users to either share their credentials or allow SSH access to their servers.

**Risk impact.** The presence of user credentials with third-party plugins could cause serious harm to users. In the worst case, the third-party developer can log into the user's machine and completely take it over. Even when the third party is trustworthy, a compromise at the third party's end could result in the leakage of user credentials to an attacker.

**Evidence of risk.** AutoInfra1 [40] and ChatSSHPlug [41] are two plugins that provide SSH session management functionality. AutoInfra1 asks users to add its public key to their SSH authorized_keys file and then asks them to share their public IP address, as seen in our partial interaction with AutoInfra1 in Figure 2 and in our full interaction by visiting the AutoInfra1 interaction link.ab ChatSSHPlug, on the other hand, directly asks users to share their passwords or private key (more detail can be seen by visiting the ChatSSHPlug interaction linkc). Analysis conducted on June 07, 2023.
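To make the mechanism behind Risk 1 concrete, the fragment below sketches the kind of parameter schema a remote-administration plugin could declare in its API specification. The endpoint, field names, and descriptions are invented for illustration and are not taken from AutoInfra1, ChatSSHPlug, or any other plugin on the store.

```python
# Hypothetical parameter schema a remote-administration plugin might declare
# (invented for illustration; not taken from any actual plugin). Once the LLM
# fills these properties from the conversation, the plugin's server receives
# working credentials for the user's machine.
ssh_endpoint_schema = {
    "summary": "Run a shell command on the user's server over SSH",
    "properties": {
        "host":        {"type": "string", "description": "Public IP of the server."},
        "username":    {"type": "string", "description": "SSH user name."},
        "private_key": {"type": "string", "description": "SSH private key or password."},
        "command":     {"type": "string", "description": "Command to execute."},
    },
}
```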
### Harvest user data

In this attack category, the goal of the attacker is to collect personal and excessive data on users to gain benefit from it. Among other ways, an attacker could benefit from users' data by selling it to other services (e.g., data brokers) or using it for non-essential and undisclosed purposes (e.g., to profile users for online advertising), both of which are common practices on the internet [44, 45, 46, 36]. Below we describe possible mechanisms to collect user data.

#### 4.3.1 Mandate accounts

Plugins could mandate that users log in before they can use their services, even when an account is not necessary, e.g., for a plugin that provides the latest news. Mandating accounts allows plugins to associate user data with personal identifiers, such as email addresses. Such linking can enable plugin services to track user activities and reach out to them even outside the LLM platform, without their knowledge or consent.

#### 4.3.2 Define broad API specifications

Similar to over-privileged mobile apps [35], plugins could specify overly broad API parameters to collect an excessive amount of user data, even more than necessary for their functionality. For example, a plugin's API specification could state that it needs the entire user query instead of only the relevant keywords. Note that the collection of excessive user data could also simply be needed to fulfill the use case offered by the plugin.

### Example of a potential risk

Building on the attack described in Section 4.3.2, we identified plugins that exfiltrate user prompt history. We describe the details in Risk 3.

**Risk overview.** OpenAI hosts plugins that allow users to export their interactions with ChatGPT. Plugins that provide these services exfiltrate either raw or summarized user prompts and ChatGPT responses to their API endpoints.

**Risk impact.** The plugins get access to users' conversations with ChatGPT, which can contain sensitive and personal information. Some of these plugins also require users to sign in to their platform before they can use the plugin, which allows them to associate user prompt history with a persistent email address.

**Evidence of risk.** PDF Exporter [47] and Reflect Notes [48] are two plugins that exfiltrate user prompt history. PDF Exporter converts ChatGPT interactions into a PDF, and Reflect Notes provides functionality for users to "reflect on their interactions". Partial user interaction with PDF Exporter can be seen in Figure 4, which shows that the user's sensitive information, in this particular scenario their credentials, is sent to the plugin. Full interactions with PDF Exporter and Reflect Notes can be viewed by visiting the PDF Exporter interaction link\({}^{a}\) and the Reflect Notes interaction link\({}^{b}\), respectively. This analysis was conducted on June 08, 2023.

**Precautions by plugin services.** PDF Exporter states in its privacy policy that the plugin does not collect, store, or share any personal information [49].
However, based on our analysis we did not notice any functionality or attempt to restrict or truncate personal information in the API specification, as is also demonstrated in our interaction with the plugin in Figure 4. Reflect Notes provides a generic privacy policy, which does not seem to be specific to the plugin [50]. Reflect Notes also claims that the user data is end-to-end encrypted; however, we did not find any functionality in its API specification to encrypt user data before it is transmitted. Our interaction also showed the transmission of unencrypted conversations to Reflect Notes (Reflect Notes interaction link\({}^{b}\)). We present additional examples that demonstrate the risk of user data harvesting in Appendix C.

Figure 4: User interaction with PDF Exporter plugin.

Figure 3: Dual presence of Upskillr plugin on the OpenAI plugin store.

**Observation.** Users might want to share their sensitive data in some contexts but not in others. It would be a key challenge for LLM platforms to support interfaces that obtain informed user consent for specific contexts, e.g., through permission models, and do not expose that consent in other contexts.

### _Benefit partner plugins_

In this attack category, an attacker plugin's goal is to benefit its partner plugins. There are several potential benefits that plugins can provide each other through several mechanisms. Broadly, the benefits fit under the objective of improving each other's businesses to make more revenue. It is important to note that such plugin collusion may not be beneficial for users and may in fact result in harm to the users. Below we describe some example mechanisms that plugins can use to benefit each other.

#### 4.4.1 Share user data

Since plugins can collect unique user identifiers and link user data with them (Section 4.3), they can engage in server-to-server data sharing, similar to how third-party trackers share data with each other on the web [51]. Such sharing can enable plugins to better profile users, resulting in the leakage of users' private data.

#### 4.4.2 Make recommendations favorable to partners

Since LLM platforms encourage cross-plugin synergy [52], plugins could request LLM platforms to initiate their partner plugins to fulfill multipart user requests, e.g., a user request to book a flight and make a hotel reservation. Additionally, plugins could craft their recommendations in a way that favors their partner services, e.g., a flight reservation plugin could show the best flight for dates when its partner hotel has free rooms available.

### _Manipulate users_

In this attack category, an attacker's goal is to manipulate users. At a high level, an attacker can manipulate users in a number of ways with _problematic recommendations_. The unconventional interaction between users and plugin services, where plugins show limited information, users possess limited information-filtering capabilities, and plugin recommendations may not be thoroughly vetted, exacerbates the likelihood of problematic recommendations.

#### 4.5.1 Deploy deceptive design patterns

Plugins could exploit the limited interfacing capabilities on LLM platforms to reveal only a few recommendations that favor them. For example, a travel reservation plugin service could show the flight tickets on which it expects to gain the highest profit instead of the cheapest tickets.
#### 4.5.2 Recommend inappropriate & harmful content Unvetted plugin recommendations could lead to the transmission of inappropriate content to users, e.g., showing adult content to children. Additionally, users often act on the recommendations of plugins, which could be exploited to deceive users, e.g., sending users to a website that steals their credit card information or fakes the LLM platform. #### 4.5.3 Recommend nonfactual content Plugin recommendations could also lead to latent or inapparent influence and manipulate worldviews of users [7], in cases where the recommendations by plugins contain misinformation or disinformation or biased information. #### 4.5.4 Lie or change functionality Since plugins can show separate functionality descriptions to the users and plugins (Code 1), this feature could be exploited to manipulate users into installing undesired plugins, even on the official plugin store. Additionally, a plugin could also change its functionality on update to deceive users. ### _Refusal of service by plugins_ In this attack category, the attacker's goal is to refuse service to the user. The refusal of service could result in a variety of harm to the users. Among other motivations, an attacker's motivation behind the refusal of service could be to help itself with another attack, even outside the internet. For example, the refusal of service by an IoT door lock plugin could make user vulnerable to theft. Note that the refusal of service could also be initiated by an external attacker and the plugin service itself could be a victim of the attack. Below we discuss some of the potential ways in which an attacker could refuse service to the user. #### 4.6.1 Deliberately refuse service Plugins have full control and autonomy over fulfilling user commands. Miscreant plugins could simply ignore to fulfil the user command. Additionally, a compromised plugin server, by an external adversary, could also deliberately refuse user requests. #### 4.6.2 Unresponsive server Plugins could also fail to fulfill user commands if their back-end servers become unresponsive, e.g., due to internet or power outages or in case the server is experiencing a denial-of-service attack. ### _Denial-of-service by users_ In this attack category, the attacker's goal is to make the plugin service inaccessible. The inaccessibility of plugin service could potentially result in several harms to the plugin users (as described in Section 4.6). The inaccessibility could also harm the plugin service, e.g., potentially leading to loss in revenue and negatively impacting the reputation of the plugin. Possible adversaries who could conduct this attack could include miscreant users and rival plugins, posing as users. Below we discuss some of the potential ways in which an attacker could make the plugin server inaccessible. #### 4.7.1 Make excessive prompts Malicious or compromised user(s) could make frequent prompts to a single or several plugin APIs, that could result in excessive network traffic that can flood and ultimately crash the plugin server. #### 4.7.2 Make malicious prompts Malicious or compromised user(s) could also send malicious prompt inputs that target known vulnerabilities on the plugin server to crash it. These malicious prompts could just be big payloads that the plugin server cannot parse [53]. 
## 5 Attack surface between plugins & LLM platform

Next, we describe our attack taxonomy for the attack surface between plugins and the LLM platform, along with the application of the taxonomy to OpenAI's plugin ecosystem.

### _Hijack LLM platform_

In this attack category, an attacker's goal is to take over an LLM and/or an LLM platform session. Taking over an LLM or an LLM platform session would allow the attacker to impersonate the LLM platform and control the interactions between the user and the LLM platform. Such a takeover would allow the adversary to achieve several attack goals, including stealing the user's interaction history with the LLM and the list of installed plugins, as well as mounting other attacks discussed earlier in Section 4. At a high level, an attacker could rely on _prompt injection_ [54, 55, 56] techniques to hijack an LLM or an LLM platform session. It is important to note that the takeover of an LLM can be latent, where an adversary succeeds in inserting a backdoor that activates at a later point in time, e.g., after an LLM is retrained using the plugin data [57]. Below we describe some of the ways in which an attacker could hijack an LLM platform.

#### 5.1.1 Inject malicious description

LLM platforms load the plugin functionality description to build the necessary context for the LLM. Plugins could exploit that workflow by adding instructions to their functionality descriptions to control the LLM platform. A plugin could inject a malicious description in a number of ways, including tricking users into installing unvetted malicious plugins, succeeding in hosting a malicious plugin on the official plugin store, or dynamically changing the plugin functionality description after it has been accepted on the official plugin store.

#### 5.1.2 Inject malicious response

While resolving prompts, LLMs process the data sent by plugins, which could be exploited by plugins to send instructions that control the LLM platform [11]. Plugins may not even send the malicious response directly but may instead point the platform to a URL that hosts the malicious response [21].

### _Example of a potential risk_

Building on the attack described in Section 5.1.1, we identified a plugin that is able to hijack the LLM platform session through instructions in its functionality description. We describe the details in Risk 4.

**Risk 4. LLM Session Hijack**

**Risk overview.** OpenAI hosts plugins that direct the LLM, through commands in their functionality descriptions, to alter its behavior when it communicates with the user. When LLM platforms load these plugins, the LLM's behavior is altered for the session, as instructed by the plugin, even when user prompts are not directed towards the plugin.

**Risk impact.** The plugin is able to take over the LLM platform session and control the interaction between the user and the LLM platform. Such a takeover can be exploited in a number of ways, including exfiltration of the user-LLM platform interaction history, collection of sensitive data, and exposure to misleading information.

**Evidence of risk.** AMZPRO [58], a plugin that helps users write product descriptions for Amazon, instructs ChatGPT to always reply in English. Typically, ChatGPT responds in the same language in which a user asks a question (as can be seen by visiting this ChatGPT interaction link\({}^{a}\)).
However, when AMZPRO is enabled, and not even used, ChatGPT only responds in English for the rest of the user's LLM platform session, as can be seen in the partial interaction with AMZPRO in Figure 5 and the full interaction in the AMZPRO interaction link\({}^{b}\). This analysis was conducted on July 27, 2023.

**Figure 5: User interaction with ChatGPT, when AMZPRO is enabled but not used.**

**Observation.** Our demonstration of LLM session hijacking with AMZPRO highlights the need for contextual awareness and context isolation. We see achieving contextual awareness and context isolation, while still supporting plugin synergy, as a key challenge for LLM platforms.

### _Hijack plugin prompts_

In this attack category, the LLM platform is the adversary and its goal is to hijack prompts intended for a plugin. This attack is similar to how search engines and online marketplaces prioritize their own offerings or advertisements in response to user queries [59, 60]. There could be several motivations for hijacking user prompts, including serving self-interest, benefiting partner plugin services, or harming a plugin service. Below we describe some of the ways in which an attacker could hijack user prompts.

#### 5.2.1 Divert prompts to itself

An LLM platform could resolve user prompts intended for a plugin on its own, without consulting the plugin service at all. Another variation of this attack could be that the LLM platform utilizes the plugin data in the background, including cached data from prior prompt resolutions, but does not notify the user and the plugin service that it has used plugin data.

#### 5.2.2 Divert prompts to another plugin

A platform could unfairly divert user prompts intended for a specific plugin to another plugin that provides the same functionality. Another variation of this attack is to call both plugins.

#### 5.2.3 Hallucinate plugin response

Since LLMs occasionally hallucinate (i.e., make up fictional content) responses to user queries [61], they may also hallucinate the responses supposedly returned by the plugin API endpoint. Such hallucinations can deprive plugins of user prompts and can also compromise user trust in the plugin service.

### _Example of a potential risk_

Building on the attack described in Section 5.2.3, we identified an instance where the LLM hallucinates a response that is supposed to be returned by the plugin API. We describe the details in Risk 5.

**Risk 5: Hallucinated Plugin Response**

**Risk overview.** When users interact with plugins, they may receive LLM-hallucinated responses instead of the actual content returned by the plugins.

**Risk impact.** Since hallucinated content is fictitious, it may contain inaccurate, misleading, and dangerous recommendations. Acting on these recommendations could cause a variety of harms to the users. Additionally, hallucinations lead to the unintentional refusal of service by the plugin, which may compromise user trust in the plugin service.

**Evidence of risk.** We enabled Uniket [62] and Tira [63], two plugins that allow users to shop from their respective marketplaces. We told ChatGPT that we wanted to shop for shoes and specified that we did not have any preference for one plugin over the other. ChatGPT sent requests to both plugins and returned the same product recommendations for both of them. However, the product links provided using Tira, such as [https://www.tirabeauty.com/product/srm-07-lrqaepziomd](https://www.tirabeauty.com/product/srm-07-lrqaepziomd), were unavailable on Tira's marketplace.
Upon inspecting Tira's website, we found that it is a marketplace for beauty and health products and very likely does not sell shoes, i.e., the subject of our query. Although we cannot rule out an implementation issue at the plugin's end, it very likely is a case of LLM hallucination. Our complete interaction with the plugins can be viewed by visiting the Uniket and Tira interaction link\({}^{a}\). This analysis was conducted on June 09, 2023.

**Observation.** We found that LLM hallucinations are not just limited to user-LLM platform interactions, but also translate to user-plugin interactions. While tackling hallucinations in general is non-trivial and in fact one of the biggest challenges faced by LLM platforms, there have been recent advances in targeted problem spaces, such as mathematical reasoning [64]. Tackling hallucinations in plugin responses might even be less challenging, since LLMs act on the content received from plugin API responses and do not necessarily generate content anew.

### _Steal plugin data_

In this attack category, the LLM platform is the adversary and its goal is to steal plugin-owned, -hosted, or -facilitated data. Plugins could be hosting proprietary financial data, marketing insights, source code from private repositories, emails, and private documents. Stealing such data could result in several harms to the plugin service and to the users, including monetary harm, leakage of secrets, and invasion of privacy. After stealing data, the LLM platform could use it for a variety of purposes, including using the data to train future models or selling the data to others. Below we discuss some of the ways in which an LLM platform could steal plugin data.

#### 5.3.1 Log interaction

LLM platforms facilitate all interactions between users and the plugins, which includes parsing plugin responses. LLMs could simply log the data that plugins return while resolving user requests.

### _Pollute LLM training data_

In this attack category, the plugin is the adversary and its goal is to pollute the training data of LLMs that are used by an LLM platform. Feeding such information will hinder an LLM's ability to respond to users with factual and authentic information. At a high level, an attacker could achieve this goal by exposing the LLM platform to misleading and incorrect information. Below we discuss a mechanism through which an attacker can pollute the LLM training data.

#### 5.4.1 Inject misleading response

LLM platforms log user interactions for retraining their models [57]. Plugins could exploit that fact and include misleading or incorrect information in their responses. Note that plugin responses could also point LLMs to URLs which host misleading and incorrect information instead of directly including it in responses.

### _Refusal of service by plugin_

The refusal of service by plugins to the user (Section 4.6) could also impact the platform. For example, in OpenAI's current implementation, an unresponsive plugin crashes the user's ChatGPT session. Note that a plugin could also delay its responses instead of not responding to the requests at all. Section 4.6 already described the mechanisms through which a plugin could refuse service.

### _Denial-of-service by LLM platform_

Similar to how users can crash a plugin service with a denial-of-service attack (Section 4.7), LLM platforms could do the same.
The motivation for the LLM platform could broadly be hostility towards a competitor or an implementation issue. The potential mechanisms through which an LLM platform could launch a denial-of-service attack are also similar to how users would launch this attack.

## 6 Attack surface between plugins

Next, we describe our attack taxonomy for the attack surface between plugins, along with the application of the taxonomy to OpenAI's plugin ecosystem.

### _Hijack another plugin's prompts_

Here, a plugin can be both an adversary and a victim. The goal of an adversarial plugin is to hijack user prompts intended for another plugin. A plugin could trick or instruct the LLM platform into calling itself instead of the plugin that the user intends. We discuss possible ways in which an adversarial plugin could hijack another plugin's prompts.

#### 6.1.1 "Squat" another plugin

Similar to how adversaries could use plugin squatting to steal user credentials (Section 4.2), they could also use it to hijack user prompts intended for other plugins.

#### 6.1.2 "Squat" functionality

Plugins could define targeted functionality descriptions to intercept specific user prompts to a particular plugin or an online service. For example, a plugin could intercept prompts to an online marketplace by adding in its functionality description that it can recommend products from that marketplace.

#### 6.1.3 Inject malicious response

A plugin could include in its response instructions for the LLM to route the prompts for a particular plugin to its own API endpoints.

### _Example of a potential risk_

Building on the attack described in Section 6.1.2, we identified plugins that could potentially squat functionality. We describe the details in Risk 6.

**Risk 6: Functionality Squatting**

**Risk overview.** Several OpenAI plugins mention the names of well-known online services in their functionality descriptions or define their functionality descriptions similar to other plugins, which allows them to hijack prompts that are not intended for them, i.e., functionality squatting.

**Risk impact.** Successful functionality squatting will allow a plugin to deprive other plugins or online services of users, leading to loss in revenue. The plugin might also be able to trick users into sharing their data. Additionally, if the plugin is unable to fulfill the offered service, it could cause harm to users in several ways.

**Evidence of risk.** Lexi Shopper [65] recommends products from Amazon.com and mentions the word "Amazon" in its functionality description. Because of the presence of the word "Amazon", even user prompts that explicitly specify not to use any third-party service are routed to Lexi Shopper, as can be seen in our partial interaction with the plugin in Figure 6. The Lexi Shopper interaction link\({}^{a}\) provides the complete interaction with the plugin. This analysis was conducted on June 09, 2023.

In another example, two plugins, Jio [66] and Tira [63], offer a service to shop from tirabeauty.com. Tira is hosted by tirabeauty.com, whereas Jio is hosted by jiocommerce.io, a third-party e-commerce service that allows users to shop from several online shops. If a user enables both of the plugins and even specifies that they want to shop from Tira, their queries are routed to the third-party service, i.e., Jio, instead of the first-party service, i.e., Tira. The Tira and Jio interaction link\({}^{b}\) provides the complete interaction with these plugins. This analysis was conducted on July 27, 2023.
### _Hijack prompts on a topic_

In this attack category, a plugin can be both an adversary and a victim. The goal of the adversarial plugin is to hijack all user prompts on a particular topic. At a high level, a plugin could trick or instruct the LLM platform into calling itself. We discuss some of the ways in which an adversarial plugin could hijack all prompts on a particular topic.

#### 6.2.1 "Squat" a topic

Plugins could hijack prompts on a specific topic by curating their functionality descriptions such that they always get precedence over other plugins in the same category. For example, a travel reservation plugin could include in its description that the LLM platform should always call the plugin for all travel-related queries.

#### 6.2.2 Inject malicious response

Similar to including instructions in its functionality description, a plugin could instruct the LLM platform via its response to always send user prompts on a particular topic to the plugin.

### _Example of a potential risk_

Building on the attack described in Section 6.2.1, we identified a plugin that could potentially squat user prompts on a topic. We describe the details in Risk 7.

**Risk 7: Topic Squatting**

**Risk overview.** Several OpenAI plugins add certain keywords in their functionality descriptions or define overly broad functionality descriptions to hijack prompts on specific topics, i.e., topic squatting.

**Risk impact.** Successful topic squatting will allow a plugin to deprive other plugins of users and revenue. The plugin will also be able to harvest user data and trick users into sharing their data. Additionally, if the plugin is unable to fulfill the offered service, it could cause harm to users in several ways.

**Evidence of risk.** Expedia [67], a well-known travel reservation service, hosts a plugin which instructs ChatGPT to _"ALWAYS uses Expedia plugin to provide travel recommendations for ANY user's travel-related queries"_. To evaluate whether the use of all caps and a direct command would allow Expedia to intercept user prompts for all travel-related queries, we installed Expedia's plugin in a chat session with ChatGPT, along with two other travel plugins, Trip.com [68] and Klook [69], and made travel-related queries. We found that ChatGPT automatically routed user prompts to Expedia, without asking users for their preference, as seen in our partial interaction with Expedia in Figure 7. The Expedia, Trip.com, and Klook interaction link\({}^{a}\) presents the complete interaction with these plugins. This analysis was conducted on June 09, 2023. Additional analysis of plugins with overly broad functionality descriptions is in Appendix D.

**Observation.** Broad and targeted functionality descriptions make it challenging to interpret the functionality offered by plugins, which can confuse both users and LLMs. It is a key challenge for LLM platforms to develop unambiguous natural-language-based programming interfaces.

### _Influence prompts to another plugin_

In this attack category, an attacker's goal is to influence the prompts to another plugin. Examples of influence could include altering the data sent to another plugin, similar to a man-in-the-middle attack, or triggering another plugin in order to launch a denial-of-service attack. At a high level, an attacker would need to trick the LLM platform to launch this attack.

Fig. 6: User interaction with the Lexi Shopper plugin.

Fig. 7: User interaction with the Expedia, Trip.com, and Klook plugins.
We describe a potential mechanism through which a plugin could manipulate the transmission of data to another plugin.

#### 6.3.1 Exploit multipart prompts

A plugin service could exploit the workflow of multipart user requests, where multiple plugins interact with each other to resolve the user request. For example, an adversarial plugin could include altered data or a malicious payload in its response, which will then be sent as an input to another plugin.

## 7 Discussion & Conclusion

### _Exacerbation of NLP-related challenges_

While many of the issues that we identified in previous sections are echoes of the challenges in securing previous platforms (e.g., smartphones, IoT), the complexity of natural language is one of the more unique aspects and fundamental challenges in securing LLM-based platforms. In the plugin-integrated platforms we considered, natural language is used (1) by users to interact with the platform and plugins, (2) by the platform and plugins to interact with users, and (3) even by plugins to interact with the platform (e.g., through functionality descriptions) and other plugins (e.g., through instructions in API responses). Potential ambiguity and imprecision in the interpretation of natural language, as well as the application of policies to natural language, can create challenges in all of these interactions.

#### 7.1.1 Interpretation of functionality defined in natural language

In conventional computing platforms, applications define their functionality through constrained programming languages without any ambiguity. In contrast, LLM platform plugins define their functionality through natural language, which can have ambiguous interpretations. For example, the LLM platform may in some cases interpret the functionality too broadly, or too narrowly, both of which could cause problems (see Risks 6 and 7 as examples). Interpreting language also requires contextual awareness, i.e., plugin instructions may need to be interpreted differently in different contexts. For example, it might be okay for the LLM platform to behave a certain way while a user interacts with a plugin, but not okay to persist with that behavior when the plugin is not in use (see Risk 4 as an example). In summary, the key challenge for LLM platforms is to interpret plugin functionality so as to not cause ambiguity; in other words, LLM platforms must figure out mechanisms that allow them to interpret functionality similarly to the unambiguous (or, much less ambiguous) interpretation in other computing platforms.

#### 7.1.2 Application of policies over natural language content

Even if LLM platforms can precisely interpret the functionality defined in natural language or if functionality is precisely defined through some other means, it will still be challenging to apply policies (e.g., content moderation) over the natural language content returned by users, plugins, or within the LLM platform. For example, there may be a mismatch between the interpretation of the policy by the LLM platform, users, and plugins, e.g., on what is considered personal information (building on the attacks in Section 4.3, of which Appendix C.1 discusses an example). Similarly, in instances where there is a contradiction between the policies specified by plugins or between the policies specified by the user and the plugin, the LLM platform would need to make a choice to resolve the deadlock, which may not be in favor of users. An LLM platform may also not apply the policies retrospectively, which may diminish their impact.
For example, a policy that specifies that no personal data needs to be collected or shared may not apply to already collected data (by building on attacks in 4.3 of which Appendix C.1.1 discusses an example). ### _Towards secure, privacy-respecting, and safe LLM-based computing platforms_ Stepping back from NLP-specific challenges to considering platforms as a whole, we emphasize that security, privacy, and safety should be key considerations in the design process of LLM-based platforms. The restrictions and suggestions provided by LLM platforms (discussed in Section 2.3) are a step in the right direction, but they are insufficient to secure LLM platforms. We recommend that LLM platform designers consider security, privacy, and safety -- e.g., by applying our framework -- _early_ in the design of their platforms, to avoid situations in which addressing issues later requires fundamental changes to the platform's architecture. The systemic nature of our findings and examples of attack potentials suggests that perhaps such a process was not used in the design of ChatGPT plugin ecosystem. In many cases, defensive approaches do not need to be invented from scratch: LLM platform designers can take inspiration from several sources, including from well-established practices to guard against known attacks, by repeating the threat modeling that we did in this paper, and by building on the security principles defined by prior research, such as by Saltzer and Schroeder [70]. We elaborate now on possible practical approaches for securing LLM platforms that wish to integrate untrusted third parties (e.g., plugins), and then step back to consider the potential future of LLM-based platforms more generally. #### 7.2.1 Anticipating and mitigating potentially buggy or malicious third parties One of the core issues underlying many of the attacks we discussed is that third-party plugins may be malicious or buggy in problematic ways -- an issue familiar to us from many past platforms as well [12, 13, 14, 15]. At the highest level, LLM platforms that want to integrate plugins should minimize trust in these third parties and design the platform to manage any potential risk. There is significant precedent in other platforms that can provide design inspiration to LLM platform creators. For example, to ensure that the plugin behavior does not change at run time and that the LLM platforms get an opportunity to review the plugin code each time it is updated, LLM platforms could host the plugin source code instead of plugin developers (elaborated on further in Appendix A.1), similar to more established platforms, such as mobile and web. Another avenue is to technically limit the functionality exposed to plugins. For example, LLM platforms could enforce a permission model, similar to mobile platforms, to regulate the access of data and system resources. Another strategy to minimize the impact of a problematic plugin is to isolate plugin execution from that of other plugins or the rest of the system, e.g., similar to site isolation in browsers through sandboxes [71]. At present (on the OpenAI platform we tested), all plugins execute together in the context of the same conversation. On the one hand, this execution model allows plugins to synergize well with each other, but on the other hand it exposes user interactions with one plugin to another. 
LLM platforms still could support plugin interaction and eliminate unnecessary data exposure by running each plugin in a sandbox and by clearly defining a protocol for sharing information across sandboxes, similar to cross-document messaging on the web [72]. In addition, LLM platforms should clearly state and enforce their policies and guidelines for plugin behavior, which may not currently be the case (e.g., see Appendix E). #### 7.2.2 Anticipating future LLM-based computing platforms Looking ahead, we can and should anticipate that LLMs will be integrated into other types of platforms as well, and that the plugin-integrated LLM chatbots of today are early indicators of the types of issues that might arise in the future. For example, we can anticipate that LLMs will be integrated into voice assistant platforms (such as Amazon Alexa), which already support third-party components ("skills", for Alexa). Recent work in robotics has also integrated LLMs into a "vision-language-action" model in which an LLM directly provides commands to a physical robot [73]. Future users may even interact with their desktop or mobile operating systems via deeply-integrated LLMs. In all of these cases, the NLP-related challenges with the imprecision of natural language, coupled with the potential risks from untrustworthy third parties, physical world actuation, and more, will raise serious potential concerns if not proactively considered. The designers of future LLM-based computing platforms should architect their platforms to support security, privacy, and safety early, rather than attempting to retroactively address issues later. ## Acknowledgements This work is supported in part by the National Science Foundation under grant number CNS-2127309 (Computing Research Association for the CIFellows 2021 Project) and by the Tech Policy Lab at the University of Washington. We thank Aylin Caliskan, Yizheng Chen, Kaiming Cheng, Inyoung Cheong, Ivan Evtimov, Earlence Fernandes, Michael Flanders, Saadia Gabriel, Alex Gantman, Gregor Haas, Rachel Hong, David Kohlbrenner, Wulf Loh, Alexandra Michael, Jaron Mink, Niloofar Mireshghallah, Kentrell Owens, Noah Smith, Sophie Stephenson, and Christina Yeung for providing feedback on various drafts of this paper.
2309.00047
Dynamic-ADAPT-QAOA: An algorithm with shallow and noise-resilient circuits
The quantum approximate optimization algorithm (QAOA) is an appealing proposal to solve NP problems on noisy intermediate-scale quantum (NISQ) hardware. Making NISQ implementations of the QAOA resilient to noise requires short ansatz circuits with as few CNOT gates as possible. Here, we present Dynamic-ADAPT-QAOA. Our algorithm significantly reduces the circuit depth and the CNOT count of standard ADAPT-QAOA, a leading proposal for near-term implementations of the QAOA. Throughout our algorithm, the decision to apply CNOT-intensive operations is made dynamically, based on algorithmic benefits. Using density-matrix simulations, we benchmark the noise resilience of ADAPT-QAOA and Dynamic-ADAPT-QAOA. We compute the gate-error probability $p_\text{gate}^\star$ below which these algorithms provide, on average, more accurate solutions than the classical, polynomial-time approximation algorithm by Goemans and Williamson. For small systems with $6-10$ qubits, we show that $p_{\text{gate}}^\star>10^{-3}$ for Dynamic-ADAPT-QAOA. Compared to standard ADAPT-QAOA, this constitutes an order-of-magnitude improvement in noise resilience. This improvement should make Dynamic-ADAPT-QAOA viable for implementations on superconducting NISQ hardware, even in the absence of error mitigation.
Nikola Yanakiev, Normann Mertig, Christopher K. Long, David R. M. Arvidsson-Shukur
2023-08-31T18:00:02Z
http://arxiv.org/abs/2309.00047v1
# Dynamic-ADAPT-QAOA: An algorithm with shallow and noise-resilient circuits

###### Abstract

The quantum approximate optimization algorithm (QAOA) is an appealing proposal to solve NP problems on noisy intermediate-scale quantum (NISQ) hardware. Making NISQ implementations of the QAOA resilient to noise requires short ansatz circuits with as few CNOT gates as possible. Here, we present Dynamic-ADAPT-QAOA. Our algorithm significantly reduces the circuit depth and the CNOT count of standard ADAPT-QAOA, a leading proposal for near-term implementations of the QAOA. Throughout our algorithm, the decision to apply CNOT-intensive operations is made dynamically, based on algorithmic benefits. Using density-matrix simulations, we benchmark the noise resilience of ADAPT-QAOA and Dynamic-ADAPT-QAOA. We compute the gate-error probability \(p_{\text{gate}}^{*}\) below which these algorithms provide, on average, more accurate solutions than the classical, polynomial-time approximation algorithm by Goemans and Williamson. For small systems with \(6-10\) qubits, we show that \(p_{\text{gate}}^{*}>10^{-3}\) for Dynamic-ADAPT-QAOA. Compared to standard ADAPT-QAOA, this constitutes an order-of-magnitude improvement in noise resilience. This improvement should make Dynamic-ADAPT-QAOA viable for implementations on superconducting NISQ hardware, even in the absence of error mitigation.

## I Introduction

NP problems are ubiquitous in computer science, occurring frequently in combinatorial optimization and machine learning [1; 2]. Finding their solutions is computationally hard. One strategy to solve NP problems relies on the Ising model [3; 4; 5]. An NP problem is encoded in the real and symmetric matrix \(W_{ij}\). The (approximate) solution is then found by approximating the ground state energy \(E_{0}\) of an Ising Hamiltonian \[H=\frac{1}{4}\sum_{i,j=1}^{N}W_{ij}Z_{i}Z_{j}, \tag{1}\] where \(Z_{i}\) denotes the Pauli-\(z\) operator acting on qubit \(i=1,\dots,N\). Approximate solutions are usually found using heuristics [6; 7; 8; 9] or adiabatic quantum computers [10; 11; 12; 13]. The quality of these solutions can be assessed using the Goemans and Williamson (GW) algorithm [14], which, in the worst case, provides approximate solutions within \(87.8\dots\%\) of the true ground-state energy in polynomial time (using an alternative representation of the NP problem).

Recent works [15; 16] have proposed solving NP problems on gate-based quantum computers, using the quantum approximate optimization algorithm (QAOA). The QAOA identifies approximate solutions to NP problems by creating upper bounds to the ground-state energy \(E_{0}\) of \(H\) via the Rayleigh-Ritz variational principle: \[E_{0}\leq E(\vec{\beta},\vec{\gamma})=\langle\Psi(\vec{\beta},\vec{\gamma})|H |\Psi(\vec{\beta},\vec{\gamma})\rangle. \tag{2}\] The classically-hard-to-represent trial state is prepared on a quantum computer by evolving an initial state \(|\Psi_{0}\rangle\): \[|\Psi(\vec{\beta},\vec{\gamma})\rangle=U_{P}(\vec{\beta},\vec{\gamma})|\Psi_ {0}\rangle, \tag{3}\] using a parametrized ansatz circuit \[U_{P}(\vec{\beta},\vec{\gamma})=\prod_{p=1}^{P}\left[e^{-i\beta_{p}A_{p}}e^{-i \gamma_{p}H}\right]. \tag{4}\] The QAOA then optimizes the parameters to minimize the energy expectation value \(E(\vec{\beta},\vec{\gamma})\). In the original proposal of QAOA [15], the form of the ansatz circuit [Eq. (4)] is inspired by a Trotterized form of the adiabatic theorem [17].
By setting the mixer Hamiltonian to \(A_{p}=\sum_{i=1}^{N}X_{i}\) for all \(p\), and the initial state to \(|\Psi_{0}\rangle=|+\rangle\,...\,|+\rangle\), the QAOA finds the ground state exactly as the number of Trotter steps tends to infinity (\(P\rightarrow\infty\)). Unfortunately, large values of \(P\) lead to intractably deep ansatz circuits. In the presence of noise, the need for deep circuits precludes the implementation of the QAOA on existing quantum hardware [18; 19]. To reduce the intractably deep quantum circuits, ADAPT-QAOA [20] was developed. The algorithm improves the ansatz circuit in \(P\) iterations. Further, it allows the mixer Hamiltonian \(A_{p}\) to vary in each iteration \(p\), by choosing it from a mixer pool \(\mathcal{P}\). In noiseless numerical simulations, ADAPT-QAOA has been demonstrated to generate shallower circuits than the QAOA. Despite these improvements, ADAPT-QAOA lies outside the reach of current hardware. Moreover, the resilience of ADAPT-QAOA to noise has never been quantified.

In this paper, we benchmark ADAPT-QAOA in the presence of noise. Using density-matrix simulations, we compute the gate-error probability \(p_{\text{gate}}^{\star}\) below which the quantum algorithm outputs, on average, better approximate solutions than the classical GW algorithm. For small systems of \(6-10\) qubits, we find that ADAPT-QAOA requires \(p_{\text{gate}}^{\star}\) comparable to or smaller than the gate-error probabilities available on current hardware. To reduce the hardware requirements of ADAPT-QAOA further, we develop Dynamic-ADAPT-QAOA. This algorithm removes redundant components from the ansatz circuits. For the problems we study, Dynamic-ADAPT-QAOA reduces the circuit depths significantly. For instance, in noiseless simulations of 6-qubit systems, Dynamic-ADAPT-QAOA achieves a better average performance than the GW algorithm with approximately 80% fewer CNOT gates than the original ADAPT-QAOA. This reduction in CNOT gates leads to improved noise resilience, with \(p_{\text{gate}}^{\star}\) being approximately an order of magnitude better than that of the original ADAPT-QAOA. Dynamic-ADAPT-QAOA may thus be implementable on current superconducting hardware, even in the absence of error mitigation.

## II Dynamic-ADAPT-QAOA

In this section, we introduce Dynamic-ADAPT-QAOA. Our presentation strategy is to first review the standard ADAPT-QAOA template. Subsequently, we describe its improvement via Dynamic-ADAPT-QAOA.

## II A ADAPT-QAOA

As depicted in Fig. 1, ADAPT-QAOA grows the ansatz circuit in \(P\) steps. In each step \(p\), unitary evolutions generated by \(H\) and \(A_{p}\) are appended to the circuit from the previous step: \[U_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})=e^{-i\beta_{p}A_{p}}e^{-i\gamma_{p}H} U_{p-1}(\vec{\beta}_{p-1},\vec{\gamma}_{p-1}). \tag{5}\] The process starts from \(U_{0}=\text{id}\). Concurrently, the real parameter vectors are updated as \[\vec{\beta}_{p}=(\beta_{p},\vec{\beta}_{p-1})\quad\text{and}\quad\vec{\gamma} _{p}=(\gamma_{p},\vec{\gamma}_{p-1}), \tag{6}\] starting from empty vectors \(\vec{\beta}_{0}=()\) and \(\vec{\gamma}_{0}=()\). In each step, an optimal mixer Hamiltonian \(A_{p}\) is picked from a pool \(\mathcal{P}\) such that the energy gradient is maximized (see below).
The circuit parameters are then optimized: \[\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star}=\underset{\vec{\beta}_{p}, \vec{\gamma}_{p}}{\text{argmin}}\left[E_{p}(\vec{\beta}_{p},\vec{\gamma}_{p}) \right], \tag{7}\] to minimize the energy expectation value \[E_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})=\bra{\Psi_{0}}U_{p}^{\dagger}(\vec{ \beta}_{p},\vec{\gamma}_{p})HU_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})\ket{\Psi_ {0}}. \tag{8}\] This yields an upper bound \(\mathcal{E}_{p}=E_{p}(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star})\) on the ground-state energy \(E_{0}\), and an optimal trial state \(\ket{\Psi_{p}^{\star}}\equiv U_{p}(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^ {\star})\ket{\Psi_{0}}\). Iterating this process provides a hierarchy of bounds \(\mathcal{E}_{0}>\mathcal{E}_{1}>\cdots>\mathcal{E}_{p}>\cdots\geq E_{0}\). The algorithm terminates when \(p=P\) or if \(|\mathcal{E}_{p-1}-\mathcal{E}_{p}|\) falls below a pre-defined threshold \(\varepsilon\).

To accelerate convergence, ADAPT-QAOA picks the mixer Hamiltonian which maximizes the energy gradient. To evaluate this gradient, the optimal trial state is augmented by appending a cost and a mixer unitary: \[\ket{\Psi_{p}(\beta_{p},\gamma_{p};A)}=e^{-i\beta_{p}A}e^{-i\gamma_{p}H}\ket{ \Psi_{p-1}^{\star}}. \tag{9}\] The energy variation due to the added parameters \[\delta E_{p}(\beta_{p},\gamma_{p};A)=\bra{\Psi_{p}(\beta_{p},\gamma_{p};A)}H \ket{\Psi_{p}(\beta_{p},\gamma_{p};A)}, \tag{10}\] enables the definition of a corresponding energy gradient: \[\mathcal{G}_{p}(\gamma_{p};A)\equiv\left.\frac{\partial}{\partial\beta_{p}} \delta E_{p}(\beta_{p},\gamma_{p};A)\right|_{\beta_{p}=0}. \tag{11}\] Evaluating this gradient for each \(A\in\mathcal{P}\) allows for selecting the optimal mixer: \[A_{p}=\underset{A\in\mathcal{P}}{\text{argmax}}\left[\left|\mathcal{G}_{p}( \gamma_{p};A)\right|\right]. \tag{12}\] Throughout this work, we use the same mixer pool as in ADAPT-QAOA [20], comprising QAOA mixers as well as Pauli strings of length one and two: \[\mathcal{P} =\left\{\sum_{i=1}^{N}X_{i},\sum_{i=1}^{N}Y_{i}\right\}\cup\left\{ X_{i},Y_{i}\,|\,i=1,...,N\right\} \tag{13}\] \[\cup\left\{\sigma_{i}\sigma_{j}^{\prime}\,|\,\sigma,\sigma^{\prime}\in \left\{X,Y,Z\right\}\wedge i,j=1,...,N\wedge i\neq j\right\}.\]

Figure 1: The \(p\)th iteration of ADAPT-QAOA: After initialization, the ansatz circuit from the previous iteration \(U_{p-1}\) is augmented by appending unitary evolutions generated by \(H\) and \(A_{p}\). The optimal circuit parameters \(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star}\) are identified by minimizing the measured energy expectation.

## II B Dynamic-ADAPT-QAOA

_Motivation:_--Our motivation for developing Dynamic-ADAPT-QAOA comes from two observations. First, in each step \(p\), the quantum circuit representing the cost unitary \(e^{-i\gamma_{p}H}\) requires \(\mathcal{O}(N^{2})\) CNOT gates (see App. A). On the other hand, the quantum circuit representing the mixer unitary \(e^{-i\beta_{p}A_{p}}\) requires only \(\mathcal{O}(1)\) CNOT gates [21]. As CNOT gates induce noise, minimizing the number of cost unitaries in the ansatz circuit could be valuable [22]. Second, in standard ADAPT-QAOA, the vector of optimal parameters \(\vec{\gamma}_{p}^{\star}\) tends to be sparse, with many parameters taking values close to zero (see Sec. III B). As cost unitaries \(e^{-i\gamma_{p}H}\) with \(\gamma_{p}\approx 0\) hardly affect the final quantum circuit, it could be advantageous to exclude them altogether.
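To make the gradient-based mixer selection of Eqs. (11)-(13) concrete, the following minimal sketch builds the Ising Hamiltonian of Eq. (1) and the mixer pool as dense matrices, and picks the mixer with the largest gradient magnitude after a cost-unitary offset. This is a brute-force statevector illustration for a handful of qubits, written with our own helper and variable names; it is not the measurement-based procedure that would run on hardware, and the offset value simply mirrors the \(\bar{\gamma}=0.1\) quoted later in the benchmarking section.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(ops, n):
    """Dense matrix of a Pauli string given as {qubit index: single-qubit Pauli}."""
    return reduce(np.kron, [ops.get(i, I2) for i in range(n)])

def ising_hamiltonian(W):
    """H = (1/4) * sum_ij W_ij Z_i Z_j, cf. Eq. (1)."""
    n = len(W)
    return sum(0.25 * W[i, j] * embed({i: Z, j: Z}, n)
               for i in range(n) for j in range(n) if i != j)

def mixer_pool(n):
    """The pool of Eq. (13): global X and Y mixers, single Paulis, and Pauli pairs."""
    pool = [sum(embed({i: X}, n) for i in range(n)),
            sum(embed({i: Y}, n) for i in range(n))]
    pool += [embed({i: P}, n) for i in range(n) for P in (X, Y)]
    pool += [embed({i: P, j: Q}, n)
             for i in range(n) for j in range(n) if i != j
             for P in (X, Y, Z) for Q in (X, Y, Z)]
    return pool

def select_mixer(psi, H, pool, gamma):
    """Index of the mixer maximising |G(gamma; A)| = |<psi(gamma)| i[A,H] |psi(gamma)>|,
    cf. Eqs. (11)-(12).  The full algorithm also checks the offset -gamma."""
    psi_g = expm(-1j * gamma * H) @ psi          # state after the cost unitary
    grads = [np.real(psi_g.conj() @ (1j * (A @ H - H @ A)) @ psi_g) for A in pool]
    return int(np.argmax(np.abs(grads))), grads

# Example: 4 qubits, random weighted complete graph, |+>^4 initial state.
rng = np.random.default_rng(1)
n = 4
W = rng.uniform(0, 1, (n, n))
W = 0.5 * (W + W.T)
np.fill_diagonal(W, 0.0)
H = ising_hamiltonian(W)
plus = np.ones(2**n) / np.sqrt(2**n)
best, grads = select_mixer(plus, H, mixer_pool(n), gamma=0.1)
print(best, abs(grads[best]))
```

On hardware, each gradient \(\mathcal{G}_{p}(\gamma;A)=\langle i[A,H]\rangle\) would instead be estimated from measured expectation values rather than from explicit matrix products.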
_Idea:_--In general, the energy expectation value in Eq. (8) is a nontrivial function of the circuit parameters. Hence, it is not obvious how to predict which entries in \(\vec{\gamma}_{p}^{\star}\) would take optimal values close to zero. Yet, in ADAPT-QAOA, optimal circuit parameters of the \(p\)th iteration are usually well approximated by the circuit parameters of the previous iteration: \[\vec{\beta}_{p}^{\star}\approx(\beta_{p}^{\star},\vec{\beta}_{p-1}^{\star}) \quad\text{and}\quad\vec{\gamma}_{p}^{\star}\approx(\gamma_{p}^{\star},\vec{ \gamma}_{p-1}^{\star}). \tag{14}\] Thus, we can estimate the optimal circuit parameters \(\beta_{p}^{\star},\gamma_{p}^{\star}\) of the \(p\)th iteration, by studying the minima of \[\delta E_{p}(\beta_{p},\gamma_{p})\equiv\delta E_{p}(\beta_{p},\gamma_{p};A_{p }). \tag{15}\] As explained in App. B, for Pauli-string mixers \(A_{p}\), we can identify whether \(\delta E_{p}(\beta_{p},\gamma_{p})\) has minima near \(\gamma_{p}^{\star}=0\). To this end, we split the cost Hamiltonian into two parts \(H=H_{-}+H_{+}\), such that \(H_{-}\) commutes and \(H_{+}\) anticommutes with \(A_{p}\). This enables the evaluation of three additional expectation values: \[B_{p} =\left\langle\Psi_{p-1}^{\star}\right|iA_{p}H_{+}\left|\Psi_{p-1 }^{\star}\right\rangle\equiv\mathcal{G}_{p}(0;A_{p}), \tag{16a}\] \[C_{p} =\left\langle\Psi_{p-1}^{\star}\right|A_{p}H_{+}^{2}\left|\Psi_{ p-1}^{\star}\right\rangle,\] (16b) \[D_{p} =\left\langle\Psi_{p-1}^{\star}\right|iA_{p}H_{+}^{3}\left|\Psi_ {p-1}^{\star}\right\rangle. \tag{16c}\] As shown in App. B, \(\delta E_{p}(\beta_{p},\gamma_{p})\) has a local minimum at \(\gamma_{p}^{\star}=0\) if \[C_{p}=0\quad\text{and}\quad B_{p}D_{p}>0. \tag{17}\] _Algorithm:_--Dynamic-ADAPT-QAOA excludes the cost unitary of the \(p\)th iteration if \(A_{p}\) is a Pauli-string and Condition (17) holds. Otherwise, the algorithm follows the standard mixer-selection procedure of ADAPT-QAOA. That is, the gradients for all \(A\in\mathcal{P}\) are re-evaluated at some given offset \(\gamma_{p}=\pm\bar{\gamma}\), and the optimal mixer is determined: \[A_{p}=\operatorname*{argmax}_{A\in\mathcal{P}}\left[\max(|\mathcal{G}_{p}(+ \bar{\gamma};A)|,|\mathcal{G}_{p}(-\bar{\gamma};A)|)\right]. \tag{18}\] After determining \(A_{p}\), the ansatz circuit and parameter vectors are grown as described in Eqs. (5) and (6). Pseudocode summarizing Dynamic-ADAPT-QAOA is given in Algorithm 1. _Remarks:_--In App. C, we discuss two alterations of Dynamic-ADAPT-QAOA. In the first alteration, all cost unitaries are, _a priori_, removed from the ansatz circuit. In the second alteration, the algorithm does not re-evaluate the optimal mixer \(A_{p}\) at \(\gamma_{p}=\pm\bar{\gamma}\) if condition (17) fails. As shown in App. C, both of these alterations worsen the algorithmic performance. Common worries regarding variational quantum algorithms concern barren plateaus (vanishing gradients) and the presence of bad local minima [23; 24; 25; 26; 27; 28; 29; 30]. A promising way to mitigate these issues is to reduce the circuit depths [31; 30], which is precisely what our algorithm does. Moreover, since the gates of adaptive variational quantum algorithms are tailored to the optimization problem itself, there are indications that these algorithms avoid such issues better than other variational quantum algorithms [31; 32; 33; 30; 34; 35]. In the instances studied below, Dynamic-ADAPT-QAOA efficiently implements the variational optimization. 
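Before turning to the full pseudocode in Algorithm 1 below, the check of Condition (17) can be made concrete with a small dense-matrix sketch. For a Pauli-string mixer \(A\), which squares to the identity, the part of the cost Hamiltonian that anticommutes with \(A\) can be obtained as \(H_{+}=(H-AHA)/2\). The helper names, the tolerance values, and the brute-force matrix representation below are our own and are meant only as an illustration under those assumptions.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(ops, n):
    """Dense matrix of a Pauli string given as {qubit index: single-qubit Pauli}."""
    return reduce(np.kron, [ops.get(i, I2) for i in range(n)])

def ising_hamiltonian(W):
    """H = (1/4) * sum_ij W_ij Z_i Z_j, cf. Eq. (1)."""
    n = len(W)
    return sum(0.25 * W[i, j] * embed({i: Z, j: Z}, n)
               for i in range(n) for j in range(n) if i != j)

def skip_cost_unitary(psi, A, H, delta1=1e-9, delta2=0.0):
    """Check Condition (17): does delta E_p have a local minimum at gamma_p = 0?

    For a Pauli-string mixer A with A^2 = 1, the part of H that anticommutes
    with A is H_+ = (H - A H A) / 2.  B_p, C_p, D_p follow Eq. (16); the small
    tolerances delta1 and delta2 only absorb floating-point noise here.
    """
    Hp = 0.5 * (H - A @ H @ A)
    B = np.real(psi.conj() @ (1j * A @ Hp) @ psi)
    C = np.real(psi.conj() @ (A @ Hp @ Hp) @ psi)
    D = np.real(psi.conj() @ (1j * A @ Hp @ Hp @ Hp) @ psi)
    return abs(C) <= delta1 and B * D > delta2

# Example: 4 qubits, random weighted complete graph, |+>^4 initial state, and a
# two-qubit Y_0 Z_1 mixer.  Here C_p vanishes and B_p * D_p > 0, so the check
# passes and the CNOT-heavy cost unitary would be skipped at this step.
rng = np.random.default_rng(0)
n = 4
W = rng.uniform(0, 1, (n, n))
W = 0.5 * (W + W.T)
np.fill_diagonal(W, 0.0)
H = ising_hamiltonian(W)
plus = np.ones(2**n) / np.sqrt(2**n)
print(skip_cost_unitary(plus, embed({0: Y, 1: Z}, n), H))   # True
```

When the check passes, only the single-parameter mixer unitary is appended, avoiding the \(\mathcal{O}(N^{2})\) CNOT gates of the cost unitary at that step; otherwise the algorithm falls back to the standard selection at \(\gamma_{p}=\pm\bar{\gamma}\).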
```
Init pool \(\mathcal{P}\); state \(|\Psi_{0}\rangle\leftarrow|+\rangle\ldots|+\rangle\); unitary \(U_{0}\leftarrow I\).
Init accuracies \(\varepsilon,\delta_{1},\delta_{2}\); and offset \(\bar{\gamma}\).
Init optimal params \(\vec{\beta}_{0}^{\star}\leftarrow()\); \(\vec{\gamma}_{0}^{\star}\leftarrow()\); Init \(p\leftarrow 1\).
while not converged do
    Prepare \(|\Psi_{p-1}^{\star}\rangle\leftarrow U_{p-1}(\vec{\beta}_{p-1}^{\star},\vec{\gamma}_{p-1}^{\star})|\Psi_{0}\rangle\)
    Evaluate gradients \(\mathcal{G}_{p}(0;A)\leftarrow\langle\Psi_{p-1}^{\star}|[iA,H]|\Psi_{p-1}^{\star}\rangle\)
    Find optimal mixer: \(A_{p}\leftarrow\operatorname{argmax}_{A\in\mathcal{P}}\left[|\mathcal{G}_{p}(0;A)|\right]\)
    Evaluate \(B_{p}\), \(C_{p}\), \(D_{p}\) in Eq. (16)
    if \(|C_{p}|\leq\delta_{1}\) and \(B_{p}\cdot D_{p}>\delta_{2}\) then
        Update \(\vec{\gamma}_{p}\leftarrow\vec{\gamma}_{p-1}\); \(\vec{\beta}_{p}\leftarrow(\beta_{p},\vec{\beta}_{p-1})\)
        Append \(U_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})\leftarrow e^{-i\beta_{p}A_{p}}U_{p-1}(\vec{\beta}_{p-1},\vec{\gamma}_{p-1})\)
    else
        Prepare \(|\bar{\Psi}_{p}^{\pm}\rangle\leftarrow e^{\mp i\bar{\gamma}H}|\Psi_{p-1}^{\star}\rangle\)
        Measure gradients \(\mathcal{G}_{p}(\pm\bar{\gamma};A)\leftarrow\langle\bar{\Psi}_{p}^{\pm}|[iA,H]|\bar{\Psi}_{p}^{\pm}\rangle\)
        \(A_{p}\leftarrow\operatorname{argmax}_{A\in\mathcal{P}}\left[\max(|\mathcal{G}_{p}(\bar{\gamma};A)|,|\mathcal{G}_{p}(-\bar{\gamma};A)|)\right]\)
        Update \(\vec{\gamma}_{p}\leftarrow(\gamma_{p},\vec{\gamma}_{p-1})\); \(\vec{\beta}_{p}\leftarrow(\beta_{p},\vec{\beta}_{p-1})\)
        Add \(U_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})\leftarrow e^{-i\beta_{p}A_{p}}e^{-i\gamma_{p}H}U_{p-1}(\vec{\beta}_{p-1},\vec{\gamma}_{p-1})\)
    Optimize params \(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star}\leftarrow\operatorname{argmin}_{\vec{\beta}_{p},\vec{\gamma}_{p}}[E_{p}(\vec{\beta}_{p},\vec{\gamma}_{p})]\)
    Set bound \(\mathcal{E}_{p}\leftarrow E_{p}(\vec{\beta}_{p}^{\star},\vec{\gamma}_{p}^{\star})\)
    if \(p=P\) or \(|\mathcal{E}_{p-1}-\mathcal{E}_{p}|<\varepsilon\) then converged \(\leftarrow\) True
Sample bit strings from \(|\Psi_{p}^{\star}\rangle\) and compute \(\mathcal{E}_{p}\)
Return bit strings, \(\mathcal{E}_{p}\), circuit \(U_{p}\), params \(\vec{\beta}_{p}^{\star}\), \(\vec{\gamma}_{p}^{\star}\)
```
**Algorithm 1** Dynamic-ADAPT-QAOA

## III Benchmarking

In this section, we benchmark Dynamic- and standard ADAPT-QAOA in numerical simulations. Our investigation will demonstrate that Dynamic-ADAPT-QAOA can remove redundant components from the ansatz circuits of standard ADAPT-QAOA. We show that this leads to a reduced CNOT count and an increased noise resilience.

### Benchmarking methodology

_Max-Cut:_--In what follows, we benchmark ADAPT-QAOAs on random instances of weighted Max-Cut problems. Consider allocating weights to the edges of an \(N\)-vertex graph. In this work, we consider complete, i.e., fully connected, graphs. The edge weights between vertices \(i\) and \(j\) (\(i,j=1,\dots,N\)) form a real symmetric matrix \(W_{ij}\) with zeros on its diagonal. A binary vector \(\vec{b}\in\{0,1\}^{N}\) defines a _cut_, a splitting of all vertices into two disjoint sets. A cut value is defined as the sum of edge weights between the two partitions: \[V(\vec{b})=\sum_{i,j=1}^{N}W_{ij}b_{i}(1-b_{j}). \tag{19}\]
The weighted Max-Cut problem is to find the binary vector \(\vec{b}^{\star}\) that maximizes the cut value: \(\vec{b}^{\star}=\text{argmax}_{\vec{b}}V(\vec{b})\). \(\vec{b}^{\star}\) corresponds to the optimal partition, which yields the maximal cut value \(V_{\text{max}}=V(\vec{b}^{\star})\). By mapping binary variables \(b_{i}=(1+z_{i})/2\) to the eigenvalues \(z_{i}\in\{-1,1\}\) of \(Z_{i}\), the weighted Max-Cut problem becomes equivalent to finding the ground state of the Ising model, Eq. (1). We create random Max-Cut instances by uniformly sampling edge weights \(W_{ij}\in[0,1]\). This is known to generate NP-hard problems [36, 37]. For a visualization of Max-Cut, see Fig. 2.

_Approximation ratio:_--Our benchmarks compare the average performance of three algorithms: Dynamic- and standard ADAPT-QAOA, as well as the classical, polynomial-time approximation algorithm by Goemans and Williamson (GW). Rather than solving Max-Cut exactly, all three algorithms sample a collection of bit strings [38]. This leads to a distribution of approximate cut values, Eq. (19), with average cut values \(V_{\text{d}}\), \(V_{\text{s}}\), and \(V_{\text{GW}}\), respectively. Algorithms providing a higher average cut value tend to provide better-quality solutions. Further, normalizing the average cut value by the maximal achievable value \(V_{\text{max}}\) allows for averaging over various instances of Max-Cut. This defines our key performance metric--the average approximation ratio: \[\alpha_{\text{d}}\equiv\frac{V_{\text{d}}}{V_{\text{max}}},\,\alpha_{\text{s} }\equiv\frac{V_{\text{s}}}{V_{\text{max}}},\,\text{and}\,\,\alpha_{\text{GW} }\equiv\frac{V_{\text{GW}}}{V_{\text{max}}}. \tag{20}\] The GW algorithm is the classical, polynomial-time algorithm that achieves the best worst-case approximation ratio: \(\alpha_{\text{GW}}>87.8\ldots\%\) [14]. Below, we will compare \(\alpha_{\text{GW}}\) to numerically computed values of \(\alpha_{\text{d}}\) and \(\alpha_{\text{s}}\). In our simulations, we average the results over 100 random instances of the Max-Cut problem. In real applications of QAOA, one would return the cut corresponding to the sampled bit string with minimum cost, not the average. However, for the small problem sizes studied here, the final wavefunction has substantial overlap with all bit strings. Thus, for a relatively small number of shots, the true solution will always be obtained. Therefore, we compare the average approximation ratios. Further, we emphasize that our comparison between QAOAs and the GW algorithm focuses on the final results, i.e., average approximation ratios, not their computational time complexity.

_Simulations:_--To assess the approximation ratios of Dynamic- and standard ADAPT-QAOA in the presence of noise, we use full density-matrix simulations, as previously described in Ref. [34]. First, the unitaries in Eq. (5) are compiled to standard circuit representations [21]. To simulate the effect of noise, we work with density matrices. In the evolution of the quantum states, we apply a depolarizing channel after each CNOT gate: \[\mathcal{D}(i,p_{\text{gate}})[\rho]\coloneqq(1-p_{\text{gate}})\rho+\frac{ p_{\text{gate}}}{3}\sum_{\sigma_{i}}\sigma_{i}\rho\sigma_{i}. \tag{21}\] Here, \(\rho\) is the density matrix prior to the CNOT gate, \(i\) denotes the target qubit of the CNOT gate, \(p_{\text{gate}}\in[0,1]\) denotes the gate-error probability, and the \(\sigma_{i}\)-summation is over the three Pauli matrices acting on qubit \(i\).
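As a check on how Eq. (21) acts on a state, the following is a minimal dense-matrix sketch of the depolarizing channel. It assumes a brute-force matrix representation with our own helper names and is only an illustration, not the density-matrix simulator of Ref. [34].

```python
import numpy as np

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def on_qubit(op, qubit, n):
    """Embed a single-qubit operator on `qubit` of an n-qubit register."""
    mats = [I2] * n
    mats[qubit] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def depolarize(rho, qubit, p_gate, n):
    """Depolarizing channel of Eq. (21), applied to `qubit` after a CNOT gate."""
    out = (1.0 - p_gate) * rho
    for sigma in (X, Y, Z):
        s = on_qubit(sigma, qubit, n)
        out = out + (p_gate / 3.0) * s @ rho @ s
    return out

# Example: two qubits in |00><00|; noise on the CNOT target with p_gate = 1e-3.
n = 2
rho = np.zeros((2**n, 2**n), dtype=complex)
rho[0, 0] = 1.0
rho_noisy = depolarize(rho, qubit=1, p_gate=1e-3, n=n)
print(np.trace(rho_noisy).real)  # the channel is trace preserving: prints 1.0
```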
Owing to the diverse nature of current quantum hardware, a noise model cannot be both platform agnostic and realistically detailed. Nevertheless, our noise model captures the depolarizing effect of two-qubit gates, which is the dominant noise source across several platforms [39, 40]. We deem our model a reasonably hardware-agnostic compromise, which should be sufficient to assess fundamental quantitative features. Since full density-matrix simulations require extensive computing time, we apply an approximation similar to that outlined in Ref. [34]. In more detail, we simulate ADAPT-QAOAs by growing their ansatz circuits in the absence of noise. We store the optimal ansatz circuits \(U_{p}\) at each iteration step \(p\). Subsequently, we investigate the effect of noise by simulating the pre-optimized circuit \(U_{p}\) at various noise levels \(p_{\text{gate}}\) on our density matrix simulator. As demonstrated in App. D, the noiseless-growth approximation has little effect on our results. Figure 2: Diagramatic representation of a 5-vertex weighted graph. The vertices are labelled 1-5. The weights are shown next to the corresponding edges. The partition resulting in a Max-Cut, (135)(24), is depicted using different shades of gray. The Max-Cut value is 40. Directly above the graph we illustrate how the problem maps onto a qubit system. The qubits’ spins point in different vertical half-planes, corresponding to which set of the Max-Cut partition they are in. _Parameters:--_Before presenting our findings, we specify the hyperparameters used in our simulations. By setting \(\varepsilon=0\), we ensure that the convergence criterion corresponds to having reached a certain circuit depth. The depth is determined by the number of iterations, which we set to \(P=12\). For Dynamic-ADAPT-QAOA, the cost-unitary offset (see Algorithm 1) was set to \(\tilde{\gamma}=0.1\), following the settings used in [20]. In Algorithm 1, \(\delta_{1}>0\) would mitigate some experimental errors in the identification of a local minimum where, in ideal scenarios, \(C_{p}=0\). Similarly, \(\delta_{2}>0\) would mitigate some experimental errors in establishing whether \(B_{p}\cdot D_{p}\) is positive. In our simulations, we set \(\delta_{1}=0\). To emulate practical implementations, we choose \(\delta_{2}\in(0,\,10^{-4})\) after performing a hyperparameter search for each separate graph. ## III B. Vanishing cost parameters As mentioned in Sec. II B, our motivation to develop Dynamic-ADAPT-QAOA stems from the observation that standard ADAPT-QAOA appends cost unitaries to the quantum circuit in cases where they do not lead to any significant improvement in convergence. In Figure 3, we show data which support this conclusion. The histogram of optimal cost parameters \(\gamma^{\star}\) of standard ADAPT-QAOA exhibits a well-defined peak at \(\gamma^{\star}=0\). A majority (\(\approx 70\%\)) of the cost unitaries do not contribute to the algorithm's convergence. This peak is absent in the corresponding histogram for Dynamic-ADAPT-QAOA: Our algorithm successfully removes redundant cost unitaries from the ansatz circuits. ### Benchmarking the CNOT-count reduction Now, we show that Dynamic-ADAPT-QAOA significantly reduces the number of CNOT gates needed to reach a certain algorithmic precision. In Section II, we described how Dynamic-ADAPT-QAOA prunes unnecessary circuit elements. 
To investigate the effect on the CNOT count, we consider how the approximation ratio \(\alpha\), averaged over 100 instances of Max-Cut, improves as the algorithm grows the quantum circuit. Our results are shown in FIG. 4. We plot data from both noiseless and noisy simulations of Dynamic- and standard ADAPT-QAOA. In both scenarios, Dynamic-ADAPT-QAOA uses significantly fewer CNOT gates to reach a fixed average approximation ratio. For a fixed gate-error probability this CNOT reduction allows Dynamic-ADAPT-QAOA to calculate more accurate approximation ratios than standard ADAPT-QAOA. In noiseless simulations, we see that Dynamic-ADAPT-QAOA needs approximately \(80\%\) fewer CNOT gates than ADAPT-QAOA to calculate average approximation ratios that outperform those achievable with the classical GW algorithm for 6-vertex complete graphs. Moreover, at a gate-error probability of \(p_{\text{gate}}=0.122\%\), the Dynamic-ADAPT-QAOA can achieve better average approximation ratios than the GW algorithm, whilst the standard ADAPT-QAOA cannot. In the next section, we widen our analysis of how noise affects the quantum algorithms' achieved approximation ratios. Figure 3: A histogram of optimized circuit parameters \(\gamma_{p}^{\star}\), taken from the cost unitaries from all layers of the ansatz circuits grown with Dynamic- and standard ADAPT-QAOA. The data were acquired in noiseless simulations of 100 instances of Max-Cut on 6-vertex graphs. The algorithms were run until a maximum circuit depth of \(P=12\). Figure 4: Convergence curves for Dynamic- and standard ADAPT-QAOA, applied to 6-vertex complete graphs, with and without noise. \(1-\alpha\) is plotted as a function of the number of CNOT gates present in the ansatz circuits \(U_{P}\). The dashed horizontal curve corresponds to the classical GW algorithm. The shaded regions correspond to the \(95\%\) confidence intervals. The convergence curves for three gate-error probabilities are shown: \(p_{\text{gate}}=0.0\%,0.122\%\), and \(0.263\%\). These are depicted using solid, dashed, and dash-dotted line styles, respectively. Stars indicate the maximally attainable approximation ratio \(\alpha^{*}\). ## III D Benchmarking the noise resilience In this section, we analyze how noise affects the quality of approximation ratios of Dynamic- and standard ADAPT-QAOA. The convergence curves presented in Fig. 4 show that increasing the gate-error probability \(p_{\text{gate}}\) worsens the best attainable average approximation ratio \(\alpha^{\star}\). More specifically, as ADAPT-QAOA grows the circuit (leading to an increase of CNOT gates on the abscissa) the approximation ratio improves initially. However, as the circuit acquires more CNOT-gates, the effect of noise starts to dominate, leading to a subsequent deterioration of the approximation ratio. This causes the characteristic "smirk" shape of the convergence curves in Fig. 4. The dip of each convergence curve marks the best attainable average approximation ratio \(\alpha^{\star}\) at a certain gate-error probability \(p_{\text{gate}}\). Figure 4 indicates that Dynamic-ADAPT-QAOA outperforms the solution quality of standard ADAPT-QAOA in the presence of noise. To quantify this benefit of our algorithm, we investigate \(\alpha^{\star}\) as a function of \(p_{\text{gate}}\) in FIG. 5. For all values of \(p_{\text{gate}}\), Dynamic-ADAPT-QAOA calculates better approximation ratios than standard ADAPT-QAOA. Evidently, our algorithm exhibits better noise resilience. 
As can be seen from the left-most portion of FIG. 5, given sufficiently weak noise, both Dynamic- and standard ADAPT-QAOA can provide better average approximation ratios than the GW algorithm. We now investigate the range of gate-error probabilities for which Dynamic- and standard ADAPT-QAOAs achieve such an improvement. To this end, we define the gate-error probability \(p_{\text{gate}}^{\star}\), below which the quantum algorithms achieve a better average approximation ratio than the GW algorithm. In FIG. 6, we plot \(p_{\text{gate}}^{\star}\) with respect to the number of graph vertices. Compared to standard ADAPT-QAOA, Dynamic-ADAPT-QAOA can achieve a better max-cut approximation ratio than the classical GW algorithm at roughly an order of magnitude larger values of \(p_{\text{gate}}^{\star}\). In particular, the critical probability at which Dynamic-ADAPT-QAOA achieves higher approximation ratios than the GW algorithm is \(p_{\text{gate}}^{\star}=1.3\pm 0.2\%\) for 6-vertex graphs and \(p_{\text{gate}}^{\star}=0.13\pm 0.05\%\) for 10-vertex graphs. Both these values are well above achieved gate-error probabilities [41], implying that one may execute Dynamic-ADAPT-QAOA on existing hardware. On the other hand, for standard ADAPT-QAOA, the critical probability is currently achievable only for graphs with less than 7 vertices.

Figure 5: Best attainable approximation ratio \(\alpha^{\star}\) as a function of the gate-error probability \(p_{\text{gate}}\). The data were acquired in noisy simulations of 6-vertex graphs. The error bars show the standard error in the mean approximation ratio. The dashed curve corresponds to the classical GW algorithm. The shaded regions correspond to the 95% confidence intervals.

Figure 6: \(p_{\text{gate}}^{\star}\) with respect to different graph sizes. At gate-error probabilities below \(p_{\text{gate}}^{\star}\) the quantum algorithms outperform the solution quality of the classical GW algorithm. The horizontal line shows the experimentally-achieved two-qubit gate-error probability in state-of-the-art superconducting hardware [41]. The error bars show the standard error.

## IV Discussion

We have introduced Dynamic-ADAPT-QAOA, a quantum algorithm for combinatorial optimization. Similar to the original ADAPT-QAOA algorithm, our algorithm variationally approximates the ground state of an Ising Hamiltonian. Thus, it can provide approximate solutions to NP problems. By dynamically assessing the importance of unitaries before they are added in the variationally grown algorithms, Dynamic-ADAPT-QAOA can operate with remarkably few CNOT gates. Above, we benchmarked the average (as opposed to the worst-case) performance of our algorithm. For example, in the idealized case of no noise, Dynamic-ADAPT-QAOA requires on average about 35 (350) CNOT gates to outperform the GW algorithm on 6-vertex (10-vertex) graphs. Moreover, we have shown that for graphs with \(6-10\) vertices, Dynamic-ADAPT-QAOA can provide better average solutions than the GW algorithm, even in the presence of noise levels comparable with current state-of-the-art hardware [41]. This should make Dynamic-ADAPT-QAOA an attractive candidate to showcase proof-of-principle computations on NISQ hardware. Finally, we conclude this work with a few comments.

_Other QAOAs:--_There are plenty of promising QAOA algorithms in the literature [42; 43; 44; 45; 46; 47; 48; 49; 50]. However, this work focuses on ADAPT-QAOAs [20]--mainly due to their relatively shallow ansatz circuits.
In the future, it would be of interest to expand the benchmarks of noise resilience to other types of QAOA.

_Other algorithms:--_This study focuses on investigating the utility of gate-based quantum computers for solving NP problems. However, adiabatic quantum computers [10; 11; 12; 13] and state-of-the-art annealing heuristics [51; 6; 7; 8; 9] can comfortably handle systems with up to 5 thousand and 100 thousand spins, respectively, most likely at a higher solution accuracy. Moreover, other approximation algorithms [52; 53] could also lead to high average solution accuracy. This shows that QAOA still has a long way to go before reaching practical quantum advantage.

_Error mitigation:--_Applying error-mitigation techniques [54; 55; 56; 57] to boost expectation values would straightforwardly improve the approximation ratios of standard and Dynamic-ADAPT-QAOA, see App. E. However, to the best of our knowledge, error-mitigation methods have never been used to improve the underlying bit-strings. Consequently, error-mitigation methods would not improve the cut value provided by the experimentally accessible bit-strings. An interesting direction of future research is to consider how error-mitigation techniques could be used to improve not only the cut value, but also the bit-strings provided by a QAOA.

**Acknowledgements:** We thank Kieran Dalton, Yordan Yordanov, Bobak Kiani, Nicholas Mayhall, Sophia Economou, Edwin Barnes, and members of the Hitachi QI team for useful discussions.
2309.06436
Holographic Tensor Networks with Bulk Gauge Symmetries
Tensor networks are useful toy models for understanding the structure of entanglement in holographic states and reconstruction of bulk operators within the entanglement wedge. They are, however, constrained to only prepare so-called "fixed-area states" with flat entanglement spectra, limiting their utility in understanding general features of holographic entanglement. Here, we overcome this limitation by constructing a variant of random tensor networks that enjoys bulk gauge symmetries. Our model includes a gauge theory on a general graph, whose gauge-invariant states are fed into a random tensor network. We show that the model satisfies the quantum-corrected Ryu-Takayanagi formula with a nontrivial area operator living in the center of a gauge-invariant algebra. We also demonstrate nontrivial, n-dependent contributions to the Rényi entropy and Rényi mutual information from this area operator, a feature shared by general holographic states.
Xi Dong, Sean McBride, Wayne W. Weng
2023-09-12T17:56:02Z
http://arxiv.org/abs/2309.06436v1
# Holographic Tensor Networks with Bulk Gauge Symmetries ###### Abstract Tensor networks are useful toy models for understanding the structure of entanglement in holographic states and reconstruction of bulk operators within the entanglement wedge. They are, however, constrained to only prepare so-called "fixed-area states" with flat entanglement spectra, limiting their utility in understanding general features of holographic entanglement. Here, we overcome this limitation by constructing a variant of random tensor networks that enjoys bulk gauge symmetries. Our model includes a gauge theory on a general graph, whose gauge-invariant states are fed into a random tensor network. We show that the model satisfies the quantum-corrected Ryu-Takayanagi formula with a nontrivial area operator living in the center of a gauge-invariant algebra. We also demonstrate nontrivial, \(n\)-dependent contributions to the Renyi entropy and Renyi mutual information from this area operator, a feature shared by general holographic states. ###### Contents * 1 Introduction * 2 The Gauged Random Tensor Network * 3 Deriving the Gauge-Invariant Algebra * 3.1 The structure of the gauge-invariant Hilbert space * 3.2 The gauge-invariant subregion algebra * 3.3 The center of the algebra * 3.4 Traces in \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\) * 3.5 Reduced states * 4 Entropies in the Gauged Random Tensor Network * 4.1 Entanglement entropy * 4.2 Renyi entropy and Renyi mutual information * 5 Discussion and Outlook ## 1 Introduction The ultimate goal of the AdS/CFT correspondence is to understand, concretely, the relationship between a bulk gravitational theory and its dual boundary conformal field theory. Holographic duality posits that the partition functions of the two theories are equal and that there exists an isomorphism between the Hilbert space of states of a theory of quantum gravity \(\mathcal{H}_{\text{bulk}}\) and the Hilbert space of a seemingly unrelated quantum mechanical system \(\mathcal{H}_{\text{boundary}}\). If we were to understand the precise relation between these Hilbert spaces, we would have a tractable handle with which to study quantum gravity, in whatever form it may ultimately arise. In practice, the UV degrees of freedom in the bulk are not well-understood, so one must often be satisfied with studying a subspace of states given by small fluctuations around a fixed semiclassical saddle. These states span a code subspace of the quantum gravity Hilbert space, and are thus embedded in the larger Hilbert space of the dual boundary theory, in the same way as the logical qubits of a quantum error correcting code (QECC) are embedded in a larger Hilbert space of physical qubits [1]. In the last decade, a useful tool for developing intuition about the bulk-to-boundary map has been tensor networks. Tensor networks, specifically projected entangled pair states (PEPS) and PEPS-inspired tensor networks, originally arose in many-body physics as a generalization of matrix product states, which allowed one to efficiently prepare spin chain states with area law entanglement [2]. As a toy model for holography, tensor networks found their niche due to the fact that they obey the Ryu-Takayanagi (RT) formula [3] and its refinements [4; 5; 6; 7]. In particular, random tensor networks (RTNs) [8] reproduce several desirable properties of a holographic QECC, namely satisfying a quantum-corrected RT formula and the Petz reconstruction of local operators [9]. 
We now give a short overview of holographic RTNs and their entanglement properties, as well as their issues. A rank-\(k\) tensor can be represented by its components \(T_{\mu_{1}\cdots\mu_{k}}\), with \(\mu_{i}=1,\ldots,D_{i}\) (the bond dimension). We can associate to each leg a \(D_{i}\)-dimensional Hilbert space \(\mathcal{H}_{i}\) spanned by an orthonormal basis of states \(\{|\mu_{i}\rangle,\ \mu_{i}=1,\cdots,D_{i}\}\). The tensor \(T\) can then be thought of as a state on the tensor product Hilbert space \(\bigotimes_{i=1}^{k}\mathcal{H}_{i}\): \[|T\rangle=\sum_{\mu_{1},\cdots,\mu_{k}}T_{\mu_{1}\cdots\mu_{k}}|\mu_{1} \rangle\otimes\cdots\otimes|\mu_{k}\rangle. \tag{1}\] To construct a tensor network, we consider a set of vertices and links which form a network. To each vertex \(x\) we associate a state \(|T_{x}\rangle\), such that the collection of all tensors defines a product state \(\otimes_{x}|T_{x}\rangle\). Adjacent tensors are those connected by a link; their corresponding legs are contracted by projecting onto a maximally entangled state. For simplicity, we assume that all contracted legs have the same bond dimension \(D\). Denoting the tensor product Hilbert space on the two legs connecting the tensors at vertices \(x\) and \(y\) as \(\mathcal{H}_{xy}\otimes\mathcal{H}_{yx}\), this means that we project onto the state \(|xy\rangle=D^{-1/2}\sum_{\mu=1}^{D}|\mu_{xy}\rangle\otimes|\mu_{yx}\rangle\). Uncontracted legs are called "dangling" and come in two types: bulk legs (viewed as input) and boundary legs (viewed as output). We write the boundary state in the following way:1 Footnote 1: Here, we have chosen a pure state as the bulk input, but generalizing to mixed states is straightforward. \[|\Psi_{\partial}\rangle = \left(\langle\Phi_{b}|\otimes\bigotimes_{\langle xy\rangle} \langle xy|\right)\left(\bigotimes_{x}|T_{x}\rangle\right), \tag{2}\] where we project the bulk input legs onto a bulk state \(|\Phi_{b}\rangle\). In an RTN, we choose \(T_{x}\) to be independent random tensors and take \(D\) to be large. We will not go into details on how one computes Renyi entropy in the RTN here; the important point is that, for a boundary subregion \(R\), one finds the following answer for the Renyi entropy \(S_{n}(R)\): \[S_{n}(R)=|\gamma_{R}|\log D+S_{n}(\rho_{r}), \tag{3}\] where \(|\gamma_{R}|\) is the number of links cut by the minimal surface \(\gamma_{R}\) homologous to \(R\) and \(S_{n}(\rho_{r})\) is the Renyi entropy of the bulk subregion \(r\) bounded by \(R\cup\gamma_{R}\) (we will call \(r\) the entanglement wedge). Analytically continuing to \(n=1\) recovers the Faulkner-Lewkowycz-Maldacena (FLM) formula \[S_{\rm vN}(R)=\frac{\left<\hat{A}\right>}{4G_{N}}+S_{\rm vN}(\rho_{r}), \tag{4}\] with \(|\gamma_{R}|\log D\) identified with the expectation value of the area operator \(\langle\hat{A}\rangle/4G_{N}\). In a state with vanishing bulk Renyi entropy (such as a product state), the boundary Renyi entropy (3) is consequently independent of \(n\). The RTN thus exhibits a flat entanglement spectrum due to the projection of contracted legs onto maximally mixed states.2 This differs sharply from what we expect from generic situations in AdS/CFT. For example, the Renyi entropy for an interval \(R\) of length \(\ell\) in the vacuum state of a two-dimensional CFT takes the form Footnote 2: The HaPPY code [10] also features a flat Rényi spectrum for similar reasons. 
\[S_{n}(R)=\frac{c}{6}\left(1+\frac{1}{n}\right)\log\left(\frac{\ell}{\epsilon} \right), \tag{5}\] which is manifestly \(n\)-dependent. One possible solution is to instead project contracted legs onto a non-maximally entangled link state [11; 12]. By tuning the entanglement spectrum appropriately, this allows one to reproduce the correct single-interval CFT vacuum Renyi entropy (5), but does not work in more general cases such as that of multiple disjoint intervals. To see this, consider two disjoint intervals \(R_{1}\) and \(R_{2}\) (see Figure 1), and for simplicity consider the case where the mutual information between the intervals is small in the sense that the RT surfaces are always in a disconnected phase. The boundary Renyi entropy can be obtained by inserting appropriate cosmic branes into the bulk [13]. The tension of the cosmic branes is proportional to \(1-1/n\). In a fully gravitating system, the two cosmic branes homologous to \(R_{1}\), \(R_{2}\) will backreact and affect each other in an \(n\)-dependent way. This results in a nonzero Renyi mutual information between the two intervals that cannot be reproduced in RTNs by simply adding non-maximally entangled links, because they would not allow the minimal surfaces to affect each other.
Figure 1: The cosmic branes that arise in computing the Rényi entropy for disjoint subregions. These branes have nonzero, \(n\)-dependent tension, and so would backreact in a realistic holographic system.
From the gravity point of view, the RTN prepares a so-called fixed-area state [14; 15], which is an eigenstate of the area operator \(\hat{A}\) in (4). Such eigenstates form a complete basis for semiclassical states prepared via the gravitational path integral, so in principle any semiclassical state can be represented as a superposition over fixed-area basis states \(|\alpha\rangle\), where \(\alpha\) labels the eigenvalues of the area operator. As the area operator lives on the RT surface dividing the entanglement wedge \(r\) and its complement \(\overline{r}\), it naturally belongs to the center of the algebra of bulk operators in \(r\). This view was espoused in [16], where it was shown that the FLM formula (4) can be derived from a quantum error correcting code with complementary recovery. In that language, the area operator is a specific element of the center of the bulk von Neumann algebra on \(r\). The usual RTN implements a special case of this where the algebra has a trivial center, i.e., the center consists of \(c\)-numbers only and is therefore isomorphic to \(\mathbb{C}\). In particular, this means that the area operator must be a \(c\)-number, which, as previously discussed, is incongruous with what one observes in gravitational holography. The goal of this paper is to construct a model where the algebra on \(r\) has a nontrivial center and to identify a nontrivial area operator living in the center.3 An _ad hoc_ way of getting a nontrivial center is to "stack" multiple layers of tensor networks by hand to form superpositions of fixed-area states. We will not do this but will instead pursue a more physically motivated approach. In particular, one would like to incorporate something akin to "edge states", degrees of freedom which live on the minimal surface, in order to go beyond fixed-area states and produce a nontrivial area operator.4 Our goal in this work is to give a model which provides a physical origin for these edge states.
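As an aside, the RT/FLM behaviour in (3) and (4) is easy to reproduce numerically in a toy network. The sketch below (assuming only numpy; the bond dimensions and the two-tensor geometry are illustrative choices, not the model constructed in this paper) contracts two random tensors across a single internal link and compares the second Rényi entropy of one boundary leg with \(|\gamma_{R}|\log D\) plus the bulk contribution. Small finite-bond-dimension corrections are expected.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, d_b = 32, 2048, 2        # internal link, boundary leg, and bulk leg dimensions

# Two random tensors, legs ordered as (bulk, internal link, boundary), viewed as states.
T1 = rng.normal(size=(d_b, D, N)) + 1j * rng.normal(size=(d_b, D, N))
T2 = rng.normal(size=(d_b, D, N)) + 1j * rng.normal(size=(d_b, D, N))

def boundary_renyi2(phi_b):
    """Project the link legs onto the maximally entangled state and the bulk legs
    onto phi_b, then return the 2nd Renyi entropy of boundary leg 1 (region R)."""
    # Psi[a, b] = sum_{i, j, mu} phi_b[i, j] T1[i, mu, a] T2[j, mu, b] / sqrt(D)
    Psi = np.einsum('ij,ima,jmb->ab', phi_b, T1, T2, optimize=True) / np.sqrt(D)
    Psi /= np.linalg.norm(Psi)
    rho_R = Psi @ Psi.conj().T
    return -np.log(np.sum(np.abs(rho_R) ** 2))     # -log Tr(rho_R^2) for Hermitian rho_R

# Product bulk state: S_2(rho_r) = 0, so S_2(R) ~ |gamma_R| log D with |gamma_R| = 1.
prod = np.zeros((d_b, d_b)); prod[0, 0] = 1.0
# Bell-pair bulk state: the bulk wedge contributes an extra log 2, as in the FLM formula.
bell = np.eye(d_b) / np.sqrt(d_b)

print(f"product bulk: S_2 = {boundary_renyi2(prod):.3f}   vs  log D         = {np.log(D):.3f}")
print(f"Bell bulk   : S_2 = {boundary_renyi2(bell):.3f}   vs  log D + log 2 = {np.log(D) + np.log(2):.3f}")
```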
Inspired by similar operators found in gauge theory [19], we will add a second layer on top of the standard RTN which imposes gauge invariance. This alters the algebra of operators in the bulk, and as we will show, it introduces a nontrivial contribution to the area operator of the following form: Footnote 4: Initial work in this direction was taken in [18] by generalizing the HaPPY code. \[\Delta\widetilde{A}=\bigoplus_{\alpha}\widetilde{P}^{\alpha}\log d_{\alpha}, \tag{6}\] where roughly speaking \(\alpha\) denotes a superselection sector in the gauge-invariant Hilbert space, \(\widetilde{P}^{\alpha}\) is the projection onto that superselection sector, and \(d_{\alpha}\) is the dimension of \(\alpha\) viewed as an irreducible representation. The important thing to note at the moment is that this operator is not a \(c\)-number and is therefore nontrivial. The structure of this paper is as follows. In Section 2 we will set up our model - a two-layer gauged random tensor network - and introduce the formalism for gauge theory on a graph. In Section 3 we will analyze the Hilbert space of gauge-invariant states and the algebras of gauge-invariant operators for a subregion. In Section 4 we will compute entanglement and Renyi entropies in both the pre-gauged and gauge-invariant algebras, which we will use to derive the new area operator for our model. We conclude with some discussion and future directions. ## 2 The Gauged Random Tensor Network We now construct our model. It has two layers: a top layer consisting of a gauge theory on a graph, and a bottom layer made of a standard random tensor network. We illustrate some examples of this two-layer model in Figure 2. The top layer produces a gauge-invariant state which is then fed into the bottom layer as input. The final output of the model is the boundary state produced by the bottom RTN. We can then analyze properties of the boundary state (such as its entropy) using the usual techniques for the random tensor network. This construction has some nice properties. In particular, one might be worried that if the structure of the RTN is altered, Petz reconstruction of local operators might no longer hold. Here we avoid this potential issue by keeping the tensor network the same, but changing the space of states that can be fed into the network. Given this construction, we would like to understand what set of gauge-invariant states we will be feeding into the bottom layer. The following is based on a non-dynamical version of the standard Kogut-Susskind construction in lattice gauge theory [20].5 As we do not require our graph to be a lattice, i.e. there is not necessarily a regular tiling, we will refrain from calling our top layer a lattice gauge theory. Footnote 5: See also related discussion in [21]. Our starting point is an arbitrary directed graph \(\Lambda=(V,E)\) consisting of vertices \(V=\{v\}\) and edges \(E=\{e\}\). We require the graph to be directed so we have a well-defined orientation on each edge, though we emphasize that the choice of orientation is arbitrary. We impose no additional conditions on the graph. In particular, the graph could have loops, adjacent vertices could be connected by multiple edges, and the graph does not need to be planar. We start with a gauge group, which we choose to be a compact Lie group \(G\). It does not have to be connected, and in particular, we could consider finite groups such as \(\mathbb{Z}_{2}\) if we wish. We assign a (pre-gauged) Hilbert space to each vertex and edge of the graph \(\Lambda\). 
Figure 2: Some examples of our two-layer model, with a gauge theory on a directed graph on the top layer and a random tensor network with dangling boundary legs on the bottom. In these examples, we choose each tensor in the bottom layer to have a bulk input leg which is either a vertex or edge on the graph. The light gray planes in the right example are included for visual clarity.
The Hilbert space \(\mathcal{H}_{e}\) on each edge \(e\) is taken to be \(L^{2}(G)\), the space of square-integrable functions on \(G\). A state \(\ket{\psi}_{e}\) in this \(\mathcal{H}_{e}=L^{2}(G)\) can be written as an integral6 over orthonormal basis elements \(\ket{g}\) labeled by \(g\in G\): Footnote 6: In cases where \(G\) is finite, the integral is understood as a sum: \(\ket{\psi}_{e}=\sum_{g\in G}\frac{1}{|G|}\psi(g)\ket{g}_{e}\), where \(|G|\) is the order of \(G\). \[\ket{\psi}_{e}=\int dg\psi(g)\ket{g}_{e}, \tag{1}\]
Using this, we decompose \(\mathcal{H}_{e}\) as Footnote 8: For representations \(\alpha_{1}\), \(\alpha_{2}\) of \(G\), their external tensor product \(\alpha_{1}\boxtimes\alpha_{2}\) is a representation of \(G\times G\) with an underlying vector space \(\mathcal{H}^{\alpha_{1}}\otimes\mathcal{H}^{\alpha_{2}}\), where \(\mathcal{H}^{\alpha_{1}}\) transforms under the first \(G\) in the \(\alpha_{1}\) representation and \(\mathcal{H}^{\alpha_{2}}\) transforms under the second \(G\) in the \(\alpha_{2}\) representation. Note that this is different from the (usual) tensor product \(\alpha_{1}\otimes\alpha_{2}\) which is a representation of \(G\) (not \(G\times G\)), with an underlying vector space \(\mathcal{H}^{\alpha_{1}}\otimes\mathcal{H}^{\alpha_{2}}\) where \(\mathcal{H}^{\alpha_{1}}\) and \(\mathcal{H}^{\alpha_{2}}\) transform under the same \(G\). \[\mathcal{H}_{e}\cong\bigoplus_{\alpha}\mathcal{H}^{\overline{\alpha}}\otimes \mathcal{H}^{\alpha}\cong\bigoplus_{\alpha}\left(\mathcal{H}^{\alpha}\right) ^{\oplus d_{\alpha}}, \tag{7}\] where the sum runs over all irreducible representations \(\alpha\) of \(G\) and \(\mathcal{H}^{\alpha}\) is a Hilbert space of dimension \(d_{\alpha}\) transforming in the \(\alpha\) representation. It will be convenient to use the representation basis for the remainder of the paper. Now we turn to the (pre-gauged) Hilbert space \(\mathcal{H}_{v}\) on a vertex \(v\). In general, \(\mathcal{H}_{v}\) may be chosen quite arbitrarily (corresponding to specifying any number of matter degrees of freedom including the case of no matter), but it needs to furnish some representation under the group action of \(G\). This representation could be reducible or trivial, but it can always be decomposed into a direct sum of irreducible representations of \(G\). Using this, we may decompose a general \(\mathcal{H}_{v}\) as \[\mathcal{H}_{v}=\bigoplus_{\alpha}\left(\mathcal{H}_{v}^{\alpha}\right)^{ \oplus n_{\alpha}}. \tag{8}\] Here the sum again runs over all distinct irreducible representations \(\alpha\) of \(G\) and \(n_{\alpha}\) is the multiplicity of the representation \(\alpha\) in \(\mathcal{H}_{v}\). Note that \(n_{\alpha}\) could be any nonnegative integer, and in particular, it could be zero (representing the absence of a given representation \(\alpha\) in \(\mathcal{H}_{v}\)). Thus, the simplest choice of \(\mathcal{H}_{v}\) is a trivial Hilbert space with no matter (corresponding to \(n_{\alpha}=0\) for all \(\alpha\)), but in the discussion below we will consider the general case (8) with arbitrary \(n_{\alpha}\). Furthermore, we will allow \(\mathcal{H}_{v}\) to vary from one vertex to another. An orthonormal basis of states for the Hilbert space \(\mathcal{H}_{v}\) can be written as \[\left|\alpha ij\right\rangle_{v},\quad i=1,\cdots,n_{\alpha},\quad j=1,\cdots,d_{\alpha}, \tag{9}\] where the first index \(i\) runs over the multiplicity \(n_{\alpha}\) and the second runs over the dimension \(d_{\alpha}\). The group action of \(G\) on \(\mathcal{H}_{v}\) is given by unitary operators \(U_{v}(g)\), which act on the \(\left|\alpha ij\right\rangle_{v}\) basis as \[U_{v}(g)\left|\alpha ij\right\rangle_{v}=\sum_{k}D_{kj}^{\alpha}(g)\left| \alpha ik\right\rangle_{v}. \tag{10}\] Note that \(U_{v}(g)\) only acts on the second index \(j\) and is analogous to the action of \(R_{e}(g^{-1})\) in (5). Thus, we find an important distinction between the vertex Hilbert space \(\mathcal{H}_{v}\) and the edge Hilbert space \(\mathcal{H}_{e}\). 
To see this, first note that the two Hilbert spaces share some similarities. In particular, \(\mathcal{H}_{e}\) is a direct sum of irreducible representations \(\alpha\) with multiplicity \(d_{\alpha}\) as shown on the right-hand side of (7), and this is the analogue of (8) for \(\mathcal{H}_{v}\). The representation basis (2) of \(\mathcal{H}_{e}\) is similar to the basis (9) of \(\mathcal{H}_{v}\). However, the difference is that an edge has the additional structure of allowing another group action \(L_{e}(g)\) that acts on the first index \(i\) of \(\left|\alpha ij\right\rangle_{e}\), whereas at a vertex the first index \(i\) of \(\left|\alpha ij\right\rangle_{v}\) is a multiplicity index that does not admit a natural group action. The pre-gauged Hilbert space for the entire graph is then \[\mathcal{H}=\left(\bigotimes_{v\in V}\mathcal{H}_{v}\right)\otimes\left(\bigotimes _{e\in E}\mathcal{H}_{e}\right). \tag{11}\] We refer to the algebra of all bounded operators on \(\mathcal{H}\) as \(\mathcal{A}=\mathcal{B}\left(\mathcal{H}\right)\). As \(\mathcal{H}\) completely factorizes over the vertices and edges, so too does the algebra of operators \[\mathcal{A}=\left(\bigotimes_{v\in V}\mathcal{A}_{v}\right)\otimes\left(\bigotimes _{e\in E}\mathcal{A}_{e}\right). \tag{12}\] Using the representation basis (2) of \(\mathcal{H}_{e}\), \(\mathcal{A}_{e}\) can be written as \[\mathcal{A}_{e}=\text{span}\{\left|\alpha ij\right\rangle_{e}\langle\beta k \ell|\}, \tag{13}\] where the indices \(i,j,k,\ell\) run over the irrep dimension. Similarly, using (9) we write \[\mathcal{A}_{v}=\text{span}\{\left|\alpha ij\right\rangle_{v}\langle\beta k \ell|\}, \tag{14}\] where \(i,k\) run over the irrep multiplicity and \(j,\ell\) run over the irrep dimension. For each vertex \(v\), we now define a gauge transformation \(A_{v}(g)\) as the following unitary operator acting on \(v\) and all its associated edges: \[A_{v}(g)\equiv U_{v}(g)\prod_{e\in E^{-}(v)}L_{e}(g)\prod_{e\in E^{+}(v)}R_{e} (g^{-1}), \tag{15}\] where \(E^{-}(v)\) consists of edges associated to \(v\) oriented away from the vertex and \(E^{+}(v)\) consists of edges oriented into the vertex. Physical states are defined to be those invariant under gauge transformations \(A_{v}(g)\) for all \(g\) and \(v\). The easiest way of generating a gauge-invariant state is to average over all gauge transformations acting on a state in \(\mathcal{H}\). The operator that implements this averaging on a vertex \(v\) is the following projector: \[\Pi_{v}=\int dgA_{v}(g). \tag{16}\] \(\Pi_{v}\) obeys the usual properties of a projector such that \(\Pi_{v}^{2}=\Pi_{v}\) and \(\Pi_{v}=\Pi_{v}^{\dagger}\). The gauge-invariant projector on the entire graph is simply the product of individual projectors on all vertices: \[\Pi_{\text{GI}}=\prod_{v\in V}\Pi_{v}. \tag{17}\] It is easy to verify that \([A_{v}(g),A_{v^{\prime}}(g^{\prime})]=0\) for all \(v\), \(v^{\prime}\), \(g\), \(g^{\prime}\), and therefore \([\Pi_{v},\Pi_{v^{\prime}}]=0\). Throughout the paper, we will denote fully gauge-invariant spaces, states, and operators with a tilde; for instance, the gauge-invariant states \(\left|\widetilde{\psi}\right\rangle\) are elements of \(\widetilde{\mathcal{H}}\) defined via \[\widetilde{\mathcal{H}}\equiv\Pi_{\text{GI}}\mathcal{H}. \tag{18}\] The gauge-invariant algebra \(\widetilde{\mathcal{A}}\) is defined as the space of bounded operators on \(\widetilde{\mathcal{H}}\). 
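Before proceeding, a small numerical sanity check of the gauge transformations \(A_{v}(g)\) and the group-averaging projectors defined in (15)-(17) may be helpful. The sketch below (numpy only) uses an illustrative toy choice that is not taken from the paper: gauge group \(G=\mathbb{Z}_{2}\), two vertices joined by one directed edge, and matter at each vertex transforming as trivial \(\oplus\) sign. It verifies that gauge transformations at different vertices commute, that the group averages are projectors, and it counts the gauge-invariant states.

```python
import numpy as np

# Toy check: G = Z_2, two vertices joined by one directed edge e: v1 -> v2.
# Hilbert space ordering: H_{v1} (x) H_{v2} (x) H_e, each factor two-dimensional.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])       # swap of the group basis {|0>, |1>} of L^2(Z_2)

# Vertex matter: trivial (+) sign irrep at each vertex, so U_v(1) = diag(1, -1).
U = {0: I2, 1: np.diag([1., -1.])}
# For Z_2, left multiplication L_e(g) and right multiplication R_e(g^{-1})
# by the nontrivial element are both the swap X.
L = {0: I2, 1: X}
R = {0: I2, 1: X}

# Gauge transformations (15): v1 has e as an outgoing edge, v2 as an incoming one.
A1 = {g: np.kron(U[g], np.kron(I2, L[g])) for g in (0, 1)}
A2 = {g: np.kron(I2, np.kron(U[g], R[g])) for g in (0, 1)}

# Gauge transformations at different vertices commute, as stated in the text.
assert all(np.allclose(A1[g] @ A2[h], A2[h] @ A1[g]) for g in (0, 1) for h in (0, 1))

# Group-averaging projectors (16) and the full gauge-invariant projector (17).
P1 = sum(A1.values()) / 2
P2 = sum(A2.values()) / 2
P_GI = P1 @ P2

print("P1, P2 are projectors:", np.allclose(P1 @ P1, P1), np.allclose(P2 @ P2, P2))
print("P_GI is a projector  :", np.allclose(P_GI @ P_GI, P_GI))
print("dim of gauge-invariant subspace:", int(round(np.trace(P_GI))))
```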
\(\widetilde{\mathcal{A}}\) can alternatively be represented by conjugation of the pre-gauged algebra \(\mathcal{A}\) with the projector \(\Pi_{\text{GI}}\): \[\widetilde{\mathcal{A}}=\Pi_{\text{GI}}\mathcal{A}\Pi_{\text{GI}}. \tag{19}\] We should comment on the interpretation of the operators in this gauge-invariant algebra. Every operator \(\widetilde{\mathcal{O}}\in\widetilde{\mathcal{A}}\) can be extended to a pre-gauged operator \(\mathcal{O}\in\mathcal{A}\) which acts identically on gauge-invariant states. There is generally more than one extension to \(\mathcal{A}\), and to choose a unique extension one must specify the action of the pre-gauged operator on the orthogonal complement of \(\widetilde{\mathcal{H}}\). We make the natural choice that the extension \(\mathcal{O}\) should annihilate the orthogonal complement. Moreover, for notational simplicity, we identify every \(\widetilde{\mathcal{O}}\in\widetilde{\mathcal{A}}\) with its natural extension \(\mathcal{O}\in\mathcal{A}\) (which annihilates the orthogonal complement), as we have done in (19). The reason for this natural extension will become clearer in later sections. We now feed any gauge-invariant state \(\left|\widetilde{\psi}\right\rangle\) as the bulk input into the RTN on the bottom layer, in a manner illustrated by Figure 2. In particular, the bulk dangling legs of the RTN should match and connect to the edges and vertices of the graph \(\Lambda\) on the top layer, for \(\left|\widetilde{\psi}\right\rangle\) lives on these edges and vertices. In other words, each edge or vertex of \(\Lambda\) is fed into a bulk dangling leg of the RTN.9 Footnote 9: In principle, the RTN could also take any pre-gauged state as the bulk input, but we choose to feed only gauge-invariant states because, as we will see, this restriction leads to a nontrivial area operator. In order to utilize the full machinery of the original RTN, we would like the Hilbert spaces associated with the tensors on the bottom layer to be finite-dimensional (as is the case for the original RTN). When \(G\) is an infinite group, \(\mathcal{H}_{e}=L^{2}(G)\) is infinite-dimensional and there are an infinite number of irreducible representations to sum over, so in order to avoid a tensor in the bottom layer having an infinite-dimensional leg, we impose a cutoff on our edge and vertex Hilbert spaces. This can take the form of, e.g., a cutoff in the sums in (7) and (8). Therefore, we are only feeding in states that live in a finite-dimensional subspace of \(\widetilde{\mathcal{H}}\). This does not affect the discussion in the next section of the gauge-invariant algebra; the cutoff is only relevant when we compute entanglement measures in Section 4. ## 3 Deriving the Gauge-Invariant Algebra Now that we have defined our gauge-invariant states, we would like to understand the structure of the algebra of gauge-invariant operators. Our overarching goal is to write down the gauge-invariant subalgebra for a subregion \(r\) of the top layer which we will later use to derive an FLM formula for the gauged RTN. ### The structure of the gauge-invariant Hilbert space We now study the decomposition of \(\widetilde{\mathcal{H}}\) when our graph \(\Lambda\) is divided into a subregion and its complement. We define a subregion \(r\) of \(\Lambda\) to be an arbitrary subset of vertices and edges (without further restrictions). We call the complement subregion \(\overline{r}\).
In order to work out a useful basis for gauge-invariant states, it is convenient to divide the set \(V\) of all vertices into three types: those strictly in \(r\) (meaning that the vertex and its associated edges are all in \(r\)), those strictly in \(\overline{r}\), and vertices "on the cut" (meaning that the vertex and its associated edges are partly in \(r\) and partly in \(\overline{r}\)). We call these sets \(V_{r}\), \(V_{\overline{r}}\), and \(V_{c}\equiv V/\left(V_{r}\cup V_{\overline{r}}\right)\), respectively. Consequently, the gauge-invariant projector can be decomposed in the following way: \[\Pi_{\text{GI}}=\Pi_{V_{r}}\Pi_{V_{c}}\Pi_{V_{\overline{r}}}, \tag{10}\] where \(\Pi_{V_{i}}\) is defined as the product of individual projections \(\Pi_{v}\) over all vertices \(v\in V_{i}\), for \(i=r,c,\overline{r}\). First, let us discuss a partial gauging of the pre-gauged Hilbert space. Using the tensor decomposition of \(\mathcal{H}=\mathcal{H}_{r}\otimes\mathcal{H}_{\overline{r}}\), we can write \(\widetilde{\mathcal{H}}=\Pi_{\text{GI}}\mathcal{H}\) as \[\widetilde{\mathcal{H}} =\Pi_{V_{r}}\Pi_{V_{c}}\Pi_{V_{\overline{r}}}\left(\mathcal{H}_{ r}\otimes\mathcal{H}_{\overline{r}}\right)\] \[=\Pi_{V_{c}}\left(\left(\Pi_{V_{r}}\mathcal{H}_{r}\right)\otimes \left(\Pi_{V_{\overline{r}}}\mathcal{H}_{\overline{r}}\right)\right). \tag{11}\] We define the two terms in the parentheses as \[\hat{\mathcal{H}}_{r}\equiv\Pi_{V_{r}}\mathcal{H}_{r},\quad\hat{\mathcal{H}}_ {\overline{r}}\equiv\Pi_{V_{\overline{r}}}\mathcal{H}_{\overline{r}}. \tag{12}\] These are "partially gauged" Hilbert spaces, in the sense that states in \(\hat{\mathcal{H}}_{r}\) (\(\hat{\mathcal{H}}_{\overline{r}}\)) are invariant under gauge transformations associated to vertices in \(V_{r}\) (\(V_{\overline{r}}\)), but not so under gauge transformations on the cut. We denote the partially gauged Hilbert space on the full graph as \[\hat{\mathcal{H}}=\hat{\mathcal{H}}_{r}\otimes\hat{\mathcal{H}}_{\overline{r}}. \tag{13}\] As \(\hat{\mathcal{H}}\) tensor factorizes, the algebra of operators on \(\hat{\mathcal{H}}\) also factorizes as \[\hat{\mathcal{A}}=\hat{\mathcal{A}}_{r}\otimes\hat{\mathcal{A}}_{\overline{r}}. \tag{14}\] Now that we have a partially gauged Hilbert space \(\hat{\cal H}\), it remains to impose gauge invariance "on the cut" and obtain the fully gauged Hilbert space \(\widetilde{\cal H}=\Pi_{V_{c}}\hat{\cal H}\). The gauge transformation (15) associated to each vertex \(v_{i}\in V_{c}\) can be decomposed into unitary operators in \(r\) and \(\overline{r}\): \[A_{v_{i}}(g_{i})=A_{v_{i},r}(g_{i})A_{v_{i},\overline{r}}(g_{i}). \tag{16}\] Let \(n\equiv|V_{c}|\) be the number of vertices on the cut. The gauge-invariant projector on the cut \(\Pi_{V_{c}}\) acts by integrating over the gauge transformations associated to the \(n\) vertices in \(V_{c}\): \[\Pi_{V_{c}} =\int dg_{1}\cdots dg_{n}A_{v_{1}}(g_{1})\cdots A_{v_{n}}(g_{n})\] \[=\int dg_{1}\cdots dg_{n}A_{v_{1},r}(g_{1})\cdots A_{v_{n},r}(g_{ n})A_{v_{1},\overline{r}}(g_{1})\cdots A_{v_{n},\overline{r}}(g_{n})\] \[\equiv\int dgA_{r}(g)A_{\overline{r}}(g), \tag{17}\] where we have defined \(A_{r}(g)=\prod_{i=1}^{n}A_{v_{i},r}(g_{i})\) (and similarly for \(A_{\overline{r}}(g)\)), \(g=(g_{1},\cdots,g_{n})\) is a element of \(G^{n}\) (the direct product of \(n\) copies of \(G\) on the cut), and \(dg\) is the Haar measure on \(G^{n}\). Thus \(A_{r}(g)\) is a \(G^{n}\) action on \(\hat{\cal H}_{r}\), and \(\hat{\cal H}_{r}\) can be decomposed into irreps of \(G^{n}\). 
We decompose \(\hat{\cal H}_{r}\) into the following way: \[\hat{\cal H}_{r}\cong\bigoplus_{\alpha,i}\hat{\cal H}_{r}^{\alpha i}, \tag{18}\] where \(\alpha\) as an irreducible representation of \(G^{n}\) can also be thought of as the external tensor product of \(n\) irreps of \(G\), i.e., \(\alpha\) denotes the external tensor product \(\alpha_{1}\boxtimes\alpha_{2}\boxtimes\cdots\boxtimes\alpha_{n}\). Thus, we will sometimes write \(\alpha\) as a tuple of \(G\) irreps \((\alpha_{1},\alpha_{2},\cdots,\alpha_{n})\). The index \(i=1,\cdots,n_{\alpha}\) denotes the multiplicity of the \(\alpha\) irrep. The sum ranges over all \(G^{n}\) irreps but some irreps may appear with zero multiplicity, as in the single vertex Hilbert space (8). From the decomposition (18), we write an orthonormal basis for \(\hat{\cal H}_{r}\) as \(\left\{\left|\alpha ik\right\rangle_{r}\right\}\), where again the first index \(i=1,\cdots,n_{\alpha}\) runs over the irrep multiplicity and the second index \(k=1,\cdots,d_{\alpha}\) labels an orthonormal basis for each \(\hat{\cal H}_{r}^{\alpha i}\). Similarly, we write an orthonormal basis for \(\hat{\cal H}_{\overline{r}}\) as \(\left\{\left|\overline{\beta}j\ell\right\rangle_{\overline{r}}\right\}\), where \(j=1,\cdots,\overline{n}_{\overline{\beta}}\), and \(\overline{n}_{\overline{\beta}}\) is the multiplicity of the \(\overline{\beta}\) irrep on \(\overline{r}\). Explicitly, \(A_{r}(g)\) (\(A_{\overline{r}}(g)\)) acts on the basis states of \(\hat{\cal H}_{r}\) (\(\hat{\cal H}_{\overline{r}}\)) via \[A_{r}(g)\left|\alpha ik\right\rangle_{r} =\sum_{k^{\prime}}D_{k^{\prime}k}^{\alpha}(g)\left|\alpha ik^{ \prime}\right\rangle_{r},\] \[A_{\overline{r}}(g)\left|\overline{\beta}j\ell\right\rangle_{ \overline{r}} =\sum_{\ell^{\prime}}D_{\ell^{\prime}\ell}^{\overline{\beta}}(g) \left|\overline{\beta}j\ell^{\prime}\right\rangle_{\overline{r}}. \tag{19}\] Combining the basis for \(\hat{\mathcal{H}}_{r}\) and for \(\hat{\mathcal{H}}_{\overline{r}}\), we write an orthonormal basis for \(\hat{\mathcal{H}}\) as \(\left\{\left|\alpha ik\right\rangle_{r}\left|\overline{\beta}j\ell\right\rangle_ {\overline{r}}\right\}\). It is worth noting that the multiplicities \(\overline{n}_{\overline{\alpha}}\) on \(\overline{r}\) are generally independent from the multiplicities \(n_{\overline{\alpha}}\) on \(r\); in particular, \(\overline{n}_{\overline{\alpha}}\) could vanish while \(n_{\overline{\alpha}}\) is nonzero, and vice versa. In a sense, we have done as much gauging as we can while keeping the factorization of the Hilbert space between \(r\) and \(\overline{r}\). \(\hat{\mathcal{H}}\) is similar to what is often called the extended Hilbert space [22, 23, 24, 19], which is a choice of Hilbert space into which one can embed gauge-invariant states such that the extended Hilbert space factorizes across the cut. Here we arrive at a similar prescription by restricting from a larger Hilbert space \(\mathcal{H}\). Now we will write a basis of states for the fully gauge-invariant Hilbert space \(\widetilde{\mathcal{H}}\). **Lemma 1**.: The fully gauge-invariant Hilbert space \(\widetilde{\mathcal{H}}=\Pi_{V_{c}}\left(\hat{\mathcal{H}}_{r}\otimes\hat{ \mathcal{H}}_{\overline{r}}\right)\) is given by \[\widetilde{\mathcal{H}}=\left\{\sum_{\alpha ijk}\widetilde{\psi}_{\alpha ij} \left|\alpha ik\right\rangle_{r}\left|\overline{\alpha}jk\right\rangle_{ \overline{r}}:\widetilde{\psi}_{\alpha ij}\in\mathbb{C}\right\}. 
\tag{3.10}\] Proof.: Since we already have a basis for the partially gauged Hilbert space, it suffices to demonstrate the action of \(\Pi_{V_{c}}\) on these basis states, which is given by \[\Pi_{V_{c}}\left|\alpha ik\right\rangle_{r}\left|\overline{\beta}j\ell\right\rangle _{\overline{r}}=\sum_{k^{\prime}\ell^{\prime}}\int dgD^{\alpha}_{k^{\prime}k }(g)D^{\overline{\beta}}_{\ell^{\prime}\ell}(g)\left|\alpha ik^{\prime} \right\rangle_{r}\left|\overline{\beta}j\ell^{\prime}\right\rangle_{ \overline{r}}. \tag{3.11}\] We recall the Schur orthogonality relation for compact groups: \[\int dgD^{\alpha}_{k^{\prime}k}(g)D^{\overline{\beta}}_{\ell^{\prime}\ell}(g )=\frac{\delta_{\alpha\beta}\delta_{k^{\prime}\ell^{\prime}}\delta_{k\ell}}{ d_{\alpha}}, \tag{3.12}\] so that the fully gauge-invariant basis states are \[\Pi_{V_{c}}\left|\alpha ik\right\rangle_{r}\left|\overline{\beta}j\ell \right\rangle_{\overline{r}}=\frac{1}{d_{\alpha}}\delta_{\alpha\beta}\delta_{ k\ell}\sum_{k^{\prime}\ell^{\prime}}\delta_{k^{\prime}\ell^{\prime}}\left| \alpha ik^{\prime}\right\rangle_{r}\left|\overline{\beta}j\ell^{\prime} \right\rangle_{\overline{r}}=\frac{1}{d_{\alpha}}\delta_{\alpha\beta}\delta_{ k\ell}\sum_{k^{\prime}}\left|\alpha ik^{\prime}\right\rangle_{r}\left| \overline{\beta}jk^{\prime}\right\rangle_{\overline{r}}. \tag{3.13}\] Choosing \(\alpha=\beta\) and \(k=\ell\) gives the desired form (3.10). _Remark 1_.: (3.10) immediately implies a natural Hilbert space isomorphism \[\widetilde{\mathcal{H}}\cong\bigoplus_{\alpha}\widetilde{\mathcal{H}}^{ \alpha}_{r}\otimes\widetilde{\mathcal{H}}^{\overline{\alpha}}_{\overline{r}}. \tag{3.14}\] Here \(\widetilde{\mathcal{H}}^{\alpha}_{r}\) denotes a Hilbert space of dimension \(n_{\alpha}\) with orthonormal basis states \(\left|\alpha i\right\rangle_{r}\) transforming in the \(\alpha\) representation of \(G^{n}\), and \(\widetilde{\mathcal{H}}^{\overline{\alpha}}_{\overline{r}}\) similarly denotes a Hilbert space of dimension \(\overline{n}_{\overline{\alpha}}\) with orthonormal basis states \(\left|\overline{\alpha}j\right\rangle_{\overline{r}}\) transforming in the \(\overline{\alpha}\) representation. Note that although irrep labels such as \(\alpha\) appear in the basis states, they are fixed within each Hilbert space \(\widetilde{\mathcal{H}}_{r}^{\alpha}\) or \(\widetilde{\mathcal{H}}_{\overline{r}}^{\overline{\alpha}}\). More explicitly, the natural isomorphism (3.14) maps an arbitrary state of (3.10) in the following way: \[\left|\widetilde{\psi}\right\rangle=\sum_{\alpha ijk}\widetilde{\psi}_{\alpha ij }\left|\alpha ik\right\rangle_{r}\left|\overline{\alpha}jk\right\rangle_{ \overline{r}}\quad\rightarrow\quad\sum_{\alpha ij}\sqrt{d_{\alpha}} \widetilde{\psi}_{\alpha ij}\left|\alpha i\right\rangle_{r}\left|\overline{ \alpha}j\right\rangle_{\overline{r}}. \tag{3.15}\] The \(\sqrt{d_{\alpha}}\) is a crucial factor which ensures that the isomorphism preserves the inner product. Given this decomposition, our next goal will be to define an algebra of gauge-invariant operators on \(r\), which we will call \(\widetilde{\mathcal{A}}_{r}\). Given the lack of factorization of \(\widetilde{\mathcal{H}}\) as indicated by (3.14), we cannot easily write \(\widetilde{\mathcal{A}}_{r}\) as \(\mathcal{B}(\widetilde{\mathcal{H}}_{r})\) for some putative Hilbert space \(\widetilde{\mathcal{H}}_{r}\). Rather, we will use the known algebra of operators on \(\mathcal{H}_{r}\) and \(\hat{\mathcal{H}}_{r}\) to define \(\widetilde{\mathcal{A}}_{r}\). 
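The two ingredients used in the proof of Lemma 1, the Schur orthogonality relation (3.12) and the resulting counting of gauge-invariant states, can be checked numerically for a nonabelian example. The sketch below (numpy only) takes \(G=S_{3}\) on a single cut vertex; the multiplicities assigned to \(r\) and \(\overline{r}\) are arbitrary illustrative choices, not the paper's. It verifies (3.12) for all pairs of irreps and confirms that \(\Pi_{V_{c}}\) is a projector of rank \(\sum_{\alpha}n_{\alpha}\overline{n}_{\overline{\alpha}}\), as implied by (3.10) and (3.14).

```python
import numpy as np
from itertools import permutations

# The gauge group on a single cut vertex (n = 1): G = S_3, which has a 2d irrep.
group = [tuple(p) for p in permutations(range(3))]
order = len(group)                                    # |G| = 6

def perm_matrix(p):
    P = np.zeros((3, 3))
    for i, j in enumerate(p):
        P[j, i] = 1.0
    return P

def parity(p):
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

# 2d "standard" irrep: restrict the permutation action to the sum-zero plane.
B = np.linalg.qr(np.array([[1., 1.], [-1., 0.], [0., -1.]]))[0]
irreps = {
    "triv": {g: np.eye(1) for g in group},
    "sign": {g: np.array([[float(parity(g))]]) for g in group},
    "std":  {g: B.T @ perm_matrix(g) @ B for g in group},
}
dims = {a: irreps[a][group[0]].shape[0] for a in irreps}

# Schur orthogonality (3.12): (1/|G|) sum_g D^a_{ij}(g) D^b_{kl}(g)* = delta's / d_a.
for a in irreps:
    for b in irreps:
        lhs = sum(np.einsum('ij,kl->ikjl', irreps[a][g], irreps[b][g].conj())
                  for g in group) / order
        rhs = (np.einsum('ik,jl->ikjl', np.eye(dims[a]), np.eye(dims[a])) / dims[a]
               if a == b else np.zeros_like(lhs))
        assert np.allclose(lhs, rhs)

# Lemma 1 counting: Pi_{V_c} = (1/|G|) sum_g A_r(g) (x) A_rbar(g) with illustrative
# irrep multiplicities on r and rbar (they need not agree, and may vanish).
n_r    = {"triv": 1, "sign": 1, "std": 1}
n_rbar = {"triv": 1, "sign": 0, "std": 2}

def block_rep(g, mult, conjugate=False):
    """Direct sum of mult[a] copies of D^a(g) (conjugated for the rbar factor)."""
    blocks = []
    for a in irreps:
        D = irreps[a][g].conj() if conjugate else irreps[a][g]
        blocks.extend([D] * mult[a])
    out = np.zeros((sum(b.shape[0] for b in blocks),) * 2)
    i = 0
    for b in blocks:
        out[i:i + b.shape[0], i:i + b.shape[0]] = b
        i += b.shape[0]
    return out

Pi = sum(np.kron(block_rep(g, n_r), block_rep(g, n_rbar, conjugate=True))
         for g in group) / order
print("Pi_{V_c} is a projector:", np.allclose(Pi @ Pi, Pi))
print("dim of gauge-invariant space:", int(round(np.trace(Pi))),
      " expected sum_a n_a * nbar_a =", sum(n_r[a] * n_rbar[a] for a in irreps))
```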
### The gauge-invariant subregion algebra It is tempting to define the algebra of gauge-invariant operators in a subregion \(r\) via restriction of the pre-gauged algebra in that region \[\widetilde{\mathcal{A}}_{r}=\Pi_{\text{GI}}\mathcal{A}_{r}\Pi_{\text{GI}}, \tag{3.16}\] similar to (2.19). There is a second possible description of the gauge-invariant algebra, which is that \(\widetilde{\mathcal{A}}_{r}\) consists of the set of operators \(\{\widetilde{\mathcal{O}}_{r}=\mathcal{O}_{r}\Pi_{\text{GI}}\}\) for all operators \(\mathcal{O}_{r}\in\mathcal{A}_{r}\) which commute with the gauge-invariant projector: \([\mathcal{O}_{r},\Pi_{\text{GI}}]=0\). We will call this algebra \(\widetilde{\mathcal{A}}_{r}^{(1)}\), and the algebra (3.16) defined by conjugation by the gauge-invariant projector \(\widetilde{\mathcal{A}}_{r}^{(2)}\). At first blush it is only obvious that \(\widetilde{\mathcal{A}}_{r}^{(1)}\) is a subset of \(\widetilde{\mathcal{A}}_{r}^{(2)}\), as \[\mathcal{O}_{r}\Pi_{\text{GI}}=\mathcal{O}_{r}\Pi_{\text{GI}}^{2}=\Pi_{\text{ GI}}\mathcal{O}_{r}\Pi_{\text{GI}}\Rightarrow\widetilde{\mathcal{A}}_{r}^{(1)} \subseteq\widetilde{\mathcal{A}}_{r}^{(2)}, \tag{3.17}\] but it is not obvious the two definitions are equivalent. Here we aim to show that. **Lemma 2**.: \(\widetilde{\mathcal{A}}_{r}^{(1)}=\widetilde{\mathcal{A}}_{r}^{(2)}\)_._ Proof.: We again use the group action on the cut \(A(g)=A_{r}(g)A_{\overline{r}}(g)\) and the gauge-invariant projector on the cut \(\Pi_{V_{c}}\) which integrates over the group action: \[\Pi_{V_{c}}=\int dgA(g). \tag{3.18}\] We define an element of \(\widehat{\mathcal{A}}_{r}^{(2)}\) by acting on an arbitrary pre-gauged operator \(\mathcal{O}_{r}\in\mathcal{A}_{r}\) via \[\Pi_{\rm GI}\mathcal{O}_{r}\Pi_{\rm GI} =\Pi_{V_{\overline{r}}}\left(\Pi_{V_{c}}\Pi_{V_{r}}\mathcal{O}_{r} \Pi_{V_{r}}\Pi_{V_{c}}\right)\Pi_{V_{\overline{r}}}\] \[=\left(\Pi_{V_{c}}\Pi_{V_{r}}\mathcal{O}_{r}\Pi_{V_{r}}\Pi_{V_{c} }\right)\Pi_{V_{\overline{r}}}\] \[\equiv\left(\Pi_{V_{c}}\hat{\mathcal{O}}_{r}\Pi_{V_{c}}\right)\Pi _{V_{\overline{r}}} \tag{3.19}\] where \(\hat{\mathcal{O}}_{r}\equiv\Pi_{V_{r}}\mathcal{O}_{r}\Pi_{V_{r}}\in\hat{ \mathcal{A}}_{r}\) is an operator on the partially gauged Hilbert space \(\hat{\mathcal{H}}_{r}\). Conjugation via the gauge-invariant projector on the cut yields \[\Pi_{V_{c}}\hat{\mathcal{O}}_{r}\Pi_{V_{c}}=\int dgdg^{\prime}A(g)\hat{ \mathcal{O}}_{r}A(g^{\prime}). \tag{3.20}\] Using the right-invariance of the Haar measure, we can shift \(g\to g(g^{\prime})^{-1}\) to obtain \[\Pi_{V_{c}}\hat{\mathcal{O}}_{r}\Pi_{V_{c}} =\int dgA(g)\int dg^{\prime}A((g^{\prime})^{-1})\hat{\mathcal{O}} _{r}A(g^{\prime}) \tag{3.21}\] \[=\Pi_{V_{c}}\int dg^{\prime}A((g^{\prime})^{-1})\hat{\mathcal{O}} _{r}A(g^{\prime})\equiv\Pi_{V_{c}}\hat{\mathcal{O}}_{r}^{\prime}, \tag{3.22}\] where \(\hat{\mathcal{O}}_{r}^{\prime}\) is defined by the integral over \(g^{\prime}\). We could equivalently send \(g^{\prime}\to g^{-1}g^{\prime}\) to obtain \[\Pi_{V_{c}}\hat{\mathcal{O}}_{r}\Pi_{V_{c}} =\int dgA(g)\hat{\mathcal{O}}_{r}A(g^{-1})\Pi_{V_{c}}\] \[=\int dgA(g^{-1})\hat{\mathcal{O}}_{r}A(g)\Pi_{V_{c}}=\hat{ \mathcal{O}}_{r}^{\prime}\Pi_{V_{c}} \tag{3.23}\] where we use the fact that the Haar measure is invariant under inversion \(dg\to d(g^{-1})\). This shows \(\hat{\mathcal{O}}_{r}^{\prime}\Pi_{V_{c}}=\Pi_{V_{c}}\hat{\mathcal{O}}_{r}^{\prime}\), so \(\hat{\mathcal{O}}_{r}^{\prime}\) commutes with the gauge-invariant projector on the cut. 
By construction, \(\hat{\mathcal{O}}_{r}^{\prime}\) also commutes with \(\Pi_{V_{r}}\) and \(\Pi_{V_{\overline{r}}}\), so it commutes with \(\Pi_{\rm GI}\). Now we show that \(\hat{\mathcal{O}}_{r}^{\prime}\) is an element of \(\mathcal{A}_{r}\), which is not obvious as \(A(g)\) on the cut acts on both \(r\) and \(\overline{r}\). However, we can write \[\hat{\mathcal{O}}_{r}^{\prime} =\int dgA(g^{-1})\hat{\mathcal{O}}_{r}A(g)=\int dgA_{r}(g^{-1})A_{ \overline{r}}(g^{-1})\hat{\mathcal{O}}_{r}A_{\overline{r}}(g)A_{r}(g)\] \[=\int dgA_{r}(g^{-1})\hat{\mathcal{O}}_{r}A_{r}(g), \tag{3.24}\] as \(\hat{\mathcal{O}}_{r}\) commutes with operators in \(\overline{r}\). Thus \(\hat{\mathcal{O}}_{r}^{\prime}\) is in \(\mathcal{A}_{r}\). Combining the above, we can write any element of \(\widehat{\mathcal{A}}_{r}^{(2)}\) as \[\Pi_{\rm GI}\mathcal{O}_{r}\Pi_{\rm GI}=\hat{\mathcal{O}}_{r}^{\prime}\Pi_{V_ {c}}\Pi_{V_{\overline{r}}}=\hat{\mathcal{O}}_{r}^{\prime}\Pi_{V_{r}}\Pi_{V_{c} }\Pi_{V_{\overline{r}}}=\hat{\mathcal{O}}_{r}^{\prime}\Pi_{\rm GI}, \tag{3.25}\] which belong to \(\widetilde{\mathcal{A}}_{r}^{(1)}\) as \(\hat{\mathcal{O}}_{r}^{\prime}\) is an operator in \(\mathcal{A}_{r}\) that commutes with \(\Pi_{\rm GI}\). Therefore, \(\widetilde{\mathcal{A}}_{r}^{(2)}\subseteq\widetilde{\mathcal{A}}_{r}^{(1)}\). Morevoer, as we argued earlier, we have \(\widetilde{\mathcal{A}}_{r}^{(1)}\subseteq\widetilde{\mathcal{A}}_{r}^{(2)}\). Thus, we have shown \(\widetilde{\mathcal{A}}_{r}^{(1)}=\widetilde{\mathcal{A}}_{r}^{(2)}\). _Remark 2_.: It will be more convenient to use \(\widetilde{\mathcal{A}}_{r}^{(1)}\) as our definition of \(\widetilde{\mathcal{A}}_{r}\) in later discussions. We now rewrite it by introducing the following notation. For the rest of the paper, we will denote the subset of operators in an algebra that commute with the gauge-invariant projector with a superscript \(\Pi\); for example, the algebra \(\mathcal{A}^{\Pi}\) is defined by \[\mathcal{A}^{\Pi}\equiv\{\mathcal{O}\in\mathcal{A}:[\mathcal{O},\Pi_{\rm GI}] =0\}. \tag{3.26}\] It is clear that this subset is itself a von Neumann algebra, as it contains the identity, which necessarily commutes with any projector, and is closed under addition, multiplication, and involution10. Similarly, we define the subalgebra \(\mathcal{A}_{r}^{\Pi}\) as Footnote 10: Closure of \(\mathcal{A}^{\Pi}\) under addition and multiplication is obvious, and closure under involution follows from the projector being Hermitian. \[\mathcal{A}_{r}^{\Pi}=\{\mathcal{O}_{r}\in\mathcal{A}_{r}:[\mathcal{O}_{r}, \Pi_{\rm GI}]=0\}, \tag{3.27}\] and define \(\hat{\mathcal{A}}_{r}^{\Pi}\) as \[\hat{\mathcal{A}}_{r}^{\Pi}=\{\hat{\mathcal{O}}_{r}\in\hat{\mathcal{A}}_{r}:[ \hat{\mathcal{O}}_{r},\Pi_{\rm GI}]=0\}. \tag{3.28}\] So far, we have shown \[\widetilde{\mathcal{A}}_{r}=\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}. \tag{3.29}\] **Lemma 3**.: \(\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}=\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\)_._ Proof.: It is clear that \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\subseteq\mathcal{A}_{r}^{\Pi}\Pi_{ \rm GI}\), so we only need to show the opposite inclusion. Consider any operator \(\mathcal{O}_{r}\in\mathcal{A}_{r}^{\Pi}\). 
As this operator commutes with \(\Pi_{\rm GI}\), we can use the decomposition of the gauge-invariant projector to write \(\mathcal{O}_{r}\Pi_{\rm GI}\) as \[\mathcal{O}_{r}\Pi_{\rm GI}=\Pi_{\rm GI}\mathcal{O}_{r}\Pi_{\rm GI}=\Pi_{\rm GI }\left(\Pi_{V_{r}}\mathcal{O}_{r}\Pi_{V_{r}}\right)\Pi_{\rm GI}=\left(\Pi_{V_ {r}}\mathcal{O}_{r}\Pi_{V_{r}}\right)\Pi_{\rm GI}. \tag{3.30}\] Note that \(\Pi_{V_{r}}\mathcal{O}_{r}\Pi_{V_{r}}\) is an operator on \(\hat{\mathcal{H}}_{r}\) that commutes with \(\Pi_{\rm GI}\), so it belongs to \(\hat{\mathcal{A}}_{r}^{\Pi}\). Thus, every element of \(\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}\) is an element of \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\). This shows the inclusion \(\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}\subseteq\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{ \rm GI}\), from which we conclude \(\mathcal{A}_{r}^{\Pi}\Pi_{\rm GI}=\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\). **Corollary 4**.: \(\widetilde{\mathcal{A}}_{r}=\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\)_._ Using the corollary above, we will now construct a generic operator in \(\widetilde{\mathcal{A}}_{r}\). **Lemma 5**.: \(\widetilde{\mathcal{A}}_{r}\) can be written in the following two forms: \[\widetilde{\mathcal{A}}_{r} =\left\{\hat{\mathcal{O}}_{r}\Pi_{\text{GI}}:\hat{\mathcal{O}}_{r} =\sum_{\alpha ijk}\hat{\mathcal{O}}_{\alpha ij}\left|\alpha ik\right\rangle_{r }\left\langle\alpha jk\right|\otimes\mathbb{1}_{\overline{r}},\,\hat{\mathcal{ O}}_{\alpha ij}\in\mathbb{C}\right\} \tag{3.31}\] \[=\left\{\widetilde{\mathcal{O}}_{r}=\sum_{\alpha ii^{\prime}jk \ell}\widetilde{\mathcal{O}}_{\alpha ij}\left|\alpha ik\right\rangle_{r} \left\langle\alpha j\ell\right|\otimes\left|\overline{\alpha}i^{\prime}k \right\rangle_{\overline{r}}\left\langle\overline{\alpha}i^{\prime}\ell \right|:\widetilde{\mathcal{O}}_{\alpha ij}\in\mathbb{C}\right\}, \tag{3.32}\] with \(\widetilde{\mathcal{O}}_{r}\) in (3.32) identified with \(\hat{\mathcal{O}}_{r}\Pi_{\text{GI}}\) in (3.31) under \(\widetilde{\mathcal{O}}_{\alpha ij}=\hat{\mathcal{O}}_{\alpha ij}/d_{\alpha}\).11 Footnote 11: In a slight abuse of notation, we have referred to the matrix elements of an operator with the same symbol as the operator itself, but with irrep labels and indices such as \(\alpha\), \(i\), and \(j\). We could have referred to \(\hat{\mathcal{O}}_{\alpha ij}\) as \(\hat{\mathcal{O}}_{r,\alpha ij}\) in (3.31), but for simplicity, we will use the former. Proof.: We show this by noting \(\widetilde{\mathcal{A}}_{r}=\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\text{GI}}\) and constructing a generic operator therein. Recall that \(\left\{\left|\alpha ik\right\rangle_{r}\right\}\) is a basis for \(\hat{\mathcal{H}}_{r}\), so an operator \(\hat{\mathcal{O}}_{r}\in\hat{\mathcal{A}}_{r}\) (not necessarily gauge-invariant) can be written as \[\hat{\mathcal{O}}_{r}=\sum_{\alpha\beta ijk\ell}\hat{\mathcal{O}}_{\alpha \beta ijk\ell}\left|\alpha ik\right\rangle_{r}\left\langle\beta j\ell\right| \otimes\mathbb{1}_{\overline{r}} \tag{3.33}\] with some \(\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\in\mathbb{C}\). Now we require \(\hat{\mathcal{O}}_{r}\in\hat{\mathcal{A}}_{r}^{\Pi}\), so we will try to impose \(\hat{\mathcal{O}}_{r}\Pi_{\text{GI}}=\Pi_{\text{GI}}\hat{\mathcal{O}}_{r}\). 
We find \[\hat{\mathcal{O}}_{r}\Pi_{\text{GI}} =\hat{\mathcal{O}}_{r}\Pi_{V_{r}}\Pi_{V_{\overline{r}}}\Pi_{V_{c}}\] \[=\left(\sum_{\alpha\beta ijk\ell}\hat{\mathcal{O}}_{\alpha\beta ijk \ell}\left|\alpha ik\right\rangle_{r}\left\langle\beta j\ell\right|\otimes \mathbb{1}_{\overline{r}}\right)\Pi_{V_{\overline{r}}}\Pi_{V_{c}}\] \[=\left(\sum_{\alpha\beta\gamma ijk\ell i^{\prime}k^{\prime}} \hat{\mathcal{O}}_{\alpha\beta ijk\ell}\left|\alpha ik\right\rangle_{r}\left \langle\beta j\ell\right|\otimes\left|\overline{\gamma}i^{\prime}k^{\prime} \right\rangle_{\overline{r}}\left\langle\overline{\gamma}i^{\prime}k^{\prime} \right|\right)\Pi_{V_{c}}\] \[=\sum_{\alpha\beta\gamma ijk\ell i^{\prime}k^{\prime}}\hat{ \mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\beta}}\delta_{\beta\gamma} \delta_{\ell k^{\prime}}\left|\alpha ik\right\rangle_{r}\left\langle\beta j\ell ^{\prime}\right|\otimes\left|\overline{\gamma}i^{\prime}k^{\prime}\right\rangle _{\overline{r}}\left\langle\overline{\gamma}i^{\prime}\ell^{\prime}\right|\] \[=\sum_{\alpha\beta ijk\ell i^{\prime}\ell^{\prime}}\hat{ \mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\beta}}\left|\alpha ik\right\rangle _{r}\left\langle\beta j\ell^{\prime}\right|\otimes\left|\overline{\beta}i^{ \prime}\ell\right\rangle_{\overline{r}}\left\langle\overline{\beta}i^{\prime} \ell^{\prime}\right|, \tag{3.34}\] where we have used the basis of \(\hat{\mathcal{H}}_{\overline{r}}\) in going to the third line and used (3.13) in going to the fourth line. We can apply the same procedure to write \(\Pi_{\rm GI}\hat{\mathcal{O}}_{r}\) as \[\Pi_{\rm GI}\hat{\mathcal{O}}_{r} =\Pi_{V_{c}}\sum_{\alpha\beta\gamma ijk\ell i^{\prime}k^{\prime}} \hat{\mathcal{O}}_{\alpha\beta ijk\ell}\ket{\alpha ik}_{r}\bra{\beta j\ell} \otimes\ket{\overline{\gamma}i^{\prime}k^{\prime}}_{\mathbf{\tau}}\bra{\overline {\gamma}i^{\prime}k^{\prime}}\] \[=\sum_{\alpha\beta\gamma ijk\ell i^{\prime}k^{\prime}\ell^{\prime }}\hat{\mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\alpha}}\delta_{\alpha \gamma}\delta_{kk^{\prime}}\ket{\alpha i\ell^{\prime}}_{r}\bra{\beta j\ell} \otimes\ket{\overline{\gamma}i^{\prime}\ell^{\prime}}_{\mathbf{\tau}}\bra{\overline {\gamma}i^{\prime}k^{\prime}}\] \[=\sum_{\alpha\beta ijk\ell i^{\prime}\ell^{\prime}}\hat{ \mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\alpha}}\ket{\alpha i\ell^{ \prime}}_{r}\bra{\beta j\ell}\otimes\ket{\overline{\alpha}i^{\prime}\ell^{ \prime}}_{\mathbf{\tau}}\bra{\overline{\alpha}i^{\prime}k}. \tag{3.35}\] One way to proceed is to find conditions on \(\hat{\mathcal{O}}_{\alpha\beta aijk\ell}\) such that the two expressions (3.34), (3.35) are equal, but doing this explicitly turns out to be slightly complicated (in cases where the multiplicities \(\overline{n}_{\overline{\alpha}}\), \(\overline{n}_{\overline{\beta}}\) vanish but \(n_{\alpha}\), \(n_{\beta}\) do not). Instead, we will use the equality of (3.34) and (3.35) to directly show that \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\) contains and is contained in the right-hand side of (3.31), which we now define as \[\widetilde{\mathcal{A}}_{r}^{(3)}\equiv\left\{\hat{\mathcal{O}}_{r}\Pi_{\rm GI }:\hat{\mathcal{O}}_{r}=\sum_{\alpha ijk}\hat{\mathcal{O}}_{\alpha ij}\ket{ \alpha ik}_{r}\bra{\alpha jk}\otimes\mathbb{1}_{\mathbf{\tau}},\,\hat{\mathcal{O}} _{\alpha ij}\in\mathbb{C}\right\}. \tag{3.36}\] First, we show that \(\widetilde{\mathcal{A}}_{r}^{(3)}\) defined by (3.36) is equal to (3.32) as claimed. 
To see this, we simply apply (3.34) to the special case of \(\hat{\mathcal{O}}_{r}=\sum_{\alpha ijk}\hat{\mathcal{O}}_{\alpha ij}\ket{ \alpha ik}_{r}\bra{\alpha jk}\otimes\mathbb{1}_{\mathbf{\tau}}\) and find \(\hat{\mathcal{O}}_{r}\Pi_{V_{c}}\) to be identical to \(\widetilde{\mathcal{O}}_{r}\) in (3.32) under \(\widetilde{\mathcal{O}}_{\alpha ij}=\hat{\mathcal{O}}_{\alpha ij}/d_{\alpha}\). Moreover, applying (3.35) to this case yields the same operator, so we find that this special \(\hat{\mathcal{O}}_{r}\) commutes with \(\Pi_{\rm GI}\). Thus, \(\widetilde{\mathcal{A}}_{r}^{(3)}\) is contained in \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\). Finally, we will show that \(\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\) is contained in \(\widetilde{\mathcal{A}}_{r}^{(3)}\). Any \(\hat{\mathcal{O}}_{r}\Pi_{\rm GI}\in\hat{\mathcal{A}}_{r}^{\Pi}\Pi_{\rm GI}\) can be written explicitly as \[\hat{\mathcal{O}}_{r}\Pi_{\rm GI} =\Pi_{\rm GI}\hat{\mathcal{O}}_{r}\Pi_{\rm GI}=\Pi_{V_{c}}\sum_{ \alpha\beta ijk\ell i^{\prime}\ell^{\prime}}\hat{\mathcal{O}}_{\alpha\beta ijk \ell}\frac{1}{d_{\beta}}\ket{\alpha ik}_{r}\bra{\beta j\ell^{\prime}}\otimes \ket{\overline{\beta}i^{\prime}\ell}_{\mathbf{\tau}}\bra{\overline{\beta}i^{ \prime}\ell^{\prime}}\] \[=\sum_{\alpha\beta ijk\ell i^{\prime}k^{\prime}\ell^{\prime}}\hat {\mathcal{O}}_{\alpha\beta ijk\ell}\frac{1}{d_{\alpha}d_{\beta}}\delta_{ \alpha\beta}\delta_{k\ell}\ket{\alpha ik^{\prime}}_{r}\bra{\beta j\ell^{\prime}} \otimes\ket{\overline{\beta}i^{\prime}k^{\prime}}_{\mathbf{\tau}}\bra{\overline {\beta}i^{\prime}\ell^{\prime}}\] \[=\sum_{\alpha ijk\ell i^{\prime}k^{\prime}}\hat{\mathcal{O}}_{ \alpha\alpha ijk^{\prime}k^{\prime}}\frac{1}{d_{\alpha}^{2}}\ket{\alpha ik}_{r} \bra{\alpha j\ell}\otimes\ket{\overline{\alpha}i^{\prime}k}_{\mathbf{\tau}}\bra{ \overline{\alpha}i^{\prime}\ell}, \tag{3.37}\] which is identical to \(\widetilde{\mathcal{O}}_{r}\) in (3.32) under \(\widetilde{\mathcal{O}}_{\alpha ij}=\sum_{k^{\prime}}\hat{\mathcal{O}}_{ \alpha\alpha ijk^{\prime}k^{\prime}}/d_{\alpha}^{2}\), and thus belongs to \(\widetilde{\mathcal{A}}_{r}^{(3)}\). Combining the above results, we conclude \(\widetilde{\mathcal{A}}_{r}=\widetilde{\mathcal{A}}_{r}^{(3)}\). After all of this machinery, it is clear that one is justified in writing the algebra \(\widetilde{\mathcal{A}}_{r}\) of gauge-invariant operators on a subregion \(r\) as a restriction via \(\Pi_{\rm GI}\) of the pre-gauged algebra \(\mathcal{A}_{r}\) on \(r\). Crucially, however, \(\widetilde{\mathcal{A}}_{r}\) is _not_ a subalgebra of \(\mathcal{A}_{r}\), as is obvious from the nontrivial action of (3.32) on \(\hat{\mathcal{H}}_{\overline{r}}\). This is manifest from the fact that \(\Pi_{\mathrm{GI}}\) is an element of \(\mathcal{A}\), not of \(\mathcal{A}_{r}\), and so the projection takes one out of the pre-gauged subregion algebra \(\mathcal{A}_{r}\). ### The center of the algebra For spatial subregions the following inclusion is obvious: \[\mathcal{A}_{\overline{r}}\subseteq\left(\mathcal{A}_{r}\right)^{\prime}, \tag{3.38}\] as causally disconnected operators must commute. Here \(\left(\mathcal{A}_{r}\right)^{\prime}\) denotes the commutant of \(\mathcal{A}_{r}\). 
Haag duality is the saturation of the above bound: \[\mathcal{A}_{\overline{r}}=\left(\mathcal{A}_{r}\right)^{\prime}, \tag{3.39}\] that is, the commutant of the algebra of operators in a subregion is equal to the algebra of operators in the complement region.12 Footnote 12: There are counterexamples to Haag duality in quantum field theories with global or gauge symmetries; see for example [25]. In our model, Haag duality certainly holds for the pre-gauged algebras, but does it also hold for the gauge-invariant algebras? We will now show that it does, i.e., \[\widetilde{\mathcal{A}}_{\overline{r}}=\left(\widetilde{\mathcal{A}}_{r} \right)^{\prime}. \tag{3.40}\] **Proposition 6**.: The Hilbert space isomorphism (3.14) induces the following isomorphisms between algebras: \[\widetilde{\mathcal{A}}_{r}\cong\bigoplus_{\alpha}\widetilde{\mathcal{A}}_{r} ^{\alpha}\otimes 1\overline{r},\qquad\widetilde{\mathcal{A}}_{\overline{r}} \cong\bigoplus_{\alpha}1_{r}^{\alpha}\otimes\widetilde{\mathcal{A}}_{\overline {r}}^{\overline{\alpha}}, \tag{3.41}\] where \(\widetilde{\mathcal{A}}_{r}^{\alpha}\equiv\mathcal{B}(\widetilde{\mathcal{H}}_ {r}^{\alpha})\), the algebra of bounded operators on \(\widetilde{\mathcal{H}}_{r}^{\alpha}\), and similarly we define \(\widetilde{\mathcal{A}}_{\overline{r}}^{\overline{\alpha}}\equiv\mathcal{B}( \widetilde{\mathcal{H}}_{\overline{r}}^{\overline{\alpha}})\). Moreover, \(1_{r}^{\alpha}\), \(1_{\overline{r}}^{\overline{\alpha}}\) denote the identity operators on \(\widetilde{\mathcal{H}}_{r}^{\alpha}\), \(\widetilde{\mathcal{H}}_{\overline{r}}^{\overline{\alpha}}\), respectively. Proof.: Recall from (3.14) that \(\widetilde{\mathcal{H}}\) is isomorphic to a direct sum of factorizing Hilbert spaces: \[\widetilde{\mathcal{H}}\cong\bigoplus_{\alpha}\widetilde{\mathcal{H}}_{r}^{ \alpha}\otimes\widetilde{\mathcal{H}}_{\overline{r}}^{\overline{\alpha}}; \tag{3.42}\] where the two sides are identified under the natural isomorphism (3.15), which we reproduce here: \[\left|\widetilde{\psi}\right\rangle=\sum_{\alpha ijk}\widetilde{\psi}_{ \alpha ij}\left|\alpha ik\right\rangle_{r}\left|\overline{\alpha}jk\right\rangle _{\overline{r}}\quad\rightarrow\quad\sum_{\alpha ij}\sqrt{d_{\alpha}} \widetilde{\psi}_{\alpha ij}\left|\alpha i\right\rangle_{r}\left|\overline{ \alpha}j\right\rangle_{\overline{r}}. \tag{3.43}\] We now apply this isomorphism to our algebra \(\widetilde{\mathcal{A}}_{r}\). Consider a general element of \(\widetilde{\mathcal{A}}_{r}\) defined via (3.32). Under (3.43), this element becomes \[\sum_{\alpha ii^{\prime}jk\ell}\widetilde{\mathcal{O}}_{\alpha ij} \left|\alpha ik\right\rangle_{r}\left\langle\alpha j\ell\right|\otimes\left| \overline{\alpha}i^{\prime}k\right\rangle_{\overline{r}}\left\langle \overline{\alpha}i^{\prime}\ell\right| \rightarrow\sum_{\alpha ii^{\prime}j}d_{\alpha}\widetilde{\mathcal{O}}_{ \alpha ij}\left|\alpha i\right\rangle_{r}\left\langle\alpha j\right|\otimes \left|\overline{\alpha}i^{\prime}\right\rangle_{\overline{r}}\left\langle \overline{\alpha}i^{\prime}\right|\] \[=\sum_{\alpha ij}d_{\alpha}\widetilde{\mathcal{O}}_{\alpha ij} \left|\alpha i\right\rangle_{r}\left\langle\alpha j\right|\otimes 1\tfrac{\overline{ \alpha}}{\overline{r}}, \tag{3.44}\] which is an element of \(\widetilde{\mathcal{A}}_{r}^{\alpha}\otimes 1\tfrac{\overline{\alpha}}{\overline{r}}\). Thus, we have demonstrated the isomorphism for \(\widetilde{\mathcal{A}}_{r}\) in (3.41). 
The isomorphism for \(\widetilde{\mathcal{A}}_{\overline{r}}\) follows from a similar argument. **Corollary 7**.: \(\widetilde{\mathcal{A}}_{r}\) obeys Haag duality, such that \(\left(\widetilde{\mathcal{A}}_{r}\right)^{\prime}=\widetilde{\mathcal{A}}_{ \overline{r}}\), where the commutant is defined with respect to the full gauge-invariant algebra \(\widetilde{\mathcal{A}}\). Proof.: This immediately follows from the algebra isomorphisms (3.41) and \[\left(\bigoplus_{\alpha}\widetilde{\mathcal{A}}_{r}^{\alpha}\otimes 1 \tfrac{\overline{\alpha}}{\overline{r}}\right)^{\prime}=\bigoplus_{\alpha} \left(\widetilde{\mathcal{A}}_{r}^{\alpha}\otimes 1\tfrac{\overline{\alpha}}{ \overline{r}}\right)^{\prime}=\bigoplus_{\alpha}1\,_{r}^{\alpha}\otimes \widetilde{\mathcal{A}}_{\overline{r}}^{\overline{\alpha}}. \tag{3.45}\] The center of an algebra is defined to be the intersection of the algebra with its commutant. As our gauge-invariant subalgebra \(\widetilde{\mathcal{A}}_{r}\) obeys Haag duality, the center is \[\widetilde{\mathcal{Z}}_{r}=\widetilde{\mathcal{A}}_{r}\cap\widetilde{\mathcal{ A}}_{r}^{\prime}=\widetilde{\mathcal{A}}_{r}\cap\widetilde{\mathcal{A}}_{ \overline{r}}. \tag{3.46}\] **Lemma 8**.: The center \(\widetilde{\mathcal{Z}}_{r}\) is \[\widetilde{\mathcal{Z}}_{r}=\left\{z_{\alpha}\widetilde{P}^{\alpha}:z_{\alpha} \in\mathbb{C}\right\}, \tag{3.47}\] where \(\widetilde{P}^{\alpha}\) are mutually orthogonal projections defined via \[\widetilde{P}^{\alpha}=\frac{1}{d_{\alpha}}\sum_{ijk\ell}\left|\alpha ik \right\rangle_{r}\left\langle\alpha i\ell\right|\otimes\left|\overline{\alpha} jk\right\rangle_{\overline{r}}\left\langle\overline{\alpha}j\ell\right|. \tag{3.48}\] Proof.: Under the algebra isomorphisms (3.41) for \(\widetilde{\mathcal{A}}_{r}\) and \(\widetilde{\mathcal{A}}_{\overline{r}}\), we can immediately identify the center as \[\widetilde{\mathcal{Z}}_{r}\cong\bigoplus_{\alpha}\mathbb{C}\left(1\,_{r}^{ \alpha}\otimes 1\tfrac{\overline{\alpha}}{\overline{r}}\right). \tag{3.49}\] That is, the center \(\widetilde{\mathcal{Z}}_{r}\) is the direct sum of complex multiples of the identity within each superselection sector \(\alpha\). We can write the identity in a superselection sector as \[1\,_{r}^{\alpha}\otimes 1\,_{\overline{r}}^{\overline{\alpha}}=\sum_{ij} \left|\alpha i\right\rangle_{r}\left\langle\alpha i\right|\otimes\left| \overline{\alpha}j\right\rangle_{\overline{r}}\left\langle\overline{\alpha}j \right|, \tag{3.50}\] and examine the pullback of these operators under the natural isomorphism (3.43) to find the corresponding operators in \(\widetilde{\mathcal{A}}\). We obtain \[\mathbb{1}_{r}^{\,\alpha}\otimes\mathbb{1}_{\overline{r}}^{\overline{\alpha}} \quad\Rightarrow\quad\frac{1}{d_{\alpha}}\sum_{ijk\ell}\left|\alpha ik\right\rangle _{r}\left\langle\alpha i\ell\right|\otimes\left|\overline{\alpha}jk\right\rangle _{\overline{r}}\left\langle\overline{\alpha}j\ell\right|=\widetilde{P}^{ \alpha}. \tag{3.51}\] We identify these operators \(\widetilde{P}^{\alpha}\) as the (properly normalized) projections onto the \(\alpha\) superselection sector, where we remind the reader that \(\alpha\) is an irreducible representation of \(G^{n}\). 
These operators can alternatively be written as \[\widetilde{P}^{\alpha}=\left(\hat{P}_{r}^{\alpha}\otimes\mathbb{1}_{\overline {r}}\right)\Pi_{\text{GI}}=\left(\mathbb{1}_{r}\otimes\hat{P}_{\overline{r}}^ {\overline{\alpha}}\right)\Pi_{\text{GI}}, \tag{3.52}\] where \(\hat{P}_{r}^{\alpha}\) and \(\hat{P}_{\overline{r}}^{\overline{\alpha}}\) are orthogonal projections in \(\hat{\mathcal{H}}_{r}\) and \(\hat{\mathcal{H}}_{\overline{r}}\), respectively: \[\hat{P}_{r}^{\alpha}=\sum_{ik}\left|\alpha ik\right\rangle_{r}\left\langle \alpha ik\right|,\quad\hat{P}_{\overline{r}}^{\overline{\alpha}}=\sum_{ik} \left|\overline{\alpha}ik\right\rangle_{\overline{r}}\left\langle\overline{ \alpha}ik\right|. \tag{3.53}\] One can show the \(\widetilde{P}^{\alpha}\) are orthogonal and idempotent such that \(\widetilde{P}^{\alpha}\widetilde{P}^{\beta}=\delta_{\alpha\beta}\widetilde{P} ^{\alpha}\). ### Traces in \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\) We now define traces in our von Neumann algebras. When an algebra is \(\mathcal{B}(\mathcal{H})\) for some Hilbert space \(\mathcal{H}\), we can simply identify the minimal projections as projections onto a pure state in \(\mathcal{H}\), and the trace is the usual trace of a square matrix. Our algebras are not always of this form; an example is \(\widetilde{\mathcal{A}}_{r}\). Therefore, we will first identify the minimal projections, which are then used to define a normalized trace on the algebra. In particular, for \(\widetilde{\mathcal{A}}_{r}\) our task is to find the minimal projections \(\widetilde{P}_{r}\) in \(\widetilde{\mathcal{A}}_{r}\) and use them to define a "rescaled" trace \(\widetilde{\text{Tr}}_{r}\) which satisfies \[\widetilde{\text{Tr}}_{r}\widetilde{P}_{r}=1. \tag{3.54}\] Let us first discuss the case that we understand well: that of \(\mathcal{A}_{r}\), and by extension \(\hat{\mathcal{A}}_{r}\). As \(\mathcal{A}_{r}=\mathcal{B}(\mathcal{H}_{r})\), the minimal projections are projections onto a pure state in \(\mathcal{H}_{r}\), and we define the trace \(\text{Tr}_{r}\) in \(\mathcal{A}_{r}\) such that the minimal projections have trace \(1\). As \(\hat{\mathcal{A}}_{r}=\mathcal{B}(\hat{\mathcal{H}}_{r})\), we proceed similarly. Recall that the basis states of \(\hat{\mathcal{H}}_{r}\) are \(\left\{\left|\alpha ik\right\rangle_{r}\right\}\), and so we define the trace \(\hat{\text{Tr}}_{r}\) in \(\hat{\mathcal{A}}_{r}\) via \[\hat{\text{Tr}}_{r}\left|\alpha ik\right\rangle_{r}\left\langle\alpha ik \right|=1. \tag{3.55}\] As the minimal projections in \(\hat{\mathcal{A}}_{r}\) are also minimal projections in \(\mathcal{A}_{r}\), the two traces agree (on \(\hat{\mathcal{A}}_{r}\)): \[\text{Tr}_{r}=\hat{\text{Tr}}_{r}, \tag{3.56}\] so we will use only \(\mathrm{Tr}_{r}\) (not \(\hat{\mathrm{Tr}}_{r}\)) moving forward. Now consider \(\widetilde{\mathcal{A}}_{r}\). Although \(\widetilde{\mathcal{A}}_{r}\) is not the algebra of all bounded operators on a Hilbert space, the algebra isomorphism (3.41) shows that we can write it as a direct sum of algebras for which we can easily identify minimal projections. In particular, the pullback of minimal projections onto pure states \(\ket{\alpha i}_{r}\in\widetilde{\mathcal{H}}_{r}^{\alpha}\) under the natural isomorphism (3.43) gives minimal projections in \(\widetilde{\mathcal{A}}_{r}\). 
Thus, we write these minimal projections \(\widetilde{P}_{r}^{\alpha i}\in\widetilde{\mathcal{A}}_{r}\) as \[\widetilde{P}_{r}^{\alpha i}=\frac{1}{d_{\alpha}}\sum_{jk\ell}\ket{\alpha ik}_ {r}\bra{\alpha i\ell}\otimes\ket{\overline{\alpha}jk}_{\pi}\bra{\overline{ \alpha}j\ell} \tag{3.57}\] for all non-empty sectors \(\alpha\), defined as those with nonzero \(n_{\alpha}\), \(\overline{n}_{\overline{\alpha}}\). If \(n_{\alpha}\) vanishes, the index \(i\) above has an empty range, and if \(\overline{n}_{\overline{\alpha}}\) vanishes, \(\widetilde{P}_{r}^{\alpha i}\) vanishes due to the empty sum over \(j\) in (3.57). We can alternatively write \(\widetilde{P}_{r}^{\alpha i}\) as \[\widetilde{P}_{r}^{\alpha i}=\hat{P}_{r}^{\alpha i}\Pi_{\mathrm{GI}}, \tag{3.58}\] where the projections \(\hat{P}_{r}^{\alpha i}\) are defined similarly to (3.53): \[\hat{P}_{r}^{\alpha i}\equiv\sum_{k}\ket{\alpha ik}\bra{\alpha ik}_{r}\otimes 1 _{\overline{r}}. \tag{3.59}\] Although we already argued that \(\widetilde{P}_{r}^{\alpha i}\) are minimal projections using the natural isomorphism, we now show it more directly. **Lemma 9**.: The projections \(\widetilde{P}_{r}^{\alpha i}\) (for non-empty sectors \(\alpha\)) are minimal projections in \(\widetilde{\mathcal{A}}_{r}\). Proof.: We recall that minimal projections are nonzero and have the property that any subprojection \(\widetilde{Q}_{r}\) of \(\widetilde{P}_{r}^{\alpha i}\) is either zero or \(\widetilde{P}_{r}^{\alpha i}\). As an element of \(\widetilde{\mathcal{A}}_{r}\), \(\widetilde{Q}_{r}\) must be of the form \[\widetilde{Q}_{r}=\sum_{\beta jj^{\prime}k}\hat{Q}_{\beta jj^{\prime}}\left( \ket{\beta jk}_{r}\bra{\beta j^{\prime}k}\otimes 1_{\overline{r}}\right)\Pi_{ \mathrm{GI}} \tag{3.60}\] with complex coefficients \(\hat{Q}_{\beta jj^{\prime}}\). The subprojection \(\widetilde{Q}_{r}\) is left fixed under conjugation via \(\widetilde{P}_{r}^{\alpha i}\), so we have \[\widetilde{Q}_{r}=\widetilde{P}_{r}^{\alpha i}\widetilde{Q}_{r}\widetilde{P}_ {r}^{\alpha i}=\sum_{k}\hat{Q}_{\alpha ii}\left(\ket{\alpha ik}_{r}\bra{\alpha ik }\otimes 1_{\overline{r}}\right)\Pi_{\mathrm{GI}}=\hat{Q}_{\alpha ii}\widetilde{P}_ {r}^{\alpha i}. \tag{3.61}\] Additionally imposing \(\widetilde{Q}_{r}^{2}=\widetilde{Q}_{r}\), we find \[\widetilde{Q}_{r}^{2}=\hat{Q}_{\alpha ii}^{2}\widetilde{P}_{r}^{\alpha i}=\hat {Q}_{\alpha ii}\widetilde{P}_{r}^{\alpha i}. \tag{3.62}\] Unless \(\widetilde{Q}_{r}\) is zero, we obtain \(\hat{Q}_{\alpha ii}=1\) and thus \(\widetilde{Q}_{r}=\widetilde{P}_{r}^{\alpha i}\). So \(\widetilde{P}_{r}^{\alpha i}\) (for a non-empty sector \(\alpha\)) is indeed a minimal projection. Therefore, we define the trace \(\widetilde{\mathrm{Tr}}_{r}\) in \(\widetilde{\mathcal{A}}_{r}\) by imposing \[\widetilde{\mathrm{Tr}}_{r}\widetilde{P}_{r}^{\alpha i}=1 \tag{3.63}\] for every non-empty sector \(\alpha\) and every \(i=1,\cdots,n_{\alpha}\). How do we understand this trace acting on a general operator in \(\widetilde{\mathcal{A}}_{r}\)? Such an operator can always be written in the form (3.31): \[\widetilde{\mathcal{O}}_{r}=\hat{\mathcal{O}}_{r}\Pi_{\mathrm{GI}},\quad\hat {\mathcal{O}}_{r}=\sum_{\alpha ijk}\hat{\mathcal{O}}_{\alpha ij}\left|\alpha ik \right\rangle_{r}\left\langle\alpha jk\right|\otimes 1_{\overline{r}}. 
\tag{3.64}\] Taking the trace using \(\widetilde{\mathrm{Tr}}_{r}\), we find \[\widetilde{\mathrm{Tr}}_{r}\widetilde{\mathcal{O}}_{r}=\widetilde{\mathrm{Tr }}_{r}\sum_{\alpha i}\hat{\mathcal{O}}_{\alpha ii}\widetilde{P}_{r}^{\alpha i }=\sum_{\alpha i}\hat{\mathcal{O}}_{\alpha ii}. \tag{3.65}\] If we were to take the trace \(\mathrm{Tr}_{r}\) of the corresponding \(\hat{\mathcal{O}}_{r}\), we would instead find \[\mathrm{Tr}_{r}\,\hat{\mathcal{O}}_{r}=\mathrm{Tr}\sum_{\alpha i}\hat{ \mathcal{O}}_{\alpha ii}\hat{P}_{r}^{\alpha i}=\sum_{\alpha i}d_{\alpha}\hat{ \mathcal{O}}_{\alpha ii}. \tag{3.66}\] Thus, it is tempting to relate the trace \(\widetilde{\mathrm{Tr}}_{r}\) to \(\mathrm{Tr}_{r}\) using an appropriate rescaling by \(1/d_{\alpha}\) in each sector. A more precise version of this statement is the following: for any operator \(\widetilde{\mathcal{O}}_{r}^{\alpha}\in\widetilde{\mathcal{A}}_{r}\) that acts only in the \(\alpha\) sector such that it can be written as \[\widetilde{\mathcal{O}}_{r}^{\alpha}=\hat{\mathcal{O}}_{r}^{\alpha}\Pi_{ \mathrm{GI}},\quad\hat{\mathcal{O}}_{r}^{\alpha}=\sum_{ijk}\hat{\mathcal{O}} _{\alpha ij}\left|\alpha ik\right\rangle_{r}\left\langle\alpha jk\right| \otimes 1_{\overline{r}}, \tag{3.67}\] i.e., with no sum over \(\alpha\), the two traces are related by \[\widetilde{\mathrm{Tr}}_{r}\widetilde{\mathcal{O}}_{r}^{\alpha}=\frac{1}{d_{ \alpha}}\,\mathrm{Tr}_{r}\,\hat{\mathcal{O}}_{r}^{\alpha}. \tag{3.68}\] Summing both sides over \(\alpha\) recovers (3.65) and (3.66). ### Reduced states Our ultimate goal is to relate the von Neumann entropies for the same gauge-invariant state \(\widetilde{\rho}\) in \(\widetilde{\mathcal{H}}\) on two different subalgebras: \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\). The first thing to note is that, when we consider the full graph (instead of restricting to a subregion \(r\)), we have \(\mathcal{A}=\mathcal{B}(\mathcal{H})\) and \(\widetilde{\mathcal{A}}=\mathcal{B}(\widetilde{\mathcal{H}})\) where \(\widetilde{\mathcal{H}}\) is a subspace of \(\mathcal{H}\) so minimal projections in \(\widetilde{\mathcal{A}}\) are also minimal projections in \(\mathcal{A}\), and the trace \(\widetilde{\mathrm{Tr}}\) in \(\widetilde{\mathcal{A}}\) therefore agrees with the trace \(\mathrm{Tr}\) in \(\mathcal{A}\) when acting on gauge-invariant states. Hence, a gauge-invariant state \(\widetilde{\rho}\) on the full graph that is properly normalized under the \(\widetilde{\mathrm{Tr}}\) trace is also properly normalized under the \(\mathrm{Tr}\) trace, and can therefore be viewed as a properly normalized state \(\rho=\widetilde{\rho}\) in \(\mathcal{H}\) (albeit a special one). Thus, we will use only \(\rho\) (not \(\widetilde{\rho}\)) for notational simplicity in the following discussions. We should still remember that \(\rho\) is a special state that belongs to \(\widetilde{\mathcal{A}}\). The above statements do not hold for reduced states on subregions. In particular, we need to distinguish a properly normalized state \(\rho_{r}\) in \(\mathcal{A}_{r}\) from a properly normalized \(\widetilde{\rho}_{r}\) in \(\widetilde{\mathcal{A}}_{r}\). Now we derive the relation between these two states. Recall that to find \(S(\rho,\mathcal{A}_{r})\) for a general subalgebra \(\mathcal{A}_{r}\subset\mathcal{A}\), we need to find a reduced state \(\rho_{r}\in\mathcal{A}_{r}\) satisfying \[\mathrm{Tr}_{r}(\rho_{r}\mathcal{O}_{r})=\mathrm{Tr}(\rho\mathcal{O}_{r}) \tag{3.69}\] for all \(\mathcal{O}_{r}\in\mathcal{A}_{r}\). 
For our particular \(\mathcal{A}_{r}\) (the pre-gauged algebra on \(r\)), the answer is, of course, \(\rho_{r}=\mathrm{Tr}_{\overline{r}}\,\rho\). Now we work out the reduced state in the subalgebra \(\widetilde{\mathcal{A}}_{r}\). **Lemma 10**.: The reduced state \(\widetilde{\rho}_{r}\in\widetilde{\mathcal{A}}_{r}\) satisfying \[\widetilde{\mathrm{Tr}}_{r}(\widetilde{\rho}_{r}\widetilde{\mathcal{O}}_{r}) =\mathrm{Tr}\Big{(}\rho\widetilde{\mathcal{O}}_{r}\Big{)} \tag{3.70}\] for all \(\widetilde{\mathcal{O}}_{r}\in\widetilde{\mathcal{A}}_{r}\) is of the form \[\widetilde{\rho}_{r}=\hat{\rho}_{r}\Pi_{\mathrm{GI}},\quad\hat{\rho}_{r}= \sum_{\alpha ijk}\hat{\rho}_{\alpha ij}\left|\alpha ik\right\rangle_{r}\left \langle\alpha jk\right|\otimes 1_{\overline{r}}, \tag{3.71}\] with \(\hat{\rho}_{\alpha ij}=d_{\alpha}\rho_{\alpha ij}\), where \(\rho_{\alpha ij}\) is defined by \[\rho_{r}=\sum_{\alpha ijk}\rho_{\alpha ij}\left|\alpha ik\right\rangle_{r} \left\langle\alpha jk\right|. \tag{3.72}\] Proof.: A general gauge-invariant state \(\rho\in\widetilde{\mathcal{A}}\) can be written as \[\rho=\sum_{\alpha\beta ijk^{\prime}j^{\prime}k^{\prime}}\rho_{\alpha\beta ii ^{\prime}jj^{\prime}}\left|\alpha ik\right\rangle_{r}\left|\overline{\alpha} jk\right\rangle_{\overline{r}}\left\langle\beta^{\prime}k^{\prime} \right|_{r}\left\langle\overline{\beta}j^{\prime}k^{\prime}\right|_{\overline{ r}} \tag{3.73}\] using the basis states for \(\widetilde{\mathcal{H}}\). Tracing over \(\overline{r}\), we find \[\rho_{r}=\mathrm{Tr}_{\overline{r}}\,\rho =\sum_{\alpha\beta ijki^{\prime}j^{\prime}k^{\prime}}\rho_{\alpha \beta ii^{\prime}jj^{\prime}}\left\langle\overline{\beta}j^{\prime}k^{\prime} |\overline{\alpha}jk\right\rangle_{\overline{r}}\left|\alpha i^{\prime}k^{ \prime}\right\rangle_{r}\left\langle\beta ik\right|\] \[=\sum_{\alpha ii^{\prime}jk}\rho_{\alpha\alpha ii^{\prime}jj} \left|\alpha ik\right\rangle_{r}\left\langle\alpha^{\prime}k\right|. \tag{3.74}\] This verifies (3.72) and determines \(\rho_{aij}\). Now recall that as an element of \(\widetilde{\mathcal{A}}_{r}\), \(\widetilde{\rho}_{r}\) must be of the form (3.71) with some complex coefficients \(\hat{\rho}_{\alpha ij}\). It remains to determine what they are from (3.70). In order to impose it, we define the following basis for \(\widetilde{\mathcal{A}}_{r}\): \[\widetilde{\mathcal{O}}_{r}^{\alpha ij}=\hat{\mathcal{O}}_{r}^{\alpha ij}\Pi_{ \text{GI}},\quad\hat{\mathcal{O}}_{r}^{\alpha ij}=\sum_{k}\left|\alpha ik \right\rangle_{r}\left\langle\alpha jk\right|\otimes\mathbbm{1}_{\overline{r}}, \tag{3.75}\] such that we can rewrite the reduced gauge-invariant density matrix as \[\widetilde{\rho}_{r}=\sum_{\alpha ij}\hat{\rho}_{\alpha ij}\widetilde{ \mathcal{O}}_{\alpha ij}. \tag{3.76}\] Note that the basis elements \(\widetilde{\mathcal{O}}_{r}^{\alpha ij}\) and their corresponding basis elements \(\hat{\mathcal{O}}_{r}^{\alpha ij}\in\hat{\mathcal{A}}_{r}\) obey the following relations: \[\widetilde{\mathcal{O}}_{r}^{\alpha ij}\widetilde{\mathcal{O}}_{r }^{\beta i^{\prime}j^{\prime}}=\delta_{\alpha\beta}\delta_{i^{\prime}j} \widetilde{\mathcal{O}}_{r}^{\alpha ij^{\prime}},\quad\widetilde{\text{Tr}}_ {r}\widetilde{\mathcal{O}}_{r}^{\alpha ij}=\delta_{ij} \tag{3.77}\] \[\hat{\mathcal{O}}_{r}^{\alpha ij}\hat{\mathcal{O}}_{r}^{\beta i ^{\prime}j^{\prime}}=\delta_{\alpha\beta}\delta_{i^{\prime}j}\hat{\mathcal{O} }_{r}^{\alpha ij^{\prime}},\quad\text{Tr}_{r}\,\hat{\mathcal{O}}_{r}^{\alpha ij }=d_{\alpha}\delta_{ij}. 
\tag{3.78}\] From these relations we can check both sides of (3.70) for \(\widetilde{\mathcal{O}}_{r}\) set to one of the basis elements \(\widetilde{\mathcal{O}}_{r}^{\alpha ij}\). The trace in the gauge-invariant algebra becomes \[\widetilde{\text{Tr}}_{r}\left(\widetilde{\rho}_{r}\widetilde{\mathcal{O}}_{r}^{\alpha ij}\right)=\widetilde{\text{Tr}}_{r}\left(\sum_{\beta i^{\prime}j^{\prime}}\hat{\rho}_{\beta i^{\prime}j^{\prime}}\widetilde{\mathcal{O}}_{r}^{\beta i^{\prime}j^{\prime}}\widetilde{\mathcal{O}}_{r}^{\alpha ij}\right)=\sum_{\beta i^{\prime}j^{\prime}}\hat{\rho}_{\beta i^{\prime}j^{\prime}}\delta_{\alpha\beta}\delta_{i^{\prime}j}\delta_{ij^{\prime}}=\hat{\rho}_{\alpha ji}. \tag{3.79}\] We need to equate this with the trace in the pre-gauged algebra, which we begin to evaluate by simplifying to the trace in \(\mathcal{A}_{r}\). We have \[\text{Tr}\Big{(}\rho\widetilde{\mathcal{O}}_{r}^{\alpha ij}\Big{)}=\text{Tr}\Big{(}\rho\hat{\mathcal{O}}_{r}^{\alpha ij}\Pi_{\text{GI}}\Big{)}=\text{Tr}\Big{(}\Pi_{\text{GI}}\rho\hat{\mathcal{O}}_{r}^{\alpha ij}\Big{)}=\text{Tr}\Big{(}\rho\hat{\mathcal{O}}_{r}^{\alpha ij}\Big{)}=\text{Tr}_{r}(\rho_{r}\hat{\mathcal{O}}_{r}^{\alpha ij}), \tag{3.80}\] where we have used the cyclicity of the trace, the gauge invariance of \(\rho\), and the fact that \(\hat{\mathcal{O}}_{r}^{\alpha ij}\in\mathcal{A}_{r}\). We further simplify this and obtain \[\text{Tr}_{r}(\rho_{r}\hat{\mathcal{O}}_{r}^{\alpha ij})=\text{Tr}_{r}\left(\sum_{\beta i^{\prime}j^{\prime}}\rho_{\beta i^{\prime}j^{\prime}}\hat{\mathcal{O}}_{r}^{\beta i^{\prime}j^{\prime}}\hat{\mathcal{O}}_{r}^{\alpha ij}\right)=\sum_{\beta i^{\prime}j^{\prime}}d_{\alpha}\rho_{\beta i^{\prime}j^{\prime}}\delta_{\alpha\beta}\delta_{i^{\prime}j}\delta_{ij^{\prime}}=d_{\alpha}\rho_{\alpha ji}. \tag{3.81}\] Thus we identify the reduced density matrix \(\widetilde{\rho}_{r}\in\widetilde{\mathcal{A}}_{r}\) as a density matrix of the form (3.71) with \[\hat{\rho}_{\alpha ij}=d_{\alpha}\rho_{\alpha ij}. \tag{3.82}\] ## 4 Entropies in the Gauged Random Tensor Network Having written down the reduced states in \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\), we are now ready to compute the von Neumann entropies with respect to the two algebras. As we will see, the difference between the two entropies in the gauged random tensor network is precisely accounted for by an additional contribution to the area operator in the non-trivial center \(\widetilde{\mathcal{Z}}_{r}\). ### Entanglement entropy From (3.71) and (3.72), we proceed by defining the reduced states projected onto a superselection sector \(\alpha\): \[\rho_{r}^{\alpha}=\sum_{ijk}\rho_{\alpha ij}|\alpha ik\rangle_{r}\langle\alpha jk|,\qquad\hat{\rho}_{r}^{\alpha}=\sum_{ijk}\hat{\rho}_{\alpha ij}|\alpha ik\rangle_{r}\langle\alpha jk|=d_{\alpha}\rho_{r}^{\alpha}. \tag{4.1}\] Note that these density matrices are not properly normalized with respect to their appropriate traces. The reduced states (3.71) and (3.72) can be written as a direct sum over representations: \[\rho_{r}=\bigoplus_{\alpha}\rho_{r}^{\alpha},\qquad\widetilde{\rho}_{r}=\bigoplus_{\alpha}\hat{\rho}_{r}^{\alpha}\Pi_{\text{GI}}. \tag{4.2}\] Furthermore, functions of the reduced states are superselected in the same way.
In particular, \[\rho_{r}\log\rho_{r}=\bigoplus_{\alpha}\rho_{r}^{\alpha}\log\rho_ {r}^{\alpha},\qquad\widetilde{\rho}_{r}\log\widetilde{\rho}_{r}=\bigoplus_{ \alpha}(\hat{\rho}_{r}^{\alpha}\log\hat{\rho}_{r}^{\alpha})\Pi_{\text{GI}}, \tag{4.3}\] where we used the fact that \([\hat{\rho}_{r}^{\alpha},\Pi_{\text{GI}}]=0\). We are now ready to compute the subregion entropies (in the bulk). The von Neumann entropy of \(\rho\) with respect to \(\mathcal{A}_{r}\) is simply given by \[S(\rho,\mathcal{A}_{r})=-\operatorname{Tr}_{r}\rho_{r}\log\rho_{ r}=-\sum_{\alpha}\operatorname{Tr}_{r}\rho_{r}^{\alpha}\log\rho_{r}^{\alpha}. \tag{4.4}\] On the other hand, using the relation between the traces (3.68), we can write the von Neumann entropy with respect to \(\widetilde{\mathcal{A}}_{r}\) as \[S(\rho,\widetilde{\mathcal{A}}_{r})=-\widetilde{\operatorname{Tr} _{r}}\widetilde{\rho}_{r}\log\widetilde{\rho}_{r}=-\sum_{\alpha}d_{\alpha}^{- 1}\operatorname{Tr}_{r}\hat{\rho}_{r}^{\alpha}\log\hat{\rho}_{r}^{\alpha}. \tag{4.5}\] Using \(\hat{\rho}_{r}^{\alpha}=d_{\alpha}\rho_{r}^{\alpha}\) and \(\operatorname{Tr}_{r}\rho_{r}^{\alpha}=\widetilde{\operatorname{Tr}_{r}} \widetilde{\rho}_{r}^{\alpha}\), we can rewrite each term in the sum as \[d_{\alpha}^{-1}\operatorname{Tr}_{r}\hat{\rho}_{r}^{\alpha}\log \hat{\rho}_{r}^{\alpha} =\operatorname{Tr}_{r}\rho_{r}^{\alpha}\log\rho_{r}^{\alpha}+ \operatorname{Tr}_{r}\rho_{r}^{\alpha}\log d_{\alpha}\] \[=\operatorname{Tr}_{r}\rho_{r}^{\alpha}\log\rho_{r}^{\alpha}+ \widetilde{\operatorname{Tr}_{r}}\widetilde{\rho}_{r}^{\alpha}\log d_{\alpha}. \tag{4.6}\] The von Neumann entropy with respect to \(\widetilde{\mathcal{A}}_{r}\) can thus be written as \[S(\rho,\widetilde{\mathcal{A}}_{r}) =-\sum_{\alpha}\Big{(}\mathrm{Tr}_{r}\,\rho_{r}^{\alpha}\log\rho_{r }^{\alpha}+\widetilde{\mathrm{Tr}}_{r}\widetilde{\rho}_{r}^{\alpha}\log d_{ \alpha}\Big{)}\] \[=S(\rho,\mathcal{A}_{r})-\widetilde{\mathrm{Tr}}_{r}\left( \widetilde{\rho}_{r}\Delta\widetilde{A}\right), \tag{100}\] where we have defined a new "extra area operator" via \[\Delta\widetilde{A}\equiv\bigoplus_{\alpha}\widetilde{P}^{\alpha}\log d_{ \alpha}. \tag{101}\] The projections \(\widetilde{P}^{\alpha}\) are precisely the projections (101) which generate the center \(\widetilde{\mathcal{Z}}_{r}\), so \(\Delta\widetilde{A}\) is manifestly an operator in the center. We have now arrived at our final relation between the entropies with respect to \(\mathcal{A}_{r}\) and \(\widetilde{\mathcal{A}}_{r}\), \[S(\rho,\mathcal{A}_{r})=S(\rho,\widetilde{\mathcal{A}}_{r})+\widetilde{ \mathrm{Tr}}_{r}\left(\widetilde{\rho}_{r}\Delta\widetilde{A}\right), \tag{102}\] which we now use in our two-layer gauged RTN defined in Section 2. In particular, we would like to derive an FLM formula relating the boundary entropy with the gauged bulk entropy \(S(\rho,\widetilde{\mathcal{A}}_{r})\). Recall that when we feed any bulk state \(\rho\) in the pre-gauged algebra \(\mathcal{A}\) into the RTN, the entropy \(S(R)\) of the resulting boundary state on a boundary subregion \(R\) satisfies an FLM formula: \[S(R)=|\gamma_{R}|\log D+S(\rho,\mathcal{A}_{r}), \tag{103}\] where the bulk subregion \(r\) is chosen to be the entanglement wedge between \(R\) and its minimal surface \(\gamma_{R}\). 
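As a quick numerical sanity check of the relation just derived between \(S(\rho,\mathcal{A}_{r})\) and \(S(\rho,\widetilde{\mathcal{A}}_{r})\), one can build a toy state of the block-diagonal form (4.2) and compare the two entropies directly. The sketch below is not part of the construction above; the sector weights \(p_{\alpha}\), dimensions \(d_{\alpha}\) and multiplicity-space spectra are arbitrary illustrative choices.

```python
import numpy as np

def vn_entropy(eigs):
    """Von Neumann entropy -sum(lam * log lam) of a set of eigenvalues."""
    eigs = np.asarray(eigs, dtype=float)
    eigs = eigs[eigs > 0]
    return float(-np.sum(eigs * np.log(eigs)))

# Toy superselection sectors: (sector weight p_a, irrep dimension d_a,
# spectrum of the normalized multiplicity-space state).  Arbitrary numbers.
sectors = [
    (0.7, 3, [0.6, 0.4]),
    (0.3, 2, [1.0]),
]

S_pre, S_gauge, area_term = 0.0, 0.0, 0.0
for p, d, sigma in sectors:
    sigma = np.asarray(sigma, dtype=float)
    # rho_r^a has eigenvalues (p/d) * sigma_i, each d-fold degenerate in the k index.
    S_pre += d * vn_entropy((p / d) * sigma)
    # hat-rho_r^a = d_a * rho_r^a; the 1/d_a in the rescaled trace cancels the
    # d-fold degeneracy, so only the spectrum p * sigma_i contributes.
    S_gauge += vn_entropy(p * sigma)
    # Expectation value of the extra area operator: sum_a p_a log d_a.
    area_term += p * np.log(d)

assert np.isclose(S_pre, S_gauge + area_term)
```

The mismatch between the two entropies is exactly the \(\sum_{\alpha}p_{\alpha}\log d_{\alpha}\) term, which is what the extra area operator supplies in the gauged FLM formula below.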
Now specializing to a gauge-invariant bulk state \(\rho\in\widetilde{\mathcal{A}}\) and using (102), we find that the boundary entropy can now be written as a new FLM formula: \[S(R)=\widetilde{\mathrm{Tr}}_{r}\Big{(}\widetilde{\rho}_{r}\widetilde{A} \Big{)}+S\left(\rho,\widetilde{\mathcal{A}}_{r}\right), \tag{104}\] where the full area operator \(\widetilde{A}\) is \[\widetilde{A}\,=\,|\gamma_{R}|\log D\,+\,\bigoplus_{\alpha}\widetilde{P}^{ \alpha}\log d_{\alpha}\,=\,|\gamma_{R}|\log D\ +\bigoplus_{\alpha_{1},\cdots,\alpha_{n}}\widetilde{P}^{(\alpha_{1},\cdots, \alpha_{n})}\sum_{i=1}^{n}\log d_{\alpha_{i}}. \tag{105}\] Again, we sum over all irreps \(\alpha=(\alpha_{1},\cdots,\alpha_{n})\) of \(G^{n}\) acting on the cut, although some \(\alpha\) sectors may be emtpy (i.e., \(n_{\alpha}\) or \(\overline{n}_{\overline{\alpha}}\) is zero) in which case \(\widetilde{P}^{\alpha}\) vanishes. This is our main result. We note that this area operator looks like what arises in a superposition of a stack of standard RTNs with probabilities determined by the projections \(\widetilde{P}^{\alpha}\) and with bond dimensions augmented by \(d_{\alpha_{i}}\). ### Renyi entropy and Renyi mutual information As discussed in Section 2, one can modify the entanglement structure of the links in the standard RTN to obtain a non-flat Renyi spectrum for boundary states. However, this is not enough to reproduce the properties of holographic Renyi entropies on general boundary subregions. In particular, it fails to account for the lack of backreaction, displayed in the tensor network as a lack of (Renyi) correlation between disconnected boundary subregions when the RT surface is in a disconnected phase. This problem becomes clear when one calculates the Renyi mutual information between two such boundary subregions \(R_{1}\) and \(R_{2}\), defined as13 Footnote 13: The Rényi index \(n\) should not be confused with the number of vertices on the cut \(n=|V_{c}|\). \[I_{n}(R_{1}:R_{2})\equiv S_{n}(R_{1})+S_{n}(R_{2})-S_{n}(R_{1}\cup R_{2}). \tag{4.13}\] As the area operator in the original RTN is a \(c\)-number, using (1.3) we find that the area operator contribution cancels out in \(I_{n}(R_{1}:R_{2})\) for all \(n\) (as long as the minimal surface \(\gamma_{R}\) is in a disconnected phase), leaving the boundary mutual information equal to the bulk mutual information: \[I_{n}(R_{1}:R_{2})=I_{n}(r_{1}:r_{2},\mathcal{A}_{r_{1}r_{2}}). \tag{4.14}\] This implies that, if one wants a contribution to the Renyi mutual information of the same order as the area, that is \(\mathcal{O}(\log D)\), one must input by hand a highly entangled bulk state. Doing this is unsatisfying and quite arbitrary. We will now see that our gauged RTN solves this problem in a natural way, due to our nontrivial area operator. In general, the presence of a nontrivial area operator will lead to a nontrivial, \(n\)-dependent boundary Renyi mutual information, even for states with vanishing bulk Renyi mutual information. To see how this is realized in the gauged RTN, we will study a simple example shown in Figure 3, where the top layer is disconnected but the bottom layer is connected.14 We allow the bond dimensions in the bottom layer to be different for different links, and in fact design them so that the minimal surfaces associated with \(R_{1}\), \(R_{2}\), and their union \(R_{1}\cup R_{2}\) are fixed as we vary the Renyi index \(n\) at \(\mathcal{O}(1)\) values. 
We will feed in a gauge-invariant bulk state \(\rho\) with the following reduced state on \(r_{1}\cup r_{2}\): Footnote 14: This connection is unnecessary to prove our point, as the internal leg connecting \(r_{1}\) and \(r_{2}\) never contributes to the area term, but it is more intuitively satisfying to discuss a connected spatial slice for the purposes of demonstrating backreaction. \[\rho_{r_{1}r_{2}}=\sum_{\alpha\beta}(d_{\alpha}d_{\beta})^{-1}P(\alpha,\beta) \sum_{k\ell}\left|\alpha ik\right\rangle_{r_{1}}\left|\beta j\ell\right\rangle _{r_{2}}\left\langle\alpha ik\right|_{r_{1}}\left\langle\beta j\ell\right|_{r_ {2}}, \tag{4.15}\] for some particular choice of \(i\), \(j\). This state has classical correlations between \(r_{1}\) and \(r_{2}\) as described by a probability distribution \(P(\alpha,\beta)\), but has no quantum correlations. For simplicity, we consider the following distribution \(P(\alpha,\beta)\) that has support on only two superselection sectors \(\alpha_{1}\), \(\alpha_{2}\) on \(r_{1}\) and only two sectors \(\beta_{1}\), \(\beta_{2}\) on \(r_{2}\): \[P(\alpha_{1},\beta_{1})=p,\quad P(\alpha_{2},\beta_{1})=P(\alpha_{1},\beta_{2 })=p^{\prime},\quad P(\alpha_{2},\beta_{2})=p^{\prime\prime}, \tag{4.16}\] subject to the constraint \(p+2p^{\prime}+p^{\prime\prime}=1\). The Renyi entropy of \(\rho\) in the pre-gauged algebra \({\cal A}_{r_{1}r_{2}}\) is defined as \[S_{n}(\rho,{\cal A}_{r_{1}r_{2}})\equiv\frac{1}{1-n}\log\left({\rm Tr}_{r_{1} r_{2}}\,\rho_{r_{1}r_{2}}^{n}\right). \tag{4.17}\] Figure 3: A simple gauged RTN in which we compute the Rényi mutual information between \(R_{1}\) and \(R_{2}\). The input from the top layer lives on four edges of a disconnected graph \(G\), as we choose to have no matter on any of the vertices. In the bottom layer, the thick legs have a bond dimension much larger than that of the thin legs, such that the minimal surfaces for the three boundary regions \(R_{1}\), \(R_{2}\), and \(R_{1}\cup R_{2}\) only involve the light internal legs. Consequently, the associated bulk regions will be \(r_{1}\), \(r_{2}\), and \(r_{1}\cup r_{2}\). Using our \(\rho_{r_{1}r_{2}}\), we find \[S_{n}(\rho,\mathcal{A}_{r_{1}r_{2}})=\frac{1}{1-n}\log \bigg{(}d_{\alpha_{1}}d_{\beta_{1}}\left(\frac{p}{d_{\alpha_{1}}d_{ \beta_{1}}}\right)^{n}+d_{\alpha_{2}}d_{\beta_{1}}\left(\frac{p^{\prime}}{d_{ \alpha_{2}}d_{\beta_{1}}}\right)^{n}\] \[+d_{\alpha_{1}}d_{\beta_{2}}\left(\frac{p^{\prime}}{d_{\alpha_{1}} d_{\beta_{2}}}\right)^{n}+d_{\alpha_{2}}d_{\beta_{2}}\left(\frac{p^{\prime\prime}}{d_{ \alpha_{2}}d_{\beta_{2}}}\right)^{n}\bigg{)}. \tag{111}\] We can also compute the reduced density matrices on \(r_{1}\) and \(r_{2}\), as well as their corresponding Renyi entropies in the pre-gauged algebra. 
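As a cross-check of the closed-form expression above, the joint Rényi entropy can also be evaluated directly from the (diagonal) spectrum of \(\rho_{r_{1}r_{2}}\). The sketch below does so for arbitrary illustrative irrep dimensions and probabilities; these numbers are placeholders and not the values used elsewhere in this section.

```python
import numpy as np

# Illustrative (assumed) irrep dimensions and probabilities; p follows from normalization.
d_a1, d_a2, d_b1, d_b2 = 4, 2, 3, 2
pp, ppp = 0.05, 0.01                      # p' and p''
p = 1.0 - 2.0 * pp - ppp

# Spectrum of rho_{r1 r2}: eigenvalue P(alpha,beta)/(d_alpha d_beta), repeated
# d_alpha * d_beta times for each of the four populated sectors.
blocks = [(p, d_a1, d_b1), (pp, d_a2, d_b1), (pp, d_a1, d_b2), (ppp, d_a2, d_b2)]
spectrum = np.concatenate([np.full(da * db, P / (da * db)) for P, da, db in blocks])

def renyi_from_spectrum(eigs, n):
    return np.log(np.sum(np.asarray(eigs) ** n)) / (1.0 - n)

def renyi_closed_form(n):
    return np.log(sum(da * db * (P / (da * db)) ** n for P, da, db in blocks)) / (1.0 - n)

for n in (2, 3, 5):
    assert np.isclose(renyi_from_spectrum(spectrum, n), renyi_closed_form(n))
```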
We find the reduced density matrices to be \[\rho_{r_{1}} =\sum_{k=1}^{d_{\alpha_{1}}}d_{\alpha_{1}}^{-1}(p+p^{\prime}) \left|\alpha_{1}ik\right\rangle_{r_{1}}\left\langle\alpha_{1}ik\right|+\sum_{ k^{\prime}=1}^{d_{\alpha_{2}}}d_{\alpha_{2}}^{-1}(p^{\prime}+p^{\prime\prime}) \left|\alpha_{2}ik^{\prime}\right\rangle_{r_{1}}\left\langle\alpha_{2}ik^{ \prime}\right|,\] \[\rho_{r_{2}} =\sum_{k=1}^{d_{\beta_{1}}}d_{\beta_{1}}^{-1}(p+p^{\prime}) \left|\beta_{1}jk\right\rangle_{r_{2}}\left\langle\beta_{1}jk\right|+\sum_{k^ {\prime}=1}^{d_{\beta_{2}}}d_{\beta_{2}}^{-1}(p^{\prime}+p^{\prime\prime}) \left|\beta_{2}jk^{\prime}\right\rangle_{r_{2}}\left\langle\beta_{2}jk^{ \prime}\right|, \tag{112}\] and the bulk Renyi entropies are \[S_{n}(\rho,\mathcal{A}_{r_{1}}) =\frac{1}{1-n}\log\left(d_{\alpha_{1}}\left(\frac{p+p^{\prime}}{ d_{\alpha_{1}}}\right)^{n}+d_{\alpha_{2}}\left(\frac{p^{\prime}+p^{\prime\prime}}{ d_{\alpha_{2}}}\right)^{n}\right)\] \[S_{n}(\rho,\mathcal{A}_{r_{2}}) =\frac{1}{1-n}\log\left(d_{\beta_{1}}\left(\frac{p+p^{\prime}}{ d_{\beta_{1}}}\right)^{n}+d_{\beta_{2}}\left(\frac{p^{\prime}+p^{\prime\prime}}{ d_{\beta_{2}}}\right)^{n}\right). \tag{113}\] In the gauge-invariant algebra, the dependence on irrep dimensions drops out and the Renyi entropies become purely Shannon terms: \[S_{n}(\rho,\widetilde{\mathcal{A}}_{r_{1}}) =S_{n}(\rho,\widetilde{\mathcal{A}}_{r_{2}})=\frac{1}{1-n}\log \left(\left(p+p^{\prime}\right)^{n}+\left(p^{\prime}+p^{\prime\prime}\right)^{ n}\right)\] \[S_{n}(\rho,\widetilde{\mathcal{A}}_{r_{1}r_{2}}) =\frac{1}{1-n}\log\left(p^{n}+2(p^{\prime})^{n}+(p^{\prime\prime} )^{n}\right), \tag{114}\] which we choose to be parametrically suppressed relative to the Renyi entropies in the pre-gauged algebra. When the sum inside the logarithm is dominated by one term, we can approximate it using \[\log\left(\sum_{i}x_{i}\right)\approx\log\Big{(}\max_{i}\left\{x_{i}\right\} \Big{)}. \tag{115}\] To simplify our calculation, we will enter a parameter regime where all three (pre-gauged) Renyi entropies satisfy the approximation above and have phase transitions. First consider \(S_{n}(\rho,\mathcal{A}_{r_{1}})\). We take \(d_{\alpha_{1}}>d_{\alpha_{2}}\). The two terms in the sum are equal at some critical \(n_{*}\), given by \[\left(\frac{p+p^{\prime}}{p^{\prime}+p^{\prime\prime}}\right)^{n_{*}}=\left( \frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}\right)^{n_{*}-1}\quad\Rightarrow\quad \frac{n_{*}}{n_{*}-1}=\frac{\log\left(\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}} \right)}{\log\left(\frac{p+p^{\prime}}{p^{\prime}+p^{\prime\prime}}\right)}. \tag{116}\] Thus, in order to have a phase transition at \(n_{*}>1\) we require \[\log\left(\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}\right)>\log\left(\frac{p+p^{ \prime}}{p^{\prime}+p^{\prime\prime}}\right). \tag{100}\] The width of this transition is controlled by the corrections to (101). This depends on the curvature of \(S_{n}(\rho,{\cal A}_{r_{1}})\) at \(n_{*}\); explicitly we can diagnose this with the following quantity: \[\frac{d^{2}}{dn^{2}}(1-n)S_{n}(\rho,{\cal A}_{r_{1}})\bigg{|}_{n=n_{*}}=\frac{1 }{4}\left(\log\frac{d_{\alpha_{1}}(p^{\prime}+p^{\prime\prime})}{d_{\alpha_{2} }(p+p^{\prime})}\right)^{2}. \tag{101}\] For fixed \(n_{*}\), this quantity increases with increasing \(d_{\alpha_{1}}/d_{\alpha_{2}}\), so we should make this ratio large for a sharp transition. A simple way to ensure the previous conditions is the following: \[\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}\equiv q\gg 1,\quad p\gg p^{\prime},\quad p ^{\prime}\gg p^{\prime\prime}. 
\tag{102}\] Furthermore, we impose \[\frac{d_{\alpha_{1}}}{d_{\alpha_{2}}}=\frac{d_{\beta_{1}}}{d_{\beta_{2}}}=q, \tag{103}\] which forces the phase transitions in \(S_{n}(\rho,{\cal A}_{r_{1}})\) and \(S_{n}(\rho,{\cal A}_{r_{2}})\) to occur at the same critical \(n_{*}\). Now let us examine the phase transition in \(S_{n}(\rho,{\cal A}_{r_{1}r_{2}})\). In the limit of sharp transitions we have \[S_{n}(\rho,{\cal A}_{r_{1}r_{2}})\approx\frac{1}{1-n}\log\left(\max\left\{ \frac{p^{n}}{(d_{\alpha_{1}}d_{\beta_{1}})^{n-1}},\frac{(p^{\prime})^{n}}{(d_{ \alpha_{2}}d_{\beta_{1}})^{n-1}},\frac{(p^{\prime})^{n}}{(d_{\alpha_{1}}d_{ \beta_{2}})^{n-1}},\frac{(p^{\prime\prime})^{n}}{(d_{\alpha_{2}}d_{\beta_{2}}) ^{n-1}}\right\}\right). \tag{104}\] For simplicity, we will choose \[\frac{p}{p^{\prime}}>\frac{p^{\prime}}{p^{\prime\prime}}\gg 1. \tag{105}\] In this case, we find that \(S_{n}(\rho,{\cal A}_{r_{1}r_{2}})\) has a phase transition occurring at a critical \(n_{c}\) determined by \[\frac{n_{c}}{n_{c}-1}=\frac{\log(q^{2})}{\log\left(\frac{p}{p^{\prime\prime}} \right)}=\frac{\log(q^{2})}{\log\left(\frac{p}{p^{\prime}}\frac{p^{\prime}}{p^ {\prime\prime}}\right)} \tag{106}\] which satisfies \(1<n_{c}<n_{*}\). We now combine the above results to find the (pre-gauged) Renyi mutual information \[I_{n}(r_{1}:r_{2},{\cal A}_{r_{1}r_{2}})\equiv S_{n}(\rho,{\cal A}_{r_{1}})+S _{n}(\rho,{\cal A}_{r_{2}})-S_{n}(\rho,{\cal A}_{r_{1}r_{2}}). \tag{107}\] We find the following phases: \[I_{n}(r_{1}:r_{2},\mathcal{A}_{r_{1}r_{2}})\approx\begin{cases}0&n<n_{c},\\ \log\left(q^{2}\right)+\frac{n}{1-n}\log\left(\frac{(p+p^{\prime})^{2}}{p^{ \prime\prime}}\right)&n_{c}<n<n_{*},\\ \frac{n}{1-n}\log\left(\frac{(p^{\prime}+p^{\prime\prime})^{2}}{p^{\prime \prime}}\right)&n_{*}<n.\end{cases} \tag{102}\] Now we rewrite the boundary Renyi mutual information (101) as \[S_{n}(R_{1}:R_{2})=\underbrace{I_{n}(r_{1}:r_{2},\mathcal{A}_{r_{1}r_{2}})-I_ {n}(r_{1}:r_{2},\widetilde{\mathcal{A}}_{r_{1}r_{2}})}_{\text{area contribution}}+\underbrace{I_{n}(r_{1}:r_{2},\widetilde{\mathcal{A}}_{r_{1}r_{2}})}_ {\text{bulk matter contribution}}\, \tag{103}\] where the contribution of the nontrivial area operator to the boundary Renyi mutual information is identified with the difference of the bulk Renyi mutual information in the two algebras. As stated previously, \(I_{n}(r_{1}:r_{2},\widetilde{\mathcal{A}}_{r_{1}r_{2}})\) is suppressed relative to \(I_{n}(r_{1}:r_{2},\mathcal{A}_{r_{1}r_{2}})\), so this model implements phase transitions in the boundary Renyi mutual information without a large bulk matter contribution (in the gauge-invariant algebra). We plot these two phase transitions for an example in Figure 4. This is a proof of concept showing that adding bulk gauge symmetries to the RTN in this manner allows the boundary Renyi mutual information to be nontrivial and \(n\)-dependent, even for states with small bulk Renyi mutual information (in the gauge-invariant algebra). In our simple example here, the minimal surface does not shift--i.e. it is the same for all \(n\)--but there is no obstruction to writing a more complicated example in which the location of the minimal surface changes with \(n\) due to the nontrivial area operator. ## 5 Discussion and Outlook In this work, we have presented a modification of the random tensor network which allows us to reproduce known features of semiclassical holographic states. We discuss some open questions and possible future directions below. 
We have presented a toy model which, for simple choices of bulk input state, exhibits sharp phase transitions in the Renyi entropy and Renyi mutual information. With a sufficiently tuned set of probabilities and irrep dimensions, one could engineer a smooth varying Renyi entropy that matches with, for example, the correct one-interval CFT\({}_{2}\) Renyi entropy (5). It would be an even more complicated task to reproduce the correct Renyi entropy for multiple intervals in the CFT [26; 27]. The bulk algebras that we encountered in our model are type I von Neumann algebras. This is in contrast to the type II von Neumann algebras for gravity constructed using the crossed product [28; 29; 30]. A "type I approximation" to the crossed product was recently studied in [31]. It is thus tempting to incorporate the crossed product and the resultant birth of a type II algebra into the tensor network toy models of holography. Our gauge-invariant subregion algebras generally have nontrivial centers. On the other hand, a prescription was given in [32] to construct gauge-invariant subregion algebras with trivial centers in lattice gauge theory. This prescription involves adding operators to the algebra that we do not include, so it does not contradict our results in any way. Here we have implemented a graph version of the lattice gauge theory construction along the lines of Kogut and Susskind, but crucially without dynamics, due to the lack of a Hamiltonian. Because of this, our construction does not have anything more to say about time evolution in tensor networks than previous models. It would be interesting to understand how to incorporate a Hamiltonian and the associated time evolution into tensor networks. It would also be interesting to study the commutators of intersecting area operators in our gauged RTN, which in standard AdS/CFT do not commute [33]. Figure 4: Phase transitions in the Rényi mutual information. Here we set \(q=10^{50}\), \(p^{\prime}=10^{-16}\), and \(p^{\prime\prime}=10^{-24}\). We plot the dominant contribution to the Rényi mutual information in the three phases (dashed) as well as the fully analytic interpolating function (solid). ## Acknowledgements We thank Chris Akers, Horacio Casini, David Grabovsky, Daniel Harlow, Kristan Jensen, Don Marolf, and Pratik Rath for interesting discussions. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-19-1-0360. This material is also based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011702. SAM would like to thank the Centro de Ciencias de Benasque Pedro Pascal for their hospitality while a portion of this work was completed.
2305.00392
Bayesian Inference of Supernova Neutrino Spectra with Multiple Detectors
We implement the Bayesian inference to retrieve energy spectra of all neutrinos from a galactic core-collapse supernova (CCSN). To achieve high statistics and full sensitivity to all flavours of neutrinos, we adopt a combination of several reaction channels from different large-scale neutrino observatories, namely inverse beta decay on proton and elastic scattering on electron from Hyper-Kamiokande (Hyper-K), charged current absorption on Argon from Deep Underground Neutrino Experiment (DUNE) and coherent elastic scattering on Lead from RES-NOVA. Assuming no neutrino oscillation or specific oscillation models, we obtain mock data for each channel through Poisson processes with the predictions, for a typical source distance of 10 kpc in our Galaxy, and then evaluate the probability distributions for all spectral parameters of theoretical neutrino spectrum model with Bayes' theorem. Although the results for either the electron-neutrinos or electron-antineutrinos reserve relatively large uncertainties (according to the neutrino mass hierarchy), a precision of a few percent (i.e., $\pm 1 \% \sim \pm 4 \%$ at a credible interval of $2 \sigma$) is achieved for primary spectral parameters (e.g., mean energy and total emitted energy) of other neutrino species. Moreover, the correlation coefficients between different parameters are computed as well and interesting patterns are found. Especially, the mixing-induced correlations are sensitive to the neutrino mass hierarchy, which potentially makes it a brand new probe to determine the neutrino mass hierarchy in the detection of galactic supernova neutrinos. Finally, we discuss the origin of such correlation patterns and perspectives for further improvement on our results.
Xu-Run Huang, Chuan-Le Sun, Lie-Wen Chen, Jun Gao
2023-04-30T05:26:21Z
http://arxiv.org/abs/2305.00392v2
# Bayesian Inference of Supernova Neutrino Spectra with Multiple Detectors ###### Abstract We implement the Bayesian inference to retrieve energy spectra of all neutrinos from a galactic core-collapse supernova (CCSN). To achieve high statistics and full sensitivity to all flavours of neutrinos, we adopt a combination of several reaction channels from different large-scale neutrino observatories, namely inverse beta decay on proton and elastic scattering on electron from Hyper-Kamiokande (Hyper-K), charged current absorption on Argon from Deep Underground Neutrino Experiment (DUNE) and coherent elastic scattering on Lead from RES-NOVA. Assuming no neutrino oscillation or specific oscillation models, we obtain mock data for each channel through Poisson processes with the predictions, for a typical source distance of 10 kpc in our Galaxy, and then evaluate the probability distributions for all spectral parameters of theoretical neutrino spectrum model with Bayes' theorem. Although the results for either the electron-neutrinos or electron-antineutrinos reserve relatively large uncertainties (according to the neutrino mass hierarchy), a precision of a few percent (i.e., \(\pm 1\%\sim\pm 4\%\) at a credible interval of \(2\sigma\)) is achieved for primary spectral parameters (e.g., mean energy and total emitted energy) of other neutrino species. Moreover, the correlation coefficients between different parameters are computed as well and interesting patterns are found. Especially, the mixing-induced correlations are sensitive to the neutrino mass hierarchy, which potentially makes it a brand new probe to determine the neutrino mass hierarchy in the detection of galactic supernova neutrinos. Finally, we discuss the origin of such correlation patterns and perspectives for further improvement on our results. a,b]Xu-Run Huang, a,b]Chuan-Le Sun, a]Lie-Wen Chen, a]Jun Gao a]School of Physics and Astronomy, Shanghai Key Laboratory for Particle Physics and Cosmology, and Key Laboratory for Particle Astrophysics and Cosmology (MOE), Shanghai Jiao Tong University, Shanghai 200240, China b]Department of Physics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong S.A.R., China [email protected] [email protected] [email protected] [email protected] supernova neutrinos, Bayesian analysis ## 1 Introduction The epochal detection of neutrino signals of SN 1987A, deriving from the Large Magellanic Cloud (\(\sim 50\) kpc), revealed the veil of multi-messenger era of astrophysics. Although only about two dozen neutrinos from this transient were caught by three lucky detectors, namely Kamiokande II [1], Irvine-Michigan-Brookhaven (IMB) [2] and Baksan [3], this detection renders to us the first glimpse into the collapsing core of a dying massive star. After that, various analyses, based on such sparse data, confirmed the outline of stellar core collapse and meanwhile imposed constraints on elusive properties of neutrinos [4; 5; 6; 7; 8; 9]. Three decades after that landmark, extraordinary progresses have been made among the modelling of stellar core collapse [10; 11; 12; 13; 14; 15], neutrino physics [16] and neutrino detection [17; 18; 19]. That is, millions of neutrinos will be detected with unprecedentedly high precision in modern neutrino observatories if the next galactic CCSN exploded at a typical distance of \(\sim 10\) kpc (approximately the distance between the centre of the Milky Way and our Solar System) [20; 21]. 
Such a detection will, without doubt, provide a much vaster playground for investigating important topics in both CCSN physics and neutrino physics [20; 21; 22] (as well as other potentially interesting topics beyond these domains [23]). Modern hydrodynamic codes are now capable of performing successful simulations of the collapse and explosion of massive stars [24; 25; 26; 12]. They enrich our understanding of the explosion mechanism and of the characteristics of the associated neutrino emission [22; 15]. However, a direct confirmation of those models is still missing and thus highly anticipated. Multiple neutrino detectors are currently in operation and scrutinizing the cosmos, or are expected to operate in the future. Furthermore, some of them promise unprecedentedly high statistics if the target is not too far away, including water-based Cherenkov detectors (Hyper-Kamiokande [27], IceCube [28]), liquid scintillator detectors (JUNO [29], THEIA [30]), liquid argon time projection chambers (DUNE [31; 32; 33]) and Pb-based cryogenic detectors (RES-NOVA [34; 35]), among others. Although it is difficult to predict when the next CCSN will occur in our vicinity, a rate of \(1.63\pm 0.46\) CCSN/100 y has been estimated for the Milky Way and galaxies in the Local Group [36]. It is therefore reasonable to anticipate at least one galactic CCSN during the missions of these contemporary or next-generation detectors. Such a prospect has attracted considerable attention in the astrophysics and particle physics communities on how to maximize the scientific return from such a detection. Among the related tasks, reconstructing the energy spectrum of the neutrinos is of great physical significance but demanding in terms of the amount and quality of data. Owing to its relatively large cross section and the modest requirements it places on detector construction, inverse beta decay on proton (IBD-p) has become the most widely utilised reaction channel in large-scale neutrino detectors [27; 29; 30]. This promises good sensitivity to electron-antineutrinos. Elastic scattering on electron and charged current reactions on nuclei (e.g. \({}^{12}\)C [29], \({}^{16}\)O [27] and \({}^{40}\)Ar [32; 33]) offer access to electron-neutrinos. Previous works have shown that a reasonable precision is achievable in the measurement of the supernova \(\nu_{e}\) spectrum [37; 38]. The remaining task is then to achieve sufficient sensitivity to heavy flavour neutrinos, which can only undergo neutral current processes in this low-energy region. Elastic scattering on proton (pES) in scintillator detectors has therefore been proposed as a natural access to the heavy flavour part of supernova neutrinos [39]. Meanwhile, the RES-NOVA project, recently proposed in ref. [34] with the primary mission of detecting supernova neutrinos, promises high statistics via coherent elastic neutrino-nucleus scattering on Lead. Note that the different species of heavy flavour neutrinos are generally indistinguishable from each other, since their charged partners are not produced in appreciable numbers during stellar core collapse 1. In any case, a synergy of reaction channels is indispensable for extracting flavour-dependent information (e.g. the combination of IBD-p, elastic scattering on electron/proton and charged/neutral current reactions on nuclei [40; 41; 42; 43; 44; 45]).
Sometimes, \(\nu_{x}\) and \(\bar{\nu}_{x}\) appear simultaneously, then they indicate particles and anti-particles, respectively. According to methodology, previous efforts can be schematically divided into two categories: statistical approaches and unfolding processes. Based on certain templates, statistical analysis extracts signals from noisy data with high efficiency, and thus has been usually adopted [37; 38; 40; 41; 42]. In such analyses, the profiles of neutrino fluxes are commonly depicted by the sophisticated Garching formula [46], which has been proven to be well compatible with high-resolution simulations [47]. To some extent, this simple fit represents our sophistication on the modelling of stellar core collapse. However, the heavy dependence on this analytic formula may potentially discard some important features of the real signals. Unfolding methods [43; 44; 45] are capable of alleviating such drawback, since they do not rely on any analytical formulas. But the shortages of such methods are even more severe. Aside from the complexity, the spectral reversion with response matrix belongs to the case of ill-posed problem, which means that small errors or noise can easily lead to artificial patterns in the results [45]. So, the pragmatic strategy is to implement these two processes complementarily in analysis of supernova neutrinos. They all offer meaningful information, only in different manner. In this work, we employ the Bayesian statistics to perform such evaluations. In the last decades, Bayesian method [48] has been proven to be a powerful tool in questions generally handling uncertainty, including gravitational wave astronomy [49], relativistic heavy-ion collisions [50], astrophysics and cosmology [51; 52], and fields of human activity beyond fundamental physics (e.g. Bayesian networks). Especially, it had already been introduced to the analysis of neutrino signals from SN 1987A [9]. In this paper, we demonstrate the use of Bayes' theorem to evaluate the spectral parameters for all flavours of neutrinos from a galactic CCSN. At the source, we adopt the time-integrated spectra for each type of neutrinos from a long-term axisymmetric core-collapse simulation which is reported in ref. [38]. Then, the simple adiabatic conversion in the CCSN [53] is applied here to account for the inevitable oscillation effects, including the case of normal mass hierarchy (NH) and inverted mass hierarchy (IH). We also show the results with no oscillation effects. However, any other neutrino conversion models can also be implemented in principle. As to the detection, we attempt to simultaneously obtain high statistics and full sensitivities to all types of neutrinos by taking advantage of three large-scale neutrino observatories, namely Hyper-K, DUNE and RES-NOVA. It should also be mentioned that the pES channel in JUNO is capable of performing flavour-blind detection with high energy resolution. However, it is reported that the reconstructed \(\nu_{e}\) and \(\nu_{x}\) spectra suffer from a substantial systematic bias of energy threshold induced by the pES channel's insensitivity to neutrinos with energy below 20 MeV [44]. Note that the peak is usually located at \(\sim 10\) MeV in the spectrum of supernova neutrinos. Instead, the proposed 1 keV threshold for nuclear recoil energy in RES-NOVA offers the flavour-blind sensitivity to neutrinos with energy above \(\sim 10\) MeV [34]. Detailed configurations of these detectors will be discussed later. 
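For concreteness, the adiabatic flavour conversion invoked above amounts to a linear reshuffling of the source fluxes. The sketch below encodes our reading of the standard adiabatic relations of ref. [53], keeping only the leading terms (the \(\theta_{13}\)-suppressed pieces are dropped); the value of \(\sin^{2}\theta_{12}\) is an assumed input rather than a number quoted in this work.

```python
SIN2_THETA12 = 0.31   # assumed solar mixing angle sin^2(theta_12)

def fluxes_at_earth(F0_nue, F0_anue, F0_nux, hierarchy="NH"):
    """Map source fluxes (F0_*) to fluxes at Earth under adiabatic MSW conversion.
    F0_nux is the flux of a single heavy-flavour species at the source."""
    s2, c2 = SIN2_THETA12, 1.0 - SIN2_THETA12
    if hierarchy == "NH":
        F_nue, F_anue = F0_nux, c2 * F0_anue + s2 * F0_nux      # nu_e fully swapped
    elif hierarchy == "IH":
        F_nue, F_anue = s2 * F0_nue + c2 * F0_nux, F0_nux       # anti-nu_e fully swapped
    else:                                                       # no oscillation
        F_nue, F_anue = F0_nue, F0_anue
    # Total flux conservation fixes the (averaged) heavy-flavour flux at Earth.
    F_nux = 0.25 * (F0_nue + F0_anue + 4.0 * F0_nux - F_nue - F_anue)
    return F_nue, F_anue, F_nux
```

Any other conversion model can in principle be slotted in by replacing this mapping before the event-rate calculation.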
The fast event-rate calculation tool, _SNOwGLoBES_2, is employed to compute count rates for channels in Hyper-K and DUNE, while that for RES-NOVA is done with a code developed by our own 3. In section 2, we review the detector characteristics and generate the mock data for further analysis. Aside from the detector responses, noise from Poisson processes is also included in the mock data. In section 3, we demonstrate how the spectral parameters are estimated from the mock data via Bayes' theorem, and numerical results as well. Finally, we conclude in section 4. Footnote 2: _SNOwGLoBES_ provides detector responses to many reaction channels (see e.g. ref. [17] for details) and it is available at [https://webhome.phy.duke.edu/~schol/snowglobes/](https://webhome.phy.duke.edu/~schol/snowglobes/). Footnote 3: This code and _SNOwGLoBES_ have been integrated in our Bayesian code. ## 2 Supernova neutrinos in detectors Before getting into details of Bayesian analysis, we summarise the features of detectors employed in this work and the characteristics of supernova neutrinos. Since no experimental data is available up to now, we calculate the number of expected events in each energy bin for each channels, based on the neutrino fluxes from numerical simulation, and then extract the number of events for analysis from a Poisson distribution with the expected count as average value. How we consider the neutrino oscillation effects is also presented in this section. ### Detector configurations The primary reaction channels for the selected detectors, namely IBD-p in Hyper-K, charged current reaction on Argon (vAr(CC)) in DUNE and neutral current scattering on Lead (vPb(NC)) in RES-NOVA, are adopted in this study to provide sensitivities to \(\bar{\nu}_{e}\), \(\nu_{e}\) and \(\nu\), sequentially. We also include the elastic scattering on electron (eES) in Hyper-K, in order to further enhance the sensitivity of this collection to \(\nu_{e}\) and \(\nu_{x}\). Note that eES channel have different cross sections to each type of neutrinos, i.e., \(\sigma_{\nu_{e}}>\sigma_{\bar{\nu}_{e}}>\sigma_{\nu_{x}}\)4. It is also interesting to mention that neutral current scattering on Argon in DUNE can potentially offer good sensitivity to \(\nu_{x}\), just not yet fully studied [33]. Footnote 4: Strictly speaking, \(\sigma_{\nu_{x}}\) is slightly greater than \(\sigma_{\nu_{x}}\) (see figure 2 in ref. [17]). Hyper-K is a next-generation water-based Cherenkov detector which is scheduled to start data-taking in 2027 [54]. Its primary missions include precision measurements on neutrino oscillations, searches for proton decay and observations on astrophysical neutrinos [27]. In this study, we employ two reaction channels in Hyper-K, namely the IBD-p (\(\bar{\nu}_{e}+p\to e^{+}+n\)) and eES (\(\nu+e^{-}\rightarrow\nu+e^{-}\)). Electrons and anti-electrons are produced in these scatterings and emit Cherenkov lights along with their motions in ultra-pure water. Then, the events can be reconstructed by collecting those Cherenkov photons via photomultiplier tubes (PMT). Currently, the reconstruction of IBD-p event has been well established. Meanwhile, eES event can also get separated from IBD-p signals, to some extent, according to their different angular dependence. Furthermore, it is reported that the neutron tagging efficiency can get improved substantially through addition of gadolinium (e.g., an efficiency of \(\sim 90\%\) in a gadolinium-loaded Super-K) [37]. 
That is, the tagging efficiency for the two reaction channels is expected to be promising since the possibility of gadolinium loading has already been considered in the design report of Hyper-K 5. Here we just assume a generally full tagging efficiency for the two reactions. On the other hand, according to the design report, the fully configured Hyper-K detector consists of two tanks, of which each contains 258 kton of ultra-pure water. The designed fiducial mass for each tank reaches 187 kton. Therefore, a 374 kton of total fiducial mass for Hyper-K has been adopted in some of previous works (see, e.g., ref. [38; 41]). However, the realistic fiducial mass for one tank can exceed this designed scale and reach 220 kton in the detection of supernova neutrinos, because of the localization in time and the neglect of low energy radioactive background due to the short-time feature of supernova neutrino signals [27]. We thus consider one tank with a fiducial mass of 220 kton, just following the available scale also adopted in ref. [45]. That is, only half of the capability of Hyper-K is under evaluation in this study. As to detector response, we adopt the same smearing matix and post-smearing efficiency as that of Super-K I (or III, IV), which are provided in _SNOwGLoBES_. Its response corresponds to the assumption of 40% PMT coverage. Footnote 5: The project of loading gadolinium into Super-K has already been approved. And this will provide a template for further application in Hyper-K. See ref. [27] for more details. DUNE [31; 32] will consist of four time projection chambers which contains 70 kton liquid argon in total. The nominal fiducial mass is 40 kton, and we also adopt this value in this study. However, in principle the available mass may exceed this value when studying supernova neutrinos, just like the case in Hyper-K. The primary goals for DUNE include precision measurements on neutrino oscillation parameters and searching for new physics. Among current-operated and future-planned neutrino detectors, DUNE will bring unique sensitivity to \(\nu_{e}\) with energies down to \(\sim 5\) MeV via the vAr(CC) reaction (\(\nu_{e}+^{40}\mathrm{Ar}\to e^{-}+^{40}\mathrm{K}^{*}\)). When such reactions happen, short electron tracks will be created and recorded, potentially along with gamma-rays in the chambers. DUNE will also have excellent time resolution which assures its capability of precisely depicting the neutrino burst time profile if the source is close enough. For instance, it is possible to identify the neutrino "trapping notch", which emerges as a consequence of neutrino trapping in the dense collapsing core and typically has a width of \(1-2\) ms, for closest CCSNe (few kpc) [33]. Moreover, in the galactic supernova neutrino detection landscape with DUNE, one of the most interesting topic is that the mass hierarchy problem in neutrino oscillations can be decisively determined by the detection of neutronization burst which is almost composed of \(\nu_{e}\) when produced [53]. The above works also adopted _SNOwGLoBES_ in their studies. Therefore, it is quite convenient for us since the configurations of DUNE has already been provided as well. RES-NOVA [34; 35] is a newly proposed experiment with the primary aim of hunting neutrinos from CCSNe. 
It intends to achieve a flavour-blind measurement with low energy threshold, high energy resolution and high statistics to supernova neutrinos, by taking advantage of the large coherent elastic scattering cross sections between MeV neutrinos and Pb nuclei, the ultrahigh radiopurity of archaeological Pb and modern technologies on cryogenic detector. This innovative project carries the ambition of providing a \(5\sigma\) sensitivity to supernova bursts up to Andromeda. However, the detailed configuration has not been settled yet. In this work, we consider a simple realisation of RN-3 in ref. [34], which is constructed with pure Pb crystals and has a detector mass of 465 ton. It will have a 1 keV energy threshold and a 0.2 keV resolution for nuclear recoil energy. This means that RES-NOVA could be sensitive to neutrinos with energies down to \(\sim 10~{}\mathrm{MeV}\). When neutrinos arrive at the detector, they can possibly undergo the vPb(NC) processes (\(\nu+\mathrm{Pb}\rightarrow\nu+\mathrm{Pb}\)). After that, the target nucleus will gain a recoil energy in the magnitude of a few keV, and then billions of phonons will get created in the absorber and act as information carriers. Such experimental strategy can possibly make full use of the entire energies deposited in the detector and lead to a realisation of excellent energy reconstruction. However, unlike the previous detectors, the configuration of RES-NOVA is currently absent in _SNOwGLoBES_. We calculate the event rates following our previous works (i.e., ref. [23, 55]). The averaged neutron skin of Pb nuclei is fixed on the experimental value of \({}^{208}\)Pb, namely \(R_{n}-R_{p}=0.283\pm 0.071~{}\mathrm{fm}\) from PREX-II [56]. Furthermore, in order to properly account for the effect of threshold, we adopt such an acceptance efficiency function: \[A(x)=\frac{a}{1+\mathrm{e}^{-k(x-x_{0})}}, \tag{1}\] where the values of parameters are taken as \(a=1,k=5,x_{0}=1.5\). Such arrangements assure that the detection efficiency will swiftly rise up to around 100% from \(\sim 0\%\) when nuclear recoil energy goes to 2 keV from 1 keV, and approaches 100% asymptotically after 2 keV. In fact, this function derives from the acceptance efficiency of the COHERENT experiment [57, 58], and can also produce similar structure as the reconstruction efficiency function of DUNE [33], just with different parameters. Note that this efficiency represents a conservative estimate and the real one is yet to be determined. ### Neutrino spectra and oscillations State-of-the-art stellar evolution theory indicates that dying massive stars would undergo violent core collapse at their end, generating an outward-propagating shock-wave to expel their mantles and exploding as spectacular CCSNe which can emerge as luminous as their host galaxy. In such explosions, almost \(\sim 99\%\) of the released gravitational potential energy (\(\sim 10^{53}~{}\mathrm{erg}\)) will be liberated through neutrino emission. Moreover, the evolutionary histories of the dense core are imprinted in both the temporal structures and energy spectra of neutrino emissions. Note that the neutrinos can still deliver information out of the collapsing core, even if no electromagnetic signal was emitted due to the formation of black hole in failed CCSN. 
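Returning briefly to the detector side, the acceptance efficiency of eq. (1) used for the RES-NOVA threshold can be written as a one-line function. Below is a minimal sketch in Python (the language assumed for all code sketches in this article), with the parameter values quoted after eq. (1); it is an illustration, not the code actually used for the event-rate calculation.

```python
import numpy as np

def acceptance(x, a=1.0, k=5.0, x0=1.5):
    """Acceptance efficiency of eq. (1); x is the nuclear recoil energy in keV."""
    return a / (1.0 + np.exp(-k * (x - x0)))

# The efficiency is ~8% at the 1 keV threshold, ~92% at 2 keV,
# and approaches 100% asymptotically at larger recoil energies.
recoils = np.array([1.0, 1.5, 2.0, 3.0])
print(dict(zip(recoils.tolist(), np.round(acceptance(recoils), 3).tolist())))
```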
The detailed characteristics of neutrino emission depend not only on the properties of progenitor star (e.g., mass, compactness and so on [59, 60]), but also on the nuclear equation of state of neutron star which still remains largely uncertain [61, 62, 63]. Except that, currently our comprehension on the spectral structure of supernova neutrinos is primarily obtained from studies on numerical simulations, due to lack of experimental data. According to detailed investigations on supernova neutrino spectra [46, 47], the instantaneous spectrum for each type of neutrinos will generally follow the quasi-thermal distribution (also called Garching formula), which can be presented as \[f_{\nu}(E_{\nu})=\mathcal{A}\left(\frac{E_{\nu}}{\langle E_{\nu}\rangle}\right)^{ \alpha}\exp\left[-(\alpha+1)\frac{\mathrm{E}_{\nu}}{\langle\mathrm{E}_{\nu} \rangle}\right]. \tag{2}\] Here, \(E_{\nu}\) and \(\langle E_{\nu}\rangle\) are the energy and average energy of neutrino in the unit of MeV, respectively; \(\mathcal{A}=\frac{(\alpha+1)^{\alpha+1}}{\langle E_{\nu}\rangle\,\Gamma( \alpha+1)}\) is the normalization factor with \(\Gamma\) being the gamma function; and \(\alpha\) characterises the amount of spectral pinching (with large value leading to suppression on high energy tail). \(\alpha\) can be determined by the energy moment of the distribution, e.g., the relation \[\frac{\left\langle E_{\nu}^{2}\right\rangle}{\langle E_{\nu}\rangle^{2}}=\frac {2+\alpha}{1+\alpha}. \tag{3}\] Actually, eq. (2) has been usually adopted as well to describe the time-integrated spectra in previous studies [38; 39; 40; 41; 42; 64], and so do we. Now, assuming no neutrino oscillation, the flux on the Earth can be expressed as \[\Phi(E_{\nu})=\frac{1}{4\pi d^{2}}\frac{\mathcal{E}_{\nu}}{\langle E_{\nu} \rangle}f_{\nu}(E_{\nu}), \tag{4}\] where \(d\) is the distance of source, and \(\mathcal{E}_{\nu}\) denotes the total energy emitted through a specific species of neutrinos. The spectral parameters for the source, adopted in this work, are given in table 1. It should be mentioned that the progenitor model, used to generate these parameters in the simulation, is expected to explode as one of the most common type II supernova (see ref. [38] for more details). Now, the predicted event rate for each channel can be calculated. For Hyper-K and DUNE, we set a uniform 100 energy grids to cover the energy range of \(0.25-100.00\) MeV 6 and drop the first several zones to approximately obtain a threshold of \(5\leavevmode\nobreak\ \mathrm{MeV}\). For RESNOVA, we also set a uniform energy grid with the bin width of \(0.2\leavevmode\nobreak\ \mathrm{keV}\), which starts from the threshold of \(1\leavevmode\nobreak\ \mathrm{keV}\)7. We have also tested another non-uniform grid scheme, i.e., the adaptive energy-gridding technique 8 (see ref. [45]), and the results of analysis turn out to be almost the same as that of current grid scheme. With the prediction data, the mock data can be generated now, e.g., given the predicted number of events \(N_{pd}\), the corresponding number \(N_{md}\) can be extracted from a Poisson distribution with \(N_{pd}\) being the average value 9. 
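To make the spectrum and mock-data construction concrete, the sketch below evaluates the quasi-thermal (Garching) spectrum of eq. (2), builds the time-integrated flux of eq. (4) with the parameters of table 1, and draws one Poisson realisation of mock counts around a set of predicted bin counts. It is only a sketch: the detector response (smearing, efficiencies, cross sections) handled by _SNOwGLoBES_ is not reproduced, and the predicted counts `N_pd` used at the end are illustrative numbers, not our actual predictions.

```python
import numpy as np
from scipy.special import gamma

KPC_CM = 3.0857e21  # cm per kpc

def garching_spectrum(E, alpha, Emean):
    """Normalized quasi-thermal spectrum f_nu(E) of eq. (2); E, Emean in MeV."""
    A = (alpha + 1.0) ** (alpha + 1.0) / (Emean * gamma(alpha + 1.0))
    return A * (E / Emean) ** alpha * np.exp(-(alpha + 1.0) * E / Emean)

def fluence(E, alpha, Emean, Etot_erg, d_kpc=10.0):
    """Time-integrated flux on Earth, eq. (4), in neutrinos / (cm^2 MeV)."""
    Etot_MeV = Etot_erg / 1.602e-6          # erg -> MeV
    d_cm = d_kpc * KPC_CM
    return Etot_MeV / (4.0 * np.pi * d_cm**2 * Emean) * garching_spectrum(E, alpha, Emean)

# Spectral parameters from table 1: (alpha, <E> [MeV], total energy [erg])
params = {"nu_e": (2.67, 14.1, 7.70e52),
          "nubar_e": (3.28, 16.3, 6.44e52),
          "nu_x": (2.20, 17.2, 5.88e52)}

E = np.linspace(0.25, 100.0, 100)           # MeV, uniform grid as in the text
phi_nubar_e = fluence(E, *params["nubar_e"])
print(float(phi_nubar_e.max()))             # peak of the nubar_e fluence

# Mock data: one Poisson realisation around predicted bin counts N_pd
rng = np.random.default_rng(seed=1)
N_pd = np.array([120.0, 340.0, 280.0, 90.0])   # illustrative predicted counts per bin
N_md = rng.poisson(N_pd)
print(N_md)
```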
The \begin{table} \begin{tabular}{l l l l} \hline \(\nu\) & \(\alpha_{\nu}\) & \(\langle E_{\nu}\rangle\) [MeV] & \(\mathcal{E}_{\nu}\) [\(10^{52}\leavevmode\nobreak\ \mathrm{erg}\)] \\ \hline \(\nu_{e}\) & 2.67 & 14.1 & 7.70 \\ \(\bar{\nu}_{e}\) & 3.28 & 16.3 & 6.44 \\ \(\nu_{x}\) & 2.20 & 17.2 & 5.88 \\ \hline \end{tabular} \end{table} Table 1: Spectral parameters for the time-integrated spectra of supernova neutrino fluxes (see table 1 in ref. [38]). results are shown in figure 1. The caveat is that such a treatment means that the mock data is extracted from one simulated measurement. So, it is inevitable that the information reflected by the data may deviates from that of the original source due to the Poisson processes. Only high statistics can alleviate such deviations. However, this is also the fact faced by realistic measurements. Flavour transitions are also inevitable for supernova neutrinos. These messengers are primarily produced in the dense core of a dying star, penetrate through the thick stellar mantle and ultimately arrive in detectors on the Earth. Various conditions, encountered in this long journey, lead to complex transition patterns, e.g., adiabatic/non-adiabatic transitions, self-induced transitions and earth matter effects [20, 53, 65]. Since this work is not meant to dig into the detail of flavour conversion, we focus on the adiabatic transition associated with smoothly-varying matter potentials in supernovae, for simplicity. On the other hand, the three-flavour neutrino mixing framework has been well established experimentally due to tremendous experimental efforts over the past few decades. So we can describe the flavour transitions in supernovae with proper formulas under specific assumptions. However, there Figure 1: Predicted events and mock data for each reaction channel in Hyper-k, DUNE and RES-NOVA. \(E_{\nu}\) and \(E_{r}\) are the reconstructed neutrino energy and nuclear recoil energy, respectively. The source is assumed to be located at a typical distance, i.e., \(d=10\) kpc, and no oscillation effect is under evaluation. still exist two unknowns up to now in this scenario, i.e., the mass hierarchy and the complex phase associated with CP-violating observable. For the latter one, previous works have shown that it will not cause sizeable modifications to the signals of supernova neutrinos [66, 67]. But the previous one is crucial to the flavour composition of supernova neutrinos in detectors. And that necessitates the consideration of both NH and IH in this work. Assuming the adiabatic Mikheyev-Smirnov-Wolfenstein (MSW) model, in the case of NH, the observed fluxes (\(\Phi_{\nu}\)) are composed with the original fluxes (\(\Phi_{\nu}^{0}\)) in the following forms [53]: \[\Phi_{\nu_{e}} =\Phi_{\nu_{x}}^{0} \text{(NH)}, \tag{5}\] \[\Phi_{\bar{\nu}_{e}} =\cos^{2}\theta_{12}\Phi_{\bar{\nu}_{e}}^{0}+\sin^{2}\theta_{12} \Phi_{\bar{\nu}_{x}}^{0} \text{(NH)}, \tag{6}\] where \(\theta_{12}\) is the mixing angle with the value \(\sin^{2}\theta_{12}=0.307\pm 0.013\)[68]. In the case of IH, the formulas are rearranged as [53] \[\Phi_{\nu_{e}} =\sin^{2}\theta_{12}\Phi_{\nu_{e}}^{0}+\cos^{2}\theta_{12}\Phi_{ \nu_{x}}^{0} \text{(IH)}, \tag{7}\] \[\Phi_{\bar{\nu}_{e}} =\Phi_{\bar{\nu}_{x}}^{0} \text{(IH)}. \tag{8}\] And the total fluxes are conserved in both cases with such an equality: \[\Phi_{\nu_{e}}+\Phi_{\bar{\nu}_{e}}+4\Phi_{\nu_{x}}=\Phi_{\nu_{e}}^{0}+\Phi_{ \bar{\nu}_{e}}^{0}+4\Phi_{\nu_{x}}^{0} \text{(NH\&IH)}. 
\tag{9}\] Here \(\Phi_{\nu_{x}}\) and \(\Phi_{\bar{\nu}_{x}}\) represent the fluxes of neutrinos and anti-neutrinos with heavy flavours, respectively, and each is equal to one quarter of the total heavy-flavour flux. In the data analyses, we do not distinguish between them. From the above expressions, one can see that in the NH case the \(\nu_{e}\) component ultimately comes from the original \(\nu_{x}\) component, while the \(\bar{\nu}_{e}\) flavour is only partially transformed. In the IH case, the transformation is almost reversed: the \(\bar{\nu}_{e}\) flavour is now fully transformed while the \(\nu_{e}\) component is partially transformed. Note that, rather than a simple reversal, the extents of the partial transformations differ between the two cases. The oscillation effects on the predictions for each reaction channel are shown in figure 2. As one can see, the predicted energy spectra for the two mass hierarchies diverge from each other in the flavour-sensitive reaction channels, i.e., IBD-p and eES in Hyper-K and vAr(CC) in DUNE, while they completely overlap in the flavour-blind channel, vPb(NC) in RES-NOVA. It is also interesting to note that the different gaps between IH and NH in IBD-p and vAr(CC) reflect the different extents of the partial transformations. For the mock data used in the final analysis, we conduct the same extractions as before, this time including all of these ingredients. Figure 2: Predicted events in each reaction channel under inverted mass hierarchy (IH) or normal mass hierarchy (NH). \(E_{\nu}\) (\(E_{r}\)) denotes the reconstructed neutrino energy (nuclear recoil energy). The distance is assumed to be 10 kpc. The mock data for each case can be extracted with the same strategy as in figure 1 and are not shown here. ## 3 Bayesian inference and numerical results Data analysis can now be performed by applying Bayesian inference to the mock data generated in the previous section. We first briefly describe the basic ideas of Bayesian inference and the prior choices of our analysis, followed by the numerical results and some discussion. ### Basic ideas Bayesian statistics is fundamentally different from conventional frequentist statistics. In Bayesian probability theory, probability is treated as a subjective concept that depends on our state of knowledge, rather than as the objective limit of the relative frequency of an outcome. It can therefore be updated on the basis of new information, collected for example by conducting experiments. With a full understanding of the issue under investigation, the Bayesian probability will in principle settle at a stable value. The basic logical rule that allows such updating is Bayes' theorem, which can be written as \[P(\theta|D)\propto P(D|\theta)P(\theta). \tag{3.1}\] In the case of parameter estimation, \(\theta\) and \(D\) represent the model parameters to be estimated (collectively) and the dataset relevant to the model, respectively. The quantity to be evaluated is the posterior probability, \(P(\theta|D)\), which stands for the probability of \(\theta\) given the new dataset \(D\). \(P(\theta)\) is the prior probability, which quantifies our beliefs about \(\theta\) before the inclusion of the new information. The likelihood function \(P(D|\theta)\) is a mathematical function of \(\theta\) for a fixed dataset \(D\) (also denoted by \(\mathcal{L}(\theta;D)\)). 
It quantifies the probability of the observation of \(D\) when given the specific parameter \(\theta\). In this framework, the main task of inference will get descended into how to calculate the distribution of posterior probability, once the expressions of prior and likelihood are settled. Note that a proper realization of prior probability will be quite helpful in the analysis of less informative dataset, but, somehow, trivial in the case with dataset informative enough. In this work, since the Garching formula is adopted to describe the time-integrated spectra of supernova neutrinos, we get 9 model parameters, i.e., \[\vec{\theta}=(\alpha_{\nu_{e}},\alpha_{\bar{\nu}_{e}},\alpha_{\nu_{x}},\left<E_ {\nu_{e}}\right>,\left<E_{\bar{\nu}_{e}}\right>,\left<E_{\nu_{x}}\right>, \mathcal{E}_{\nu_{e}},\mathcal{E}_{\bar{\nu}_{e}},\mathcal{E}_{\nu_{x}}). \tag{3.2}\] The realisation of \(P(\vec{\theta})\) could be nontrivial. Generally speaking, the posterior distribution of previous inference can act as the prior distribution of new inference with new information. However, this is not the case in this study, due to the highly limited information provided by the measurement of SN 1987A. Up to now, our knowledge on this issue is primarily obtained from various simulations. In detail, the values of \(\alpha\) are usually varying with time in the range of \(2\lesssim\alpha\lesssim 4\)[46, 47, 69]. For \(\left<E_{\nu}\right>\), the magnitude of \(\sim 10\) MeV exists in almost all simulations and also gets confirmed by the observation of SN 1987A. Furthermore, a neutrino energy hierarchy is emerged as \(\left<E_{\nu_{e}}\right><\left<E_{\bar{\nu}_{e}}\right>\lesssim\left<E_{\nu_{ x}}\right>\) in simulations [11, 69]. For \(\mathcal{E}_{\nu}\), both simulations and SN 1987A indicate that the total released energy via neutrinos should lie in the vicinity of \(3\times 10^{53}\) erg. And the ansatz of energy equipartition among different flavours of neutrinos has also been found to be roughly valid in simulations. Based on the above statements, we quantify the prior knowledge with 9 independent Gaussian functions associated with the 9 spectral parameters, i.e., \[\log P(\theta>0)=-\frac{(\theta-\mu)^{2}}{2\sigma^{2}}+constant, \tag{3.3}\] where we exclude the non-physical negative quadrants. The relevant Gaussian parameters are given in table 2. It must be emphasized here that, with such arrangements, we do not intend to mean that the spectral parameters of neutrinos from the next galactic CCSN would follow these distributions. It rather expresses such a belief that we are quite confident that \(\theta\) will lie within \(\mu\pm\sigma\), very sure that \(\theta\) will lie within \(\mu\pm 2\sigma\) and almost certain that \(\theta\) will lie within \(\mu\pm 3\sigma\). Values far beyond these regions are still possible but just not likely to happen since that would break the current theoretical framework. Such priors cover the parameter spaces used in the previous analysis [41] with the regions of \(3\sigma\), and meanwhile accommodate strong deviations from the expected values. However, it should be noted again that the posterior will be eventually dominated by the data, instead of the choice of priors, when the dataset is informative enough. As a confirmation, we also conduct the analysis with flat priors, and the comparison is shown in appendix A. 
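As a minimal sketch of the prior just described (not the authors' actual implementation), the truncated Gaussian log-prior of eq. (3.3) can be coded as follows; the centres and widths are those of table 2, with \(-\infty\) returned for the excluded non-physical (negative) quadrants.

```python
import numpy as np

# Prior centres (mu) and widths (sigma) from table 2, in the order of eq. (3.2):
# (alpha_nu_e, alpha_nubar_e, alpha_nu_x,
#  <E_nu_e>, <E_nubar_e>, <E_nu_x>  [MeV],
#  Etot_nu_e, Etot_nubar_e, Etot_nu_x  [1e52 erg])
MU    = np.array([3.0, 3.0, 3.0, 12.0, 14.0, 16.0, 5.0, 5.0, 5.0])
SIGMA = np.array([1.0, 1.0, 1.0,  4.0,  4.0,  4.0, 5/3, 5/3, 5/3])

def log_prior(theta):
    """Independent Gaussian priors of eq. (3.3), truncated at theta > 0."""
    theta = np.asarray(theta, dtype=float)
    if np.any(theta <= 0.0):
        return -np.inf                      # exclude non-physical quadrants
    return float(-0.5 * np.sum(((theta - MU) / SIGMA) ** 2))

# Example: the log-prior evaluated at the injection values of table 1
theta_true = [2.67, 3.28, 2.20, 14.1, 16.3, 17.2, 7.70, 6.44, 5.88]
print(log_prior(theta_true))
```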
\begin{table} \begin{tabular}{c c c c c c} \hline & \(\alpha_{\nu}\) & \(\left<E_{\nu_{e}}\right>\) & \(\left<E_{\bar{\nu}_{e}}\right>\) & \(\left<E_{\nu_{x}}\right>\) & \(\mathcal{E}_{\nu}\left[10^{52}\text{ erg}\right]\) \\ \hline \(\mu\) & 3 & 12 & 14 & 16 & 5 \\ \(\sigma\) & 1 & 4 & 4 & 4 & 5/3 \\ \hline \end{tabular} \end{table} Table 2: The parameters of the Gaussian distributions in the priors. \(\mu\) and \(\sigma\) represent the centre values and standard deviations, respectively. \(\left<E_{\nu}\right>\)s are in units of MeV. The dataset consists of a series of energy bins and the related numbers of events, and we conduct the analysis with the following binned likelihood: \[\mathcal{L}_{\zeta}(\vec{\theta};D)=\prod_{i=1}^{\mathrm{N_{bin}}}\frac{\lambda_{ i}^{n_{i}}}{n_{i}!}\mathrm{e}^{-\lambda_{i}}, \tag{10}\] for the reaction channel \(\zeta\), where \(\mathrm{N_{bin}}\) is the number of energy bins, and \(\lambda_{i}\) and \(n_{i}\) represent the number of events in the \(i\)th bin of the predictions and of the mock data, respectively. \(\lambda_{i}\) is a function of \(\vec{\theta}\), while \(n_{i}\) belongs to \(D\). Such a Poisson distribution is also adopted in previous studies [41; 42]. The eventual likelihood is then simply expressed as \[\mathcal{L}(\vec{\theta};D)=\prod_{\zeta\in\ all\ exp.}\mathcal{L}_{\zeta}( \vec{\theta};D), \tag{11}\] after combining all the reaction channels. Other potentially useful reaction channels can also be included via this formula in the future. Furthermore, eq. (10) can be replaced in future studies with a more elaborate likelihood that thoroughly accounts for other uncertainties present in realistic measurements. The calculation of the posterior distribution used to be the most complicated part of Bayesian inference, but powerful methods and tools are now available to alleviate it. In this work, we implement the ensemble sampler tool _emcee_\({}^{\ 10}\)[70] to sample the 9-dimensional posterior distribution. The _emcee_ package is a Python implementation of an affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampler, a widely used alternative to the Metropolis-Hastings (M-H) algorithm \({}^{\ 11}\), and it has already been adopted in many published projects in the astrophysics literature. The usual MCMC caveat applies: the samples generated early in a chain can be heavily influenced by the choice of starting point in parameter space, due to the unavoidable correlation among neighbouring samples. In practice, this initial (burn-in) part is excluded; in this study we drop the first 200 samples of each chain to obtain stable sample sets. ### Demonstration We now perform the analysis and present the numerical results. As a start, we test the capability of our method in the case with no oscillation effects. To obtain an adequate determination of the posterior distribution, we draw a dataset of \(10^{6}\) samples and calculate the distribution of each parameter by marginalizing over the other parameters, following the law of total probability. The cases of different oscillation models are then evaluated through the same procedure. Throughout, \(d=10\) kpc is adopted as the default source distance. In practice, it is possible for this distance to be much smaller, e.g., for the nearby core-collapse supernova candidates reported in [71], including the famous Betelgeuse. Generally speaking, a smaller distance means higher statistics and hence better precision, as long as the neutrino flux is not so intense as to cause signal pile-up in the detector. 
However, it would be another topic, for future works, on how to properly deal with the effects of signal pile-up if the source is too close. As a test, we only estimate the distance effect in the case of \(d=5\) kpc here. Figure 3: Posterior distributions for the no oscillation case. The Gaussian prior distributions are functioning here. Plots on the diagonal show posterior distributions for the corresponding parameter after marginalization over other parameters, and the off-diagonal ones show correlations between them. Contours in the off-diagonal plots demonstrate the area of \(1\sigma,2\sigma\) and \(3\sigma\) credible level, respectively. The blue lines mark the parameter values to generate the mock data used in this analysis. #### 3.2.1 No oscillation Figure 3 shows the posterior distributions when no neutrino oscillation is considered12. The 1-dimension (1-D) distributions for all spectral parameters are plotted on the diagonal. We also present the representative values of these 1-D distributions, i.e., the maximum _a posteriori_ (MAP) estimate and the \(2\sigma\) credible intervals in the highest posterior density scheme, in table 3. As one can see, the three parameters for \(\bar{\nu}_{e}\) flux are constrained quite well in this analysis. In detail, the \(2\sigma\) symmetrized fractional uncertainties 13 reach \(\pm 2.8\%\), \(\pm 0.8\%\) and \(\pm 0.9\%\) for \(\alpha\), \(\langle E\rangle\) and \(\mathcal{E}\), sequentially. Such high precision is primarily attributed to the ultra-high statistics provided by the IBD-p channel in Hyper-K, as can be seen in figure 0(a). Meanwhile, the sensitivity to \(\nu_{e}\) mainly derives from the vAr(CC) reaction in DUNE and the eES channel in Hyper-K. Modest uncertainties are also achieved as \(\pm 12.4\%\), \(\pm 4.3\%\) and \(\pm 4.9\%\). However, the precision for \(\nu_{x}\) flux is relatively poor and the fractional uncertainties are only obtained as \(\pm 33.4\%\), \(\pm 10.7\%\) and \(\pm 10.9\%\). The vPb(NC) reaction in RES-NOVA renders the primary sensitivity to \(\nu_{x}\) and also achieves a number of total events even larger than the sum of that from the eES channels in Hyper-K and the vAr(CC) channel in DUNE. However, the fact is that \(\sim 1/3\) of the signals in RES-NOVA come from the \(\nu_{e}\) and \(\bar{\nu}_{e}\) fluxes. That is, the information of \(\nu_{x}\) from RES-NOVA is actually contaminated. Nevertheless, higher statistics will further improve the accuracy, e.g., enlarging the fiducial mass of RES-NOVA by 10 times will improve the accuracy by \(\sim 50\%\) in our test. On the other hand, due to the strong suppression of Pb nuclei on the nuclear recoil energy, a threshold of 1 keV in nuclear recoil energy only makes RES-NOVA sensitive to neutrinos with energy above \(\sim 10~{}\mathrm{MeV}\). Such threshold, although literally quite low among detectors in the same category, is nevertheless not low enough for precision measurement of the spectrum of \(\nu_{x}\) flux in supernova neutrinos, since the information below and even in the peak is lost. Such loss naturally jeopardizes precision extraction of information related to spectral shape. 14 Footnote 10: [https://emcee.readthedocs.io/en/stable/index.html](https://emcee.readthedocs.io/en/stable/index.html). Footnote 11: The M-H algorithm is the most commonly used Markov chain Monte Carlo algorithm (see ref. [48, 70] for more details). Footnote 12: _corner_ is used to plot such diagrams [72]. 
Footnote 13: Indeed, asymmetries appear among these 1-D distributions in figure 3 and also in table 3, and will also show up in that of other cases. For simplicity, the symmetrized fractional uncertainties are calculated by averaging the positive and negative uncertainties over the most probable values here and after. Footnote 14: In the test analyses, we assume a \(\sim 6~{}\mathrm{MeV}\) threshold of neutrino energy (i.e., \(0.4~{}\mathrm{keV}\) threshold of nuclear recoil energy) for RES-NOVA, and the accuracy for \(\nu_{x}\) is improved by a factor of \(1/4\). The neutral current scatterings on \({}^{16}\mathrm{O}\) in Hyper-K can also provide information on the low energy region (e.g., \(\sim 400\) events in the energy range of \(5\sim 10~{}\mathrm{MeV}\)). The inclusion of this reaction also lead to a moderate improvement (\(\sim 25\%\)) on the accuracy of \(\alpha_{\nu_{x}}\). On the other hand, the off-diagonal plots suggest the correlations between parameters. Generally speaking, it is quite noticeable that significant correlations appear among parameters in the same type of neutrinos universally, and also only exist among them. Furthermore, these correlations even show certain features for a specific type of neutrinos, i.e., strong positive correlation between \(\alpha\) and \(\langle E\rangle\) of whom both determine the shape of spectrum, and noteworthy negative correlations between \(\mathcal{E}\) and one of the above spectral shape parameters, respectively. Such correlation patterns are primarily embedded in the parameterization of neutrino spectrum (see eq. (2)) and eq. (4). It is also potentially interesting to mention that such correlations are the weakest for the \(\bar{\nu}_{e}\) flavour while that of the others are comparable 15. The distance effect is tested here. For a closer source with \(d=5\) kpc, the higher statistics in data lead to better accuracies on the reconstructed spectral parameters, while almost no effect on the correlations among these parameters. In detail, the symmetrized factional uncertainties are updated by \(\pm 6.4\%\), \(\pm 2.3\%\) and \(\pm 2.6\%\) for \(\nu_{e}\) flavour, \(\pm 1.5\%\), \(\pm 0.4\%\) and \(\pm 0.5\%\) for \(\bar{\nu}_{e}\) part and \(\pm 20.1\%\), \(\pm 6.3\%\) and \(\pm 5.8\%\) for \(\nu_{x}\) component. However, as a result for comparison, these percentages are calculated with new \(2\sigma\) credible intervals (i.e., for \(d=5\) kpc) and the most probable values in the previous case (i.e., for \(d=10\) kpc). Such treatment is also applied in similar comparisons hereafter. In short, the accuracies are universally enhanced by \(40\%\sim 50\%\) among all parameters in this test. #### 3.2.2 Flavour conversions Figure 4 displays the posterior distributions when the oscillation effects are considered under the assumption of NH. The representative values, corresponding to the distributions on the diagonal, are also given in table 3. Still, the best results are obtained for the \(\bar{\nu}_{e}\) flavour for the same reason as the case without oscillation effect. Numerically speaking, the symmetrized fractional uncertainties are \(\pm 5.6\%\), \(\pm 1.6\%\) and \(\pm 2.1\%\) for \(\alpha\), \(\langle E\rangle\) and \(\mathcal{E}\), sequentially, within a credible level of \(2\sigma\). They become worse slightly, due to the partial conversion in eq. (6). In this flavour conversion mode, the \(\nu_{e}\) events and \(\sim 30\%\) of \(\bar{\nu}_{e}\) events in detectors are now responsible for the \(\nu_{x}\) component. 
Thus, the results for \(\nu_{x}\) component are much better after combining information from all the four channels. The uncertainties are read as \(\pm 10.5\%\), \(\pm 3.8\%\) and \(\pm 4.2\%\), even slightly better than the \(\nu_{e}\) results in the case of no oscillation. In contrast, the precision for \(\nu_{e}\) are now rather poor, only achieving uncertainties of \(\pm 45.1\%\), \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \multirow{2}{*}{Osc} & \multirow{2}{*}{estimate} & \multicolumn{3}{c}{\(\alpha\)} & \multicolumn{3}{c}{\(\langle E\rangle\) [MeV]} & \multicolumn{3}{c}{\(\mathcal{E}\) [\(10^{52}\) erg]} \\ \cline{3-11} & & \(\nu_{e}\) & \(\bar{\nu}_{e}\) & \(\nu_{x}\) & \(\nu_{e}\) & \(\bar{\nu}_{e}\) & \(\nu_{x}\) & \(\nu_{e}\) & \(\bar{\nu}_{e}\) & \(\nu_{x}\) \\ \hline \multirow{4}{*}{NO} & MAP & 2.83 & 3.25 & 2.93 & 14.37 & 16.26 & 17.88 & 7.71 & 6.45 & 5.63 \\ & \(2\sigma^{-}\) & -0.38 & -0.10 & -1.00 & -0.63 & -0.14 & -1.81 & -0.41 & -0.07 & -0.46 \\ & \(2\sigma^{+}\) & +0.32 & +0.08 & +0.96 & +0.61 & +0.12 & +2.02 & +0.34 & +0.05 & +0.77 \\ & \% & 12.4 & 2.8 & 33.4 & 4.3 & 0.8 & 10.7 & 4.9 & 0.9 & 10.9 \\ \hline \multirow{4}{*}{NH} & MAP & 3.48 & 3.12 & 2.37 & 13.84 & 16.09 & 17.45 & 7.95 & 6.46 & 5.86 \\ & \(2\sigma^{-}\) & -1.33 & -0.16 & -0.25 & -1.94 & -0.25 & -0.60 & -1.72 & -0.13 & +0.24 \\ & \(2\sigma^{+}\) & +1.81 & +0.19 & +0.25 & +2.27 & +0.28 & +0.73 & +2.16 & +0.14 & +0.25 \\ & \% & 45.1 & 5.6 & 10.5 & 15.2 & 1.6 & 3.8 & 24.4 & 2.1 & 4.2 \\ \hline \multirow{4}{*}{IH} & MAP & 3.41 & 3.85 & 2.18 & 15.04 & 16.03 & 17.17 & 7.77 & 6.58 & 5.89 \\ & \(2\sigma^{-}\) & -0.96 & -1.74 & -0.07 & -1.39 & -2.95 & -0.18 & -0.86 & -1.46 & -0.06 \\ \cline{1-1} & \(2\sigma^{+}\) & +1.23 & +1.57 & +0.07 & +1.43 & +2.03 & +0.15 & +0.84 & +1.88 & +0.05 \\ \cline{1-1} & \% & 32.1 & 43.0 & 3.2 & 9.4 & 15.5 & 1.0 & 10.9 & 25.4 & 0.9 \\ \hline \end{tabular} \end{table} Table 3: The representative values of 1-D posterior distributions. NO indicates the case without neutrino oscillation, while NH (IH) represents the case of normal (inverted) mass hierarchy. Gaussian priors are adopted in all cases. The rows denoted with MAP give the most probable values of the posteriors, while \((2\sigma^{-},2\sigma^{+})\) show the relative credible intervals at the \(2\sigma\) level of probability. % rows give the corresponding symmetrized fractional uncertainties. \(\pm 15.2\%\) and \(\pm 24.4\%\). Because all the information for \(\nu_{e}\) flavour are extracted from the data of vPb(NC) reactions in RES-NOVA, and only \(\sim 1/6\) of these data are responsible. Note that the deviation between posterior and prior distributions for \(\alpha_{\nu_{e}}\) is kind of trivial, which means the result get too much information from the prior, instead of the data. It indicates that the constraint on \(\alpha_{\nu_{e}}\) is actually quite limited in this case. The numerical results for the IH conversion are illustrated in figure 5 and table 3. In this conversion mode, neutrino signals in all reaction channels are mainly coming from the original \(\nu_{x}\) component (see eq. (7) and eq. (8)), which naturally lead to a promising precision in this part. That is, the symmetrized fractional uncertainties are obtained as \(\pm 3.2\%\), \(\pm 1.0\%\) and \(\pm 0.9\%\) for the three parameters correspondingly. It should be mentioned that \(\nu_{x}\) components are responsible for \(\sim 2/3\) of the total neutrinos. 
Hence, it is quite significant to achieve such a high precision on the measurement of this component. However, the price is large uncertainties on the measurements of the other components. The representative values of the posterior distributions are \(\pm 32.1\%\), \(\pm 9.4\%\) and \(\pm 10.9\%\) for the \(\nu_{e}\) flavour, and \(\pm 43.0\%\), \(\pm 15.5\%\) and \(\pm 25.4\%\) for the \(\bar{\nu}_{e}\) part. The situation of the \(\bar{\nu}_{e}\) part is quite similar to that of the \(\nu_{e}\) flavour in the NH conversion. Similarly, the caveat is that the prior distribution provides too much information in the evaluation of \(\alpha_{\bar{\nu}_{e}}\), just like the case of \(\alpha_{\nu_{e}}\) in the NH conversion. Figure 4: The same as figure 3, but the oscillation effects with normal mass hierarchy are under evaluation. Figure 5: The same as figure 3, but the oscillation effects with inverted mass hierarchy are under evaluation. Aside from the diagonals, the off-diagonal plots in figure 4 and figure 5 portray the correlations between parameters as 2-dimensional distributions. To quantify these correlations, the matrices of correlation coefficients, namely \(\mathbf{V}^{\rm NH}\) and \(\mathbf{V}^{\rm IH}\), are calculated and shown in figure 6, where the entry at a given coordinate of a matrix corresponds to the distribution at the same coordinate of the posterior charts. Apparently, the correlations among the three parameters of one specific species remain the same and, more specifically, a universal hierarchy among the three correlation coefficients emerges as \(|\rho(\alpha,\langle E\rangle)|>|\rho(\langle E\rangle\,,\mathcal{E})|>|\rho( \alpha,\mathcal{E})|\). Such patterns are still controlled by the spectral formalism. On the other hand, different correlation patterns appear for different oscillation models. In the case of NH, moderate correlations exist among the spectral parameters of the \(\bar{\nu}_{e}\) and \(\nu_{x}\) components. That is, the spectral shape parameters, \(\alpha\) and \(\langle E\rangle\), of the \(\bar{\nu}_{e}\) flux are negatively correlated with the corresponding parameters of the \(\nu_{x}\) flux, and so are the total energy parameters, \(\mathcal{E}\). This can be expected from the mixing of these two components, as described in eq. (6). As a consequence, more complicated correlation patterns stem from the two categories of correlations mentioned above (see figure 6a and figure 4 for more details). However, it turns out that no such correlations are seen in the case of IH, even though the mixing of the \(\nu_{e}\) and \(\nu_{x}\) components does exist, i.e., in eq. (7). The absence here is ascribed to the different sensitivities to the \(\nu_{e}\) and \(\bar{\nu}_{e}\) species in our detector configurations 16. Such a difference between NH and IH can potentially act as another smoking gun to determine the mass hierarchy in the measurement of the next galactic CCSN 17, although we postpone further estimates to future work. Footnote 16: As a test, the exchange of parameters between the \(\nu_{e}\) flavour and the \(\bar{\nu}_{e}\) component is evaluated again and the mixing-induced correlations are still missing for IH while clear for NH. The effect is that these correlations become relatively weaker in the NH mode. We also swap the values of \(\sin^{2}\theta_{12}\) and \(\cos^{2}\theta_{12}\), and only see some mild effects on the correlation coefficients (even weaker than the previous case). 
Footnote 17: When analysing the data with the NH template, a dataset generated with IH will show even stronger mixing-induced correlations than a dataset generated with NH (e.g., the correlation coefficients between \(\alpha_{\bar{\nu}_{e}}\) and \(\langle E_{\nu_{x}}\rangle\) (\(\alpha_{\bar{\nu}_{e}}\) and \(\mathcal{E}_{\nu_{x}}\)) in the two cases are \(-0.78\) vs \(-0.46\) (\(0.64\) vs \(0.30\)); however, the impacts on different coefficients can differ). If the analyses are conducted with the IH/NO template, we see no manifest signals or only rather weak trends. Figure 6: The matrices of correlation coefficients for NH and IH. Again, we check the results for \(d=5\) kpc. The correlation patterns for both NH and IH remain robust, with only modest enhancements found in the spectral-induced correlation coefficients of the \(\nu_{e}\) (\(\bar{\nu}_{e}\)) flavour in the NH (IH) conversion. As to the accuracies of the reconstructed parameters, universal improvements of \(40\%\sim 50\%\) are again obtained for the \(\bar{\nu}_{e}\) and \(\nu_{x}\) components in the case of NH, and for the \(\nu_{e}\) and \(\nu_{x}\) components in the case of IH. Nevertheless, different parameters of the \(\nu_{e}\) component in the NH conversion show different sensitivities to the change of target distance. That is, the accuracy for \(\mathcal{E}_{\nu_{e}}\) is increased by \(\sim 45\%\) in this test, while that for \(\langle E_{\nu_{e}}\rangle\) is only enhanced by \(\sim 15\%\), and only a rather weak improvement (\(\sim 4\%\)) is obtained for \(\alpha_{\nu_{e}}\). The situation is similar for the \(\bar{\nu}_{e}\) flavour in the IH conversion. So the measurement of \(\alpha_{\nu_{e}}\) (\(\alpha_{\bar{\nu}_{e}}\)) in the NH (IH) conversion deserves further investigation. ## 4 Conclusions In this paper, we present the retrieval of the energy spectra of all flavours of supernova neutrinos with Bayesian inference by combining data from multiple detectors. When selecting reaction channels, the collection of the IBD-p and eES reactions in Hyper-K, vAr(CC) in DUNE and vPb(NC) in RES-NOVA is employed in consideration of flavour sensitivity and data statistics. Before analysing the mock data, we quantify the prior knowledge on the energy spectra of supernova neutrinos with modified Gaussian functions. Then, using a Poisson likelihood, we sample the posterior distribution, which has 9 degrees of freedom, and extract the probability distribution of each parameter. Furthermore, the correlation coefficients among the parameters are also estimated and discussed. Assuming a typical source distance in our Galaxy (i.e. \(d=10\) kpc), our results show that the average energy and the individual emitted energy can be determined with an accuracy of a few percent in normal (inverted) mass hierarchy, except for the \(\nu_{e}\) (\(\bar{\nu}_{e}\)). In particular, those for heavy-flavour neutrinos are reconstructed with a 1% precision under the oscillation effect of inverted mass hierarchy. The spectral pinching of \(\bar{\nu}_{e}\) (\(\nu_{x}\)) can also be measured to a few percent precision in normal (inverted) mass hierarchy. In contrast, that of \(\nu_{e}\) or \(\bar{\nu}_{e}\), accordingly, is hardly extractable from the data. Nevertheless, based on the overall accuracy inferred here, it is interesting to mention that a precise determination of the neutron skin of lead should be feasible through nearby galactic supernova neutrino detection in RES-NOVA, as proposed in our previous work [23]. 
For future studies, an effective way to enhance the capability of our method is to further improve the flavour-blind sensitivity in the collections (e.g. higher statistics or extra sensitivity to neutrinos with energy below 10 MeV). For instance, the neutral current scatterings on \({}^{16}\)O in Hyper-K can provide valuable information in the low energy region (i.e., \(5\sim 10\) MeV), while the pES reaction in JUNO and neutral current scattering on Ar (\(\nu+\text{Ar}\rightarrow\nu+\text{Ar}^{*}\)) in DUNE (if available) will offer more events in the relatively higher energy range. It is also worthy to mention that the next-generation large-scale dark matter detectors will also render complementary information in such studies (see, e.g., Ref [73, 74]). Furthermore, our analyses indicate that there exist two categories of correlations among parameters: spectral-induced correlation and mixing-induced correlation. The former is encoded in the formalism of neutrino flux, while the latter derives from the complementary effects of neutrino mixing and detector configurations. Such correlations potentially offer us new ways to extract information from data, more efficiently, via specific combinations of spectral parameters. It is also possible to solve the mass hierarchy problem by analysing the mixing-induced correlations. However, more realistic oscillation models should be included in real observations, e.g., non-adiabatic oscillation, collective oscillation and Earth matter effect. The investigation of these issues will be left to future works. Flat prior We replace the Gauss distributions (see, e.g., eq. (11)) with flat distributions, whose parameter spaces are restricted to the \(3\sigma\) regions of Gauss distributions, in the analysis. Considering no neutrino oscillation, the results are presented in table 4 and figure 7. Generally speaking, the posterior distributions are quite similar to that of Gauss priors (see, e.g., table 3 and figure 3). The results of \(\bar{\nu}_{e}\) flavour remain almost the same, due to the highly informative dataset offered by the IBD-p reaction in Hyper-K. The influence on the extraction of \(\nu_{e}\) part are also tiny, i.e., only an increase of \(\sim 0.3\%\) on the \(2\sigma\) symmetrized fractional uncertainty. However, such replacement shows relatively noticeable impact on the retrieval of \(\nu_{x}\) component, namely an increase of \(10.1\%\) on \(\alpha\) and enlargement of \(\sim 2.5\%\) on \(\langle E\rangle\) and \(\mathcal{E}\). Such consequences are totally reasonable, and confirm the previous statement that the more informative the dataset is, the less dependence the posterior will show on the prior. Note that these priors can be further updated according to future developments on modelling of stellar core collapse. We are grateful to Ming-chung Chu for useful comments. X.-R. Huang acknowledges support from Shanghai Jiao Tong University via the Fellowship of Outstanding PhD Graduates. This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 12235010 and 11625521, and the National SKA Program of China No. 2020SKA0120300. Note added.The data and code underlying this article will be shared on reasonable request.
2305.01503
NewsPanda: Media Monitoring for Timely Conservation Action
Non-governmental organizations for environmental conservation have a significant interest in monitoring conservation-related media and getting timely updates about infrastructure construction projects as they may cause massive impact to key conservation areas. Such monitoring, however, is difficult and time-consuming. We introduce NewsPanda, a toolkit which automatically detects and analyzes online articles related to environmental conservation and infrastructure construction. We fine-tune a BERT-based model using active learning methods and noise correction algorithms to identify articles that are relevant to conservation and infrastructure construction. For the identified articles, we perform further analysis, extracting keywords and finding potentially related sources. NewsPanda has been successfully deployed by the World Wide Fund for Nature teams in the UK, India, and Nepal since February 2022. It currently monitors over 80,000 websites and 1,074 conservation sites across India and Nepal, saving more than 30 hours of human efforts weekly. We have now scaled it up to cover 60,000 conservation sites globally.
Sedrick Scott Keh, Zheyuan Ryan Shi, David J. Patterson, Nirmal Bhagabati, Karun Dewan, Areendran Gopala, Pablo Izquierdo, Debojyoti Mallick, Ambika Sharma, Pooja Shrestha, Fei Fang
2023-04-30T07:15:29Z
http://arxiv.org/abs/2305.01503v1
# NewsPanda: Media Monitoring for Timely Conservation Action ###### Abstract Non-governmental organizations for environmental conservation have a significant interest in monitoring conservation-related media and getting timely updates about infrastructure construction projects as they may cause massive impact to key conservation areas. Such monitoring, however, is difficult and time-consuming. We introduce NewsPanda, a toolkit which automatically detects and analyzes online articles related to environmental conservation and infrastructure construction. We fine-tune a BERT-based model using active learning methods and noise correction algorithms to identify articles that are relevant to conservation and infrastructure construction. For the identified articles, we perform further analysis, extracting keywords and finding potentially related sources. NewsPanda has been successfully deployed by the World Wide Fund for Nature teams in the UK, India, and Nepal since February 2022. It currently monitors over 80,000 websites and 1,074 conservation sites across India and Nepal, saving more than 30 hours of human efforts weekly. We have now scaled it up to cover 60,000 conservation sites globally. ## 1 Introduction Massive floods, poaching, waste pollution - every week, new threats impacting our environment come to light. Each of these can cause a long chain of negative impacts if not addressed. As such, monitoring these conservation-related events is of great importance for non-governmental organizations (NGOs) focused on environmental conservation such as the World Wide Fund for Nature (WWF) to take timely action and participate in relevant conversations. In addition to conservation as a whole, many NGOs are particularly interested in monitoring news on certain subtopics. One such area is the ongoing or upcoming infrastructure projects such as roads, railways, and pipelines. These are usually more long-term and actionable than events like disasters or animal activity which occur in the past or present (hence limiting intervention impact). Conservation NGOs such as WWF play a key role in advocating for more sustainable infrastructure development. Early detection and engagement of these projects could shift infrastructure planning towards more environmentally sustainable outcomes while benefiting the people that the projects intend to serve. However, information about conservation-related events and infrastructure plans threatening critical habitats is scattered across numerous sources and comes in different forms. NGOs typically learn of such information through word-of-mouth or a handful of news outlets that they check manually. This process is both time-consuming and ineffective, and it can potentially fail to capture critical information in a timely manner, leaving these NGOs out of key conversations during early or ongoing stages of these developments. To fill this gap, we develop NewsPanda, a natural language processing (NLP) toolkit to automatically detect and analyze news and government articles describing threats to conservation areas. NewsPanda has five main components, which we detail in Section 3. At the core of NewsPanda is a classification module built using a BERT-based language model, which we fine-tune to classify whether articles are relevant to conservation and to infrastructure. Developing such a tool in the conservation nonprofit setting poses several unique challenges. First, labeling data is expensive. 
We propose an active learning-based method to Figure 1: Top: Current costly and time-consuming information gathering pipeline at NGOs. Bottom: NewsPanda automates multiple steps in the pipeline, enabling humans to perform the more critical tasks (analysis and action). selectively acquire labels on the most critical data points. Second, the data labels could be noisy since labeling for relevance is ultimately a subjective judgement, even if we fix a labeling rubric. We adopt a noise reduction algorithm [3] to improve our model's performance. NewsPanda was developed as a collaboration between WWF and Carnegie Mellon University (CMU). It has been successfully deployed since February 2022 and has been used by the WWF teams in the UK, India, and Nepal to monitor developments in conservation sites. The entire pipeline runs on a weekly basis, scraping and classifying relevant news articles regarding conservation and infrastructure construction related events that occurred in the past week. These articles are then visualized in WWF's GIS systems for the field teams to investigate. We also share some results through social media for the benefit of the broader civil society. Through the deployment of NewsPanda, the WWF teams have been able to save over 30 hours weekly on collecting news, which allows us at WWF to instead focus on analyzing the news and taking actions (Figure 1) 1. Footnote 1: We are happy to work with interested researchers and nonprofits on sharing our code and data. ## 2 Related Work News Monitoring SystemsAlthough there is a rich literature on news information extraction in general domains [1, 13] as well as some specific applications [15, 16], there has been hardly any media monitoring tool for environmental conservation and infrastructure construction. Directly using generic media monitoring tools often lead to unsatisfactory results that are not localized enough to be actionable for a specific conservation site or not relevant enough to be reliable. As a result, conservation NGOs still use a manual process to collect articles. The only work on conservation news monitoring that we are aware of is a preliminary attempt by Hosseini and Coll Ardanuy [1] that apply BERT to classify news articles. Compared to that, with NewsPanda we provide a classification module with algorithmic contributions to address challenges in using the tool in the nonprofit context, a full end-to-end information extraction and processing pipeline, and most importantly, results and lessons learned from a large scale deployment of the tool. This is the first comprehensive and actionable media monitoring tool for conservation and infrastructure. NLP for Conservation & InfrastructureOutside of news monitoring, NLP tools have been used for various applications in conservation and infrastructure. Some analyze the relevant news articles for general insights on conservation reporting [20] or study their spread and impact [21]. These studies are descriptive in nature and orthogonal to our work. The few studies that take the civil society stakeholder's perspective are focused on different links in the process from us. Luccioni, Baylor, and Duchene [1] use BERT-based models to analyze corporate environment sustainability reports. Boutilier and Bahr [1] explore mining-related texts to analyze the social license of a particular project. They target different problems from us. They assume a relevant text is readily available and try to extract meaningful insights from it. 
On the other hand, we work on identifying that relevant text from thousands of irrelevant texts in the first place and leave the insight extraction to professional organizations like WWF that have been doing that for years. ## 3 NewsPanda Overview NewsPanda toolkit consists of five modules as illustrated below and in Figure 1(a). During pilot study and deployment (Section 8), this entire pipeline is run on a weekly basis. 1. **Information Retrieval Module**: We use the NewsAPI scraper [10] with the names of conservation sites taken from a curated list of conservation areas. 2. **Relevance Classification Module**: We classify articles along two dimensions, namely _Conservation Relevance_ and _Infrastructure Relevance_, through a large pretrained language model fine-tuned with our collected dataset. Details of this model are explained in Section 5. 3. **Article Postprocessing Module**: The article postprocessing module has 3 parts: a keyword extractor which extracts keywords, an event extractor which extracts event trends, and a geolocator which provides location coordinates. We discuss these features in Section 6. 4. **Visualization Module**: After the relevant articles are identified, we visualize them in our GIS system at WWF, which we can further analyze and act upon (Section 8). 5. **Social Media Module**: In parallel to the visualization module, another downstream application for NewsPanda is WildlifeNewsIndia, 2 a Twitter bot we built from NewsPanda that shares weekly relevant conservation-related articles on social media (Section 8). Footnote 2: [https://twitter.com/WildlifeNewsIND](https://twitter.com/WildlifeNewsIND) ## 4 Dataset We use two main datasets for developing NewsPanda. First, we use an existing corpus (WHS-Corp) by Hosseini and Coll Ardanuy [1] consisting of articles scraped using World Heritage Sites as keywords and labelled by domain experts. Second, we scrape and label our own corpus (InfraCorp), which is a more focused, timely, and fine-grained upgrade over WHS-Corp. The datasets differ in terms of the locations of the conservation sites used, as well as the time frame of the articles. ### WHS-Corp Dataset WHS-Corp contains over 44,000 articles from 2,974 different sources covering 224 World Heritage Sites globally. Scraping was done using NewsAPI's Python library from a list of curated conservation sites of interest. Besides the title and content, it also contains metadata such as the publication site, the author, and the date of publication. Articles in WHS-Corp span from January 2018 to October 2019. After these articles were gathered, a subset of 928 articles were sampled and manually annotated for _Conservation Relevance_ by domain experts familiar with conservation. _Conservation Relevance_ denotes whether an article discusses threats or impacts to wildlife and environment conservation in general, e.g. poaching, forest development, natural disasters. We use this labelled dataset for training our model. ### InfraCorp Dataset As opposed to WHS-Corp which focuses on global conservation sites, InfraCorp specifically focuses on conservation sites in India and Nepal. The InfraCorp corpus contains 4,137 articles (150 for Nepal and 3,987 for India) from 1,074 conservation sites across the two countries. All articles were taken in the two-year span from November 2019 to November 2021. We use NewsAPI to search for the official names of the conservation sites, or alternative and/or local names for the sites as recorded at WWF. 
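As an illustration of this retrieval step, the snippet below queries the public NewsAPI `everything` endpoint for a conservation-site name over a fixed date window. This is a hedged sketch rather than the deployed NewsPanda code: the example site name, date range, and the `API_KEY` placeholder are assumptions for illustration only.

```python
import requests

API_KEY = "YOUR_NEWSAPI_KEY"          # placeholder; a real NewsAPI key is required
ENDPOINT = "https://newsapi.org/v2/everything"

def fetch_articles(site_name, date_from, date_to, page_size=100):
    """Retrieve articles mentioning a conservation site within a date window."""
    params = {
        "q": f'"{site_name}"',        # exact-phrase query for the site name
        "from": date_from,
        "to": date_to,
        "language": "en",
        "pageSize": page_size,
        "apiKey": API_KEY,
    }
    resp = requests.get(ENDPOINT, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("articles", [])

# Example: one week of articles for a site-name query
articles = fetch_articles("Chitwan National Park", "2021-11-01", "2021-11-07")
for a in articles[:3]:
    print(a["publishedAt"], a["title"])
```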
Given the data availability as well as the annotator capacity of the local domain experts from India and Nepal, we labeled all 150 articles from Nepal and only 1,000 articles from India. Annotation for InfraCorp was done along two dimensions: _Conservation Relevance_ and _Infrastructure Relevance_. _Conservation Relevance_ is similar to the one described for WHS-Corp in Section 4.1. Among the articles which were labelled as positive for _Conservation Relevance_, we further categorize whether it is relevant to infrastructure. This covers issues such as new roads in forested areas and construction projects near national parks. Each article was annotated by two domain experts, one from WWF UK, and another from either WWF India or WWF Nepal. We provided the annotators with a descriptive rubric for labeling in each dimension, as well as concrete examples of edge cases. The following was one such example in our instructions: Articles describing tourism or wildlife or natural beauty of a national park, but without talking about environmental impacts or threats to wildlife and conservation, do not count as positive for _Conservation Relevance_. Where the two sets of labels disagree, the authors closely inspect the articles and decide on the final labels. ## 5 Relevance Classification Module We highlight the structure of our NewsPanda classification module and other key techniques used during training. ### Classification Model The backbone of the NewsPanda classification model is a BERT model [10] with a linear classification head. BERT is a Transformer-based language model trained using masked language modelling and next sentence prediction objectives on large-scale corpora of books and articles. This large-scale pretraining, as well as its ability to effectively encode context, leads to superior performance on a wide variety of tasks. We adapt BERT to the domain of conservation and infrastructure, and we fine-tune it to perform news article classification. In Section 7, we explore different variants of the BERT model (such as RoBERTa). One key change we make to the BERT model is that in the final linear head after the main BERT layers, instead of only considering the BERT vector outputs, we also incorporate other features, namely sentiment analysis and topic modelling, as shown in Figure 1(b). We hypothesize that including these additional features will provide the model with more useful information that will help classify whether or not a particular article is relevant to infrastructure or conservation. For instance, if an article has topic vectors that align with other articles covering forest habitats, but it has an overwhelmingly positive sentiment, then we may suspect that it could be a tourism-related feature article instead of a conservation-related news article (which are often more neutral or negative in terms of sentiment). For sentiment analysis, we extract the sentence polarity scores of the article title, its description, and its content, giving us three sentiment scores per article. This is done on a scale of \(-1.0\) to \(+1.0\), with \(-1.0\) representing the most negative score and \(+1.0\) representing the most positive score. Sentiment analysis was done using the textblob package [1]. Meanwhile, for topic extraction, we consider the entire training corpora of WHS-Corp and InfraCorp, and train a Latent Dirichlet Allocation (LDA) model to identify topic clusters. 
We use 50 topics for the LDA model and Figure 2: NewsPanda pipeline (1(a)) and model diagram for conservation and infrastructure relevance classifiers (1(b)). implemented it using scikit-learn [11]. Lastly, for the main BERT model, we concatenate the title, description, and content of each article, and we use this concatenated text as input to our classifier. For cases where the article is missing certain features (e.g. no description), we simply supply an empty string for that feature. The vectors from the three steps (i.e. BERT model, sentiment analysis, topic modelling) are then concatenated, and this final vector is used as the input to the final classification head to generate a binary prediction. Specific implementation settings and other hyperparameters can be found in Section 7.1. ### Active Learning Annotating a dataset is costly. In curating our InfraCorp dataset, we need to be mindful of which specific articles to label in order for our model to learn most efficiently. For this selection process, we first fine-tune a pretrained RoBERTa-base model on the existing WHS-Corp dataset, based on the _Classification Relevance_. To make this preliminary model as close to our final model as possible, we also incorporate the topic modelling and sentiment analysis features, as shown in Figure 2b. Because this is only a preliminary model, we forego doing extensive hyperparameter tuning and decided to just select a setting that worked recently well: with a learning rate of 1e-5, batch size of 16, and training for 10 epochs, we were able to get an F-score of 0.61 on WHS-Corp. Using this trained model, we then generate _Classification Relevance_ predictions for all articles in the InfraCorp corpus, together with the corresponding softmax scores. We treat these softmax scores as a measure for the classification confidence of the model: if the softmax is close to 0 or close to 1, then it means that the model is very certain with its prediction, while if the softmax is close to 0.5, then it means the model is unsure with its prediction. We then select 300 articles which our model is least confident about. We hypothesize that selecting these "difficult" rows will have the greatest impact on model performance. We call this active learning-based dataset InfraCorp-A. To verify the effectiveness of active learning, we also randomly sample 300 articles to label, which we call InfraCorp-R. We will later evaluate how this compares with the actively selected dataset on a randomly selected test set of 400 samples in our ablation study (Section 7.3). ### Noisy Label Correction Our dataset is labelled by two sets of domain expert annotators from WWF. Although we provided detailed criteria for labelling each article, there is always room for some subjectivity in the process. This resulted in the two sets of labels not agreeing with each other on over \(10\%\) of the data points. Although, as mentioned in Section 4.2, we did manage to obtain the "ground truth" label for a small subset of InfraCorp for model evaluation purposes, doing that for every single article is prohibitively expensive - much more expensive than the (not cheap) process of having either annotator providing a (noisy) label. Therefore, in order for **NewS-Panda** to work well once deployed, we need to be able to learn well from the potentially noisy labels only. More formally, let \(x_{n}\) be the embedding of an article along with its sentiment and topic modeling vectors as described in Section 5.1. 
Let \(y_{n}\) be the true label of this article. The task is to make an accurate prediction on the dataset \(\{(x_{n},y_{n}):n=1\dots N\}\) when we only have access to the noisy data \(\{(x_{n},\tilde{y}_{n}):n=1\dots N\}\), where \(\tilde{y}_{n}\) is the label that we get from either of the two annotators, and the true labels \(y_{n}\) are the final labels that we decide on after resolving conflicts. To address this challenge, we adapt the CORES\({}^{2}\) loss [13] noise correction algorithm, which is an extension of the earlier peer loss [15]. Peer loss frames the task of learning from noisy labels as a peer prediction problem. In practice, the loss for each \((x_{n},\tilde{y}_{n})\) data point can be calculated using the standard cross entropy loss with \((x_{n},\tilde{y}_{n})\), modified with a loss calculated using a randomly sampled input \(x_{n_{1}}\) and an _independently_ randomly sampled label \(\tilde{y}_{n_{2}}\). That is, we have \[\ell_{\text{\tiny{PEER}}}(f(x_{n}),\tilde{y}_{n}):=\ell(f(x_{n}),\tilde{y}_{n})-\alpha\cdot\ell(f(x_{n_{1}}),\tilde{y}_{n_{2}})\] where \(\alpha>0\) is a tunable parameter. Meanwhile, CORES\({}^{2}\) replaces the random sampling from peer loss with a confidence regularizer defined as follows: \[\ell_{\text{\tiny{CORES}}}(f(x_{n}),\tilde{y}_{n}):=\ell(f(x_{n}),\tilde{y}_{n})-\beta\cdot\mathbb{E}_{\tilde{Y}|\tilde{D}}[\ell(f(x_{n}),\tilde{Y})]\] where \(\tilde{D}\) is the dataset, \(\tilde{Y}\) is a noisy label, and \(\beta>0\) is a tunable parameter. Following Cheng et al. [13], we calculate this confidence regularizer term using an estimate of the noise prior probability. We test both peer loss and CORES\({}^{2}\) loss, and report results in our ablation study (Section 7.3).

## 6 Article Postprocessing Module

Once the relevant articles are identified using the model, we then perform a few post-processing steps to extract key information and make them easier to analyze and visualize.

### Keyword Extractor

Keywords are important, as they allow the easy summarization, categorization, and grouping of news articles. Furthermore, we also use these keywords as hashtags in our social media module (Section 8). To extract keywords, we use an extensive list of conservation-related keywords maintained at WWF and search the article for exact matches. In addition, we also use Named Entity Recognition systems to extract the salient words in each article. To perform this, we use a BERT-based model trained on the CoNLL 2003 Named Entity Recognition dataset [13]. The keywords extracted using these two methods are then concatenated to form the final set of keywords.

### Event Extractor

To track the progress of infrastructure projects, it is often not enough to just view a single article in isolation. Rather, news regarding these projects often builds up over a period of weeks or months. To help provide this context, we create an automated event extractor, which leverages our InfraCorp dataset, including both the labelled articles as well as the unlabelled articles. Given a new article \(a\), our goal is to find past articles \(P_{a}\) which are closely related to \(a\). We first gather all previous articles which are from the same conservation site. Next, we create a graph \(G_{a}\), where each article is a node, and two nodes share an edge if the corresponding articles share \(\geq k\) common keywords (from Section 6.1). Here, \(k\) is an adjustable parameter depending on how loosely connected we want \(G_{a}\) to be. For our data, we use \(k=3\).
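As a rough illustration of this step, the article graph \(G_{a}\) could be built as follows. The choice of networkx and all names in the sketch are our own assumptions for illustration; the paper does not specify an implementation.

```python
# Illustrative sketch of building the article graph G_a (hypothetical names).
import networkx as nx

def build_article_graph(articles, keywords_by_id, k=3):
    """articles: ids of previous articles from the same conservation site.
    keywords_by_id: dict mapping article id -> set of extracted keywords (Section 6.1)."""
    G = nx.Graph()
    G.add_nodes_from(articles)
    for i, a in enumerate(articles):
        for b in articles[i + 1:]:
            # Connect two articles if they share at least k keywords.
            if len(keywords_by_id[a] & keywords_by_id[b]) >= k:
                G.add_edge(a, b)
    return G
```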
Once the graph \(G_{a}\) is constructed, we then define an "event" to be the maximal clique containing \(a\), and we report all such events. A sample chain of events is shown in Figure 3.

### Geolocation

To aid with visualization (Section 8), we perform geolocation on the classified news articles, based on the search terms used to retrieve them. To extract latitude and longitude coordinates, we leverage an extensive directory of conservation sites from WWF, and we use the directory to map conservation sites to their corresponding coordinates. If the directory contains no match, we geolocate using the geopy package.

## 7 Experiments and Results

Here, we discuss results of our in-lab experiments and ablation studies to verify our hypotheses. Results from real-world deployment are discussed in the succeeding section.

### Experiment Settings

**Baselines.** We compare the performance of our **NewsPanda** model with the following baselines:

1. **Keyword model**: We consider a naive model that checks for the count of certain keywords. We curate two sets of "conservation-related keywords" and "infrastructure-related keywords". If an article contains more than \(k\) "conservation-related keywords", then it is considered to be relevant to conservation (likewise for infrastructure).
2. **RNN-based models**: We tokenize each article, then pass the embedding to RNN models, where the hidden state of the last layer is used as input to the final classification layer. We use two types of RNN models, namely GRUs [1] and LSTMs [1].
3. **BERT-based models**: We fine-tune a pretrained BERT-base [1] and RoBERTa-base model [12], where we add a classification head after the final layer to perform relevance classification.

**Evaluation Metrics.** Since our task is binary classification, we measure the accuracy, precision, recall, and F1-score. For precision, recall, and F1, we consider only the scores of the positive class. All metrics are calculated separately for _Conservation Relevance_ and _Infrastructure Relevance_.

**Data.** For _Conservation Relevance_, we train on the InfraCorp dataset (consisting of both InfraCorp-A and InfraCorp-R), as well as the WHS-Corp dataset. For _Infrastructure Relevance_, since WHS-Corp does not contain infrastructure labels, we only train using InfraCorp. We split the training data into an 80-20 training-validation split. For evaluation, we use the test split of InfraCorp for both _Conservation Relevance_ and _Infrastructure Relevance_.

**Implementation Settings.** For the GRU/LSTM, we use a batch size of 128, hidden size of 128, and dropout of 0.2. We train for 10 epochs with a learning rate of 1e-4. Meanwhile, for BERT, RoBERTa, and **NewsPanda**, we train for 10 epochs with batch size 4 and learning rate 1e-5. We use RoBERTa for the backbone model of **NewsPanda**. Model selection is done by considering the best validation F1-score.

### Results and Analysis

Experimental results are shown in Tables 1a and 1b. We observe that indeed, adding the sentiment analysis and topic modelling features, as well as the CORES\({}^{2}\) loss for noisy label correction, aids in predictions for both _Conservation Relevance_ and _Infrastructure Relevance_, providing an improvement over both BERT-base and RoBERTa-base. Our data is quite imbalanced: \(>\)80% of the articles are not relevant. This manifests itself in the discrepancies between accuracy and F1-score.
\begin{table} \begin{tabular}{c|c|c|c|c} \hline **Model** & **Acc.** & **P** & **R** & **F1** \\ \hline \hline Keyword & 0.820 & 0.317 & 0.634 & 0.423 \\ LSTM & 0.711 & 0.495 & 0.511 & 0.504 \\ GRU & 0.729 & 0.422 & 0.505 & 0.475 \\ BERT & 0.860 & 0.708 & 0.704 & 0.706 \\ RoBERTa & 0.867 & 0.705 & 0.743 & 0.721 \\ **NewsPanda** & **0.877** & **0.729** & **0.801** & **0.744** \\ \hline \multicolumn{5}{c}{(a) Scores for _Conservation Relevance_} \\ \hline **Model** & **Acc.** & **P** & **R** & **F1** \\ \hline \hline Keyword & **0.947** & 0.250 & 0.455 & 0.323 \\ LSTM & 0.908 & 0.566 & 0.537 & 0.554 \\ GRU & 0.895 & 0.544 & 0.557 & 0.553 \\ BERT & 0.922 & 0.840 & 0.745 & 0.771 \\ RoBERTa & 0.916 & 0.794 & 0.809 & 0.799 \\ **NewsPanda** & 0.941 & **0.880** & **0.821** & **0.850** \\ \hline \multicolumn{5}{c}{(b) Scores for _Infrastructure Relevance_} \\ \end{tabular} \end{table} Table 1: Average scores for _Conservation Relevance_ (Table 1a) and _Infrastructure Relevance_ (Table 1b), taken over 10 random seeds.

Figure 3: Example of events selected by the Event Extractor (Section 6.2) by date. The progression of the project is highlighted by the phrases in red underline.

We observe, for example, that the naive keyword model has very high accuracy scores but very low F1-scores, which indicates that it predicts a lot of zeros (hence the high accuracy), but is not able to predict the relevant articles well. The RNN-based models (LSTM and GRU) seem to perform relatively poorly, achieving an F1-score of around 0.5. This could also be attributed to the data imbalance, since these RNN-based models are generally not as robust to imbalanced datasets. In contrast, the BERT and RoBERTa models perform quite well, with F1-scores \(>\)0.7 for conservation and \(>\)0.75 for infrastructure, and precision/recall scores also around that range. This indicates that these transformer-based models are able to generalize quite well and successfully capture the notions of _Conservation Relevance_ and _Infrastructure Relevance_. Lastly, **NewsPanda** offers significant improvement over the RoBERTa-base model (F1 t-test \(p\)-value \(=0.018\) for conservation and \(0.033\) for infrastructure), showing the positive effects of incorporating information such as sentiment and topics over simply considering the article text in isolation.

### Ablation Study

#### Active Learning

We compare the effect of training on actively-sampled data (InfraCorp-A) and randomly-sampled data (InfraCorp-R). Each of these datasets contains 300 India articles, as detailed in Sections 5.2 and 4.2. We append these articles to the existing WHS-Corp to create the final data for training. We use the RoBERTa model for these experiments. Results are shown in Table 2. For both InfraCorp-A and InfraCorp-R, we see an improvement over just using WHS-Corp. Indeed, training with more data will result in better performance, regardless of how the data is sampled. We also observe that adding actively sampled data results in a larger improvement than adding randomly sampled data across all metrics (F1 t-test \(p\)-value \(=0.004\)). This verifies the effectiveness of our hypothesized confidence-based data selection for annotation.

#### Noisy Label Correction

We examine the effect of the noise correction methods outlined in Section 5.3, by comparing the effect of using peer loss, CORES\({}^{2}\) loss, and standard cross entropy loss. Based on InfraCorp, we use the labels supplied by one of the two annotators for the training set, and the final calibrated labels for the test set.
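As a concrete reference for this comparison, the peer-loss term defined in Section 5.3 amounts to only a few lines of code. The following PyTorch-style sketch is our own illustration rather than the exact NewsPanda implementation: tensor names are hypothetical, and a random permutation stands in for the independent sampling of \(x_{n_{1}}\) and \(\tilde{y}_{n_{2}}\).

```python
import torch
import torch.nn.functional as F

def peer_loss(logits, noisy_labels, alpha=0.05):
    """Standard cross entropy minus alpha times the loss on independently
    shuffled inputs and labels (the "peer" term)."""
    base = F.cross_entropy(logits, noisy_labels)
    perm_inputs = torch.randperm(logits.size(0))   # random x_{n_1}
    perm_labels = torch.randperm(logits.size(0))   # independent noisy y_{n_2}
    peer = F.cross_entropy(logits[perm_inputs], noisy_labels[perm_labels])
    return base - alpha * peer
```

Setting \(\alpha=0\) recovers the standard cross entropy loss, which is the "None" baseline in this ablation.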
Hyperparameter search was done for both peer loss and CORES\({}^{2}\) loss to find the optimal values of \(\alpha=0.05\) and \(\beta=0.05\). We trained for 20 epochs with a learning rate of 2e-5. From Table 3, we observe that for accuracy and precision, all three losses perform very similarly, with peer loss performing the highest by a small margin. For recall and F1, peer loss and the standard loss perform at comparable levels, while CORES\({}^{2}\) loss performs better than both (F1 t-test \(p\)-value \(=0.001\)). This is likely because the confidence regularizer used in CORES\({}^{2}\) works better than the random sampling used by peer loss. Both peer and CORES\({}^{2}\) loss might work even better if we had more training data than the current 600 in InfraCorp. In the end, given the positive results of CORES\({}^{2}\), we used it in our **NewsPanda** model.

## 8 Deployment and Impact

**NewsPanda** has been used at WWF since February 2022. We describe the deployment, results, and lessons learned.

### Pilot Study

The first stage of **NewsPanda** deployment, which is the pilot study, started in February 2022 and ran for around one month. Every week, the CMU team scraped the news articles and ran the entire **NewsPanda** pipeline, forwarding the outputs to the WWF teams to examine and provide feedback. During this pilot phase, the WWF and CMU teams identified a range of operational and technical issues in the initial version of **NewsPanda**. First, in order for **NewsPanda** to fit into the established workflow of WWF, it needs to be integrated into its GIS system. During the pilot, we realized that it is crucial to add the geolocation of each article (Section 6.3) and format the model output according to the specifications of the GIS platform used at WWF. Figure 4 shows how **NewsPanda**'s results get integrated into the GIS system, with the red areas being the locations where we identify a relevant article. We also discovered that while NewsAPI has a good collection of global news sources, it fails to include some relevant sources in the local context. With the suggestions from the WWF team, we incorporated additional sources that often yield relevant local articles. One such site is Parivesh, which contains proposals of infrastructure projects in India. Finally, we found that some conservation sites' names often lead to 0 results, while other terms were too general and yielded hundreds of results, almost all of which were irrelevant, leading to inefficiencies. We set a lower and an upper threshold, and filter out search terms outside the thresholds.

### Deployment Results

After we resolved the above issues, we proceeded with the actual deployment. The procedure was similar to the pilot phase, except that at this phase, the focus was to evaluate the performance of **NewsPanda**. The WWF teams closely inspected the model predictions each week and provided ground truth labels for each article. The label feedback allowed the CMU team to retrain the model regularly.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline **Dataset** & **Acc.** & **P** & **R** & **F1** \\ \hline \hline WHS-Corp & 0.911 & 0.585 & 0.585 & 0.586 \\ \hline WHS+Inf.Corp-A & **0.921** & **0.600** & **0.774** & **0.670** \\ WHS+Inf.Corp-R & 0.916 & 0.586 & 0.696 & 0.637 \\ \hline \end{tabular} \end{table} Table 2: Evaluation scores for _Conservation Relevance_ for InfraCorp-A compared with InfraCorp-R, averaged over 10 random seeds.
\begin{table} \begin{tabular}{c|c|c|c|c} \hline **Noisy Label Correction** & **Acc.** & **P** & **R** & **F1** \\ \hline \hline None & 0.907 & 0.566 & 0.441 & 0.497 \\ \hline Peer Loss & **0.911** & **0.591** & 0.465 & 0.509 \\ CORES\({}^{2}\) & 0.908 & 0.584 & **0.551** & **0.553** \\ \hline \end{tabular} \end{table} Table 3: Evaluation scores for _Conservation Relevance_ for two noise correction methods, over 10 random seeds.

This stage ran from March 2022 to July 2022. Table 4 shows the aggregated results over 5 months of evaluation from WWF India, Nepal, and UK. WWF UK labeled the first half of the deployment for all locations, and India/Nepal labeled the second half for news articles in their respective countries. Overall, **NewsPanda** continued to show great performance in _Conservation Relevance_ during real-world deployment. Across all evaluations, the precision scores are consistently high, indicating that almost all of the articles reported by **NewsPanda** are indeed relevant. We intentionally tuned the model in this direction - when almost everything that the model flags is relevant, it greatly helps with establishing trust in the model at the early stage of deployment. As we continue developing the model, we aim to improve it towards achieving higher recall, to be able to capture more relevant articles. On the other hand, on _Infrastructure Relevance_ for India, the model's performance was worse than in the offline experiments. Upon further inspection, we discovered that the majority of mistakes were in fact only 2-4 original pieces of news that were paraphrased by various news sources into 20-40 articles. Since there are only a few _Infrastructure Relevance_ positive articles to start with, this had a big impact on the model performance. Meanwhile, such a phenomenon did not occur in our offline experiments because there we randomly sampled news from a large corpus for labeling.

Aside from overall metrics, we also highlight individual success stories. Figure 4 (right) shows a concrete example where **NewsPanda** made a difference. In early August 2022, **NewsPanda** detected a new project, the Ikhala Block Boundary Kishtwar to Lopara Road, and highlighted it in the WWF GIS system. Upon further investigation by WWF staff, it was found that the project would divert 5.9 hectares of forest land. More importantly, WWF found that the project was still at its pre-proposal stage. This means WWF would be able to take early action and possibly participate in relevant conversations. Such stories have been happening frequently since the deployment of **NewsPanda**. Using the tool's outputs integrated into our internal GIS systems, the WWF staff are continuously coordinating with our field teams to examine the status and report on relevant projects and areas.

### Qualitative and Quantitative Comparison with Current Practice

Prior to **NewsPanda**, WWF had already been monitoring media for conservation-related articles (Figure 1). However, most of these efforts were not very structured or logged. It is thus difficult to draw head-to-head comparisons between **NewsPanda** and WWF's existing approach. That said, we still provide qualitative and quantitative evidence supporting the merit of **NewsPanda** over the current practice. Two months into the deployment, the CMU team carried out semi-structured interviews with their WWF colleagues who have been using **NewsPanda** outputs in their work.
The purpose was to understand how WWF teams liked the toolkit and to elicit possible suggestions for improvement. Some quotes from the interviews are as follows.

"You're giving us a bunch of articles... over 50 articles a week. We had two interns who spend 2-3 days a week on this and would only give us seven to ten articles. So there is a huge bump in efficiency right there in itself."

"The data that you're sharing give a global perspective. It is very useful to understand the upcoming projects or mitigation measures that are being adopted on a global scale. So it helps us be informed."

This improvement in news collection also helped with the downstream task - infrastructure impact assessment.

"It took us maybe a month to do analyses of three or four infrastructure projects. With **NewsPanda**, we can send (stakeholders) 20 or 30 reports in a month."

\begin{table} \begin{tabular}{c|c c c|c c c} \hline & \multicolumn{3}{c}{**Conservation**} & \multicolumn{3}{c}{**Infrastructure**} \\ & P & R & F1 & P & R & F1 \\ \hline \hline India & 0.849 & 0.605 & 0.706 & 0.462 & 0.250 & 0.324 \\ Nepal & 0.895 & 0.917 & 0.906 & 0.923 & 0.308 & 0.462 \\ UK & 0.879 & 0.823 & 0.850 & 1.000 & 0.455 & 0.625 \\ \hline \end{tabular} \end{table} Table 4: Aggregated scores of NewsPanda on weekly articles from March 2022 to July 2022.

Figure 4: Left: The highlighted red areas indicate clusters of articles found by our model. Right: The WWF GIS system, where each relevant article is shown on the map with its corresponding key details.

The micro-level improvement in this single task has also resulted in macro-level organizational change:

"It's also a transition in their (WWF staff) job function. They will not just be doing data hunting. They are qualifying themselves to be data analysts."

The WWF Nepal team has been putting together weekly news digests for conservation sites in Nepal. Although this dataset is small and has no negative labels, this is the only quantitative comparison between **NewsPanda** and current practice we can make. We find that our model is able to identify 62% of the articles in the news digest. This is a relatively good performance as we had extremely limited articles (only 150) about Nepali conservation sites to train the model.

### Sustainable Deployment and Broader Impact

Encouraged by the success of **NewsPanda** at the initial stages, we are working to scale it to more sites and permanently deploy **NewsPanda** as part of the WWF computing infrastructure. We have been collecting news articles for over 60,000 sites globally and applying our trained model to classify them on a weekly basis since April 2022. Because the main model has already been trained, we no longer need extensive data labeling for evaluation. Instead, we only need a small subset for model update and fine-tuning purposes. We are currently investigating the ability of **NewsPanda** to generalize to new locations and new languages given only a few (or even zero) domain-specific training points. We are also shifting our system to a cloud server to be owned and maintained by the WWF team, rather than the CMU team, to ensure sustainable deployment. The CMU team will continue to provide support and tutorials to help WWF eventually grow in-house capability of sustaining the project. Much as this project was a collaboration between WWF and CMU, **NewsPanda** could also be valuable to the broader civil society. Thus, we also developed a social media module in the form of a Twitter bot called **WildlifeNewsIndia**.
The bot periodically tweets a selected set of the identified relevant articles. In addition to tweeting links to articles, we also use the keywords from **NewsPanda**'s keyword extractor (Section 6.1) to generate salient hashtags. A sample tweet is shown in Figure 5. Currently, **WildlifeNewsIndia** is focused on conservation-related articles in India. As we continue working on this project, we hope to scale this to a global level, so that any organization or individual interested in conservation can benefit from the tool.

### Lessons Learned

This 1.5-year-long (and counting) collaboration has yielded many valuable lessons for both WWF and CMU. We have already mentioned some of those in earlier sections. We highlight two more generalizable lessons below.

Problem identification is an iterative process and rapid prototyping helps surface unforeseen needs. The event extractor in Section 6.2 was not initially part of the agenda: without a prototype of the model readily available, it was difficult for WWF to realize what could be done with it. However, after several iterations of communication and exploring results, the need to track the development related to a single project/location became clear to us. This was made possible by the rapid prototyping, where the CMU team used viable algorithms that may not be optimal but are quick to implement to demonstrate the possibilities of the toolkit.

It is the various "not-so-AI" components that realize the promise of an AI for nonprofit project on the ground. While the classification module in Section 5 is the engine of **NewsPanda**, the postprocessing module in Section 6 and the visualization module in Figure 4 are key in getting the information in a consumable format, and ultimately the buy-in at WWF. Each of the latter two modules requires at least as much engineering effort and careful design as the classification module. We call on future AI for nonprofit projects to pay enough attention to all the infrastructure around the AI part, in order to deliver the real impact that we hoped for.

## 9 Conclusion

In this paper, we designed and deployed **NewsPanda**, a toolkit for extracting, classifying, and analyzing articles related to conservation and infrastructure. We showed empirically that our **NewsPanda** model classifies better than baseline methods for both _Conservation Relevance_ and _Infrastructure Relevance_. We also presented quantitative and qualitative evaluations of our system in the real world as well as its impact on WWF teams in UK, India, and Nepal. Currently **NewsPanda** mainly focuses on a few countries, and we are expanding it to a global scale. However, incorporating additional conservation sites is just the beginning. To do it right, we also need to cover more languages and more local media sources. This is especially important for the global south, as many high-impact local developments might never reach international news outlets. The ability to capture these local sources, especially if they are not written in English, is something we are currently working on. We are currently starting with articles written in the Nepali language. Initial experiments with a multilingual version of **NewsPanda** have shown good generalization when given only a few Nepali articles for training. With this multilingual model, we hope to further expand to cover a wider array of languages.

Figure 5: Sample tweet of **WildlifeNewsIndia**

## Acknowledgements

We thank Mr.
Pramod Neupane, Consultant-Sustainable Infrastructure at World Bank, for all his support during the initial phase of the project which includes project conceptualization, data curation, funding acquisition, project administration for WWF Nepal, and resources allocation at WWF Nepal. We also thank the communications team at WWF Nepal for providing the weekly news links. This work was supported in part by a Google AI for Social Good award, NSF grant IIS-2046640, a Siebel Scholarship and a Carnegie Mellon Presidential Fellowship.
2304.04752
A Practitioner's Guide to Bayesian Inference in Pharmacometrics using Pumas
This paper provides a comprehensive tutorial for Bayesian practitioners in pharmacometrics using Pumas workflows. We start by giving a brief motivation of Bayesian inference for pharmacometrics highlighting limitations in existing software that Pumas addresses. We then follow by a description of all the steps of a standard Bayesian workflow for pharmacometrics using code snippets and examples. This includes: model definition, prior selection, sampling from the posterior, prior and posterior simulations and predictions, counter-factual simulations and predictions, convergence diagnostics, visual predictive checks, and finally model comparison with cross-validation. Finally, the background and intuition behind many advanced concepts in Bayesian statistics are explained in simple language. This includes many important ideas and precautions that users need to keep in mind when performing Bayesian analysis. Many of the algorithms, codes, and ideas presented in this paper are highly applicable to clinical research and statistical learning at large but we chose to focus our discussions on pharmacometrics in this paper to have a narrower scope in mind and given the nature of Pumas as a software primarily for pharmacometricians.
Mohamed Tarek, Jose Storopoli, Casey Davis, Chris Elrod, Julius Krumbiegel, Chris Rackauckas, Vijay Ivaturi
2023-03-31T04:00:53Z
http://arxiv.org/abs/2304.04752v1
# A Practitioner's Guide to Bayesian Inference in Pharmacometrics using Pumas

###### Abstract

This paper provides a comprehensive tutorial for Bayesian practitioners in pharmacometrics using Pumas workflows. We start by giving a brief motivation of Bayesian inference for pharmacometrics highlighting limitations in existing software that Pumas addresses. We then follow by a description of all the steps of a standard Bayesian workflow for pharmacometrics using code snippets and examples. This includes: model definition, prior selection, sampling from the posterior, prior and posterior simulations and predictions, counter-factual simulations and predictions, convergence diagnostics, visual predictive checks, and finally model comparison with cross-validation. Finally, the background and intuition behind many advanced concepts in Bayesian statistics are explained in simple language. This includes many important ideas and precautions that users need to keep in mind when performing Bayesian analysis. Many of the algorithms, codes, and ideas presented in this paper are highly applicable to clinical research and statistical learning at large but we chose to focus our discussions on pharmacometrics in this paper to have a narrower scope in mind and given the nature of Pumas as a software primarily for pharmacometricians.

**Keywords:** Bayesian inference, statistical software, pharmacometrics, workflow

###### Contents

* 1 Motivation
* 2 Introduction
* 2.1 Pharmacometrics Workflow
* 2.2 Data
* 2.3 Models
* 2.4 Algorithms
* 2.5 Software
* 2.6 Limitations of Other Software
* 2.7 Related Works and Contributions
* 2.8 Paper Layout and Reading Plan
* 3 Bayesian Workflow in Pumas
* 3.1 Defining a Model
* 3.2 Prior Simulations and Predictions
* 3.3 Fitting a Model
* 3.4 Numerical Errors and Debugging
* 3.5 Updating the Posterior with New Data
* 3.6 Basic Summary Statistics
* 3.7 How Many Samples are Needed?
* 3.8 Diagnostic Plots
* 3.9 More Diagnostics
* 3.10 What if the Chains Are Not Converging?
* 3.11 Advanced Posterior Queries
* 3.12 Posterior Plots
* 3.13 Posterior Simulations and Predictions
* 3.14 Visual Predictive Checks and Simulation Plots
* 3.15 Simulation Queries
* 3.16 Non-Compartmental Analysis (NCA) Parameters
* 3.17 Crossvalidation and Expected Log Predictive Density
* 3.18 Information Criteria
* 4 Background and Intuition
* 4.1 Notation
* 4.2 Bayesian Statistics
* 4.3 Prior Selection
* 4.4 Markov Chain Monte Carlo (MCMC) Intuition
* 4.5 No-U-Turn Sampler (NUTS) Algorithm
* 4.6 Basic Summary Statistics
* 4.7 Convergence
* 4.8 Crossvalidation and Model Selection
* 5 Example Models
* 6 Conclusion
* 7 Acknowledgements

## 1 Motivation

Fully Bayesian approaches have become important tools in a pharmacometrician's toolbox (Lee and Gobburu, 2011; McDade et al, 2022) because they enable the rigorous and flexible quantification of uncertainty in all of the model's parameters as well as the use of knowledge from previous similar studies which have applications in rare disease drug approvals, pediatric studies, precision dosing and adaptive trial design. The Bayesian workflow implemented in Pumas (Rackauckas et al, 2020) was designed to be flexible and easily accessible using an intuitive clean syntax. We hope that this paper can be a good learning resource for any pharmacometrician interested in learning about and using fully Bayesian methods.
## 2 Introduction In this section, we discuss the need for a fully Bayesian approach in pharmacometrics and describe where it fits in the spectrum of methods typically used in the field. The standard models, data, algorithms, workflows, and software used in pharmacometrics will be briefly presented to set the context for future discussions. Finally, the main contributions of Pumas and the layout of this paper will be summarized. ### Pharmacometrics Workflow Pharmacometrics is a field of study that includes various quantitative analysis techniques used to understand interactions between drugs and patients. A typical pharmacometrics workflow involves: 1. Prepare analysis-ready datasets from clinical trial source datasets, 2. Exploratory data analysis and scientific reasoning of observed data via summaries and plots, 3. Developing parametric models for disease-drug-patient interaction, 4. Fitting the models' parameters to the data and estimating uncertainty in the parameters, 5. Diagnosing the models' fits and evaluating the quality of their fits, 6. Comparing models and selecting the best model, and 7. Using the best model to predict/simulate alternative scenarios for existing or new patients or to answer key drug development questions. ### Data The data used in pharmacometrics typically includes: 1. Patient covariates, e.g. age or sex, sometimes including time-varying covariates, 2. Drug dosing regimen and administration schedule for each patient, and 3. Observed response for each patient, e.g. the measured drug concentration in the blood and/or some clinical endpoints, typically observed at multiple time points. ### Models The kinds of models used in pharmacometrics are typically: 1. Structural models to capture the observed response, e.g, dynamic-based models where the pharmacokinetics are modeled using ordinary differential equations (ODEs) 2. Covariate models predicting the observed response conditional on the covariates, and 3. Hierarchical models with population-level parameters and patient-specific parameters,1 Footnote 1: In this paper, we use the terminology “population-level” parameters and “patient-specific” (or subject-specific) parameters instead of “fixed effects” and “random effects” which are more common in pharmacometrics. This is because the definition of fixed and random effects in the statistical literature is ambiguous (see page 21 in Gelman (2005)), and in Bayesian modeling, every parameter is modeled using a random variable, which further increases the ambiguity. These are not mutually exclusive. For example, a covariate-based model can also be a hierarchical model with a dynamic-based ODE component. The following showcases a classic model of Theophylline dynamics via a 1-compartment pharmacokinetic (PK) oral absorption model. The model describes the dynamics of drug absorption into the bloodstream and its clearance. Initially, the drug is assumed to be administered as a bolus into a single depot compartment, e.g. when it's first ingested into the gut. The drug then gradually enters the bloodstream with an absorption rate \(\mathrm{Ka}\times\mathrm{Depot}\) where Depot is the amount of drug remaining in the gut. The central compartment represents the drug amount absorbed in the bloodstream over time. The drug is also assumed to be cleared from the central compartment by a rate \(\frac{\text{CL}}{V}\times\text{Central}\), where V is the volume of distribution of this compartment. 
The model has population parameters (\(\boldsymbol{\theta},\boldsymbol{\Omega}\), \(\sigma\)) and patient-specific parameters \(\boldsymbol{\eta}_{i}\). \(\boldsymbol{\theta}\) is a vector of 4 numbers, \(\boldsymbol{\Omega}\) is a \(3\times 3\) positive definite covariance matrix and \(\sigma\) is a standard deviation parameter for the error model. In this model, each patient \(i\) has weight and sex covariates: \[Z_{i}=\begin{bmatrix}\text{wt}_{i},\\ \text{sex}_{i},\end{bmatrix}, \tag{1}\] where \(\text{wt}_{i}\) is a positive real number and \(\text{sex}_{i}\) is an indicator variable which is 0 for males and 1 for females, has individual coefficients: \[\begin{bmatrix}\text{Ka}\\ \text{CL}\\ V\end{bmatrix}=\begin{bmatrix}\theta_{1}e^{\eta_{i,1}}\\ \theta_{2}(\frac{\text{wt}_{i}}{70})^{0.75}\theta_{4}^{\text{sex}_{i}}e^{\eta _{i,2}}\\ \theta_{3}e^{\eta_{i,3}}\end{bmatrix}, \tag{2}\] has internal dynamics: \[\frac{d[\text{Depot}]}{dt} =-\text{Ka}[\text{Depot}]\] \[\frac{d[\text{Central}]}{dt} =\text{Ka}[\text{Depot}]-\frac{\text{CL}}{V}[\text{Central}],\] where Depot and Central are the depot and central compartments, and has normally distributed residual error in the observed drug concentration conc, also known as the error model: \[\text{conc}\sim\text{Normal}\left(\frac{\text{Central}}{V},\sigma\right).\] ### Algorithms In this section, we briefly present an overview of the various algorithms commonly used to fit models to data in pharmacometrics. #### 2.4.1 Marginal Maximum Likelihood Estimation (MLE) When fitting parametric models to data in pharmacometrics, marginal MLE algorithms are the most popular. The patient-specific parameters are marginalized and the marginal likelihood is maximized. There are two families of algorithms to do MLE with the marginal likelihood: 1. Approximate integration of the conditional likelihood (Wang, 2007) which includes: 1. Laplace method, and 2. First order conditional estimation (FOCE). 2. Stochastic approximation expectation maximization (SAEM) (Delyon et al, 1999; Kuhn and Lavielle, 2005). #### 2.4.2 Marginal Maximum-a-Posteriori (MAP) Estimation Marginal MAP is an alternative to marginal MLE that also gives a point estimate for the population parameters but instead of maximizing the marginal likelihood, it maximizes the product of the prior probability of the population parameters and the marginal likelihood. #### 2.4.3 Fully Bayesian Analysis Marginal likelihood maximization can give us, as a by-product, the conditional posterior of the patient-specific parameters \(\eta_{i}\) given the optimal population parameters, or an approximation of it. However, a fully Bayesian approach can give us samples from the joint posterior of all the population parameters and patient-specific parameters simultaneously. The sampling from the joint posterior is typically done using an MCMC algorithm such as the No-U-Turn sampler (NUTS) (Hoffman and Gelman, 2014; Betancourt, 2017) algorithm which is a variant of the Hamiltonian Monte Carlo (HMC) (Neal, 2011) MCMC algorithm. We will cover Bayesian notation and algorithms in more detail in later sections of the paper. Besides the ability to sample from the joint posterior and easily simulate from the posterior uncertainty, a fully Bayesian approach allows modellers to: 1. Incorporate domain knowledge and insights from previous studies using prior distributions. 2. 
Quantify the so-called epistemic uncertainty, which is the uncertainty in the model parameters' values in cases where the model is non-identifiable and/or there are not many data points available. Footnote 2: if a parameter is unidentifiable, its value cannot be uniquely determined from the available data. The above are advantages of Bayesian analysis for which the non-Bayesian workflow typically does not have a satisfactory answer. Bayesian inference as a paradigm uses the established theory of probability to rigorously quantify the uncertainty in the parameter values with fewer assumptions about the model and data. Bayesian workflows empower the analyst to use, when available, domain knowledge to quantify the epistemic uncertainty of model parameters, thus providing immense flexibility in analysis pipelines. Since Bayesian analysis is a conceptual replacement for bootstrapping, the performance of Bayesian inference should be compared to that of bootstrapping rather than that of a single model fit using Laplace/FOCE/SAEM. This is important to set the users' expectations because Bayesian inference will typically be one or more orders of magnitude slower than running a single model fit with Laplace/FOCE/SAEM.

### Software

A number of software packages implement all the classic MLE-based analysis workflows, e.g. NONMEM (Beal et al, 2009), Monolix, and Pumas (Rackauckas et al, 2020). For fully Bayesian analysis, a few options exist, including Pumas, Stan (Carpenter et al, 2015), Torsten (Margossian et al, 2022), BUGS (Goudie et al, 2020), Turing.jl (Ge et al, 2018), PyMC (Salvatier et al, 2015), Pyro (Bingham et al, 2018) and NONMEM's relatively new Bayesian module. In Bayesian statistics, Stan has grown to be the de-facto standard software for Bayesian analysis when only continuous parameters exist, which is typically the case in pharmacometric models. Stan has been tested and used by many statisticians working in different fields and with different kinds of models over many years. Of all the above software for Bayesian analysis, Stan has the largest community and most mature ecosystem of R packages for a full Bayesian analysis workflow.

### Limitations of Other Software

The main limitation of generic probabilistic programming languages (PPLs), such as Stan or Turing.jl, in pharmacometrics (PMx) is that they are not tailored to PMx users and the kinds of models and data found in PMx. For instance, as of the time of this writing, parallelism over subjects is difficult to write in Stan. Footnote 3: In order to parallelize over subjects, you need to use the _reduce_sum_ function in the likelihood which is not a trivial task. For examples on using the _reduce_sum_ function, you can refer to stanpmx.github.io. Torsten tries to bridge the gap between PMx users and the Stan software, e.g. by making it easier to define dosing regimens and simplifying parallelism over subjects using its group solver. For more on the Torsten group solver, you can refer to metrumresearchgroup.github.io/Torsten/function/ode-pop/. However, the APIs of both Stan and Torsten do not provide the best user experience for PMx users because they are based on C++, an arguably difficult-to-use, non-interactive programming language. Additionally, as of the time of this writing, it is unnecessarily difficult to define dynamics initial conditions in Torsten that depend on the model's parameters, while it is easier to do that in the parent PPL Stan.
More broadly, once you assume a particular model structure, there are numerous computational optimizations, e.g. automatic parallelism and automatic ODE linearity detection, that can be employed, which we do in Pumas, but which are difficult to do in other more generic software. The second limitation of other software is that they do not support other non-Bayesian workflows that are common in PMx, which means that users with a PMx background need to translate their models from other software to the PPL to compare the Bayesian and non-Bayesian results. This is often a time-consuming and error-prone process.

### Related Works and Contributions

This paper is not the first attempt to describe a Bayesian workflow in the context of pharmacometrics. Works like Wakefield et al (1999) and Lunn et al (2002) were early attempts using the BUGS software (Goudie et al, 2020). More recently, the Torsten paper (Margossian et al, 2022) and Bayesian workflow paper (Gelman et al, 2020) are also excellent resources with a focus on using Stan (Carpenter et al, 2015). Gelman et al (2020) is particularly focused on the workflow aspect of developing Bayesian models and performing analysis and diagnostics, and while it is not customized for pharmacometrics, the Torsten paper (Margossian et al, 2022) attempts to fill this gap. There is also Elmokadem et al (2023) which provides a guide to using Stan/Torsten or Turing.jl (Ge et al, 2018) for Physiologically-based pharmacokinetic (PBPK) models. Given the above excellent resources, we identify the two main contributions of this paper as:

1. Offering a complete pharmacometrics-oriented Bayesian workflow including scripts and examples of common models using the Pumas software; and
2. Offering an intuitive explanation of various Bayesian statistical and computational concepts that enables the fine-tuning of the algorithms' options and making sense of software outputs without being too technical.

We believe the Pumas Bayesian workflow implementation addresses many of the issues with other software by:

1. Providing a complete, user-friendly Bayesian analysis workflow that includes: (a) model and dosing regimen definition, (b) MCMC sampling, (c) diagnostics, (d) simulation, (e) counter-factual predictions, and (f) customizable cross-validation, using a few lines of code;
2. Using the same user-friendly, compact model syntax for hierarchical, dynamics-based models in both Bayesian and non-Bayesian analyses, so there is no need to translate models;
3. Using Julia as the core programming language, which is a fast, interactive, and easy-to-use language, providing an excellent user experience;
4. Automating the definition and computational optimization of PMx models, e.g. using automatic parallelism over subjects and automatic ODE linearity and stiffness detection, delivering high performance with a neat syntax; and
5. Using the open-source, modular implementation, AdvancedHMC.jl (Xu et al, 2020), of the same MCMC algorithm that Stan uses, which is also used in Turing.jl (Ge et al, 2018).

### Paper Layout and Reading Plan

The rest of this paper is organized as follows. In Section 3, we describe the Bayesian workflow in Pumas. This should help users who are already familiar with the Bayesian workflow and theory to get started quickly using Pumas.
In Section 4, we then take a step back and give a reasonably comprehensive, intuition-oriented introduction to all of the main concepts and ideas invoked in the Bayesian workflow, including the intuition behind the algorithm used and general points of caution when selecting priors. This section can help users make sense of highly technical concepts using mostly English and some light mathematics, which can be useful to make informed decisions when using Pumas. Finally, Section 5 contains a number of common example models and Section 6 includes some additional reading material and resources for current and future Pumas users. This paper is meant to serve both as a tutorial paper for users of Pumas and as an accessible resource for pharmacometricians learning Bayesian inference. If you are an experienced Bayesian and would like to learn how to use Pumas, then you can read Section 3 only and skip Section 4. If you are still learning Bayesian theory, we recommend reading at least some parts of Section 4 first. You can then use Table 1 to switch to reading the relevant parts in Section 3 if you prefer interweaving theory and code. For more focused tutorials and scripts, you can also check the Pumas tutorials website (tutorials.pumas.ai).

## 3 Bayesian Workflow in Pumas

A general Bayesian analysis workflow was presented in Gelman et al (2020) which summarizes many of the current best practices in Bayesian statistics. There are many similarities between the general Bayesian workflow and the standard pharmacometrics workflow based on marginal MLE. There are a number of differences between them worth highlighting though:

1. The Bayesian workflow focuses on what a modeller can do when MCMC sampling fails for a model. This could be because the sampler is taking too long to complete its sampling, the sampler is often triggering numerical errors, or the samples are failing to pass the diagnostic tests after sampling. Things like simplifying the model or using less or more data can help pinpoint the issue and improve sampling. Much of this is also applicable if MLE/MAP optimization fails, but optimization failing is less common than MCMC failing with one of its many failure modes.
2. The Bayesian workflow includes prior selection diagnostics, e.g. prior predictive checks. Marginal MLE optimization does not use priors on population parameters, and there are well-established standard priors often used for patient-specific parameters, so this is also less of an issue outside the Bayesian context.
3. The general Bayesian workflow mentions using the model and MCMC samples to make predictions, but it naturally does not specify what kinds of predictions to make, which is very much context-specific. In pharmacometrics, we are typically interested in identifying the effect of using different dose amounts for new or existing patients or in predicting/simulating future responses given past data.

In this section, we will borrow components from the general Bayesian workflow, customize it for pharmacometrics, and present the Pumas syntax that can help a pharmacometrician follow the current best practices when performing Bayesian analysis. The syntax presented here uses Pumas 2.4 (to be released in June 2023), but Pumas 2.3 has most of the workflow implemented already. For updated syntax, options, and features, please refer to the Pumas documentation (docs.pumas.ai).

### Defining a Model

#### Overview

In Pumas you can define models using the @model macro, which is composed of model blocks.
We will cover eight model blocks in this tutorial:

1. @param: where we define the population parameters of our model, along with their priors.
2. @random: where we define the subject-specific parameters of our model, along with their priors. This is an optional block which can be dropped for single subject models.
3. @covariates: where we declare the subject's covariates. This is an optional block which can be dropped for covariate-free models.
4. @pre: here we do all sorts of pre-computations and other statistical transformations, e.g. calculating the individual PK parameters.
5. @dcp: here you can define dose control parameters (DCPs) in the model. This is an optional block which can be dropped if no DCPs exist in the model.
6. @dynamics: where we define all of our dynamics, e.g. the system of ODEs that governs the relationship between the PK/PD compartments. This is an optional block which can be dropped for models without ODEs.
7. @derived: this is where we define our observed response's distribution used to calculate the likelihood.
8. @observed: this is where we define additional variables to be computed during simulation but not fitting, e.g. a non-compartmental analysis (NCA) based on the concentration curve can be performed here. This is an optional block.

With these blocks, you can code almost all PK/PKPD/PBPK models in Pumas. We'll cover the functionality of all of these model blocks.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Theme** & **Section 3** & **Section 4** \\ \hline Model definition and prior choice & 3.1 & 4.1, 4.2, 4.3 \\ \hline Prior and posterior predictions and simulations & 3.2, 3.11-3.16 & 4.4 \\ \hline Inference algorithm and options & 3.3-3.6 & 4.5-4.6 \\ \hline Convergence and diagnostics & 3.7-3.10 & 4.7 \\ \hline Crossvalidation and model selection & 3.17-3.18 & 4.8 \\ \hline \end{tabular} \end{table} Table 1: This table shows the relevant groups of subsections from Section 3 and Section 4.

First, the @param block is where we include all of the population parameters of our model. We begin by specifying a begin ... end block where we insert one parameter per line. For each parameter, we give a prior with the \(\sim\) operator followed by a distribution. Listing 1 shows an example of an @param block with three parameters and their priors. tvcl has a LogNormal prior with log-scale mean log(2.5) and log-scale standard deviation 1, tvvc has a positive-constrained Normal prior with mean 70 and standard deviation 10, and \(\sigma\) has an Exponential prior with rate 3. Footnote 4: To write LaTeX symbols like \(\sigma\) and \(\theta\) in Pumas, you can write the LaTeX form, e.g. `\sigma`, followed by the Tab key.

```
@param begin
    tvcl ~ LogNormal(log(2.5), 1)
    tvvc ~ Constrained(Normal(70, 10); lower = 0)
    σ ~ Exponential(3)
end
```
Listing 1: @param block example

Next, the @random block holds all of the subject-specific parameters and their priors. Similar to the @param block, we also begin by specifying a begin ... end block where we insert one parameter per line; and each parameter is assigned a prior with the \(\sim\) operator followed by a distribution. In Listing 2, we have an example of an @random block with a single parameter \(\eta\) which has an MvNormal (multivariate normal) prior with mean 0 and identity covariance matrix.

```
@random begin
    η ~ MvNormal([1 0; 0 1])
end
```
Listing 2: @random block example

The @covariates block is used to specify the subject's covariates.
This is only a declaration block that follows the same approach by declaring one covariate per line inside the begin ... end statements. You can find an example in Listing 3 where we are declaring two subject covariates: WT for the subject's weight, and SEX for the subject's sex.

```
@covariates begin
    WT
    SEX
end
```
Listing 3: @covariates block example

We continue with the @pre block, where any pre-computations or statistical transformations necessary for our model are done. In this block, we can use all of the parameters and covariates declared in the previous blocks. The approach is similar to the other blocks so far: one pre-computation or transformation per line inside the begin ... end statements. Listing 4 provides an example of the @pre block, in which we compute the individual PK parameters for each subject.

```
@pre begin
    CL = tvcl * exp(η[1]) * (WT / 70)^0.75
    Vc = tvvc * exp(η[2])
end
```
Listing 4: @pre block example

Next, the @dynamics block is where we specify the ODE system that governs the dynamics of our model. In this block, we declare one ODE per line inside begin ... end statements. On the left-hand side (LHS) of the equation is the derivative of the compartment, i.e. the rate of change. We are free to give each compartment any name. On the LHS the compartment name is followed with a ' (prime) operator to denote the rate of change in that compartment. The prime operator is an intuitive way to declare the derivative of the compartment. On the right-hand side (RHS) of the equation is some combination of the compartments and the individual parameters specified in the @pre block. In the example in Listing 5, we specify the dynamics of the Theophylline model presented in Section 2 with parameters Ka, CL and Vc.

```
@dynamics begin
    Depot' = -Ka * Depot
    Central' = Ka * Depot - CL / Vc * Central
end
```
Listing 5: @dynamics block example

Pumas supports automatic ODE linearity and stiffness detection. So even linear ODEs can be written in the above readable syntax with no performance loss. Additionally, if ODE stiffness is detected, Pumas will switch to a stiff ODE solver as appropriate.

The @derived block is where we define our likelihood term. We can use two types of assignments in this block: the deterministic assignment = and the probabilistic assignment \(\sim\). For the deterministic assignments, the RHS is a deterministic quantity, whereas for the probabilistic assignment the RHS is a probabilistic quantity represented as a distribution. In Listing 6, we have two variables being defined. The first is cp, defined using the deterministic assignment, and the second is conc, defined using the probabilistic assignment while also using cp as one of the parameters of the log-normal distribution. In this example, conc is observed and is called the dependent variable. We define the distribution of conc to be a log-normal distribution with log-scale mean log cp and log-scale standard deviation \(\sigma\) (from our @param block).

```
@derived begin
    cp = @. Central / Vc
    conc ~ @. LogNormal(log(cp), σ)
end
```
Listing 6: @derived block example

The @observed block can be used to compute additional quantities such as the NCA parameters from the concentration curve.
```
@observed begin
    nca := @nca cp
    auc = NCA.auc(nca)
    cmax = NCA.cmax(nca)
end
```
Listing 7: @observed block example

In addition to the above blocks, there are also more blocks available for:

* @init for initializing the dynamics manually, or
* @vars for defining short-hand notation variables for use in the @dynamics and @derived blocks.

For more on the various components of a Pumas model, please refer to the Pumas documentation (docs.pumas.ai).

#### Example: PK Model

For illustration purposes, consider the following 1-compartment model with first-order absorption: \[\text{Depot}^{\prime}=-\text{Ka}\cdot\text{Depot}\] \[\text{Central}^{\prime}=\text{Ka}\cdot\text{Depot}-\frac{\text{CL}}{V_{C}}\cdot\text{Central}\] where CL is the elimination clearance from the Central compartment; \(V_{C}\) is the volume of the Central compartment; and Ka is the absorption rate constant. If we had one subject only, this model can be coded in Pumas using all the blocks we've seen except for @random, which is not necessary for single-subject models. Listing 8 shows the code for this model. This is a complete Bayesian Pumas model. We are specifying the parameters along with their priors in the @param block. We only have a single subject, so there is no need for the inclusion of a @random block, which in turn makes the individual PK parameters defined in the @pre block the same as the population parameters. In the @dynamics block, we are declaring two ODEs that govern our model dynamics. The dynamics have two compartments, named Depot and Central. Finally, in the @derived block, we calculate cp as a function of the Central compartment divided by the PK parameter Vc with a deterministic assignment, and we define our observed response conc as following a log-normal distribution with log-scale mean log cp and log-scale standard deviation \(\sigma\).

The model in Listing 8 is a single-subject model, but most of the time we have multiple subjects in the data and need to define a population model. This can be accomplished with the addition of a @random block to define the subject-specific parameters \(\eta\). We can then use the population and subject-specific parameters together to define the individual PK parameters in the @pre block. We chose to assign a Gaussian prior distribution to the \(\eta\)s with a covariance matrix parameterized using correlations and standard deviations as explained in Section 4.3.3. The model in Listing 9 is an example of such a parameterization. It builds upon the previous single-subject PK model by adding 2 more population parameters: a correlation matrix \(C\) and a vector of standard deviations \(\omega\) in the @param block. The \(C\) correlation matrix has a Cholesky-parameterized LKJ prior (the recommended prior) and the \(\omega\) vector of standard deviations has a positive-constrained multivariate-normal distribution with a diagonal covariance matrix with pre-truncation mean equal to 0 and variances equal to 0.4\({}^{2}\). We build the \(\eta\)s in the @random block by using a multivariate Gaussian prior with a covariance matrix built from the correlation and standard deviations using the Pumas function cor2cov. Finally, in the @pre block we define the individual PK parameters as a transformation of the population and the subject-specific parameters combined.
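Since the code of Listings 8 and 9 is not reproduced in this text, the sketch below assembles the blocks introduced above into a complete single-subject model along the lines described for Listing 8. The prior on the absorption rate constant (tvka) is our own assumption and the listing is illustrative rather than the paper's exact code; the population variant of Listing 9 would additionally carry the @random block and the correlation and standard-deviation parameters described in the previous paragraph.

```julia
using Pumas

single_subject_model = @model begin
    @param begin
        tvcl ~ LogNormal(log(2.5), 1)
        tvvc ~ Constrained(Normal(70, 10); lower = 0)
        tvka ~ LogNormal(log(1), 1)   # assumed prior; not specified in the text
        σ ~ Exponential(3)
    end
    @pre begin
        # With a single subject, the individual parameters equal the population ones.
        CL = tvcl
        Vc = tvvc
        Ka = tvka
    end
    @dynamics begin
        Depot' = -Ka * Depot
        Central' = Ka * Depot - CL / Vc * Central
    end
    @derived begin
        cp = @. Central / Vc
        conc ~ @. LogNormal(log(cp), σ)
    end
end
```

A population version would simply add the @random and @covariates blocks shown earlier and use \(\eta\) inside @pre.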
In general, there is no one prior that fits all cases and it might be a good practice to follow a previous similar study's priors where a good reference can be found. However, if you are faced with the task of choosing good prior distributions for an all-new model, it will generally be a multi-step process consisting of: 1. **Deciding the support of the prior**. The support of the prior distribution must match the domain of the parameter. For example, different priors can be used for positive parameters than those for parameters between 0 and 1. Table 2 can help narrow down the list of options available based on the domain of the parameter. Figure 8: PK 1-compartment single-subject model example 2. **Deciding the center of the prior**, e.g. mean, median or mode. 3. **Deciding the strength of the prior**. This is often controlled by a standard deviation or scale parameter in the corresponding distribution function. A small standard deviation or scale parameter implies low uncertainty in the parameter value which leads to a stronger prior. A large standard deviation or scale parameter implies high uncertainty in the parameter value which leads to a weaker prior. It is recommended that each prior distribution that is considered to be used should be assessed carefully before using it. This will ensure that the strength of the prior reflects your confidence level in the parameter values. Refer to the discussion in Section 4.3 on prior selection for more details. 4. **Deciding the shape of the prior**. Some distributions are left-skewed, others are right skewed and some are symmetric. Some have heavier tails than others, e.g. the student's T-distribution is known for its heavier tail compared to a normal distribution. The shape of the probability density function (PDF) should reflect knowledge about the parameter value prior to observing the data. When selecting new priors, besides the discussion in Section 4.3, you may also find the tips and recommendations in github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations to be useful. For a more advanced discussion of various prior choices, also see Simpson et al (2014). For univariate distributions, you can plot the distribution's probability density curve using the PumasPlots.lines6 function, e.g: Footnote 6: which is a part of the PumasUtilities package ``` 1usingPumasUtilities 2dist=Normal(0.0,1.0) 3PumasPlots.lines(dist) ``` For multivariate and matrix-variate distributions, you can use the rand, mean and var functions to sample from the distribution and make sense of the values' distributions. For example: ``` 1dist=LKJ(3,1.0) 2x=rand(dist,100)#100samples 3mean(x)#mean 4var(x)#element-wisevariance ``` The following is a description of some of the most popular prior distributions available in Pumas: 1. Normal(\(\mu\), \(\sigma\)): univariate normal distributions with support \((-\infty,\infty)\), mean \(\mu\) and standard deviation \(\sigma\). 2. LogNormal(\(\mu\), \(\sigma\)): univariate log normal distribution with support \((0,\infty)\) and a log-scale mean \(\mu\) and log-scale standard deviation \(\sigma\). 3. 
MwNormal(\(\mu\), \(\Sigma\)): multivariate normal distribution with mean vector \(\mu\) and covariance matrix \begin{table} \begin{tabular}{|c|l|} \hline **Support** & **Distributions** \\ \hline \((0,1)\) & Beta, KSOneSided, NoncentralBeta, LogitNormal \\ \hline \((0,\infty)\) & BetaPrime, Chi, Chisq, Erlang, Exponential, FDist, Frechet, Gamma, InverseGamma, InverseGaussian, Kolmogorov, LogNormal, NoncentralChisq, NoncentralF, Rayleigh, Weibull \\ \hline \((-\infty,\infty)\) & Cauchy, Gumbel, Laplace, Logistic, Normal, NormalCanon, NormalInverseGaussian, PGeneralizedGaussian, TDist \\ \hline Real vectors & MvNormal \\ \hline Positive vectors & MvLogNormal \\ \hline Positive definite matrices & Wishart, InverseWishart \\ \hline Correlation matrices & LKJ, LKJCholesky \\ \hline Other & Constrained, truncated, LocationScale, Uniform, Arcsine, Biweight, Cosine, Epanechnikov, Semicircle, SymTriangularDist, Triweight, Pareto, GeneralizedPareto, GeneralizedExtremeValue, Levy \\ \hline \end{tabular} \end{table} Table 2: The table shows some of the most popular prior distributions available in Pumas and their corresponding support domains. You can learn more about each distribution using? followed by the distribution name in the Pumas command line prompt. \(\Sigma\). The matrix \(\Sigma\) can also be a diagonal matrix, e.g. Diagonal([1.0, 1.0]). You can also pass \(\Sigma\) alone as a matrix, e.g. MvNormal(\(\Sigma\)), and the means will be assumed to be 0. 4. MvLogNormal(\(\mu\), \(\Sigma\)): a multivariate log-normal distribution over positive vectors with log-scale mean vector \(\mu\) and log-scale covariance matrix \(\Sigma\) as defined in the MvNormal case above. 5. Cauchy(\(\mu\), \(\sigma\)): a univariate Cauchy distribution with support \((-\infty,\infty)\), location \(\mu\), and scale \(\sigma\). 6. Constrained(dist, lower = l, upper = u): a constrained prior distribution with a fixed support (l, u) and a fixed base distribution dist that could be any univariate or multivariate distribution. lower and upper set the lower and upper bounds on the random variables' support, respectively, defaulting to -Inf \((-\infty)\) and Inf \((\infty)\), respectively. When dist is a univariate distribution, lower and upper should be scalars. When constraining multivariate distributions, lower and upper can be vectors or scalars. If set to a scalar, the same bound will be used for all random variables. There is also a truncated distribution which is different from Constrained in that it allows the base distribution to be a function of the model's parameters but truncated only supports univariate base distributions. In general, it's recommended to use Constrained in the @param block and truncated in the @random and @derived blocks. Examples: * Constrained(Normal(0.0, 1.0), lower = 0.0) is a half normal distribution. * Constrained(Cauchy(0.0, 1.0), lower = 0.0) is a half Cauchy distribution. * Constrained(MvNormal([0.0, 0.0], [1.0 0.0; 0.0 1.0]), lower = 0.0) is a constrained multivariate normal distribution. The init keyword argument can also be set to specify the initial value of the parameter, e.g. Constrained(Normal(), lower = 0.0, init = 1.0) 7. truncated(dist, lower, upper): similar to Constrained with fixed lower and upper bounds lower and upper, respectively, and a base distribution dist. Setting upper is optional and it defaults to Inf \((\infty)\) when not set. 
In truncated, the base distribution dist is allowed to depend on the model's parameters and the normalization constant is computed in every log probability evaluation. However, the lower and upper bounds must be fixed constants and truncated only supports univariate base distribution. Examples: truncated(Normal(0, 1), 0.0, Inf) is a half normal distribution. truncated(Cauchy(), 0.0, Inf) is a half Cauchy distribution. truncated(Normal(), -Inf, 0.0) is a negative half normal distribution. 8. Uniform(\(l\), \(u\)): a univariate uniform distribution with lower and upper bounds \(l\) and \(u\) respectively. 9. LKJ(\(d\), \(\eta\)): a matrix-variate LKJ prior over correlation matrices of size \(d\times d\). \(\eta\) is the positive shape parameter of the LKJ prior. Decreasing \(\eta\) results in samples with correlations closer to \(\pm 1\). There is also LKJCholesky which is semantically identical to LKJ but has some advantages. See below. 10. LKJCholesky(\(d\), \(\eta\)): a Cholesky-factorized version of the LKJ distribution where the matrix sampled is in factorized form. This is recommended over LKJ for use inside the model for performance reasons. 11. Wishart(\(\nu\), \(S\)): a matrix-variate Wishart distribution over \(d\times d\) positive definite matrices with \(\nu\) degrees of freedom and a positive definite \(S\) scale matrix. 12. InverseWishart(\(\nu\), \(\Psi\)): a matrix-variate inverse Wishart distribution over \(d\times d\) positive definite matrices with \(\nu\) degrees of freedom and a positive definite scale matrix \(\Psi\). 13. Beta(\(\alpha\), \(\beta\)): a univariate Beta distribution with support from 0 to 1 and shape parameters \(\alpha\) and \(\beta\). 14. Gamma(\(\alpha\), \(\theta\)): a univariate Gamma distribution over positive numbers with shape parameter \(\alpha\) and scale \(\theta\). 15. Logistic(\(\mu\), \(\theta\)): a univariate logistic distribution with support \((-\infty,\infty)\), location \(\mu\) and scale \(\theta\). 16. LogitNormal(\(\mu\), \(\sigma\)): a univariate logit normal distribution with support \((0,1)\) and a base normal distribution with mean \(\mu\) and standard deviation \(\sigma\). 17. \(\texttt{TDist}(\nu)\colon\) a univariate Student's T distribution with support \((-\infty,\infty)\), \(\nu\) degrees of freedom and mean 0. To change the mean of the T distribution, you can use a LocationScale distribution (shown below). 18. LocationScale(\(\mu\), \(\sigma\), \(d\)): a scaled and translated univariate distribution with a base distribution \(d\). The base distribution's random variable is first scaled by \(\sigma\) and then translated by \(\mu\). Example: LocationScale(1.0, 2.0, TDist(2)) is a scaled and translated Student's \(t\) distribution. The mean of the LocationScale distribution is \(\mu+\sigma\times\text{mean(d)}\) and the standard deviation is \(\sigma\times\text{std(d)}\). 19. Laplace(\(\mu\), \(\sigma\)): a univariate Laplace distribution with support \((-\infty,\infty)\), location \(\mu\) and scale \(\sigma\). 20. Exponential(\(\theta\)): a univariate exponential distribution with support \((0,\infty)\) and scale \(\theta\). 21. (Improper) flat priors: instead of using a distribution, one can specify a domain instead such as a VectorDomain for vector parameters, PSDDomain for positive definite parameters or CorrDomain for correlation matrix parameters. Those domains are treated in Pumas as flat priors. If the domain is open, this would be an improper prior. For more about domains, see the Pumas documentation (docs.pumas.ai). 
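To make these choices concrete, below is a minimal sketch of an @param block that combines several of the priors listed above. The parameter names (tvcl, tvvc, \(\sigma\), \(\omega\), \(C\)) mirror the earlier PK examples in this section, and the specific means and scales are illustrative assumptions only, not recommended defaults:

```
@param begin
    # positive typical values: log-normal priors
    tvcl ~ LogNormal(log(4.0), 1.0)
    tvvc ~ LogNormal(log(70.0), 1.0)
    # positive residual standard deviation: a half-Cauchy prior via Constrained
    σ ~ Constrained(Cauchy(0.0, 1.0), lower = 0.0, init = 0.5)
    # standard deviations of the random effects: positive-constrained MvNormal
    ω ~ Constrained(MvNormal(Diagonal([0.4^2, 0.4^2])), lower = 0.0, init = [0.2, 0.2])
    # correlation matrix of the random effects: Cholesky-parameterized LKJ prior
    C ~ LKJCholesky(2, 1.0)
end
```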
### Prior Simulations and Predictions After defining a model with the priors, a prior predictive check may be run to check how close the prior predictions or simulations are to the real data. Prior simulations can be run using the simobs function passing in the model, model, and subject/population, data, as arguments. ``` 1sims=simobs(model,data;samples=100) ``` Listing 10: Prior simulation The samples keyword argument specifies the number of samples to simulate from the prior distributions for each subject in data. In the simobs function, there's also a keyword argument: simulate_error. If it is set to true (the default value), Pumas will sample from the response's error model in the @derived block (aka simulation), otherwise it will return the expected value of the error distribution (aka prediction). The latter is equivalent to using the predict function. There are various statistics and queries that can be run given the simulation results. These are explained in Section 3.15. The simulation results can also be plotted using a visual predictive check (VPC) as explained in Section 3.14. When the simulations are from the prior model, the VPC is usually called a prior predictive check. ### Fitting a Model Now that we have a model, we need to fit it using Pumas. This is the role of the fit function which takes four positional arguments: 1. A Pumas model 2. A Pumas population 3. The initial parameter estimates 4. A fitting algorithm The fitting algorithm in an MLE setting can be an instance of FOCE or Laplace for example. However to run Bayesian inference using MCMC instead, it can be set to an instance of either: * BayesMCMC * MarginalMCMC MarginalMCMC samples from the marginal posterior by integrating out the subject-specific parameter first, whereas BayesMCMC samples from the joint posterior. MarginalMCMC can be much faster than BayesMCMC in some cases but it is still experimental and will be improved in the future. The options in BayesMCMC and MarginalMCMC are passed when constructing an instance of the algorithm using keyword arguments, e.g BayesMCMC(nsamples=2000). The main options that can be set in both BayesMCMC and MarginalMCMC are: * target_accept: target acceptance ratio for the NUTS algorithm, defaults to 0.8 * nsamples: number of Markov Chain Monte Carlo (MCMC) samples to generate, defaults to 2000 * nadapts: number of adaptation steps in the NUTS algorithm, defaults to 1000 * nchains: number of MCMC chains to sample, defaults to 4 * ess_per_chain: target effective sample size (ESS) per chain, sampling terminates if the target is reached, defaults to nsamples * check_every: the number of samples after which the ESS per chain is checked * time_limit: a time limit for sampling in seconds, sampling terminates if the time limit is reached, defaults to Inf (which is \(\infty\) in Julia) * ensemblealg: can be set to EnsembleSerial() for serial sampling, EnsembleThreads() for multi-threaded sampling or EnsembleDistributed() for multi-processing (aka distributed parallelism) sampling. By default parallelism over both chains and subjects will be turned on if enough threads/processes are available. * parallel_chains: can be set to true or false. If set to false, the chains will not be sampled in parallel. If set to true, the chains will be sampled in parallel using either multi-threading or multi-processing depending on the value of ensemblealg. The default value is true. * parallel_subjects: can be set to true or false. 
If set to false, the log probability computation will not be parallelized. This is preferred when the number of subjects is small. If set to true, the log probability computation will be parallelized over the subjects using either multi-threading or multi-processing depending on the value of ensemblealg. The default value is true if enough threads/processes are available to do both parallelism over chains and subjects.
* rng: the random number generator used
* diffeq_options: a NamedTuple of all the differential equations solver's options, e.g. diffeq_options = (alg = Rodas5(),) can be used to force Pumas to use the stiff ODE solver Rodas5 instead of relying on the automatic stiffness detection and auto-switching behaviour of Pumas.
* constantcoef: a NamedTuple of the parameters to be fixed during sampling. This can be used to sample from conditional posteriors fixing some parameters to specific values, e.g. constantcoef = (\(\sigma\) = 0.1,) fixes the \(\sigma\) parameter to 0.1 and samples from the posterior of the remaining parameters conditional on \(\sigma\).

The MarginalMCMC algorithm also has a keyword argument marginal_alg which defaults to LaplaceI() but can also be FOCE() or FO(). By default, both BayesMCMC and MarginalMCMC will run 4 Markov chains in parallel, using the remaining computing resources to parallelize the computations across subjects. By default, 2,000 MCMC iterations will be run using the first 1,000 samples of each chain as burn-in. Pumas does not automatically discard the burn-in samples, so the user needs to use the discard function to remove them. If you are using a Pumas version older than 2.4, you can use Pumas.truncate instead of discard.

Listing 11 shows a Bayesian Pumas model fitting example. We save the result to res and we call discard on it with the keyword argument burnin set to 1,000 samples. This will output a truncated fit by discarding the first 1,000 samples per chain. Note that in Julia, 1_000 and 1000 are equivalent integer literals.

```
res = fit(model, pop, iparams, BayesMCMC(nsamples = 2_000, nadapts = 1_000))
tres = discard(res; burnin = 1_000)
```
Listing 11: Fitting a Bayesian model in Pumas

You can also pass a ratio keyword argument to the discard function to drop (1 - ratio) \(\times 100\%\) of the samples. This is known as thinning and it works by selecting 1 sample from every 1/ratio samples in each chain. Generally speaking, thinning is discouraged in the final analysis because it leads to some loss of information. However, in the initial exploratory phase, when many exploratory simulations/predictions are run, thinning may be desirable for faster iterations. Another example is in Listing 12, where we are using MarginalMCMC.

```
res = fit(model, pop, iparams, MarginalMCMC(nsamples = 100, nadapts = 10))
tres = discard(res; burnin = 10)
```
Listing 12: Fitting a Bayesian model in Pumas with custom arguments

When fitting the model using BayesMCMC or MarginalMCMC, you will be able to view the progress of the sampler using live progress information displayed as shown in Figure 1. When using multi-threading or distributed parallelism, an interval or ratio is displayed for each field instead of a value. The following is a description of the most important fields displayed in Figure 1:
* iterations refers to how many MCMC iterations are completed, including both the adaptation and sampling phases.
* n_steps is the number of time steps taken in the last proposal. If this is too large, the NUTS sampler will be very slow and inefficient.
For more on the number of steps, see Section 4.5.6.
* is_accept is true if the last proposal was accepted and false otherwise. For more on proposals and the NUTS algorithm, see Section 4.5.
* acceptance_rate refers to the average acceptance rate of all the past proposals. This should converge to a value similar to the target_accept option after the adaptation phase. For more on the acceptance ratio, see Section 4.5.5.
* log_density refers to the log joint probability of the parameters and observations. If this is monotonically increasing late during the sampling phase of the fit, this is a sign that the sampler likely didn't converge to the area(s) of high posterior probability mass during adaptation and the chains likely would not converge. For more on signs of lack of convergence, see Section 4.7, and for more on monotonically increasing log densities (aka optimization behaviour), see Section 4.5.5.
* tree_depth is the maximum tree depth reached when generating the last proposal. For more on this, see Section 4.5.6.
* step_size is the time step size in the NUTS algorithm, which is adapted during the adaptation phase and fixed during the sampling phase. For more on the step size and its connection to the target acceptance ratio, see Section 4.5.5.
* is_adapt is true during the adaptation phase and false during the sampling phase.

Figure 1: Live progress information displayed during sampling using Pumas.

An example of the result of the fit function is shown in Figure 2. A number of summary statistics for the population parameters are displayed automatically. You can also use other Pumas functions to query specific summary statistics programmatically, rather than only in display. For more on summary statistics functions, see Sections 3.6 and 3.11.

Figure 2: An example of the display of the Bayesian fit result, output from the fit function in Pumas.

### Numerical Errors and Debugging

#### 3.4.1 Numerical Instability in the Model

Each evaluation of the log-likelihood at specific parameter values \((\eta,\theta)\) involves a full evaluation of the model, including the structural model (@pre block), numerically solving the differential equations (@dynamics block), and computing the likelihood (@derived block). In order to perform effective Bayesian inference, one needs to ensure that all the model blocks are numerically stable and do not lead to Inf, -Inf or NaN values. Numerical instability can result from many causes, but some usual suspects include:
1. Dividing by a very small number or 0, e.g. if a parameter is in the denominator and is allowed to be 0 during the fitting.
2. Calling the exponential function with a large exponent, e.g. due to bad initial parameter values.
3. Some observations may have 0 or approximately 0 probability according to your model at a specific \((\eta,\theta)\). For example, if a Bernoulli response distribution was used and the probability parameter of the Bernoulli distribution was exactly 0 (or 1), when a 1 (or 0) observation exists in the data.
4. The ODE solver is failing to converge to a solution because the dynamics are not stable for a particular choice of extreme \((\eta,\theta)\).
5. The response's distribution has 0 standard deviation, e.g. when using a proportional error model and the concentration drops to 0.
6. Taking the log or square root of a negative parameter.
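As one concrete guard against cause 5 above (a response distribution with a 0 standard deviation), a combined additive and proportional error model keeps the scale strictly positive even when the predicted concentration is 0. The sketch below is illustrative only; σ_add and σ_prop are hypothetical parameters that would have to be declared in the @param block:

```
@derived begin
    cp = @. Central / Vc
    # sqrt(σ_add^2 + (σ_prop * cp)^2) stays strictly positive whenever σ_add > 0,
    # so the Normal below never ends up with a zero standard deviation
    conc ~ @. Normal(cp, sqrt(σ_add^2 + (σ_prop * cp)^2))
end
```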
Initial parameter values that cause numerical errors are often more important to watch out for than intermediate bad values during the fitting. This is because bad intermediate models will be rejected automatically when they lead to numerical errors. When this happens, one may see the following warning occur:

```
Warning: The current proposal will be rejected due to numerical error(s).
isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
```

This warning is not necessarily a bad thing or a failure of the estimation process. What it represents is that the numerical estimation of \(p(\eta,\theta|D)\) has failed for a particular run. In many cases, this warning is benign and expected from the Bayesian estimation process, e.g. it could be that extreme parameters of an ODE model led to unstable dynamics, causing the simulator to diverge. The MCMC stepping process will recover from this by rejecting the step and proposing new \((\eta,\theta)\) values to try. Thus, if one only sees a few of these warnings during the estimation process, there may be nothing to worry about. However, if excessive warnings are displayed, then this could mean that many steps are being rejected, causing the MCMC process to not effectively explore the posterior and potentially leading to a lack of convergence. One further indication of this is if the stepping process also displays the additional warning:

```
NaN dt detected. Likely a NaN value in the state, parameters, or derivative value caused this outcome.
```

This warning implies that the ODE solver failed. It is shown because the calculation of the initial ODE time step \(dt\), which is the first part of the ODE solver process, resulted in NaN. This is usually caused by NaN or Inf values in the ODE parameters. If this is the case, it may be a good idea to investigate whether the individual parameters in the @pre block have reasonable values or not. One quick way to do this is to instrument the @model definition to print out the current values that are being used in the simulation process. For example, the line

```
Vc = θ[3] * exp(η[2])
```

can be changed to

```
Vc = @pumasdebug θ[3] * exp(η[2])
```

which will print out the value that is calculated at every step. Using these printouts, one can directly see the values of the ODE parameters being used in the @dynamics block. @pumasdebug is a Pumas 2.4 feature which is not available prior to that. Some of the most common issues found through this form of debugging are due to incorrectly set parameter bounds. For example, a parameter defined by dividing by a fixed or random effect may take a (nearly) infinite value if the denominator is close to zero. A fix is then to ensure that a lower bound is appropriately defined for the effect causing the (near) 0 denominator.

#### Numerical Instability in the ODE Solver

If the ODE parameters seem to be realistic candidates that are being rejected, then one may need to ensure that the ODE solver process is appropriate for the equation at hand. Any of the following warnings is usually a sign that the ODE solver is failing for algorithmic reasons rather than because of NaN or Inf values in the parameters:

```
Warning: Interrupted. Larger maxiters is needed.
Warning: dt(x) <= dtmin(y) at t=z. Aborting.
Warning: Instability detected. Aborting
```

For debugging such situations, it can be helpful to recreate the ODE solve with the exact parameters generated by @pre and directly call the ODE solver functions from DifferentialEquations.jl (Rackauckas and Nie (2017)). However, some common guidance is:
* Reduce the tolerances. Generally, a very robust set of tolerances (at the cost of some performance) is abstol=1e-12, reltol=1e-12.
This can be set as part of the diffeq_options keyword argument in the sampling algorithm, e.g. BayesMCMC(diffeq_options = (abstol = 1e-12, reltol = 1e-12,)).
* Change the ODE solver to a method specifically designed for stiff differential equations. A common choice is Rodas5P(). Once again, this can be set using the diffeq_options keyword argument, e.g. BayesMCMC(diffeq_options = (alg = Rodas5P(),)).

One common reason for numerical failures of ODE solvers is a property known as stiffness in the ODE. Stiffness is difficult to define rigorously but can loosely be described as large time-scale differences in the rates of change in an ODE, for example if one value has a derivative of 1 while another has a derivative of \(10^{9}\). This can lead to difficulties in the ODE solver and, by consequence, in the simulation and Bayesian inference process. Pumas, by default, uses an ODE solver which switches between a less robust but faster method for non-stiff ODEs, and a more robust but slower method for stiff ODEs. However, this default behaviour can at times be less stable than requiring all steps to use a stiff ODE solver, hence the second recommendation to manually switch the ODE solver.

### Updating the Posterior with New Data

There are algorithms that can efficiently update the posterior samples given new observations per subject or new subjects, such as sequential Monte Carlo. As of the time of this writing, Pumas does not implement these algorithms. To update the posterior given new data for existing subjects or new subjects, you would currently have to refit the model to the entire dataset. Alternatively, you can approximate the posterior samples using a tractable distribution family, e.g. a multivariate Gaussian distribution, and refit the model to the new data only, using the posterior approximation as the prior distribution. In future releases of Pumas, we intend to implement such efficient methods for updating the posterior samples. Please refer to the Pumas documentation (docs.pumas.ai) for a list of the latest features.

### Basic Summary Statistics

To query a number of basic summary statistics for the population parameters, you can use:

```
summarystats(tres)
```

where tres is the result from fit or discard. This will output the sample mean, sample standard deviation, Monte Carlo standard error (MCSE), effective sample size (ESS), \(\hat{R}\) and ESS per second. To get the same summary statistics for the subject-specific parameters of the \(i^{th}\) subject, you can use the subject keyword argument:

```
summarystats(tres, subject = i)
```

### How Many Samples are Needed?

The number of samples needed to accurately estimate various quantities can differ. For instance, to be able to estimate the probability of rare events or some extreme quantiles, you will need many more samples than are needed to estimate the mean of the posterior. By default, Pumas will generate 4 chains with 2000 samples per chain, 1000 of which will be used for adaptation. Depending on what you are trying to estimate, you may need to run the sampler for longer and check that the result does not significantly change as you increase the number of samples in your chains. More concretely, an ESS of 400 was recommended as a good target in Vehtari et al (2019). With the default 4 chains Pumas runs, this is an ESS of 100 per chain. The same ESS recommendations were also reported in old and recent editions of Gelman et al (2013a).
While this is just a general guideline and it doesn't apply to extreme quantile estimation or estimating probabilities of rare events, it can be a good initial target to aim for. The ess_per_chain option discussed in Section 3.3 can be used to set a target ESS per chain. Beside a target ESS, one should also ensure that the \(\hat{R}\) diagnostic is less than 1.1. It is even better if it were less than 1.01 as recommended in Vehtari et al (2019). Estimating the ESS, \(\hat{R}\) and MCSE for the purpose of estimating different quantities other than the posterior mean, e.g. parameter quantiles, is currently not implemented in Pumas but it is an upcoming feature. This can give users more insights into the estimation accuracy of their MCMC samples. ### Diagnostic Plots There are several diagnostic plots that help you identify lack of convergence including: trace plot, cumulative mean plot, and auto-correlation plot. All of the diagnostic plots require the loading of the PumasUtilities package first: ``` usingPumasUtilities ``` Assume \(\ttres\) is the output from fit or discard. #### 3.8.1 Trace Plot The trace plot of a parameter shows the value of the parameter in each iteration of the MCMC algorithm. A good trace plot is one that: * is noisy, not an increasing or decreasing line. * has a fixed mean. * has a fixed variance. * shows all chains overlapping with each other, also known as chain mixing7. Footnote 7: Chain mixing refers to the case when different chains include samples from the same regions in the posterior as opposed to each chain including samples from a separate region of the posterior. You can plot trace plots with the function trace_plot, e.g: ``` trace_plot(trees;parameters=[:tvcl]) ``` Figure 3 shows the resulting trace plot for the parameter tvcl. You can add more parameter names to the parameters keyword argument, e.g. parameters = [:tvcl, :tvvc] to plot more parameters. As you can see the trace plot shown has many of the desired properties of a good trace plot. Figure 3: Example of a trace plot. When the parameters keyword argument is not specified, all the population parameters' trace plots will be displayed. To plot the trace plot of the subject-specific parameters of a group of subjects, you can set the subjects keyword argument instead of setting the parameters keyword argument, e.g: ``` trace_plot(tres;subjects=[i,2]) ``` See the Pumas documentation (docs.pumas.ai) for more details and examples. #### 3.8.2 Cumulative Mean Plot The cumulative mean plot of a parameter shows the mean of the parameter value in each MCMC chain up to a certain iteration. An MCMC chain converging to a stationary posterior distribution should have the cumulative mean of each parameter converge to a fixed value. Furthermore, all the chains should be converging to the same mean for a given parameter, the posterior mean. If the cumulative mean curve is not converging or the chains are converging to different means, this is a sign of non-convergence. You can plot a cumulative mean plot for the population parameter tvcl using: ``` cummean_plot(tres;parameters=[:tvcl]) ``` Figure 4 shows the resulting trace plot for the parameter tvcl. Much like in the trace plot, you can add more parameter names to the parameters keyword argument or leave it out completely which will plot all the population-level parameters. Similarly, the same plot can be plotted for the subject-specific parameters using the subjects keyword argument instead of the parameters keyword argument. 
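For example, a call along the following lines plots the cumulative means of the subject-specific parameters of the first two subjects (the subject indices here are illustrative only):

```
cummean_plot(tres; subjects = [1, 2])
```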
Figure 4: Example of a cumulative mean plot.

#### 3.8.3 Auto-correlation Plot

MCMC chains are prone to auto-correlation between the samples because each sample in the chain is a noisy function of the previous sample. The auto-correlation plot shows the correlation between every sample with index \(s\) and the corresponding sample with index s + lag for all s \(\in\) 1:N-lag, where N is the total number of samples. For each value of lag, we can compute a correlation measure between the samples and their lag-steps-ahead counterparts. The correlation is usually a value between 0 and 1 but can sometimes be between -1 and 0 as well. The auto-correlation plot shows the lag on the x-axis and the correlation value on the y-axis. For well-behaving MCMC chains, the correlation gets closer to 0 as lag increases. This means that there is less and less correlation between any 2 samples further away from each other. The value of lag where the correlation becomes close to 0 can be used to guide the thinning of the MCMC samples to extract mostly independent samples from the auto-correlated samples. The discard function can be used to perform thinning with the ratio keyword set to 1 / lag for an appropriate value of lag.

```
discard(tres; ratio = 1 / lag)
```

That said, generally speaking, thinning is usually discouraged in the final analysis because it leads to some loss of information. However, in the initial exploratory phase, when many exploratory simulations/predictions are run, thinning may be desirable for faster iterations.

You can plot an auto-correlation plot for the population parameter tvcl using:

```
autocor_plot(pk_1cmp_fit; parameters = [:tvcl])
```

Figure 5 shows the resulting auto-correlation plot for the parameter tvcl. Much like in the trace plot, you can add more parameter names to the parameters keyword argument or leave it out completely, which will plot all the population-level parameters. Similarly, the same plot can be plotted for the subject-specific parameters using the subjects keyword argument instead of the parameters keyword argument.

### More Diagnostics

A number of other diagnostics exist to help you identify:
* When the MCMC algorithm hasn't converged, or
* How many samples to throw away as burn-in.

In general, we recommend running these diagnostics after removing the adaptation steps using the discard function. Some of the diagnostics we present here can then tell you how many more samples to remove as burn-in after removing the adaptation steps. The discard function can be used again on its own output to remove the additional samples as burn-in.

#### 3.9.1 Geweke Diagnostic

```
gewekediag(tres; subject = nothing, first = 0.1, last = 0.5)
```

The above function computes the Geweke diagnostic (Geweke, 1991) for each chain, outputting a p-value per parameter. tres is the output from fit or discard, and the remaining keyword arguments have the default values shown above. If the subject keyword argument is set to nothing (the default value) or left out, the chains diagnosed are those of the population parameters. If subject is set to an integer index, the chains diagnosed are those of the subject-specific parameters corresponding to the subject with the input index.
The Geweke diagnostic compares the sample means of two disjoint sub-chains \(X_{1}\) and \(X_{2}\) of the entire chain using a normal difference of means hypothesis test where the null and alternative hypotheses are defined as: \[H_{0} :\mu_{1}=\mu_{2}\] \[H_{1} :\mu_{1}\neq\mu_{2}\] where \(\mu_{1}\) and \(\mu_{2}\) are the population means. The first sub-chain \(X_{1}\) is taken as the first (first * 100)% of the samples in the chain, where first is a keyword argument defaulting to 0.1. The second sub-chain \(X_{2}\) is taken as the last (last * 100)% of the samples in the chain, where last is a keyword argument defaulting to 0.5. The test statistic used is: \[z_{0}=(\bar{x}_{1}-\bar{x}_{2})\Big{/}\sqrt{s_{1}^{2}+s_{2}^{2}}\] where \(\bar{x}_{1}\) and \(\bar{x}_{2}\) are the sample means of \(X_{1}\) and \(X_{2}\) respectively, and \(s_{1}\) and \(s_{2}\) are the Markov Chain standard error (MCSE) estimates of \(X_{1}\) and \(X_{2}\) respectively. Auto-correlation is assumed within the samples of each individual sub-chain, but the samples in \(X_{1}\) are assumed to be independent of the samples in \(X_{2}\). The p-value output is an estimate of \(P(|z|>|z_{0}|)\), where \(z\) is a standard normally distributed random variable. Low p-values indicate one of the following: * The first and last parts of the chain are sampled from distributions with different means, i.e. non-convergence, * The need to discard some initial samples as burn-in, or * The need to run the sampling for longer due to lack of samples or high auto-correlation. High p-values (desirable) indicate the inability to conclude that the means of the first and last parts of the chain are different with statistical significance. However, this alone does not guarantee convergence to a fixed posterior distribution because: * Either the standard deviations or higher moments of \(X_{1}\) and \(X_{2}\) may be different, or * The independence assumption between \(X_{1}\) and \(X_{2}\) may not be satisfied when high auto-correlation exists. #### 3.9.2 Heidelberger and Welch diagnostic ``` 1heideldiag(tres;subject = nothing, alpha = 0.05, eps = 0.1, start = 1) ``` The above function computes the Heidelberger and Welch diagnostic (Heidelberger and Welch, 1983) for each chain. If the subject keyword argument is set to nothing (default value) or left out, Figure 5: Example of an auto-correlation plot. the chains diagnosed will be those of the population parameters. If subject is set to an integer index, the chains diagnosed will be those of the subject-specific parameters corresponding to the subject with the input index. The output of this function is a dataframe whose columns are explained below. Intuitively, the Heidelberger diagnostic attempts to: * Identify a cutoff point for the initial transient phase for each parameter, after which the samples can be assumed to come from a steady-state distribution. The initial transient phase can be removed as a burn-in. The cutoff point for each parameter is given in the burnin column of the output dataframe. * Estimate the relative confidence interval for the mean of the steady-state posterior distribution of each parameter, assuming such steady-state distribution exists in the samples. The relative confidence interval is computed by dividing the lower and upper bounds of the confidence interval by the mean value of the parameter. A large confidence interval implies either the lack of convergence to a stationary distribution or the lack of samples. 
Half the relative confidence interval is given in the halfwidth column of the output dataframe. The test column will be true (1) if the halfwidth is less than the input target eps (default is 0.1) and false (0) otherwise. Note that parameters with a mean value close to 0 can have erroneously large relative confidence intervals because of the division by the mean. The test value can therefore be expected to be false (0) for those parameters without concluding a lack of convergence. * Quantify the extent to which the distribution of the samples is stationary using statistical testing. The returned p-value, shown in the pvalue column of the output dataframe, can be considered a measure of mean stationarity. A p-value lower than the input threshold alpha (default is 0.05) implies a lack of stationarity of the mean, i.e. the posterior samples did not converge to a steady-state distribution with a fixed mean. The Heidelberger diagnostic only tests for the mean of the distribution. Therefore, much like other diagnostics, it can only be used to detect the lack of convergence and not to prove convergence. In other words, even if all the numbers seem normal, one cannot conclude that the chain converged to a stationary distribution or that it converged to the true posterior. ### What if the Chains Are Not Converging? If the chains seem to not be converging, there are things you can try to help your Markov chains converge: * Lower the target acceptance ratio from the default 0.8. * Re-parameterize your model to have less parameter dependence. * Fix some parameter values to known good values, e.g. values obtained by _maximum-a-posteriori_ (MAP) optimization. * Initialize the sampling from good parameter values. * Use a stronger prior around suspected good parameter values. * Simplify your model, e.g. using simpler dynamics. * Try the marginal MCMC algorithm MarginalMCMC instead of the full joint MCMC algorithm BayesMCMC. ### Advanced Posterior Queries #### Summary Statistics After you fit your Bayesian Pumas model, there are a number of functions and plots you can call on the output of the fit or discard functions. Often you want to execute posterior queries. Beside the basic summary statistics that one can get using the summarystats function as discussed in Section 3.6, one can also compute more advanced statistics based on the posterior. A common advanced posterior query is the probability that a certain parameter \(\theta\) is higher than 0 which can be written as an expectation problem: \[\mathrm{E}[\theta>0\mid\mathrm{data}]\] The way you can do this is using the mean function with a convenient do operator. Listing 13 shows an example of a posterior query using the do operator where we are testing if the parameter tvcl is higher than 0. It outputs a valid probability estimate, i.e. \(\in[0,1]\). ``` 1mean(tres)dop 2p.tvcl>0 3end ``` Listing 13: Example of a posterior query with the do operator Instead of mean, one can also use var to compute the variance, or use cov and cor to compute the covariance and correlation matrices, respectively, if multiple outputs are returned from the do block. Listing 14 shows an example where the correlation matrix between the tvcl and tvvc parameters is estimated using the posterior samples. ``` 1cor(tres)dop 2[p.tvcl,p.tvc] 3end ``` Listing 14: Posterior queries from multiple outputs Note that any transformation of the parameters can be done in the do block. 
For example, we can get the mean value of the lower triangular Cholesky factor of a correlation matrix parameter C using the code in Listing 15. ``` 1mean(tres)dop 2getchol(p.C).L 3end ``` Listing 15: Mean Cholesky factor of correlation matrix This is sometimes needed to compare the results of Pumas and Stan because Stan's equivalent to LKJCholesky reports the lower triangular factors in the MCMC samples instead of the actual correlation matrices which Pumas reports. To compute summary statistics of the subject-specific parameters of subject \(i\) instead of the population parameters, you can use the subject keyword argument as shown in Listing 16. ``` 1mean(tres,subject=i)dop 2p.\(\eta\)std 3end ``` Listing 16: Mean subject-specific parameters #### 3.11.2 Quantiles You can query the estimate of the posterior quantiles for population-level or subject-specific parameters using the quantile function: ``` 1quantile(tres) ``` This will display the 2.5%, 25%, 50%, 75%, and 97.5% quantiles of all the population-level parameters by default. To display the quantiles of the subject-specific parameters of subject \(i\) instead, you can use the subject keyword argument as such: ``` 1quantile(tres,subject=i) ``` To change the quantiles computed, you can also manually input the desired quantiles using the q keyword argument. For example, the following returns the 10% and 90% quantiles of the subject-specific parameters of subject \(i\): ``` 1quantile(tres,subject=i,q=[0.1,0.9]) ``` #### 3.11.3 Credible Intervals A credible interval is an interval containing a pre-specified probability mass in the posterior distribution. For instance, an estimate of the 95% credible interval is any interval such that at least 95% of the posterior samples obtained with MCMC lie in that interval. Naively, one can use the interval from the 2.5% quantile to the 97.5% quantile of a parameter as a 95% credible interval. This can be obtained using the quantile function as shown in section 3.11.2. However, less naively, one may be interested in the smallest interval that includes at least 95% of the posterior mass. This is commonly known as the highest probability desnity interval (HPDI). To get the HPDI which contains \((1-\text{a})\%\) of the samples for each population parameter, you can use: ``` 1hpd(tres,alpha=a) ``` To get the same interval for the subject-specific parameters of subject \(i\), you can use: ``` 1hpd(tres,alpha=a,subject=i) ``` ### Posterior Plots There are a number of plots that you can use to visualize the posterior distribution. In this section, we'll cover plots related to the parameter estimates: density plots, ridge plots and corner plots. #### 3.12.1 Density Plot A density plot shows a smoothed version of the histogram of a parameter value, giving an approximate probability density function for the marginal posterior of each parameter. This helps us visualize the shape of the marginal posterior of each parameter. If you run multiple Markov chains, the plot will show overlapping densities for each density distinguished by different colors. You can plot density plots with the function density_plot. Listing 17 and Figure 6 show the code and the resulting density plot, respectively. If you do not specify which parameter you want with the optional keyword argument parameters, the plot will output multiple density plots faceted automatically. parameters accepts a vector of parameters. 
``` 1density_plot(pk_lcmp_fit;parameters=[:tvcl]) ``` Listing 17: Example of a density plot #### 3.12.2 Ridgeline Plot Another common posterior plot is the ridgeline plot, which outputs a single density summarizing all the of the sampled chains along with relevant statistical information about your parameter. The information that it outputs is the mean, median, 10% and 90% quantiles, along with 95% and 80% highest posterior density interval (HPDI). You can plot ridgeline plots with the function ridgeline_plot, which has a similar syntax as density_plot. Listing 18 and Figure 7 show the code and the resulting ridgeline plot, respectively. ``` 1ridge_plot(pk_lcmp_fit;parameters=[:tvcl]) ``` Listing 18: Example of a ridgeline plot #### 3.12.3 Corner Plot The corner plot is a plot that showcases scatter plots between different parameters along with marginal histograms in a well-organized template. This can be used to investigate a high correlation between parameter values that can be a source of convergence issues for the MCMC sampler. Listing 19 shows the code for a corner plot for the parameters tvq and tvcl. The output is in Figure 8. ``` 1corner_plot(tres,parameters=[:tvq,:tvvcl]) ``` Listing 19: Example of a corner plot ### Posterior Simulations and Predictions #### 3.13.1 Existing Subjects You can simulate new responses for existing subjects using the parameter values sampled from the posterior stored in the MCMC result. This is not to be confused with prior predictive simulations which use parameter values sampled from the priors. The simobs function can be used to do this: Figure 8: Example of a corner plot. Figure 6: Example of a density plot. Figure 7: Example of a ridgeline plot. ``` 1sims=simobs(tres;samples=100) ``` where tres is the output from the fit or discard functions. The samples keyword argument is the number of sub-samples taken from the MCMC samples. When not set, all of the MCMC samples will be used. By default, all the subjects are simulated. If the keyword argument subject is set to any integer index \(i\), only the \(i^{th}\) subject will be simulated. In the simobs function, there's also a keyword argument: simulate_error. If it is set to true (the default value), Pumas will sample from the response's error model in the @derived block (aka simulation), otherwise, it will return the expected value of the error distribution (aka prediction). The latter is equivalent to using the predict function. #### 3.13.2 New Dose or Covariates It is often useful to do counterfactual simulations by changing some of the variables we have control over and doing what-if analysis. Changing the dose is the most common use case in pharmacometrics but in some cases, covariates may also be changed. To change the dose and/or covariates and reuse the posterior samples for a particular subject, you will first need to define a new subject. You can either define the new subject manually using the Subject constructor, or you can start from a data frame and use the read_pumas function to convert it to a Pumas subject. To learn about the manual Subject constructor, please refer to the Pumas documentation (docs.pumas.ai). To showcase the second approach, assume the original data frame of the entire population is df. 
To multiply the dose of subject i by 3 given the data frame df where the dose amount is located in the first row of the :amt field, you can do: ``` 1subjdf=copy(df[df.id.==i,:]) 2subjdf[1,:amt]=subjdf[1,:amt]*3.0 3new_subj=read_pumas(subjdf)[1] ``` For more on data frame wrangling, please refer to the Pumas documentation (docs.pumas.ai) or tutorials (tutorials.pumas.ai). After defining the new subject new_subj, you can call the following method of simobs: ``` 1simobs(tres,new_subj,subject=i,samples=100) ``` Setting the subject keyword argument to the index i will trigger the use of the MCMC samples for subject i's parameters while using the dose, covariates and time points from new_subj. Note that the index i refers to the index of the subject in the training population passed to the fit function which may not match the ID field of the subject. To simulate for an actually new subject new_subj, where the subject-specfic parameters are sampled from the prior distribution and the population parameters are sampled from the posterior, you can drop the subject keyword argument: ``` 1simobs(tres,new_subj,samples=100) ``` ### Visual Predictive Checks and Simulation Plots A visual predictive check (VPC) of simulations in which the parameter values were sampled from the prior distributions is commonly known as the prior predictive check. Similarly, a VPC of simulations in which the parameter values were sampled from the posterior distribution is commonly known as the posterior predictive check. After calling a prior or posterior simobs call, the result simulation object sims can be passed to the vpc function to compute all of the quantiles necessary for a VPC plot. The function also accepts a categorical variable to stratify the VPC results with the keyword argument stratify_by. The VPC result can be plotted with the vpc_plot function. Listing 20 show the code for running a VPC from simulations. ``` 1vpc_res=vpc(sims) 2vpc_plot(vpc_res) ``` Listing 20: Visual predictive check Figure 9 is an example of an output of such code. To plot the simulated quantile median lines, you can set the keyword argument simquantile_medians=true, e.g: ``` 1vpc_res=vpc(sims) 2vpc_plot(vpc_res,simquantile_medians=true) ``` The resulting plot will look like Figure 10. To further display the observations as points on the VPC plot, you can set the keyword argument observations = true, e.g: ``` 1vpc_res=vpc(sims) 2vpc_plot(vpc_res,simquantile_medians=true,observations=true) ``` The resulting plot will look like Figure 11. For more on the many VPC options available, including changing the covariate or stratification, you can refer to the Pumas documentation (docs. pumas.ai). Instead of a full VPC plot, you can also just plot the simulated responses and observations without the VPC colour bands using the sim_plot function, e.g: ``` 1simplot(sims) ``` An example output is shown in Figure 12. ### Simulation Queries The output of a simobs call stores the simulated observations but also all the intermediate values computed such as: the parameter values used, individual coefficients, dose control parameters, covariates, differential equation solution, etc. There are a number of post-processing operations you can do on the simulation output to compute various queries and summary statistics based on the simulations. The postprocess function is a powerful tool that allows you to make various queries using the simulation results. There are multiple ways to use the postprocess function. 
The first way to use the postprocess function is to extract all of the information stored in the simulation result in the form of a vector of named tuples. Each named tuple has all the intermediate values evaluated when simulating 1 run. Let sims be the output of any simobs operation. Listing 21 shows how to extract all the simulation's intermediate results. ``` 1generated=postprocess(sims) ``` Listing 21: Extract intermediate values obstimes was set instead when calling simobs, the time-dependent variables will be evaluated at the time points in obsttimes instead. The second way to use postprocess is by passing in a post-processing function. The postprocessing function can be used to: * Transform the simulated quantities, or * Compare the simulated quantities to the observations. We use the do syntax here which is short for passing in a function as the first argument to postprocess. The 'do' syntax to pass in a post-processing function is shown in Listing 22. ``` 1postprocess(sims)dogen,obs 2... 3end ``` Listing 22: Compare generated quantities and observations where gen is the named tuple of all generated quantities from 1 simulation run and obs is the named tuple of observations. For instance to query the ratio of simulated observations conc that are higher than the observed quantity conc at the observations' time points, you can use the code in Listing 23. This is sometimes called the Bayesian p-value which is expected to be around 0.5. ``` 1postprocess(sims)dogen,obs 2sum(gen.conc.>obs.conc)/length(gen.conc) 3end ``` Listing 23: Bayesian p-value per simulation gen.conc is the simulated vector of conc whose length is the same as the number of observation time points. obs.conc is the observation vector conc. gen.conc.> obs.conc returns a vector of true/false, with one element for each time point. The sum of this vector gives the number of time points where the simulation was higher than the observation. Dividing by the number of time points gives the ratio. When using postprocess in this way, the output is always a vector of the query results, one number for each simulation. In the query function body, you can choose to use only gen or only obs but the header must always have both gen and obs. The third way to use the postprocess function is to compute summary statistics of the simulated quantities or of functions of the simulated quantities. Summary statistics can be computed by passing a statistic function as the stat keyword argument. For example in order to estimate the probability that a simulated value is higher than an observation, you can use the code in Listing 24. ``` 1postprocess(sims,stat=mean)dogen,obs 2gen.conc.>obs.conc 3end ``` Listing 24: Mean Bayesian p-value This function will do 2 things: 1. Concatenate the query results (e.g. gen.conc.>obs.conc) from all the simulation runs into a single vector. 2. Compute the mean value of the combined vector. Alternatively, you can use the mean function to do the same thing without using the keyword argument. Listing 25 will call the postprocess function under the hood. ``` 1mean(sims)dogen,obs 2gen.conc.>obs.conc 3end ``` Listing 25: Mean Bayesian p-value using the mean function The result of this operation will be a scalar equal to the mean value of the _concatenated_ vector of queries. In order to get the probability that the simulated quantity is higher than the observation _for each time point_, you can call the mean function externally as shown in Listing 26. 
```
generated = postprocess(sims) do gen, obs
    gen.conc .> obs.conc
end
mean(generated)
```
Listing 26: Mean Bayesian p-value using the mean function

This returns a vector of probabilities of the same length as the number of time points without concatenating all the queries together.

To compute a summary statistic of all the generated quantities, you can also use the code in Listing 27.

```
postprocess(sims, stat = mean)
```
Listing 27: Mean generated quantities without specifying a post-processing function

This is also equivalent to the shorter version in Listing 28.

```
mean(sims)
```

Beside mean, you can also use any of the following summary statistic functions in the same way:
* std for element-wise standard deviation
* var for element-wise variance
* cor for correlation between multiple quantities
* cov for covariance between multiple quantities

These functions can be passed in as the stat keyword argument to postprocess, or they can be used in the short form, e.g.:

```
generated = postprocess(sims, stat = std) do gen, obs
    ...
end
std(sims) do gen, obs
    ...
end
std(sims)

generated = postprocess(sims, stat = var) do gen, obs
    ...
end
var(sims) do gen, obs
    ...
end
var(sims)
```

The cor and cov statistics are unique in that they require a post-processing function which outputs a vector. For example, to estimate the correlation between the CL and Vc parameters in a 1-compartment model, you can use either of the following:

```
postprocess(sims, stat = cor) do gen, obs
    [gen.CL[1], gen.Vc[1]]
end
cor(sims) do gen, obs
    [gen.CL[1], gen.Vc[1]]
end
```

Note that gen.CL is a vector of simulated CL values for all the time points. But since the value is constant across time, we can use the first element gen.CL[1] only. cov can be used instead of cor to compute the covariance matrix. The output of this operation is either a correlation or a covariance matrix.

### Non-Compartmental Analysis (NCA) Parameters

You can easily integrate non-compartmental analysis (NCA) parameters such as the area under the curve (auc) and maximum concentration (cmax) in the simulation using the @observed block in the Pumas model, e.g.:

```
@derived begin
    cp = @. Central / Vc
    conc ~ @. LogNormal(log(cp), σ)
end
@observed begin
    nca := @nca cp
    auc = NCA.auc(nca)
    cmax = NCA.cmax(nca)
end
```

The auc and cmax will now get stored in the output of simobs and can be queried using the various summary statistics queries. For example, the following estimates the probability that auc is greater than 200 and cmax is less than 30, given the simulations sims (output from simobs).

```
mean(sims) do gen, obs
    gen.auc > 200 && gen.cmax < 30
end
```

For more on NCA integration and parameters, please refer to the Pumas documentation (docs.pumas.ai).

### Crossvalidation and Expected Log Predictive Density

Crossvalidation is a technique for evaluating a model's predictive accuracy on unseen data, aka out-of-sample data. This is done by systematically leaving some data out from the training data8 when performing Bayesian inference, followed by an evaluation of the average predictive accuracy of the MCMC samples using the data that was left out. Each iteration of the crossvalidation routine leaves a different subset of the data out of training and uses some or all of it for evaluating the prediction accuracy. The metric used for evaluating the prediction accuracy is typically the (conditional or marginal) log likelihood of each of the MCMC samples given the unseen data. The predictive performance metric is then averaged out across the iterations of the crossvalidation routine.
Footnote 8: data used during the Bayesian inference to get samples from the posterior

You can do almost all types of crossvalidation for hierarchical models using Pumas. The main function that performs crossvalidation in Pumas is the crossvalidate function. There are 2 inputs to crossvalidate:

1. The MCMC result from fit or discard. We will call this res.
2. The crossvalidation algorithm. Let's call it cv_method.

The call syntax is cv_res = crossvalidate(res, cv_method). To estimate the expected log predictive density (ELPD) given cv_res, you can use the elpd function: elpd(cv_res). There are 2 families of crossvalidation algorithms available in Pumas:

1. Resampling-based CV, which re-runs the MCMC for each data point or subset left out. This algorithm is constructed using the ExactCrossvalidation struct.
2. PSIS-based CV (Vehtari et al, 2015), which uses an importance sampling and weight smoothing approach to avoid the need for resampling. This algorithm is constructed using the PSISCrossvalidation struct.

The constructor for a resampling-based CV method is:
```
cv_method = ExactCrossvalidation(; split_method, split_by, ensemblealg = EnsembleThreads())
```
where split_method and split_by are keyword arguments that must be set and ensemblealg defaults to the use of multi-threading. This defines an instance of the ExactCrossvalidation algorithm for crossvalidation. In this algorithm, the fitting is re-run while leaving out a subset of the data each time. The way by which the data is split between training and validation sets is determined using the keyword arguments split_method and split_by. The split_method argument can be of any of the following types:

1. LeaveK for leave-K-out crossvalidation
2. KFold for K-fold crossvalidation
3. LeaveFutureK for leaving K future points out at a time

The split_by keyword argument can be of any of the following types:

1. BySubject for leaving out subjects
2. ByObservation for leaving out individual observations per subject

Each of the data-splitting methods will be discussed in the next subsections.

Similar to the resampling-based crossvalidation, the constructor for a PSIS-CV method is:
```
cv_method = PSISCrossvalidation(; split_method, split_by, pareto_shape_threshold = 0.7)
```
where split_method and split_by are keyword arguments that must be set. This defines an instance of the PSISCrossvalidation algorithm for crossvalidation. The split_method and split_by keyword arguments are similar to the resampling-based CV case. The pareto_shape_threshold = 0.7 keyword argument will result in the removal of any CV run that leads to a Pareto shape parameter of more than 0.7 when computing the expected log predictive density (ELPD) estimate. This can be useful to avoid a few bad runs rendering the whole PSIS-CV method useless. Ideally, one would re-run the inference for the subset of CV runs where the shape parameter exceeded the threshold, but this is not implemented in Pumas yet as of the time of this writing. Follow the Pumas documentation (docs.pumas.ai) for updates on the latest features available.

#### 3.17.1 Leave-K-Out

The constructor for the leave-K-out data splitting algorithm is:
```
split_method = LeaveK(; K = 5, shuffle = false, rng = nothing)
```
In this algorithm, the data is split multiple times into 2 disjoint groups, each time starting from the full data set. The 2 groups are typically called the training and validation subsets, where the validation subset has K data points.
In the next iteration, the whole data set is re-split using another disjoint subset of K data points as the validation subset. This process is done repeatedly until almost each data point has shown up in 1 and only 1 validation subset. The data is typically a vector of some sort, e.g. observations or subjects, and the splittings are order-dependent. Before performing the splittings, you can randomly shuffle the data vector by setting the shuffle keyword argument to true (default is false) getting rid of the sensitivity to the original order of the data. You can additionally pass an optional pseudo-random number generator rng to control the pseudo-randomness for reproducibility. Assume some dummy original data ["A", "B", "C", "D"] which resembles the subjects or observations. Leave-one-out splitting without shuffling results in the data splittings shown in Table 3. where each data point shows once and only once in a validation subset. Leave-2-out splitting without shuffling results in the data splittings shown in Table 4. #### 3.17.2 K-Fold The constructor for the K-fold data splitting algorithm is: ``` split_method=KFold(;K=5,shuffle=false,rng=nothing) ``` In this algorithm, the data is split K times into 2 disjoint groups, each time starting from the full data set. The 2 groups are typically called training and validation subsets, where the validation subset has floor(N / K)9 data points, N being the total number of data points. In the next iteration, the whole data set is re-split using another disjoint validation subset of floor(N / K) different points, disjoint from the previous validation subsets. This process is done iteratively until almost each data point has shown up in 1 and only 1 validation subset. If N is divisible by K, each point will show up in 1 and only 1 validation subset. Otherwise, the remaining points will be part of the training subset for all the splittings and will not show up in any validation subset. Footnote 9: floor is a function that rounds down to an integer. The data is typically a vector of some sort, e.g. observations or subjects, and the splittings are order-dependent. Before performing the splittings, you can randomly shuffle the data vector by setting the shuffle keyword argument to true (default is false) getting rid of the sensitivity to the original order of the data. You can additionally pass an optional pseudo-random number generator rng to control the pseudo-randomness for reproducibility. Assume some dummy original data ["A", "B", "C", "D"] which resembles the subjects or observations. 4-fold splitting without shuffling results in the data splittings shown in Table 5, where each data point showed once and only once in a validation subset. 2-fold splitting without shuffling results in the data splittings shown in Table 6. #### 3.17.3 Leave-Future-K The constructor for the leave-future-K data splitting algorithm is: ``` split_method=LeaveFutureK(;K=1,minimum=2) ``` In this algorithm, the data is assumed to be a time series. The goal is to split the data into "past" and "future". Using this algorithm, the data is split multiple times into 3 disjoint groups where the third group is discarded, each time starting from the full data set. The first 2 groups are typically called the past/training subset and the future/validation subset, where the future validation subset has K future data points. In the next iteration, the whole data set is then re-split using another disjoint subset of K data points as the future validation subset. 
This process is done iteratively, starting from the full data set and moving backward in time, until the training subset has less than a pre-set minimum number of points remaining. Using this method, each data point can show up in at most 1 future validation subset. The default values of K and minimum are 1 and 2 respectively. Assume the original data is ["A", "B", "C", "D", "E", "F"]. Leave-1-future-out splitting with minimum = 2 results in the data splittings shown in Table 7, where the remaining points are discarded. Leave-2-future-out splitting with minimum = 2 results in the data splittings shown in Table 8.

| Training subset | Validation subset |
| --- | --- |
| ["A", "B", "C"] | ["D"] |
| ["A", "B", "D"] | ["C"] |
| ["A", "C", "D"] | ["B"] |
| ["B", "C", "D"] | ["A"] |

Table 3: Leave-one-out splitting.

| Training subset | Validation subset |
| --- | --- |
| ["A", "B"] | ["C", "D"] |
| ["C", "D"] | ["A", "B"] |

Table 4: Leave-2-out splittings.

| Training subset | Validation subset |
| --- | --- |
| ["A", "B", "C"] | ["D"] |
| ["A", "B", "D"] | ["C"] |
| ["A", "C", "D"] | ["B"] |
| ["B", "C", "D"] | ["A"] |

Table 5: 4-fold splittings.

| Training subset | Validation subset |
| --- | --- |
| ["A", "B"] | ["C", "D"] |
| ["C", "D"] | ["A", "B"] |

Table 6: 2-fold splitting.

#### 3.17.4 Subject-based Splitting

The constructor for the subject-based splitting method is:
```
split_by = BySubject(; marginal = LaplaceI())
```
Using this method, each subject is treated as a single data point. The predictive log-likelihood computed for each subject can be either the marginal log-likelihood or the conditional log-likelihood. This method has one keyword argument, marginal. If marginal is set to nothing, the predictive log-likelihood computed for each subject is the conditional log-likelihood using the typical values for the parameters. Otherwise, the predictive log-likelihood computed for each subject is the marginal log-likelihood using marginal as the marginalization algorithm. The default value of marginal is LaplaceI(), which uses the Laplace method to integrate out the subject-specific parameters. Other alternatives include FOCE() and FO().

#### 3.17.5 Observation-based Splitting

The constructor for the observation-based splitting method is:
```
split_by = ByObservation(; allsubjects = true)
```
Using this method, each observation or collection of observations is treated as a single data point. When computing the predictive log-likelihood using this method, the predictive log-likelihood computed is the conditional log-likelihood of one or more observations for one or more subjects. This method has one keyword argument, allsubjects. If allsubjects is set to true (the default value), the \(i^{th}\) observations of all the subjects are grouped together into a single data point. This assumes all subjects have the same number of observations. If allsubjects is set to false, then each observation of each subject is its own data point. Assume there are 2 subjects and 3 observations per subject. When using split_method = LeaveK(K = 1) as the splitting method together with split_by = ByObservation(allsubjects = false), the training and validation splittings are shown in Table 9. On the other hand, if allsubjects is set to true, the training and validation splittings are shown in Table 10.

#### 3.17.6 Examples

Assume there are 5 subjects and 10 observations per subject and that res is the result of the fit or discard function.
The following are some of the combinations in which the above inputs can be used:

* Leave-one-observation-out cross-validation, leaving 1 observation out for all the subjects at a time. allsubjects = true means that the same observation index is removed for all the subjects, e.g. the 10th observation of all the subjects is used for validation in the first run, then the 9th observation is used for validation in the second run, etc.
```
split_method = LeaveK(K = 1)
split_by = ByObservation(allsubjects = true)
cv_method = ExactCrossvalidation(; split_method = split_method, split_by = split_by, ensemblealg = EnsembleThreads())
cv_res = crossvalidate(res, cv_method)
```

| Past subset | Future subset |
| --- | --- |
| ["A", "B", "C", "D", "E"] | ["F"] |
| ["A", "B", "C", "D"] | ["E"] |
| ["A", "B", "C"] | ["D"] |
| ["A", "B"] | ["C"] |

Table 7: Leave-1-future-out splittings.

| Past subset | Future subset |
| --- | --- |
| ["A", "B", "C", "D"] | ["E", "F"] |
| ["A", "B"] | ["C", "D"] |

Table 8: Leave-2-future-out splittings.

| Training subset | Validation subset |
| --- | --- |
| Subj 1 (obs 1, 2, 3), subj 2 (obs 1, 2) | Subj 2 (obs 3) |
| Subj 1 (obs 1, 2, 3), subj 2 (obs 1, 3) | Subj 2 (obs 2) |
| Subj 1 (obs 1, 2, 3), subj 2 (obs 2, 3) | Subj 2 (obs 1) |
| Subj 1 (obs 1, 2), subj 2 (obs 1, 2, 3) | Subj 1 (obs 3) |
| Subj 1 (obs 1, 3), subj 2 (obs 1, 2, 3) | Subj 1 (obs 2) |
| Subj 1 (obs 2, 3), subj 2 (obs 1, 2, 3) | Subj 1 (obs 1) |

Table 9: Training and validation splits using split_method = LeaveK(K = 1) and split_by = ByObservation(allsubjects = false).

| Training subset | Validation subset |
| --- | --- |
| Subj 1 (obs 1, 2), subj 2 (obs 1, 2) | Subj 1 (obs 3), subj 2 (obs 3) |
| Subj 1 (obs 1, 3), subj 2 (obs 1, 3) | Subj 1 (obs 2), subj 2 (obs 2) |
| Subj 1 (obs 2, 3), subj 2 (obs 2, 3) | Subj 1 (obs 1), subj 2 (obs 1) |

Table 10: Training and validation splits using split_method = LeaveK(K = 1) and split_by = ByObservation(allsubjects = true).

### Information Criteria

The ELPD model evaluation metric computed from the crossvalidation output is theoretically similar to the so-called Widely Applicable Information Criterion (WAIC) (Vehtari et al, 2015). More precisely, -2 times the ELPD is comparable to the WAIC. A higher ELPD is better and a lower WAIC is better.
When the ELPD is estimated using the PSIS leave-one-out (LOO) crossvalidation method, -2 times the ELPD estimate is sometimes known as the LOOIC. Besides the ELPD estimates, Pumas also supports common information criteria (Burnham and Anderson, 2002), such as the Akaike information criterion (AIC), the corrected AIC (AICc), the Bayesian information criterion (BIC), and the WAIC. To estimate these, we first need to compute the pointwise log-likelihoods by some definition of "pointwise". To do this, you can call:
```
pl = loglikelihood(tres; split_method, split_by)
```
where tres is the output of fit or discard and split_method and split_by are the keyword arguments explained in Section 3.17, which define what a point is and which log-likelihood to compute. To calculate the information criteria using the pointwise log-likelihoods, you can then use any of the following functions:
```
Pumas.aic(pl)
Pumas.aicc(pl)
Pumas.bic(pl)
Pumas.waic(pl)
```

## 4 Background and Intuition

In this section, the notation used and the mathematical background of Bayesian inference in pharmacometrics will be presented. We include a brief introduction to Bayesian statistics and where it is useful in pharmacometrics, followed by an intuitive explanation of Markov Chain Monte Carlo (MCMC) and the No-U-Turn sampler (NUTS) algorithm (Hoffman and Gelman, 2014; Betancourt, 2017). We then discuss the intuition and some of the math behind prior selection, MCMC convergence diagnostics, and cross-validation and model selection. This section should prepare the readers for using the Bayesian workflow in Pumas or any standard Bayesian analysis tool by giving them _working knowledge_ of highly technical concepts, using intuition and light mathematics.

### 4.1 Notation

For convenience of notation, for the rest of this paper we use:

1. \(\theta\) to refer to all the population-level parameters, including all of \((\theta,\Omega,\sigma)\).
2. \(\eta\) to refer to the patient-specific parameters for all the subjects, where \(\eta_{i}\) refers to the subject-specific parameters of subject \(i\).
3. \(x\) to refer to the covariates for all the subjects, where \(x_{i}\) refers to the subject-specific covariates of subject \(i\). To be more rigorous, \(x_{i}\) also includes all the time points at which the observations were made for subject \(i\).
4. \(y\) to refer to the observed response for all the subjects, where \(y_{i}\) refers to the subject-specific response of subject \(i\).
5. \(p(A=\alpha\mid B=\beta)\) to denote the probability density/mass of the random variable \(A\) taking the value \(\alpha\) conditional on the variable \(B\) taking the value \(\beta\). \(B=\beta\) can generally be replaced with multiple variables, e.g. \(p(A=\alpha\mid B=\beta,C=c)\). If \(A\) is a continuous random variable, \(p(A=\alpha\mid B=\beta)\) refers to the probability _density_ of \(A=\alpha\) conditioned on \(B=\beta\). If \(A\) is a discrete random variable, \(p(A=\alpha\mid B=\beta)\) refers to the probability _mass_ of \(A=\alpha\) conditioned on \(B=\beta\). When \(\alpha\) and/or \(\beta\) are dropped, they can be replaced by the value \(A\) and/or \(B\) respectively, e.g. \(p(A=A\mid B=B)\), to be understood from the context. This is a slight abuse of notation, using the same symbol \(A\)/\(B\) to refer to both the random variable and the specific value in its support, but this is common in probability theory.
6. \(p(A,B\mid C)\) to denote \(p(A\mid B,C)\times p(B\mid C)\), which is equal to \(p(B\mid A,C)\times p(A\mid C)\), and which could be the product of probability densities and/or masses depending on the supports of \(A\) and \(B\).
7. \(D\) to refer to all the observed data, including both \(x\) and \(y\).
8. \(p(y_{i}\mid x_{i},\eta_{i},\theta)\) to denote the _conditional_ likelihood of \((\eta_{i},\theta)\) given subject \(i\)'s observations \((x_{i},y_{i})\). Recall that the likelihood is a function of the parameters given the data, but it is also the probability of observing the data given the model's parameters.
9. \(p(y\mid x,\eta,\theta)\) to denote the _conditional_ likelihood of \((\eta,\theta)\) given all the subjects' observations \((x,y)\). Given the hierarchical nature of pharmacometric models, this is equal to \(\prod_{i}p(y_{i}\mid x_{i},\eta_{i},\theta)\).
10. \(p(y_{i}\mid x_{i},\theta)\) to denote the _marginal_ likelihood of \(\theta\) after marginalizing \(\eta_{i}\), given subject \(i\)'s observations \((x_{i},y_{i})\). This is equal to \(\int p(y_{i}\mid x_{i},\eta_{i},\theta)\cdot p(\eta_{i}\mid\theta)\,d\eta_{i}\).
11. \(p(y\mid x,\theta)\) to denote the _marginal_ likelihood of \(\theta\) given all the subjects' observations \((x,y)\). Given the hierarchical nature of pharmacometric models, this is equal to \(\prod_{i}p(y_{i}\mid x_{i},\theta)\).

Some additional assumptions to keep in mind are that:

1. \(y_{i}\) may not be a scalar; instead, it could and often is a subject-specific time series response or multiple such time series responses.
2. \(\eta_{i}\) is not generally a scalar; it can be composed of multiple subject-specific parameters with a different prior distribution assigned to each parameter.
3. \(x_{i}\) is not generally a scalar; it can be multiple time-independent values or a combination of time-independent values and some time series. It also includes all the time points at which its corresponding \(y_{i}\) is observed.
4. \(p(A\mid B)\) will be used in equations to denote the probability density/mass function, but in text it may be used to also refer to the distribution itself as an object/concept, depending on the context.

Figure 13 shows the typical model structure in pharmacometrics using the above notation when there are 3 subjects in the population. Additionally, Figure 14 shows a dummy Pumas model highlighting where each variable in Figure 13 is defined.

Figure 13: Schematic of the hierarchical structure of models typically used in pharmacometrics when there are only 3 subjects in the population. The schematic can be trivially extended to more subjects. See the notation section (4.1) to understand the notations.

Figure 14: A dummy Pumas model showing where each variable in Figure 13 is defined.

### 4.2 Bayesian Statistics

Bayesian statistics is the use of **Bayes' theorem** as the procedure to estimate parameters of interest or unobserved data (Gelman et al, 2013a). Bayes' theorem, named after Thomas Bayes10, tells us how to "invert" conditional probabilities, going from \(p(B\mid A,C)\) to \(p(A\mid B,C)\), where \(C\) is optional:
Footnote 10: **Thomas Bayes** (1701 - 1761) was a statistician, philosopher, and Presbyterian minister who is known for formulating a specific case of the theorem that bears his name: Bayes' theorem. Bayes never published what would become his most famous accomplishment; his notes were edited and published posthumously by his friend **Richard Price**.
The theorem's official name is sometimes given as **Bayes-Price-Laplace**: **Bayes** was the first to discover it, **Price** edited his notes, transcribed them into mathematical notation, and read them to the Royal Society of London, and **Laplace**, without any previous contact with either, independently rediscovered the theorem at the end of the 18th century in France while using probability for statistical inference with census data in the Napoleonic era.

\[p(A\mid B,C)=\frac{p(A\mid C)\cdot p(B\mid A,C)}{p(B\mid C)} \tag{3}\]

In the context of statistics, Bayes' rule can be used to calculate the probability that each hypothesis is true given the observations. Assume we have 10 hypotheses \(H_{1},\ldots,H_{10}\), where each has a prior probability \(p(\text{truth}=H_{i})\). We can use Bayes' rule to calculate the posterior probability \(p(\text{truth}=H_{i}\mid\text{data})\) for each hypothesis \(H_{i}\) using:

\[p(\text{truth}=H_{i}\mid\text{data})=\frac{p(\text{data}\mid\text{truth}=H_{i})\cdot p(\text{truth}=H_{i})}{p(\text{data})} \tag{4}\]

where the denominator can be written as

\[p(\text{data})=\sum_{i=1}^{10}p(\text{data}\mid\text{truth}=H_{i})\cdot p(\text{truth}=H_{i})\]

which is the sum of the likelihoods of all the hypotheses, \(p(\text{data}\mid\text{truth}=H_{i})\), weighted by their respective prior probabilities \(p(\text{truth}=H_{i})\). While the denominator has a profound statistical meaning, it can also be viewed pragmatically as a normalization constant chosen such that \(\sum_{i=1}^{10}p(\text{truth}=H_{i}\mid\text{data})=1\). Since the denominator is the sum of the numerator terms for all \(i\), the sum of the resulting posterior probabilities is guaranteed to be 1.

\(p(\text{truth}=H_{i})\) describes what is commonly known as the prior probability of a hypothesis. This can encode the modeller's domain knowledge, giving unequal probabilities to different hypotheses upfront, prior to observing any data. Alternatively, assigning equal probability to each hypothesis can also be done. Given multiple hypotheses and some data, the hypothesis with the highest probability given the observed data, \(p(\text{truth}=H_{i}\mid\text{data})\), is the most plausible one.

A hypothesis is typically a combination of a model and parameter values for the model's parameters. In the pharmacometrics context, let each set of parameter values \((\eta,\theta)\) given a specific model be a hypothesis. In this case, we have a continuum of hypotheses rather than a discrete set of hypotheses. Assuming a single model, which we condition upon by putting it on the right-hand side of the \(\mid\), and using pharmacometrics notation, Bayes' theorem for the hypotheses continuum can be written as:

\[p(\eta,\theta\mid x,\text{model},y)=\frac{p(y\mid x,\text{model},\eta,\theta)\cdot p(\eta,\theta\mid x,\text{model})}{p(y\mid x,\text{model})} \tag{5}\]

where \(A\) in the general form is replaced by \((\eta,\theta)\), \(B\) is \(y\), and \(C\) is \((x,\text{model})\)11. Note that we conditioned on \(x\) everywhere in the above equation because we are generally not interested in modelling the probability of \(x\) per se, but rather we are interested in the probability of \(y\) given \(x\) (\(y\mid x\)).

Footnote 11: We can alternatively make the model part of \(A\) instead of \(C\) when model selection is relevant but we don't consider this case for simplicity.
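As a toy numerical illustration of Eq 4 for a discrete set of hypotheses (independent of Pumas), the sketch below computes posterior probabilities from made-up prior probabilities and likelihoods; the vectors prior and likelihood contain hypothetical numbers, not values from any real study.
```
# Toy illustration of Eq 4 with made-up numbers (not from any real analysis).
prior = [0.2, 0.3, 0.5]          # p(truth = H_i) for 3 hypotheses
likelihood = [0.01, 0.10, 0.05]  # p(data | truth = H_i)

evidence = sum(likelihood .* prior)          # p(data), the denominator in Eq 4
posterior = likelihood .* prior ./ evidence  # p(truth = H_i | data)

sum(posterior)  # ≈ 1 by construction, as discussed above
```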
Also note that \(p(\eta,\theta\mid x,\text{model})\), known as the prior probability, can be replaced by \(p(\eta,\theta\mid\text{model})\) since the prior typically doesn't depend on the covariates in pharmacometrics. In Pumas syntax, this means that the covariates from the @covariates block are not used in the prior specification in the @param or @random blocks. When only a single model is considered, we can drop it from the equations; if the prior is additionally independent of the covariates, Bayes' theorem simplifies to:

\[p(\eta,\theta\mid x,y)=\frac{p(y\mid x,\eta,\theta)\cdot p(\eta,\theta)}{p(y\mid x)} \tag{6}\]

To further simplify the notation, we denote \((x,y)\) by \(D\), \(p(y\mid x)\) by \(p(D)\) and \(p(y\mid x,\eta,\theta)\) by \(p(D\mid\eta,\theta)\)12. This gives us the more standard looking Bayes' theorem:

Footnote 12: This is a slight abuse of notation because we chose to put \(D\) on the left-hand side even though it includes \(x\), which is on the right-hand side.

\[p(\eta,\theta\mid D)=\frac{p(D\mid\eta,\theta)\cdot p(\eta,\theta)}{p(D)} \tag{7}\]

Pharmacometric models typically describe some data generating process from which we can simulate synthetic data \(y\) given: 1) a set of covariates \(x\), 2) a specific model, and 3) a set of parameters \((\eta,\theta)\). Such a model describes a probability distribution for the response \(y\), \(p(D\mid\eta,\theta)\). This makes computing \(p(D\mid\eta,\theta)\) computationally straightforward. Similarly, the prior probability \(p(\eta,\theta)\) is typically defined in terms of standard probability distributions with known and computationally tractable probability density or mass functions.

The main issue when trying to apply Bayes' theorem in practice is the calculation of the denominator term \(p(D)=p(y\mid x)\). When all the parameters are continuous, this can be written as an integral:

\[p(D)=\int\int p(D\mid\eta,\theta)\cdot p(\eta,\theta)\,d\eta\,d\theta \tag{8}\]

This is known as the marginal probability of the data (also known as the evidence or normalization constant), which is the weighted average of the conditional probabilities \(p(D\mid\eta,\theta)\) given all possible values of \((\eta,\theta)\), weighted by their prior probabilities. \(p(D\mid\eta,\theta)\) is typically known as the conditional likelihood and \(p(\eta,\theta\mid D)\) is known as the posterior probability of \((\eta,\theta)\) after observing \(D\). To better make sense of \(p(D)\), it's helpful to bring back the conditioning on the model and think of \(p(D\mid\text{model})\) as the marginal _likelihood_13 of the model after integrating out all the population and subject-specific parameters.

Footnote 13: Note that in statistics in general, \(p(D\mid\theta)\) is the probability of \(D\) given \(\theta\) and the _likelihood_ of \(\theta\) given \(D\).

The computation of the high dimensional integral over \((\eta,\theta)\) is intractable in the general case. But why do we need the posterior probability? Often we are more interested in making predictions, using the posterior probability to weigh all the likely hypotheses when making predictions. Assume \(\hat{y}\) is either:

1. The unknown response of a new subject given the subject's known covariates \(\hat{x}\), or
2. The unknown partial response (e.g. at future time points) of an existing subject with a previously observed response that is part of \(D\) and some known covariates \(\hat{x}\).

The covariates \(\hat{x}\) include the classic pharmaceutical covariates, e.g.
age and weight, but also include the time points at which the response \(\hat{y}\) is defined if it is a time series. One can write the average prediction for \(\hat{y}\) (using the posterior probability as weights) as: \[E[y\mid x=\hat{x},D]=\int\hat{y}\times p(y=\hat{y}\mid x=\hat{x},D)\,d\hat{y} \tag{9}\] where \(p(y=\hat{y}\mid x=\hat{x},D)\) is defined as: \[p(y=\hat{y}\mid x=\hat{x},D)=\] \[\int\int p(y=\hat{y}\mid x=\hat{x},\eta,\theta)\times p(\eta, \theta\mid D)\,d\eta d\theta \tag{10}\] where \(p(\eta,\theta\mid D)\)14 is the posterior probability and \(D\) refers to all the previously seen data \((x,y)\) excluding \(\hat{x}\). There are 2 problems with the above integration: Footnote 14: To use the product rule for probabilities in the standard way, we should have used \(p(\eta,\theta\mid\hat{x},D)\) instead but \(\hat{x}\) doesn’t contribute to the posterior given that \(\hat{y}\) is not observed yet, so the 2 terms are equal. 1. We cannot evaluate \(p(\eta,\theta\mid D)\) using Eq 7 because computing \(p(D)\) using Eq 8 requires a high dimensional integral which is computationally intractable. 2. Even if we are able to estimate \(p(D)\), computing \(p(y=\hat{y}\mid x=\hat{x},D)\) using Eq 10 requires another high dimensional integral over \((\eta,\theta)\). Both problems are related to the inability to tractably compute high dimensional integrals. When some or all of the parameters are discrete, the corresponding integrals become summations instead. The summation is still intractable when the parameters' dimension is high because there is a combinatorial growth of the possible combinations of values of \(\eta,\theta\) as their dimension increases. In pharmacometrics, parameters are usually continuous so we focus on continuous parameters for the rest of this paper. ### Prior Selection In this section, the main focus is on understanding how to choose good priors. We will attempt to answer the following questions: 1. When and how are priors useful? 2. When and how are priors harmful? The specifics of which priors are available in Pumas, their parameters, and how to define them can be found in the workflow section (Section 3). #### 4.3.1 Overview A prior distribution over a parameter in Bayesian statistics represents the state of belief in the values of a parameter _prior_ to observing any data. For instance, a univariate Gaussian prior distribution of \(N(0,2.0)\) with mean 0 and standard deviation 2, when used on a scalar parameter, means that we think this parameter has a probability mass of \(\approx 99.7\%\) of being between -6 and 6. More generally, a prior distribution with probability density function \(p(x)\) means that we think the parameter has a probability mass of \(\int_{a}^{b}p(x)dx\) (area under the curve) of being between \(a\) and \(b\). Once data is observed, the prior distribution is _updated_ using the observations and the likelihood values to arrive at the _posterior_ distribution. The posterior distribution represents the state of belief in the values of a parameter _after_ the data has been observed. If even more data is collected, in theory we can use the old posterior as the new prior distribution in the analysis of the new data only15. In practice, because the posterior distribution typically doesn't have a closed form solution, it can be tricky to use it as a prior in any new analysis. Therefore, a full analysis using all of the old and new data together with the _old priors_ may have to be performed. 
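As a quick check of the coverage claim made in the Overview above (roughly 99.7% of the mass of a Normal(0, 2) prior lies between -6 and 6, i.e. within 3 standard deviations of the mean), one can evaluate the cumulative distribution function, for example with the Distributions.jl package. This is a standalone illustration rather than part of the Pumas workflow, and the variable names are chosen here for the example.
```
using Distributions

prior = Normal(0.0, 2.0)              # mean 0, standard deviation 2
cdf(prior, 6.0) - cdf(prior, -6.0)    # ≈ 0.9973, the prior mass between -6 and 6

# More generally, the mass between a and b is cdf(prior, b) - cdf(prior, a)
a, b = -4.0, 4.0
cdf(prior, b) - cdf(prior, a)         # ≈ 0.954 (within ±2 standard deviations)
```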
Footnote 15: We can't use the data twice for the same analysis. If the old data was already used to define the prior in the new analysis, then only the new data should be used in the new analysis.

In some ways, the prior distribution is analogous to the domain of a parameter in the non-Bayesian workflow and the posterior distribution is like the maximum likelihood estimate. Before observing data, we can only speak about the domain of the parameter since we don't know its value. After observing data, we can fit the model to the data and identify the best fitting parameter values. In non-Bayesian analyses, we typically choose a domain for each parameter to help the optimization algorithm reach reasonable parameter values without wasting time trying values outside of the reasonable domains. For instance, we know that any covariance matrix parameter has to be positive definite, so the optimization algorithm should never have to explore values for a covariance matrix parameter that violate this constraint. Prior distributions generalize this idea not only by having an underlying domain implied by the distribution (known as the support of the distribution) but also by allowing the specification of differential preference for certain parameter values over others.

The ability to specify preference in parameter values is a powerful tool that can also be very dangerous if used in a wrong way. When used right, it can allow domain knowledge and results from similar previous studies to be reused in the current study. But when used wrong, it can be used to mask bad science and unethical behaviour using sophisticated math that is hard to scrutinize.

#### 4.3.2 Good, Bad and Harmless Priors

Loosely speaking, priors can be categorized into 2 categories17:

Footnote 17: There are algorithms for updating the posterior samples from a previous study given some new data but we don't cover these here in this paper.

1. Strong priors, i.e. informative priors, and
2. Weak priors, i.e. weakly informative priors.

These are loose categories because, in reality, only the relative strength of a prior compared to the likelihood is significant. Recall that the joint probability used to drive the MCMC sampling is given by:

\[p(D,\eta,\theta)=p(D\mid\eta,\theta)\times p(\eta,\theta) \tag{11}\]

which is the product of the prior probability and the likelihood value. If a lot of data exist, the likelihood term will dominate most priors and the posterior distribution will be mostly reflective of the parameter values that fit the data well. If not enough data exist, the prior and its choice become more significant, since it's possible that some choices of the prior will lead it to dominate the above term. When the prior dominates the likelihood, the posterior distribution will be only a slight shift of the prior distribution towards the parameter values that _actually_ fit the data well. In these cases, the danger of abusing the prior in analyses is higher, and we will discuss scenarios where that can happen in this section.

Harmless priors are priors that largely mimic the purpose of specifying parameter domains in non-Bayesian workflows. These priors have very little to no preference between parameter values and they can be easily overpowered by even a few data points. For instance, a uniform distribution over \([0,10^{10}]\) for a positive parameter can be considered a harmless prior.
This prior only encodes the domain of the parameter without favouring certain values over others in this particular parameterization. Good and bad priors share one thing in common: they are both "_informative_". More precisely, good priors are informative and bad priors are mis-informative. Table 11 summarizes the various types of informative and mis-informative priors. A good informative prior is one that has a good scientific basis and doesn't contradict the data. Given Table 11, there are 2 general ways to define sound priors: 1. Define a weakly informative prior18 and ensure that the conclusion of the study does not change in the limit as the prior gets weaker and weaker. This is a case where we are letting the data speak for itself without imposing our own bias on the results. In this case, we are only using Bayesian inference for its probabilistic soundness and ability to quantify the total uncertainty in the parameters and response even when the model is non-identifiable and/or we only have a few data points. Footnote 18: Very weak priors are sometimes called diffuse priors. 2. Use similar previous studies to guide the prior choice and test that it doesn't contradict the new data. In this case, more justification of the prior distribution is necessary and a proof that the prior doesn't contradict the data is required. One way to show that the prior doesn't contradict the data is to start with no observations at all and with a weakened version of the strong prior, e.g. by increasing the standard deviation. You can then incrementally make the prior stronger again (e.g. by decreasing the standard deviation back) until it reaches the desired level of strength, followed by incrementally adding the (previously removed) observations back to the analysis. If doing so and re-running the analysis at every increment shows a consistent trend in all of the significant statistics (e.g. the probability that the drug is effective), then the prior's strength is aligned with the story told by the data. This can be done using a sequence of prior simulations (when all the data is removed) followed by a combination of MCMC runs and posterior simulations (when the data is added back). For more methods for detecting prior-data conflicts and model mis-informativeness, the readers are referred to Kallioinen et al (2021). Another simple way to detect the potential conflict between data and informative priors is to simulate from the following distribution: \[\begin{split}(\eta,\theta)&\sim p(\eta,\theta)\\ y&\sim p(y\mid\eta,\theta)\end{split} \tag{12}\] where \(p(\eta,\theta)\) is the prior distribution. You can then do a simulation or VPC plot checking the consistency of the data and prior simulations. If the prior is weakly informative or nearly non-informative and we have a lot of data such that the likelihood dominates the prior, prior simulations that are inconsistent with the data in a VPC plot can be ignored. However if the prior is informative, it is based on previous studies and there are not enough data points in the new study to fully dominate the prior, the VPC plot of the prior simulations next to the data can be a useful diagnostic to inspect. Refer to Section 3.2 for how to do this in Pumas. When using a weakly informative prior, you may be inclined to use such simulation plots to select a good prior with a good (but not necessarily tight) coverage of the data. In general, any fine-tuning of the prior based on the data is frowned upon. 
This is because we would then be using the same data twice, once to fine-tune the prior and once to update the prior to get the posterior. This can result in under-estimating the posterior's variance, i.e. over-confident posteriors, which in turn leads to over-confident posterior predictions. This is analogical to replicating some or all of the data points in your data in a frequentist workflow which under-estimates the confidence intervals and standard errors. In cases where sampling fails due to numerical errors, an overly weak prior may be a cause. In this case, one may try changing the prior, e.g. truncating its support to reasonable bounds, to be more consistent with the data. However, only minimal \begin{table} \begin{tabular}{|c|c|l|} \hline **Scientific Basis** & **Contradicts Data?** & **Comment** \\ \hline None & Yes & This is cheating. Using a strong prior that contradicts and over-powers the data with no scientific basis can be used to cook any results we desire. For instance, a drug can be ineffective but if the strong prior says it’s effective with a very high probability and not enough data exists to counter that strong wrong prior, the conclusion of the “analysis” will be that the drug is effective. \\ \hline None & No & This is bad science. Using a strong prior that’s consistent with the data but that over-powers the likelihood with no scientific basis can lead to over-confident predictions and premature conclusions using fewer data points than necessary. This has a similar effect to artificially replicating the data multiple times to artificially reduce the confidence interval in the non-Bayesian workflow. \\ \hline Previous studies & Yes & This is a sign that dis-similar previous studies were used to guide the prior choice and that the prior choice should be revised because it contradicts the new data and can possibly bias the results of the analysis. \\ \hline Previous studies & No & When used with care, results from previous studies (e.g. an approximation of its parameters’ posterior distribution) can be used to guide the selection of good informative priors for a new similar study. These priors should not contradict the new data collected but may help us terminate the new study early using a smaller sample size than otherwise possible. Positively concluding a study early, after it’s become clear that a drug is effective given all the information available, means that more patients can have access to _truly effective_ drugs earlier. This is especially important for rare disease drug development where collecting more data in a study often means adding years to the clinical trial duration. This is a use case that requires more regulatory and industry agreement on best practices for defining informative prior distributions in such studies with the goal of benefiting the patients. \\ \hline \end{tabular} \end{table} Table 11: The table shows a description of a number of ways to choose **informative prior distributions**. Only the last case is a good use of informative priors. **This table is only applicable to informative priors** that may dominate the likelihood, since weakly informative priors that are dominated by the likelihood typically don’t matter as much. such changes can be allowed in the final analysis and a good post-sampling sensitivity analysis study would be needed to ensure that the conclusion of the study is not locally sensitive to the prior. 
More generally, numerical errors are often a sign that the model is too sensitive to some of the parameter values which may imply a structural problem in the model itself. In this case, one should also consider better fixes than simply truncating the priors, e.g. by simplifying the model or re-parameterizing it to avoid numerical instability. The Bayesian workflow paper by Gelman et al (2020) has excellent recommendations and case studies to take heed from when diagnosing failing MCMC runs. #### Correlation vs Covariance When defining models that have a covariance matrix parameter (e.g. the covariance parameter of the multivariate normal prior distribution typically used for subject-specific parameters in pharmacometrics), one is always faced with the following 2 equivalent parameterizations: 1. Use a covariance matrix parameter \(\Omega\), or 2. Use a vector of standard deviations \(\omega\) and a correlation matrix parameter \(C\). One can easily recover \(\Omega\) from \((\omega,C)\) and vice versa. Let \(D_{\omega}\) be the diagonal matrix whose elements are \(\omega\), \(\omega_{i}\) be the \(i^{th}\) element in \(\omega\), and \(\Omega[i,i]\) be the \(i^{th}\) diagonal element of \(\Omega\). The relationships between \(C\), \(\omega\) and \(\Omega\) is given by: \[\Omega =D_{\omega}\times C\times D_{\omega} \tag{13}\] \[C =D_{\omega}^{-1}\times\Omega\times D_{\omega}^{-1}\] \[\omega_{i} =\sqrt{\Omega[i,i]}\] In the non-Bayesian context, the 2 parameterizations are equivalent. However in Bayesian analysis, one should define a prior distribution on the parameters. Since prior distributions are supposed to encode the domain knowledge and state of belief about the values of the parameters, using more intuitive/interpretable parameterizations is generally recommended to better make sense of the prior distributions used. For this reason, some people prefer to use the second parameterization with separate standard deviations vector and a correlation matrix since they are more interpretable. We saw examples of prior distributions that can be used for standard deviation, correlation and covariance matrix parameters in Section 3. ### Markov Chain Monte Carlo (MCMC) Intuition In this section, the use of MCMC will be motivated showing how MCMC can be used to bypass the need for high dimensional integrals (discussed in Section 4.2) for all practical purposes. #### 4.4.1 Inference Markov Chain Monte Carlo (MCMC) (Brooks et al, 2011) bypasses the need to solve the numerical integration problem by sampling19 from the posterior probability distribution \(p(\eta,\theta\mid D)\) directly using only the tractable numerator in Eq 7. This numerator is sometimes called the _joint_ probability since it is just \(p(D,\eta,\theta)=p(D\mid\eta,\theta)\cdot p(\eta,\theta)\). Note the difference between the terms with and without conditioning \(\mid\). Having samples from the posterior allows us to estimate quantities such as: Footnote 19: A sample of a random variable or a sample from its distribution is an instantiation of said random variable. For instance, \([0,1,0,0,1,1]\) are 6 samples from a Bernoulli distribution whose support is the set \(\{0,1\}\). The support of a distribution is the domain of the random variable. For example, the set \(\{0,1\}\) is the support of the Bernoulli distribution and the set of all real numbers is the support of the normal distribution. 
A sample from the posterior distribution can be interpreted as the likely parameter values that could have generated the data that we observed. The term _sample_ can sometimes be used to refer to multiple such samples, to be understood from the context. \[p(f(\eta,\theta)>0\mid D) \tag{14}\] for an arbitrary function \(f\) using a simple count by checking all the samples from the posterior and counting how many satisfy the particular conditions of interest. The ratio of samples satisfying the condition \(f(\eta,\theta)>0\) is the unbiased estimate of the above probability. More concretely, this probability could be \(p(\theta_{i}>0\mid D)\) where \(\theta_{i}\) corresponds to the effect size of an experiment treatment arm compared to the placebo arm. #### 4.4.2 Prediction Besides estimating probabilities of events using the posterior samples, the posterior samples can also be used to make predictions. The functions for performing posterior predictions in Pumas were presented in Section 3.13. Assume we have \(N\) samples from the posterior \(p(\eta,\theta\mid D)\): \[\{(\eta^{(j)},\theta^{(j)}):j\in 1\ldots N\}\] Recall the intractable term in Eq 9 was \(p(y=\hat{y}\mid x=\hat{x},D)\) which can be written as: \[\int\int p(y=\hat{y}\mid x=\hat{x},\eta,\theta)\times p(\eta,\theta \mid D)\,d\eta d\theta\approx\\ \frac{1}{N}\sum_{j=1}^{N}p(y=\hat{y}\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})\] where \(p(y=\hat{y}\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})\) is the conditional probability of \(\hat{y}\mid\hat{x}\) evaluated using the given parameter values. Using the samples, the intractable integral was therefore reduced to a tractable average of \(N\) terms since we can easily evaluate \(p(\hat{y}\mid\hat{x},\eta^{(j)},\theta^{(j)})\) given \((\hat{x},\hat{y},\eta^{(j)},\theta^{(j)})\) and the model. The expectation term in Eq 9 can therefore be approximated by Eq 15, where \(E[y\mid x=\hat{x},\eta^{(j)},\theta^{(j)}]\) is just the mean value of the conditional distribution \(p(y=\hat{y}\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})\) which can be evaluated from a single run of the model. In Pumas syntax, this is the output of the predict function (or the output of simobs with simulate_error = false) with parameters \((\eta^{(j)},\theta^{(j)})\), and covariates \(\hat{x}\). More generally, one can estimate the expectation of an **arbitrary function**\(g(\eta,\theta)\) with respect to the posterior distribution using: \[E[g(\eta,\theta)\mid D] =\int\int g(\eta,\theta)\times p(\eta,\theta\mid D)\,d\eta d\theta\] \[\approx\frac{1}{N}\sum_{j=1}^{N}g(\eta^{(j)},\theta^{(j)})\] For instance, \(g\) could be computing some NCA parameters (Section 3.16) based on the model's prediction or computing any other deterministic quantity that is a deterministic function of the parameters. When defining \(g\) as \(g(\eta^{\prime},\theta^{\prime})=E[y\mid x=\hat{x},\eta=\eta^{\prime},\theta= \theta^{\prime}]\), we recover the prediction special case. And when defining \(g\) as \(g(\eta,\theta)=\mathbb{1}_{f(\eta,\theta)>0}\) for another arbitrary function \(f\), where \(\mathbb{1}_{f(\eta,\theta)>0}\) is the indicator function that is \(1\) when the condition \(f(\eta,\theta)>0\) is satisfied and \(0\) otherwise, we recover the special case of estimating the probability \(p(f(\eta,\theta)>0\mid D)\). In other words, samples from the posterior are almost everything you may ever need to estimate all the quantities of interest needed to make decisions. 
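As a self-contained toy illustration of these Monte Carlo estimates (plain Julia, with draws from a known distribution standing in for MCMC output), the sketch below approximates both an expectation \(E[g(\theta)\mid D]\) and a probability \(p(f(\theta)>0\mid D)\) by simple averages over the samples. The variable names, the stand-in sampling distribution, and the functions g and f are all hypothetical choices for the example.
```
# Toy stand-in for posterior samples: in a real analysis these would be the
# MCMC samples of (η, θ) rather than draws from a known distribution.
using Random, Statistics
Random.seed!(1)
θ_samples = 0.5 .+ 0.2 .* randn(10_000)

g(θ) = exp(θ)    # an arbitrary deterministic function of the parameters
f(θ) = θ - 0.4   # an arbitrary function defining an event of interest

mean(g.(θ_samples))        # ≈ E[g(θ) | D], the average of g over the samples
mean(f.(θ_samples) .> 0)   # ≈ p(f(θ) > 0 | D), the fraction of samples in the event
```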
So how do we obtain such samples without computing the intractable integrals? We use MCMC. #### 4.4.3 Simulation We saw how given \(N\) samples from the posterior, we can compute the average prediction \(E[g(\eta,\theta)\mid D]\approx\frac{1}{N}\sum_{j=1}^{N}g(\eta^{(j)},\theta^{(j )})\) for any particular choice of the function \(g\). Alternatively, you can also obtain a distribution of predictions: \[\{g(\eta^{(j)},\theta^{(j)})\text{ for }j\in 1\ldots N\} \tag{16}\] where \(g(\eta^{\prime},\theta^{\prime})=E[y\mid x=\hat{x},\eta=\eta^{\prime},\theta= \theta^{\prime}]\). This is the MCMC approximation of the distribution of \(g(\eta,\theta)\) where \((\eta,\theta)\sim p(\eta,\theta\mid D)\). For the above choice of \(g\), this distribution of predictions is typically known as the posterior predictive distribution. When \((\eta,\theta)\) are sampled from the prior instead, the distribution of \(g(\eta,\theta)\), for the above choice of \(g\), is known as the prior predictive distribution. Beside sampling predictions or more generally deterministic functions of the parameters \((\eta,\theta)\), one may also sample from the following distribution of \(\hat{y}\): \[(\eta,\theta) \sim p(\eta,\theta\mid D)\] \[\hat{y} \sim p(y=\hat{y}\mid x=\hat{x},\eta,\theta)\] In Pumas syntax, this is the output of the simobs function using the posterior parameter values and covariates \(\hat{x}\). Alternatively, \((\eta,\theta)\) may be sampled from their prior distributions instead or just fixed to particular _ground truth_ values. These prior/posterior/ground truth simulations can be used to do any of the following: 1. Generate synthetic data to test the MCMC algorithm on synthetic data before using the real data20. \[E[y\mid x=\hat{x},D] =\int\hat{y}\times p(y=\hat{y}\mid x=\hat{x},D)\,d\hat{y}\] \[\approx\int\hat{y}\times\Bigg{(}\frac{1}{N}\sum_{j=1}^{N}p(y=\hat{y }\mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})\Bigg{)}d\hat{y} \tag{15}\] \[=\frac{1}{N}\sum_{j=1}^{N}\Bigg{(}\int\hat{y}\times p(y=\hat{y} \mid x=\hat{x},\eta=\eta^{(j)},\theta=\theta^{(j)})d\hat{y}\Bigg{)}\] \[=\frac{1}{N}\sum_{j=1}^{N}E[y\mid x=\hat{x},\eta=\eta^{(j)}, \theta=\theta^{(j)}]\] 2. Identify extremely poor choices of priors to minimally guide the selection of priors by inspecting the similarity of the prior simulations and real data, e.g. using a visual predictive check (VPC) plot, also known as prior predictive check. See section 4.3 for more details on prior selection. 3. Quantify the quality of model fit by comparing posterior simulations to the real data using a VPC plot, also known as posterior predictive check, and estimating the so-called Bayesian \(p\)-value. The code for doing prior simulations and predictions in Pumas was presented in Section 3.2. Similarly, the code for doing posterior simulations and predictions was presented in Section 3.13. Finally, the codes for performing VPC, various simulation queries, e.g. the Bayesian \(p\)-value, and NCA were presented in Sections 3.14, 3.11 and 3.16 respectively. ### No-U-Turn Sampler (NUTS) Algorithm In this section, an intuitive explanation of the No-U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014; Betancourt, 2017) MCMC algorithm will be given. The focus of the explanation will be to develop a strong intuition for how the algorithm works and how to tune its hyper-parameters21. 
We don't want to clutter the minds of the readers with equations that can be found in numerous other resources and which are not strictly necessary to be able to effectively use the algorithm. For MCMC beginners or when reading this for the first time, you may skip subsections 4.5.6, 4.5.7 and 4.5.8 without a significant loss of context. Footnote 21: The term _hyper-parameters_ generally refers to any parameters that are not being inferred by the Bayesian inference algorithm and that need to be pre-specified by the user before the Bayesian analysis. These can generally be model hyper-parameters, e.g. the parameters of the prior distributions on the population parameters \(\theta\), or they can be algorithm hyper-parameters such as the settings of the NUTS algorithm. #### 4.5.1 High Level Description MCMC sampling uses a stochastic process (random walk) in the \((\eta,\theta)\) space to collect samples from the posterior \(p(\eta,\theta\mid D)\) in an iterative manner using nothing but "local information available". Figure 15 shows a simple random walk for a 1-dimensional Gaussian random variable. In the \(j^{th}\) iteration of the algorithm, the local information available is basically the numerator in Eq 7 evaluated at a particular value \((\eta=\eta^{j-1},\theta=\theta^{j-1})\) and its gradient22 with respect to \((\eta,\theta)\) both of which can be computed easily. Given that this iterative algorithm only uses information from the previous iteration \(j-1\), it is a so-called Markov chain by definition. The goal of the MCMC family of algorithms is to make it such that the individual steps \(\{(\eta^{(j)},\theta^{(j)}):j\in 1\ldots N\}\) are valid samples from the posterior. In this section, we focus on the so-called No-U-Turn sampler (NUTS) algorithm (Hoffman and Gelman, 2014; Betancourt, 2017) which is a variant of the Hamiltonian Monte Carlo (HMC) (Neal, 2011) family of MCMC algorithms. We will not cover these algorithms in details but we will explain the intuition behind them for you to make sense of their hyper-parameters to be able to informatively tinker with them when needed. Imagine the current position of the sampler \((\eta^{(j-1)},\theta^{(j-1)})\) is a particle in the \((\eta,\theta)\) space. In the NUTS algorithm, the random walk process goes like this23: Footnote 23: The real algorithm includes many performance enhancements which are not discussed here. 1. The imaginary particle \((\eta^{(j-1)},\theta^{(j-1)})\) is given a random imaginary speed in a random direction, i.e. it is given a random imaginary velocity. The velocity is sampled from a multivariate Gaussian distribution. 2. The gradient of the log of the joint probability (log of the numerator in Eq 7) with respect to \((\eta,\theta)\) acts as an imaginary force field locally pushing the imaginary particle towards regions of high (log) prior and/or (log) likelihood and away from regions of low (log) prior and/or (log) likelihood. 3. The imaginary particle's motion is **approximately** simulated using time discretization and an approximate ODE solver24 for a total of \(T\) simulated time steps25 under the influence of the imaginary force field, where each simulated time step is of size \(\epsilon\). This simulation only requires the initial position, initial velocity and being able to calculate the force applied at any arbitrary point \((\eta,\theta)\) which is equivalent to evaluating the gradient of \(\log p(D,\eta,\theta)\) with respect to \((\eta,\theta)\), i.e. 
\(\frac{d\log p(D,\eta,\theta)}{d(\eta,\theta)}\).
Footnote 24: The particle dynamics simulation boils down to simulating a set of ODEs with the gradient of \(\log p(D,\eta,\theta)\) as the driving force. An approximate ODE solver called the leapfrog method is used to do the simulation. The leapfrog method with a large step size is approximate because its solution violates the law of conservation of energy, even though it is so-called volume preserving. However, this is a desirable property in this case and can help fully explore the posterior even with disconnected regions of high probability mass. For the effect of the time step size on the sampling behaviour, see Section 4.5.4.

4. A randomly chosen position \((\eta,\theta)\) on the simulated imaginary trajectory of \(T\) time steps becomes the **proposal**.
5. The proposal is then accepted with a carefully chosen probability to ensure that the random walk gives us correct samples from the posterior. If the proposal is accepted, it becomes the next sample \((\eta^{(j)},\theta^{(j)})\), otherwise the previous value is sampled once more, i.e. \((\eta^{(j)},\theta^{(j)})=(\eta^{(j-1)},\theta^{(j-1)})\).26

Footnote 26: In the state-of-the-art variant of NUTS (Betancourt, 2017), the proposal selection and acceptance/rejection steps are combined into a single step which samples from the imaginary trajectory, which includes the previous particle's position \((\eta^{(j-1)},\theta^{(j-1)})\). Sampling is done in a way that ensures the chosen next position \((\eta^{(j)},\theta^{(j)})\) is a correct sample from the posterior. However, we chose to separate the 2 steps conceptually to aid with the explanation.

The above algorithm is still a random walk, but it is biased towards high probability regions because it uses the gradient of the log joint probability to push the particle in the right direction even if it started with a random velocity.

Figure 15: Random walk visualization for a normally distributed random variable (on the x-axis) where the probability density function (PDF) is shown on the y-axis. First a proposal is made, then it is accepted with a specific acceptance probability. When the proposal is for the hiker to climb up, the move is accepted with a probability of 1. When the proposal is for the hiker to climb down, it is accepted with a probability equal to the ratio of the PDFs at the 2 positions.

The above sampling algorithm is done in 2 phases: an adaptation phase and a sampling phase. In the adaptation phase, the sampler is adapting its time step size and initial velocity distribution while performing the above sampling procedure. In the sampling phase, the algorithm's hyper-parameters cease to adapt and sampling continues using the same procedure above. It is generally recommended to discard the adaptation steps after sampling as _burn-in_27, as they may not be representative samples from the posterior. In Pumas, this can be done using the discard function as explained in Section 3.3. The number of adaptation steps can also be specified using the nadapts option as shown in Section 3.3.

Footnote 27: Also called _warm-up_.

#### MCMC Visualization

To interactively visualize how an MCMC sampler works, chi-feng.github.io/mcmc-demo/ is a great resource to look at, where you can change the target distribution and the sampling algorithm to develop an intuition for how MCMC samplers work in different cases.
You can select the "No-U-Turn Sampler" as the algorithm and then change the target distribution to visualize how NUTS behaves when sampling from different posterior distributions.

#### 4.5.3 Proposal Acceptance

While the state-of-the-art variant of NUTS (Betancourt, 2017) does not use an explicit acceptance/rejection test (also known as Metropolis-Hastings test) of a proposal, what it does is analogous to a traditional proposal selection followed by an acceptance/rejection test. For pedagogical reasons, we assume these are 2 separate steps. The acceptance probability of a proposal in step 5 of the algorithm depends on:

1. The prior probability, and
2. The likelihood function (how well the proposal fits the data)

A proposal leading to bad predictions that don't fit the data well compared to the previous sample \((\eta^{(j-1)},\theta^{(j-1)})\), or a proposal that is more improbable according to the prior compared to the previous sample, is more likely to be rejected. On the other hand, a proposal that fits the data better than the previous sample and/or is more probable according to the prior will be more likely to be accepted.

#### 4.5.4 Effect of Time Step Size

Generally speaking, the larger the time step size in the simulation, the more approximate the ODE solver is and the more exploratory/adventurous the proposals will be, which leads to a lower ratio of accepted proposals. On the other hand, smaller step sizes generally lead to less exploratory proposals which are more likely to be accepted, increasing the acceptance ratio. The sampler's exploration ability partly comes from the ability of the approximate ODE solver to over-shoot and jump from one area of high posterior probability28 to another when making proposals, thus exploring multiple modes even if there is a 0 probability region between the 2 modes. A zero probability \(p(\eta,\theta\mid D)\) implies zero probability \(p(D,\eta,\theta)\) (Eq 7) which will result in an infinite force pushing the particle away from that region. Therefore exact simulation will never be able to make the jump across such a region, hence the need for approximate solvers and over-shooting. Footnote 28: High posterior probability regions have high joint probability values (the numerator in Eq 7). The joint probability is the product of the prior probability and likelihood. So parameter values with a high prior probability and/or high likelihood will have high joint and posterior probabilities.

#### 4.5.5 Time Step Size Adaptation and Target Acceptance Ratio

In the NUTS algorithm, you don't set the step size yourself. The NUTS algorithm adapts its step size to encourage a certain fraction of the proposals to get accepted on average. This target acceptance ratio is a hyper-parameter of NUTS. In Pumas, you can set the target acceptance ratio using the target_accept option as shown in Section 3.3. A value of 0.99 means that we want to accept 99% of the proposals the sampler makes. This will generally lead to a small distance between the proposal and the current sample since this increases the chance of accepting such a proposal. On the other hand, a target acceptance fraction of 0.1 means that we want to only accept 10% of the proposals made on average. The NUTS algorithm will therefore attempt larger step sizes to ensure it rejects 90% of the proposals. In general, a target acceptance ratio of 0.6-0.8 is recommended. The default value used in Pumas is 0.8. In sampling, there is usually a trade-off between exploration and exploitation.
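The effect of the step size on the acceptance ratio can be seen with a tiny self-contained experiment. The sketch below uses a plain random-walk Metropolis sampler on a standard normal target rather than NUTS, so it only illustrates the qualitative trade-off: larger steps explore more but are rejected more often.

```
using Random

# Log density of the target distribution (a standard normal here).
logp(x) = -0.5 * x^2

# Plain random-walk Metropolis: propose x + step * z and accept with probability
# min(1, p(proposal) / p(current)), the same kind of test described above.
function acceptance_ratio(step; n = 50_000, rng = MersenneTwister(1))
    x = 0.0
    accepted = 0
    for _ in 1:n
        proposal = x + step * randn(rng)
        if log(rand(rng)) < logp(proposal) - logp(x)
            x = proposal
            accepted += 1
        end
    end
    return accepted / n
end

for step in (0.1, 0.5, 1.0, 2.5, 10.0)
    println("step size = ", step, "   acceptance ratio ≈ ",
            round(acceptance_ratio(step), digits = 2))
end
# Small steps are almost always accepted (little exploration per move);
# large steps are rejected most of the time (more exploration per accepted move).
```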
If the sampler is too adventurous, trying aggressive proposals that are far from the previous sample in each step, the sampler would be more likely to explore the full posterior and not get stuck sampling near a local mode of the posterior. However on the flip side, too much exploration will often lead to many proposal rejections due to the low joint probability \(p(D,\eta,\theta)\) of the data and the adventurous proposals. This can decrease the ratio of the effective sample size (ESS)29 to the total number of samples (also known as relative ESS) since a large number of samples will be mere copies of each other due to rejections. Footnote 29: The ESS is an approximation of the “number of independent samples” generated by a Markov chain, when estimating the posterior mean. A low ESS per sample ratio is caused by high auto-correlation in the MCMC samples and is often a bad indicator. On the other hand if we do less exploration, there are at least 2 possible scenarios: 1. The first scenario is if we initialize the sampler from a mode of the posterior. Making proposals only near the previous sample will ensure that we accept most of the samples since proposals near a mode of the posterior are likely to be good parameter values. This local sampling behavior around known good parameter values is what we call here exploitation. While the samples generated via high exploitation around a mode may not be representative of the whole posterior distribution, they might still give a satisfactory approximation of the posterior predictive distributions, which is to be judged with a VPC plot. 2. The second scenario is if we initialize the sampler from bad parameter values. Bad parameter values and low exploration often lead to optimization-like behavior where the sampler spends a considerable number of iterations moving towards a mode in a noisy fashion. This optimization-like, mode-seeking behavior causes a high auto-correlation in the samples since the sampler is mostly moving in the same direction (towards the mode). A high auto-correlation means a low ESS because the samples would be less independent from each other. 30 Also until the sampler reaches parameter values that actually fit the data well, it's unlikely these samples will lead to a good posterior predictive distribution. This is a fairly common failure mode of MCMC algorithms when the adaptation algorithm fails to find a good step size that properly explores the posterior distribution due to bad initial parameters and the model being too complicated and difficult to optimize, let alone sample from its posterior. In this case, all the samples may look auto-correlated and the step sizes between samples will likely be very small (low exploration). In Pumas, the step size is displayed as part of the live progress information during sampling as shown in Figure 1. It's often helpful to detect such a failure mode early in the sampling and kill the sampling early. Footnote 30: In Pumas, the ESS values of the population parameters are displayed in the default display of the MCMC result as shown in Figure 2. #### 4.5.6 Optional: Number of Time Steps and U-Turns Consider a single direction in the \((\eta,\theta)\) space, e.g. the axis of a particular parameter. For relatively flat regions of the posterior where a lot of the values along this direction are almost equally likely, i.e. 
they all fit the data to the same level and are almost equally probable according to the prior, proposals far away from the current sample may still be accepted most of the time. This is especially likely in the parts of the posterior where the model is (almost) non-identifiable, causing high parameter correlations, and the prior is indiscriminate (e.g. due to being a weak prior). On the other hand, regions of the posterior that are heavily concentrated around a mode with a high curvature often require a smaller step size to achieve reasonable acceptance ratios, since proposals that are even slightly far from the current sample may be extremely improbable according to the prior or may lead to very bad predictions. This is especially likely in regions of the posterior where the model is highly sensitive to the parameter values or if the prior is too strongly concentrated around specific parameter values. To account for such variation in curvature along the _same direction_31 in different regions of the posterior, the NUTS algorithm uses a multi-step proposal mechanism with a fixed time step size (determined during the adaptation phase and then fixed) and a dynamic number of time steps (dynamic in both the adaptation and sampling phases). More specifically, the sampler simulates a trajectory of \(T\) time steps before choosing a proposal randomly from this trajectory, where \(T\) is different for each proposal made. The number of time steps \(T\) simulated by NUTS is determined by an incremental simulation of: \(T=1+2+4+8+16+\dots\) time steps, where the number of time steps in each incremental simulation is an increasing power of 2. Each incremental simulation can be either:

1. Forward in time starting from the future-most state, or
2. Backward in time starting from the past-most state.

The direction of each incremental simulation is sampled randomly with 0.5 probability assigned to each direction. Table 12 shows an example of the incremental simulations for the particular choice of simulation directions: [Forward, Forward, Reverse, Reverse, Forward, Reverse]. So when do we stop simulating? The NUTS algorithm typically stops simulating when one of the following 4 conditions is met:

1. It finds a so-called U-turn, that is when the sampler begins to move back towards one end of the trajectory from the other end.
2. It reaches a pre-set maximum number of simulation steps.
3. The log prior probability and/or log likelihood drops rapidly in one of the steps, dropping by more than a pre-set threshold.
4. A numerical error occurs.

The third and fourth termination criteria are often called "divergence". After the simulation terminates, a point chosen at random from the simulated trajectory becomes the next proposal. _Terminating by finding a U-turn is typically considered a sign of successful exploration._ The number of evaluations of \(\log p(D,\eta,\theta)\) in each NUTS iteration is determined by the length of the simulated trajectory which is \(\sum_{i=0}^{j-1}2^{i}=2^{j}-1\), if \(j\) incremental simulations were required to find a U-turn32.
Footnote 32: In the efficient implementation of NUTS, once a U-turn is found, the final incremental simulation is interrupted so the number of model evaluations is actually somewhere between \(2^{j-1}\) and \(2^{j}-1\) In the efficient implementations of the NUTS algorithm, a binary tree data structure of depth \(j\) is used to help with the efficient selection of a proposal from all of the states \((\eta,\theta)\) visited during the simulation until a U-turn was found33, without storing all of the states. This is an efficiency enhancement trick but the term _tree depth_ stuck and became synonymous to the number of incremental simulations ran so far, \(j\). In the case where the sampler is unable to find a U-turn even after a pre-specified maximum \(j\) is reached, the sampler terminates the simulation anyways and makes a proposal. The term maximum tree depth is commonly used to refer to the maximum number of incremental simulations \(j\) allowed before having to make a proposal even if no U-turn was found. Footnote 33: The number of states visited excluding the initial state is at most \(2^{j}-1\). Adding the initial state, we have \(2^{j}\) possible states any of which could be the proposal. These can in theory be stored as the leaf nodes of a binary tree of depth \(j\) which has \(2^{j}\) leaf nodes. However in practice, only a subset of such states are stored and the tree idea is used to ensure the proposal can be efficiently chosen at random from all \(2^{j}\) possible states while satisfying the theoretical requirements of a proposal in MCMC, which is often called the detailed balance condition. #### 4.5.7 Optional: Distribution of the Initial Velocity Recall that in each NUTS iteration, we are sampling a random initial velocity for the \((\eta,\theta)\) particle before simulating the dynamics to arrive at a proposal. Hypothetically, assume that we already have samples from the posterior \(p(\eta,\theta\mid D)\). If you were to go back and re-do the sampling using NUTS, how would you sample the initial velocity of the imaginary \((\eta,\theta)\) particle to make sampling as efficient as possible? In general, it would make sense to move faster along directions in the posterior that have a higher variance and slower along directions that have a lower variance. For instance, we can compute the variance along each parameter's axis and sample higher speeds for the parameters that change more, and lower speeds for the parameters that change less. In practice, you can think of different parameters having different scales where 1 parameter may be in the 10s while another one may be in the 1000s. In that case, it makes sense to use different speeds along different directions to more efficiently sample from the posterior distribution. More generally, one may even compute the sample covariance matrix from the (hypothetical) samples available, compute the principal components and sample higher speeds along directions with more variance than the other directions. If we encode how slow we want the particle to go along each direction \(d_{i}\) by a number \(s_{i}\), setting the standard deviation of the speed along this direction to \(1/s_{i}\) can be used to achieve the desired slowness. Assume each \(d_{i}\) is an axis along a specific parameter \(i\) (which could be part of \(\eta\) or \(\theta\)). The distribution of the velocity \(v_{i}\) along \(d_{i}\) can be the following univariate Gaussian: \[v_{i}\sim N(0,(1/s_{i})^{2}) \tag{17}\] with mean \(0\) and standard deviation \(1/s_{i}\). 
This distribution will have us sampling speeds along the direction \(d_{i}\) that are on average inversely proportional to \(s_{i}\). Writing it for all the parameters together, we can write: \[v\sim N(0,M^{-1}) \tag{18}\] where \(M\) is a diagonal matrix of elements \(s_{i}^{2}\) on the diagonal and \(M^{-1}\) is the covariance matrix of the velocity vector \(v\). Using a diagonal \(M\) is equivalent to adapting the speeds' standard deviations along the parameters' axes. While using a dense matrix \(M\) is equivalent to the more general case of adapting the speeds' standard deviations along more optimized directions \(d_{i}\) (e.g. from principal components of the covariance matrix). It turns out that when simulating the "imaginary dynamics" in HMC/NUTS after sampling the initial velocity, the analogical _kinetic energy_ is given by: \[K(v)=v^{T}Mv/2 \tag{19}\] hence the natural inclination to call the above matrix \(M\) a "mass matrix" in the HMC/NUTS literature. Recall that in physics, the kinetic energy of a particle with a scalar speed \(v\) and mass \(m\) is \(\frac{mv^{2}}{2}\). To summarize, directions with a higher "mass" will be explored more slowly than directions with a lower mass. The ideal mass matrix \(M\) is one that approximates the _global_ precision matrix of the posterior distribution, i.e. the inverse of the covariance matrix. Equivalently, the ideal \(M^{-1}\) is one that approximates the global covariance matrix of the posterior. So far we assumed that we have samples from the posterior and are able to adapt the mass matrix manually. In practice, the NUTS algorithm adapts the mass matrix for you during the adaptation phase, and you only need to select the structure of the matrix, e.g. diagonal or dense. For large problems, a diagonal matrix is typically used in practice since the computational cost of using a dense matrix is \(O(D^{3})\), where \(D\) is the total number of parameters in \((\eta,\theta)\) combined. On the other hand, the computational cost of using a diagonal \begin{table} \begin{tabular}{|c|c|c|} \hline Increment \(j\) & Simulation Direction & Interval of the Time Steps Simulated after \(j\) Increments \\ \hline 0 & - & \([0,0]\) \\ 1 & Forward & \([0,0+1]=[0,1]\) \\ 2 & Forward & \([0,0+1+2]=[0,3]\) \\ 3 & Reverse & \([0-4,0+1+2]=[-4,3]\) \\ 4 & Reverse & \([0-4-8,0+1+2]=[-12,3]\) \\ 5 & Forward & \([0-4-8,0+1+2+16]=[-12,19]\) \\ 6 & Reverse & \([0-4-8-32,0+1+2+16]=[-44,19]\) \\ \hline \end{tabular} \end{table} Table 12: The table shows the incremental simulations of the NUTS algorithm for \(j\in[1,6]\). Notice how an increasing power of \(2\) is added to the positive direction or subtracted from the negative direction in each increment. The total number of time steps made after increment \(j\) (excluding the initial time point \(t=0\)) is \(1+2+4+8+16+\cdots=2^{0}+2^{1}+2^{2}+\cdots+2^{j-1}=\sum_{i=0}^{j-1}2^{i}=2^{j}-1\). **Check**: \(2^{1}-1=2^{0}=1\), \(2^{2}-1=2^{0}+2^{1}=3\), \(2^{3}-1=2^{0}+2^{1}+2^{2}=7\), etc. Note that the intervals above are of the number of time steps. Each time step has a simulated time step size of \(\epsilon\). matrix is only \(O(D)\). When we have many subjects in the hierarchical model, \(D\) can be quite large. Before we conclude this section, it is important to note that the HMC/NUTS algorithm is typically explained in the literature using a so-called momentum vector \(p=Mv\) while we chose to use the more intuitive velocity vector \(v\) in this paper to explain the intuition behind the algorithm. 
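As a small illustration of Eqs 18 and 19, the sketch below draws a velocity using a diagonal mass matrix built from hypothetical per-direction slowness values \(s_i\), and evaluates the corresponding kinetic energy and momentum.

```
using LinearAlgebra, Random

rng = MersenneTwister(1)

# Per-direction "slowness" values s_i: larger s_i means slower exploration along direction i.
s = [10.0, 0.1, 1.0]

# Diagonal mass matrix M with s_i^2 on the diagonal (Eq 18: v ~ N(0, M⁻¹)).
M = Diagonal(s .^ 2)

# Sampling the initial velocity: independent Gaussians with standard deviation 1/s_i.
v = randn(rng, length(s)) ./ s

# Kinetic energy K(v) = vᵀ M v / 2 (Eq 19).
K = 0.5 * dot(v, M * v)

# Equivalent momentum parameterization: p = M v, with p ~ N(0, M).
p = M * v
```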
The two parameterizations are equivalent but the momentum one is the one typically used in implementations and HMC/NUTS research. When \(v\sim N(0,M^{-1})\), the corresponding distribution of the momentum vector \(p\) is \(N(0,M)\). #### 4.5.8 Optional: Hierarchical Priors and Nuts Consider the following toy model which has no observations: \[\begin{split}\text{log}\omega&\sim\text{Normal}(0,1.5)\\ \eta&\sim\text{Normal}\left(0,(e^{\text{log}\omega})\right) \end{split} \tag{20}\] This is a model with 2 parameters \((\text{log}\omega,\eta)\), a prior distribution on each of them and some exponential dependence between them in that the standard deviation of the prior on \(\eta\) depends exponentially on the value of \(\text{log}\omega\). Figure 16 shows the PDF heatmap of the joint prior distribution of the parameters \(\text{log}\omega\) (y-axis) and \(\eta\) (x-axis). Recall that the NUTS algorithm uses a multi-step trajectory with a fixed time step size in the imaginary dynamics simulation to account for variation in curvature along the same direction. Consider the direction along the x-axis in the figure. The curvature along the x-axis changes depending on where along the y-axis the imaginary \((\text{log}\omega,\eta)\) particle is. Lower values of \(\text{log}\omega\) lead to exponentially higher curvatures (reflected through the tight color band in the heatmap) along the \(\eta\) direction. So if we try to use NUTS to sample from this prior, two bad things can happen: 1. The sampler may adapt its step size to very small values to be able to sample from the lower regions of the prior and it will use a large number of time steps \(T\) to effectively explore the upper regions of the prior. In such cases, more often than not, the maximum number of allowed time steps \(2^{j}-1\) may not be enough to find the U-turn. This will hurt the performance since we will be doing many model evaluations per proposal and we may need multiple steps to fully traverse the upper regions of the prior. 2. The sampler may adapt its step size to values not small enough to sample from the lower regions in the prior. In this case, the sampler may skip sampling from the significant lower part in the prior leading to potential bias in the results. In other words, the above prior may lead to slow and biased NUTS sampling, a clearly terrible outcome. Note that of course we are not trying to sample from the prior using NUTS, because we can just sample directly from the standard distributions in the model using more direct and efficient methods. However, studying the prior's PDF and how it interacts with NUTS can help us understand how NUTS will interact with the posterior when there are a few data points available. Also note that the above model is using an explicit log scale parameterization for the standard deviation for pedagogical purposes. In reality, models may be written directly in terms of the standard deviation \(\omega\) (instead of its log) or more generally the covariance matrix \(\Omega\) for multivariate \(\eta\). However, the above example is still relevant in those cases because implementations of the NUTS algorithm do the log-scale transformation behind the scenes and so NUTS actually samples unconstrained parameters all the time even if the original model's parameters were constrained to a different support. So the same insight we build for the model above is applicable to general hierarchical models when using NUTS. 
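The geometry described above can be probed numerically. Below is a small sketch of the log joint density of the toy model in Eq 20, using Distributions.jl for the normal log-densities, showing how the conditional scale of \(\eta\) collapses as \(\log\omega\) decreases.

```
using Distributions

# Log joint density of the toy "funnel" prior in Eq 20:
#   logω ~ Normal(0, 1.5),  η | logω ~ Normal(0, exp(logω))
logjoint(logω, η) = logpdf(Normal(0, 1.5), logω) + logpdf(Normal(0, exp(logω)), η)

# The conditional scale of η shrinks exponentially as logω decreases,
# which is exactly the varying curvature that troubles a fixed step size.
for logω in (-3.0, 0.0, 3.0)
    println("logω = ", logω, "   conditional std of η = ", exp(logω))
end

logjoint(-3.0, 0.01)   # deep in the narrow neck of the funnel
logjoint(3.0, 5.0)     # out in the wide mouth of the funnel
```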
Figure 16: The PDF heatmap of the prior distribution of \(\text{log}\omega\) on the y-axis and \(\eta\) on the x-axis. The standard deviation parameters (or more generally the covariance matrix) of the prior on the subject-specific parameters \(\eta_{i}\) (commonly known as the between-subject variability) in pharmacometrics are typically assigned weak priors to avoid bias. This means that they tend to have a wide variability in their value in the posterior distribution unless enough data is collected to precisely identify the value of the parameters. This combination of: 1. Weak priors on the covariance matrix parameters to avoid bias, 2. Not having enough data to precisely identify the parameter values, 3. The dependence between parameters' priors (in the standard deviation) in hierarchical models, 4. The log scale domain transformation of standard deviation and covariance parameters used by NUTS, and 5. The fixed step size used by the NUTS sampler, is an unfortunate but very common combination of factors that can lead to wrong and very slow inference. So what's the solution? One solution is to reparameterize the model as such: \[\begin{split}\text{log}\omega&\sim\text{Normal} (0,1.5)\\ \eta\text{std}&\sim\text{Normal}\left(0,1\right)\\ \eta&=e^{\text{log}\omega}\times\eta\text{std}\end{split} \tag{21}\] This reparameterization de-couples the priors of the parameters and resolves the issue in the PDF heatmap. Note that this model transformation does not change the data generating process. That is if you sample values for \(\eta\text{std}\) and \(\text{log}\omega\), the values of \(\eta\) simulated will be identical to the values simulated from the original model's prior. However, the latter parameterization is more friendly to the NUTS algorithm. In the context of pharmacometrics, the following covariance-based model: \[\begin{split}\theta&\sim p(\theta)\\ \Omega&\sim p(\Omega)\\ \eta_{i}&\sim N(\theta,\Omega)\end{split} \tag{22}\] can be transformed to: \[\begin{split}\theta&\sim p(\theta)\\ \Omega&\sim p(\Omega)\\ \eta\text{std}_{i}&\sim N(0,I)\\ \eta_{i}&=\text{chol}(\Omega)\times\eta\text{std}_{ i}+\theta\end{split} \tag{23}\] where \(\text{chol}(\Omega)\) is the lower triangular Cholesky factor (a generalized square root for matrices) of the covariance matrix \(\Omega\). Similarly, using the standard deviation and correlation matrix parameterization instead, the original model becomes: \[\begin{split}\theta&\sim p(\theta)\\ \omega&\sim p(\omega)\\ C&\sim p(C)\\ \eta_{i}&\sim N(\theta,D_{\omega}\times C\times D_{ \omega})\end{split} \tag{24}\] where \(D_{\omega}\) is the diagonal matrix whose elements are the standard deviations vector \(\omega\). The above correlation-based model can be transformed to de-couple the priors as such: \[\begin{split}\theta&\sim p(\theta)\\ \omega&\sim p(\omega)\\ C&\sim p(C)\\ \eta\text{std}_{i}&\sim N(0,I)\\ \eta_{i}&=D_{\omega}\times\text{chol}(C)\times\eta \text{std}_{i}+\theta\end{split} \tag{25}\] When using Pumas to define these models, a transformation equivalent to the above transformation is done automatically behind the scenes even if you write the model in the coupled way. 34 Footnote 34: This equivalence is in exact arithmetic but when running computation on the computer, floating point arithmetic is done. This means that the results may not be identical depending on how sensitive the model is to round-off errors in the floating point arithmetic. 
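As a quick Monte Carlo check that the reparameterization does not change the data generating process, the following sketch simulates the scalar toy model of Eqs 20 and 21 both ways (sample size and seed are arbitrary).

```
using Distributions, Random, Statistics

rng = MersenneTwister(1)
n = 100_000

# Centered parameterization (Eq 20): η ~ Normal(0, exp(logω))
logω_c = rand(rng, Normal(0, 1.5), n)
η_c = [rand(rng, Normal(0, exp(lw))) for lw in logω_c]

# Non-centered parameterization (Eq 21): ηstd ~ Normal(0, 1), η = exp(logω) * ηstd
logω_n = rand(rng, Normal(0, 1.5), n)
ηstd = rand(rng, Normal(0, 1), n)
η_n = exp.(logω_n) .* ηstd

# Same data generating process: the quantiles of η agree up to Monte Carlo error.
println(quantile(η_c, [0.25, 0.5, 0.75]))
println(quantile(η_n, [0.25, 0.5, 0.75]))
```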
Before we conclude this section, we note that models where the priors on the population and subject-specific parameters are coupled are sometimes called centered parameterization (CP) models in the Bayesian literature, while de-coupling the priors using the transformations discussed above is often called the non-centered parameterization (NCP). These terms can be confusing though, so we mostly avoid their use in this paper.

### 4.6 Basic Summary Statistics

There are a few summary statistics that one can view to assess the convergence of the MCMC chains. These include:

* **Effective Sample Size (ESS)**: an approximation of the "number of independent samples" generated by a Markov chain, when estimating the posterior mean. A low ESS per sample ratio is caused by high auto-correlation in the MCMC samples and is often a bad indicator.
* \(\widehat{R}\) **(Rhat)**: potential scale reduction factor, a metric to measure if the Markov chains have mixed, and, potentially, converged. Chain mixing refers to the case when different chains include samples from the same regions in the posterior as opposed to each chain including samples from a separate region of the posterior.
* **Monte Carlo Standard Error (MCSE)**: the posterior standard deviation divided by the square root of the ESS, which is a measure of the estimation noise in the posterior mean.

The formula for the effective sample size (ESS) when estimating the posterior mean is:

\[\widehat{n}_{\text{eff}}=\frac{mn}{1+2\sum_{t=1}^{T}\widehat{\rho}_{t}}\]

where \(m\) is the number of Markov chains, \(n\) is the total number of samples per Markov chain, and \(\widehat{\rho}_{t}\) is an auto-correlation estimate. This formula is an approximation of the "number of independent samples" generated by a Markov chain when estimating the mean values of the parameters. Since we don't have a way to recover the true auto-correlation \(\rho\), instead we rely on an estimate \(\widehat{\rho}\). The higher the auto-correlation in the chains, the lower the ESS will be for the same number of MCMC samples. High auto-correlation can result from too many rejections or optimization-like behaviour where the sampler is moving towards a mode. Both of these can be signs of lack of convergence. That said, high auto-correlation alone is not proof of a lack of convergence, so care must be taken when interpreting the ESS to root-cause why it might be low.

The formula for the \(\widehat{R}\) is:

\[\widehat{R}=\sqrt{\frac{\widehat{\text{var}}^{+}\left(\psi\mid y\right)}{W}}\]

where \(\widehat{\text{var}}^{+}\left(\psi\mid y\right)\) is the Markov chains' sample variance for a certain parameter \(\psi\). We calculate it by using a weighted sum of the within-chain variance \(W\) and between-chain variance \(B\):

\[\widehat{\text{var}}^{+}\left(\psi\mid y\right)=\frac{n-1}{n}W+\frac{1}{n}B\]

Intuitively, \(\widehat{R}=1.0\) if all chains have fully converged and mixed. As a heuristic, if \(\widehat{R}>1.1\), you need to worry because the chains have probably not converged adequately.

### 4.7 Convergence

#### 4.7.1 Signs of Lack of Convergence

MCMC has an interesting property that it will asymptotically converge to the target distribution. That means, if time is not a limited resource, it is guaranteed that, irrespective of the posterior geometry of the target distribution, MCMC will give you the right answer. However, for all real-world scenarios, time is a limited resource. Different MCMC algorithms, like NUTS, can reduce the sampling (and adaptation) time necessary for convergence to the target distribution.
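Pumas reports these diagnostics for you, but they can also be computed directly from the raw draws. Below is a minimal sketch of the two formulas above, using a deliberately simple truncation rule for the autocorrelation sum that real implementations refine.

```
using Statistics

# Lag-t sample autocorrelation of a single chain.
autocor_at(x, t) = cor(x[1:end-t], x[1+t:end])

# ESS for one parameter from an n × m matrix of draws (n iterations, m chains),
# following n_eff = m*n / (1 + 2 Σ ρ_t). The sum is truncated at the first
# negative autocorrelation estimate.
function ess(draws; maxlag = 100)
    n, m = size(draws)
    rho = [mean(autocor_at(draws[:, c], t) for c in 1:m) for t in 1:maxlag]
    T = something(findfirst(<(0), rho), maxlag)
    return m * n / (1 + 2 * sum(rho[1:T]))
end

# R-hat from the weighted within-chain (W) and between-chain (B) variances.
function rhat(draws)
    n, m = size(draws)
    chain_means = vec(mean(draws, dims = 1))
    W = mean(vec(var(draws, dims = 1)))
    B = n * var(chain_means)
    varplus = (n - 1) / n * W + B / n
    return sqrt(varplus / W)
end

draws = randn(1000, 4)       # 4 well-mixed "chains" of pure white noise
ess(draws), rhat(draws)      # ESS close to 4000, R-hat close to 1.0
```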
In the ideal scenario, the NUTS sampler will converge to the true posterior and not miss any mode. But can we prove convergence? Unfortunately, this is not easy to prove in general. All the convergence diagnostics are only tests for symptoms of lack of convergence. In other words, even if all the diagnostics look normal, we still cannot prove that the sampler converged; we can only say that no symptoms of non-convergence were detected. There are some signs of lack of convergence:

* Any of the moments (e.g. the mean or standard deviation) is changing with time. This is diagnosed using stationarity tests by comparing different parts of a single chain to each other.
* Any of the moments is sensitive to the initial parameter values. This is diagnosed using multiple chains by comparing their summary statistics to each other.

While high auto-correlation is not strictly a sign of lack of convergence, samplers with high auto-correlation will require many more samples to get to the same efficiency as another sampler with low auto-correlation. So a low auto-correlation is usually more desirable.

#### 4.7.2 When Does Convergence Matter?

Broadly speaking, there are 2 main classes of models we can use:

1. Causal models, sometimes known as mechanistic models.
2. Black-box models, sometimes known as regression models or machine learning models.

Simply put, causal/mechanistic models are models that make sense in the domain of interest. This means that:

1. All of the variables in the model have a meaning.
2. Each relationship in the model is well thought out, minimal and based on a claimed causal relationship between a subset of the variables.

The goal of mechanistic models is to understand the system of interest. Each model is typically designed to answer questions about some of the parameters in the model. For example in PK models, we typically have absorption and clearance individual parameters. So after fitting the model to the data, one can answer questions about the probability of a specific individual parameter being greater than or less than a certain threshold. Another common example is dose response models, where we typically have a coefficient that represents the effect of the dose on the disease. If the probability that this parameter is more than 0 (according to the posterior distribution) is higher than a certain threshold, we can claim that this drug is effective. Correct causal/mechanistic models are supposed to be good in both interpolation and extrapolation.35 Footnote 35: Note that causality is never implied by the model alone, instead it is based on the scientist's intuition and understanding of the model and its variables. In other words, models are nothing more than tools that can be used to express some claimed causal relationships between quantities that the scientist has in mind.

On the other end of the spectrum, we have black-box models. These are models commonly characterized by:

1. Many intermediate variables that have no meaning.
2. Dense relationships between all the variables without having a precise reason for each relationship upfront.

These models are often called machine learning models. Think of a linear regression model with polynomial bases up to the 5th order. Simple linear regression models with linear terms only can arguably be in the first class of causal models if the covariates are claimed to cause the response in a linear fashion. But once you get to the 3rd or 4th order polynomial bases, the higher order polynomial terms and their coefficients start losing meaning and the model becomes more black-box.
In Bayesian black-box models, prior distributions are typically arbitrary (e.g. a standard Gaussian) and used only for regularization. The hyper-parameters of the prior distributions can even be optimized for maximum average posterior predictive accuracy36. Footnote 36: Using a validation data set that wasn’t used in the training/inference of the model’s parameters There are many techniques to systematically build complicated black-box models. Some examples include: * Using polynomial series terms as bases, e.g. Taylor polynomial series or the Chebyshev polynomial series * Using Fourier series terms as bases * Using deep neural networks adding neurons and layers as needed * Using a Gaussian process for nonlinear regression These are models that, given enough data for \((x,y)\) and given enough terms in the model, can fit any arbitrary function \(y=f(x)\) without having any causal reasoning or meaning built into the model. They are purely prediction machines that can be used to do interpolation and sometimes very limited extrapolation. The ability of a model class to fit any function \(f\) with a model large enough is sometimes called the universal approximation property which a number of machine learning model classes have Hornik et al (1989). In practice, some models may combine components from causal/mechanistic models and black-box models. For example, a causal model can be used to define which variables depend on which other variables but then the functional form of the dependence can be approximated using a black-box model. Combining mechanistic and black-box models is sometimes known as scientific machine learning (Rackauckas et al, 2021). The reason why we are talking about different types of models here is because the types of diagnostics to use should be consistent with the goal of the analysis you are running. If the goal is to make good predictions, regardless of the model's interpretability, then we can treat the model as a black-box and mostly rely on predictive diagnostics. In this case, good predictions are sufficient even if the model doesn't make sense or if the inference process was imperfect. To give an example, in Bayesian neural networks, extremely crude approximations are often done when inferring the posterior so long as the posterior predictions are consistent with the data (Goan and Fookes, 2020; Izmailov et al, 2019). On the other hand, if the purpose of the analysis is to understand the system and to learn about the values of the parameters in your model because they are significant in and of themselves, then causal/mechanistic models should have been used and extra care must be taken to ensure that we correctly sample from the posterior distribution and that priors were not too strong. ### Crossvalidation and Model Selection In the Bayesian workflow, it is common to evaluate and compare models using their predictive power for out-of-sample data, i.e. data not used for the fitting or inference of the model parameters. One popular model evaluation metric for out-of-sample prediction accuracy is the so-called expected log predictive density (ELPD). Other common model selection criteria include various information criteria (Burnham and Anderson, 2002) such as the Widely Applicable Information Criteria (WAIC). For a discussion of the ELPD as well as other model evaluation criteria, refer to Vehtari and Ojanen (2012); Piironen and Vehtari (2017); Gneiting and Raftery (2007). 
Intuitively, the ELPD is some average measure of predictive accuracy across all posterior samples, averaged over a number of prediction tasks. Let \(\mathcal{M}\) be the pharmacometrics model with parameters \((\eta,\theta)\) that describe the data generating process of the observed data \(y\mid x\). The ELPD is defined as: \[\text{ELPD}=\int\log p(\hat{y}|\hat{x},D,\mathcal{M})\cdot p_{t}(\hat{y}|\hat{ x})d\hat{y}\] where \(\hat{y}\) is unobserved data, e.g. future data points, \(p_{t}(\hat{y}\mid\hat{x})\) is the true data generating distribution of \(\hat{y}\) (unknown in practice) and \(p(\hat{y}|\hat{x},D,\mathcal{M})\) is the posterior predictive density defined as: \[p(\hat{y}|\hat{x},D,\mathcal{M})=\int p(\hat{y}|\hat{x},\eta,\theta,\mathcal{M })\cdot p(\eta,\theta|D,\mathcal{M})d\theta\] where \(p(\eta,\theta|D,\mathcal{M})\) describes the posterior distribution of \((\eta,\theta)\) given the previously observed data \(D\) and the model \(\mathcal{M}\). Since the true data generating distribution is unknown, it is common to approximate the ELPD by an empirical distribution over the observed data. One such estimator is the log pointwise predictive density (lppd). Let \((x_{i},y_{i})\) be the \(i^{th}\) observation by some arbitrary splitting of the data \(D\) (not necessarily by subjects) into \(S\) pieces and let \((\eta^{(j)},\theta^{(j)})\) be the \(j^{th}\) sample draw from the posterior \(p(\eta,\theta|D,\mathcal{M})\), for \(j\in{1,\ldots,N}\). The lppd can be calculated using Equation 26. A shortcoming of the lppd is that it is not representative of predictive accuracy on unseen data, since \((x_{i},y_{i})\) is used both for inference on the posterior and to evaluate the model out-of-sample. #### 4.8.1 Leave-K-Out Crossvalidation Crossvalidation overcomes this problem by ensuring that \((x_{i},y_{i})\) is not used for inference on the posterior when evaluating the out-of-sample performance for \(y_{i}\mid x_{i}\). The simplest way to divide the data into in-sample and out-of-sample subsets is the leave-one-out (loo) crossvalidation where in each outer iteration, one data point is considered out-of-sample and the remaining are in-sample. The leave-one-out, log predictive density (loo-lpd) is defined in Equation 27, where \(D_{-i}\) is all data excluding \((x_{i},y_{i})\) and \((\eta^{(j)}_{-i},\theta^{(j)}_{-i})\) is the \(j^{th}\) sample draw from the posterior \(p(\eta,\theta|D=D_{-i},\mathcal{M})\). This can be generalised to leave \(K\)-out cross validation where \((x_{i},y_{i})\) is interpreted as \(K\) observations, e.g \(K\) subjects or \(K\) drug concentration observations. 
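Both the lppd and the loo-lpd reduce, for each data point, to the log of an average of pointwise likelihoods over posterior draws (Equations 26 and 27 below). A minimal, self-contained sketch of that computation, with made-up draws and data standing in for a real model fit:

```
using Statistics

# loglik[i, j] = log p(y_i | x_i, η⁽ʲ⁾, θ⁽ʲ⁾, M) for S data points (rows)
# and N posterior draws (columns). Eq 26 is a log-mean-exp over the draws,
# averaged over the data points; the log-mean-exp is computed stably.
function lppd(loglik)
    S = size(loglik, 1)
    logmeanexp(v) = (m = maximum(v); m + log(mean(exp.(v .- m))))
    return mean(logmeanexp(loglik[i, :]) for i in 1:S)
end

# Toy stand-ins: 20 observations and 1000 posterior draws of the mean of a N(μ, 1) model.
μ_draws = 0.1 .* randn(1000)
y = randn(20)
loglik = [-0.5 * (yi - μ)^2 - 0.5 * log(2π) for yi in y, μ in μ_draws]
lppd(loglik)
```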
\[\begin{split}\text{lppd}&=\frac{1}{S}\sum_{i=1}^{S}\log p(y=y_{i}|x=x_{i},D,\mathcal{M})\\ &=\frac{1}{S}\sum_{i=1}^{S}\log\int p(y=y_{i}|x=x_{i},\eta,\theta,\mathcal{M})\,p(\eta,\theta|D,\mathcal{M})\,d\theta\\ &\approx\frac{1}{S}\sum_{i=1}^{S}\log\Big(\frac{1}{N}\sum_{j=1}^{N}p(y=y_{i}|x=x_{i},\eta=\eta^{(j)},\theta=\theta^{(j)},\mathcal{M})\Big)\end{split} \tag{26}\]

\[\begin{split}\text{loo-lpd}&=\frac{1}{S}\sum_{i=1}^{S}\log p(y=y_{i}|x=x_{i},D=D_{-i},\mathcal{M})\\ &=\frac{1}{S}\sum_{i=1}^{S}\log\int p(y=y_{i}|x=x_{i},\eta,\theta,\mathcal{M})\cdot p(\eta,\theta|D=D_{-i},\mathcal{M})\,d\theta\\ &\approx\frac{1}{S}\sum_{i=1}^{S}\log\Big(\frac{1}{N}\sum_{j=1}^{N}p(y=y_{i}|x=x_{i},\eta=\eta_{-i}^{(j)},\theta=\theta_{-i}^{(j)},\mathcal{M})\Big)\end{split} \tag{27}\]

#### 4.8.2 Leave-Future-K-Out Crossvalidation

When working with time-series data, it can often be more useful to evaluate models based on their ability to predict future values using nothing but past values for training. This gives rise to another variant of crossvalidation called leave-future-one-out (lfoo) crossvalidation and the lfoo-lpd which is defined in Equation 28, where \(t\) is the minimum number of data points used for training/inference, \(D_{1:i-1}\) is the past data and \((\eta_{-(i:S)}^{(j)},\theta_{-(i:S)}^{(j)})\) is the \(j^{th}\) sample draw from the posterior \(p(\eta,\theta|D=D_{1:i-1},\mathcal{M})\) which is obtained by excluding the future data \(D_{i:S}\) from the inference.

#### 4.8.3 Crossvalidation for Hierarchical Models

When performing crossvalidation in a hierarchical model, there are multiple ways to measure the predictive power of the model. For instance in hierarchical pharmacometric modeling, the goal is to learn a population model to make predictions on new patients while simultaneously learning subject-specific models to make future predictions for specific subjects having seen their past response to drugs. These models are useful for dose selection and dose adaptation for new or existing patients with the objective of maximizing the therapeutic effect while avoiding toxicity. Depending on the prediction task of interest, one may choose to treat each time observation as a data point or each entire patient/subject as a data point. If the goal is to evaluate the model's ability to predict responses for new patients, leave-one-subject-out crossvalidation should be used. Alternatively, if the goal is to evaluate the model's ability to predict future drug concentrations or any other observable time-dependent quantity the model predicts, then leaving future observations out for each subject makes more sense. This will be called leave-one-observation-out or leave-one-future-observation-out crossvalidation. The choice of what constitutes a point to leave out when doing crossvalidation affects the way the predictive likelihoods are computed:

\[p(y=y_{i}|x=x_{i},\eta=\eta^{(j)},\theta=\theta^{(j)},\mathcal{M})\]

When leaving subjects out, we are often interested in the marginal likelihood of this subject given a posterior sample draw of the population parameters \(\theta\), marginalizing out the subject-specific parameters \(\eta\). Alternatively, the conditional likelihood can also be used for some default or typical values of the subject-specific parameters, e.g. the mode of the prior distribution. To marginalize subject-specific parameters, approximate integration methods such as LaplaceI and FOCE can be used to obtain the marginal likelihood.
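Whichever unit is chosen as a "data point" (whole subjects or individual observations), the bookkeeping reduces to building index sets of held-out rows before re-fitting or re-weighting. A small sketch with invented subject and time vectors:

```
# Two ways of defining crossvalidation "points" for a longitudinal dataset,
# given hypothetical vectors `subject` (subject id per row) and `time` (observation time per row).

subject = [1, 1, 1, 2, 2, 2, 3, 3, 3]
time    = [0, 1, 2, 0, 1, 2, 0, 1, 2]

# (a) Leave-one-subject-out: each fold holds out all rows of one subject.
loso_folds = [findall(==(s), subject) for s in unique(subject)]

# (b) Leave-future-observations-out: each fold holds out everything from a time cutoff
#     onwards, training only on earlier data.
cutoffs = sort(unique(time))[2:end]
lfo_folds = [findall(t -> t >= c, time) for c in cutoffs]

loso_folds   # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
lfo_folds    # [[rows with time ≥ 1], [rows with time ≥ 2]]
```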
On the other hand, when leaving observations out in a single subject, the quantity of interest is often the conditional likelihood of the left-out observations given each sample from the joint posterior of the population and subject-specific parameters, where that posterior is conditioned on the subject's previous data.

#### 4.8.4 Pareto Smoothed Importance Sampling Crossvalidation

Evaluating the loo-lpd or lfoo-lpd is expensive since one needs to draw samples from \(S\) or \(S-t\) different posteriors, e.g. from \(p(\eta,\theta|D=D_{-i},\mathcal{M})\) for loo-lpd. Typically this will be done by MCMC, e.g. the NUTS algorithm, which in spite of recent progress, remains computationally expensive when the number of parameters is large and the curvature of the posterior is uneven along one or more dimensions. One approach to overcome this difficulty is the Pareto-smoothed importance sampling method for leave-one-out crossvalidation (PSIS-LOO-CV) (Vehtari et al, 2015). In PSIS-LOO-CV, MCMC is run only once on the full data. The same samples are then re-used in each outer iteration of CV but using different weights. The weights are determined using importance sampling (IS) by comparing the likelihood with one data point left out to the likelihood of the full dataset. The raw importance weights are then smoothed by fitting them to a generalized Pareto distribution. The smoothed weights can then be used to estimate the ELPD contribution of each data point. Besides the ability to approximate the ELPD, PSIS-LOO-CV also provides a useful diagnostic which is the shape parameter of the Pareto distribution fitted to the raw weights when leaving out each data point. Data points that, when removed, lead to a large shape parameter are more influential than data points which have a low shape parameter. For highly influential points where the Pareto shape parameter is higher than 0.7, the ELPD contribution can be considered unreliable. In those cases, resampling from the posterior after removing the influential point is recommended.

## 5 Example Models

Listings 29, 30, 31 and 32 are examples of some common models in pharmacometrics.
```
@model begin
  @param begin
    tvcl ~ LogNormal(log(10), 0.25)     # CL
    tvq ~ LogNormal(log(15), 0.5)       # Q
    tvc ~ LogNormal(log(35), 0.25)      # V1
    tvp ~ LogNormal(log(105), 0.5)      # V2
    tvka ~ LogNormal(log(2.5), 1)       # ka
    σ ~ truncated(Cauchy(), 0, Inf)     # sigma
  end
  @pre begin
    CL = tvcl
    Vc = tvc
    Q = tvq
    Vp = tvp
    Ka = tvka
  end
  @dynamics begin
    Depot' = -Ka * Depot
    Central' = Ka * Depot - (CL + Q) / Vc * Central + Q / Vp * Peripheral
    Peripheral' = Q / Vc * Central - Q / Vp * Peripheral
  end
  @derived begin
    cp := @. Central / Vc
    conc ~ @. LogNormal(log(cp), σ)
  end
end
```
Listing 29: Single Subject PK Model
```
@model begin
  @param begin
    tvcl ~ LogNormal(log(10), 0.25)     # CL
    tvq ~ LogNormal(log(15), 0.5)       # Q
    tvc ~ LogNormal(log(35), 0.25)      # V1
    tvp ~ LogNormal(log(105), 0.5)      # V2
    tvka ~ LogNormal(log(2.5), 1)       # ka
    σ ~ truncated(Cauchy(0, 5), 0, Inf) # sigma
    C ~ LKJCholesky(5, 1.0)
    ω ~ Constrained(
      MvNormal(zeros(5), Diagonal(0.4^2 * ones(5))),
      lower = zeros(5),
      upper = fill(Inf, 5),
      init = ones(5),
    )
  end
  @random begin
    ηstd ~ MvLogNormal(I(5))
  end
  @pre begin
    η = ω .* (getchol(C).L * ηstd)
    CL = tvcl * η[1]
    Q = tvq * η[2]
    Vc = tvc * η[3]
    Vp = tvp * η[4]
    Ka = tvka * η[5]
  end
  @dynamics begin
    Depot' = -Ka * Depot
    Central' = Ka * Depot - (CL + Q) / Vc * Central + Q / Vp * Peripheral
    Peripheral' = Q / Vc * Central - Q / Vp * Peripheral
  end
  @derived begin
    cp := @. Central / Vc
    conc ~ @. LogNormal(log(cp), σ)
  end
end
```
Listing 30: Population PK Model
```
@model begin
  @param begin
[MISSING_PAGE_POST]
  end
  @random begin
    ηKa ~ LogNormal(0.0, sqrt(ω²Ka))
    ηKe ~ LogNormal(0.0, sqrt(ω²Ka))
    ηVd ~ LogNormal(0.0, sqrt(ω²Vd))
    ηn ~ LogNormal(0.0, sqrt(ω²n))
    ηδ ~ LogNormal(0.0, sqrt(ω²δ))
    ηc ~ LogNormal(0.0, sqrt(ω²c))
    ηEC50 ~ LogNormal(0.0, sqrt(ω²EC50))
  end
  @pre begin
```
```
@model begin
  @param begin
    λ₁ ~ LogNormal(0.0, 2.0)   # basal hazard
    β ~ LogNormal(0.0, 2.0)    # fixed effect DOSE
    ω ~ LogNormal(0.0, 2.0)    # inter-subject variability
  end
  @random begin
    η ~ LogNormal(0.0, ω)
  end
  @covariates DOSE
  @pre begin
    _λ₁ = λ₁ * η           # basal hazard with inter-subject variability
    _λ₀ = _λ₁ * β^DOSE     # total hazard
  end
  @vars begin
    λ = _λ₀
  end
  @dynamics begin
    Λ' = λ
  end
  @derived begin
    dv ~ @. TimeToEvent(λ, Λ)
  end
end
```
Listing 32: Time-To-Event Model

## 6 Conclusion

In this work, we presented a comprehensive Bayesian analysis workflow using Pumas. All the syntax and relevant theory were presented following an intuition-first approach as much as possible, with numerous cross-links. If you are an existing Pumas user and you have further questions, you can reach out to us via the Pumas Discourse platform (discourse.pumas.ai). You can also find more focused tutorials and complete scripts on the Pumas tutorials website (tutorials.pumas.ai) for your continued learning. If after reading this paper, you would like to read and learn more about Bayesian statistics, the following are some excellent resources that can be used to further your learning: 1. Bayesian Data Analysis (Gelman et al, 2013) 2.
Statistical Rethinking: A Bayesian Course with Examples in R and Stan (McElreath, 2020), 3. Regression and Other Stories (Gelman et al, 2020), 4. Data Analysis Using Regression and Multilevel/Hierarchical Models (Gelman and Hill, 2006), 5. Probabilistic Machine Learning: An Introduction (Murphy, 2022) 6. Probability Theory: The Logic of Science (Jaynes, 2003) ## 7 Acknowledgements We would like to acknowledge all of the reviewers of the early drafts of this work for their valuable feedback. In particular, we would like to thank **Haden Bunn** (Pumas-AI Inc.), **Joga Gobburu** (Pumas-AI Inc. and the University of Maryland Baltimore), **Yoni Nazarathy** (Pumas-AI Inc. and the University of Queensland), **Vaibhav Dixit** (Pumas-AI Inc.), **Russell Tsuchida** (CSIRO Data61), **Anastasios Panagiotelis** (University of Sydney) and **Mutaz Jaber** (Gilead Sciences Inc.).
2309.12872
Deep regression learning with optimal loss function
In this paper, we develop a novel efficient and robust nonparametric regression estimator under a framework of feedforward neural network. There are several interesting characteristics for the proposed estimator. First, the loss function is built upon an estimated maximum likelihood function, who integrates the information from observed data, as well as the information from data structure. Consequently, the resulting estimator has desirable optimal properties, such as efficiency. Second, different from the traditional maximum likelihood estimation (MLE), the proposed method avoid the specification of the distribution, hence is flexible to any kind of distribution, such as heavy tails, multimodal or heterogeneous distribution. Third, the proposed loss function relies on probabilities rather than direct observations as in least squares, contributing the robustness in the proposed estimator. Finally, the proposed loss function involves nonparametric regression function only. This enables a direct application of existing packages, simplifying the computation and programming. We establish the large sample property of the proposed estimator in terms of its excess risk and minimax near-optimal rate. The theoretical results demonstrate that the proposed estimator is equivalent to the true MLE in which the density function is known. Our simulation studies show that the proposed estimator outperforms the existing methods in terms of prediction accuracy, efficiency and robustness. Particularly, it is comparable to the true MLE, and even gets better as the sample size increases. This implies that the adaptive and data-driven loss function from the estimated density may offer an additional avenue for capturing valuable information. We further apply the proposed method to four real data examples, resulting in significantly reduced out-of-sample prediction errors compared to existing methods.
Xuancheng Wang, Ling Zhou, Huazhen Lin
2023-09-22T13:53:25Z
http://arxiv.org/abs/2309.12872v1
# Deep regression learning with optimal loss function ###### Abstract Due to powerful function fitting ability and effective training algorithms of neural networks, in this paper, we develop a novel efficient and robust nonparametric regression estimator under a framework of feedforward neural network (FNN). There are several interesting characteristics for the proposed estimator. First, the loss function is built upon an estimated maximum likelihood function, who integrates the information from observed data, as well as the information from data structure. Consequently, the resulting estimator has desirable optimal properties, such as efficiency. Second, different from the traditional maximum likelihood estimation (MLE), we do not require the specification of the distribution, hence the proposed estimator is flexible to any kind of distribution, such as heavy tails, multimodal or heterogeneous distribution. Third, the proposed loss function relies on probabilities rather than direct observations as in least square loss, hence contributes the robustness in the proposed estimator. Finally, the proposed loss function involves nonparametric regression function only. This enables the direct application of the existing packages, and thus the computation and programming are simple. We establish the large sample property of the proposed estimator in terms of its excess risk and minimax near-optimal rate. The theoretical results demonstrate that the proposed estimator is equivalent to the true MLE in which the density function is known. Our simulation studies show that the proposed estimator outperforms the existing methods in terms of prediction accuracy, efficiency and robustness. Particularly, it is comparable to the true MLE, and even gets better as the sample size increases. This implies that the adaptive and data-driven loss function from the estimated density may offer an additional avenue for capturing valuable information. We further apply the proposed method to four real data examples, resulting in significantly reduced out-of-sample prediction errors compared to existing methods. _Keywords:_ Estimated maximum likelihood estimation, feedforward neural network, excess risk, kernel density estimation. Introduction Consider a nonparametric regression model, \[Y = g(\mathbf{X})+\epsilon, \tag{1}\] where \(Y\in\mathbb{R}\) is a response variable, \(\mathbf{X}\in\mathcal{X}\subseteqq\mathbb{R}^{d}\) is a \(d\)-dimensional vector of predictors, \(g:\mathcal{X}\rightarrow\mathbb{R}\) is an unknown regression function, \(\epsilon\) is an error independent of \(\mathbf{X}\). Nonparametric regression is a basic and core problem in statistics and machine learning, where the purpose is estimating the unknown target regression function \(g\) given independent and identically distributed (i.i.d.) samples \(S\equiv\left(\mathbf{X}_{i},Y_{i}\right)_{i=1}^{n}\) with the sample size \(n\). Since the distribution of \(\epsilon\) is unknown, \(g(\cdot)\) is usually estimated based on the least square (LS) criterion, that is, \[\hat{g}=\operatorname*{arg\,min}_{g:\mathbb{R}^{d}\rightarrow\mathbb{R}}\frac{ 1}{n}\sum_{i=1}^{n}\left\{Y_{i}-g(\mathbf{X}_{i})\right\}^{2}. \tag{2}\] Driven by various nonparametric approximation techniques, there is a vast literature on nonparametric regression. 
For example, tree regression (Breiman, 2017), random forests (Breiman, 2001), and nonparametric smoothing methods such as nearest neighbor regression (Cheng, 1984; Devroye et al., 1994), kernel regression (Nadaraya, 1964; Watson, 1964; Hall and Huang, 2001), local polynomial regression (Fan and Gijbels, 2018), spline approximation (Schumaker, 2007) and reproducing kernel regression (Berlinet and Thomas-Agnan, 2011; Lv et al., 2018), among others. Recently, attributed to powerful function fitting ability, well-designed neural network architectures and effective training algorithms and high-performance computing technologies, deep neural network (DNN) with the empirical LS loss function has enjoyed tremendous success in a variety of applications, such as the fields of computer vision, natural language processing, speech recognition, among others. Based on the theoretical results concerning approximation error and stochastic error, with the LS loss, several inspiring works have obtained the minimax near-optimal rate at \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{s}\) for learning the regression function \(g\) under feedforward neural network (FNN), with the assumption that \(g\) is \(\beta\)-H\(\ddot{o}\)lder smooth. In these works, the response variable or the error term is assumed to be bounded (Gyorfi et al., 2002; Farrell et al., 2021), have finite \(p\)-th moment with \(p>1\)(Kohler and Langer, 2021; Kohler et al., 2022), sub-Gaussian (Bauer and Kohler, 2019; Chen et al., 2019; Schmidt-Hieber, 2019; 2020; Fan and Gu, 2022; Bhattacharya et al., 2023), sub-exponential (Jiao et al., 2021; Yan and Yao, 2023) or have finite variance (Liu et al., 2022). The LS criterion based estimators are mathematically convenient, easily implemented, and efficient when the error \(\epsilon\) is normally distributed. However, as it is expressed in (2), the LS loss is sensitive to large errors, that is, the LS estimator is severely influenced by outliers, resulting in unstable and unreliable estimation. In the era of "big data", data generation mechanism and collection are unmanageable, and thus non-Gaussian noises or outliers are almost inevitable. To address the unstableness, a lot of robust methods based on traditional nonparametric regression techniques have been developed, for example, the kernel M-smoother (Hardle, 1989), median smoothing(Tukey et al., 1977), locally weighted regression (Stone, 1977; Cleveland, 1979), the local least absolute method (Wang and Scott, 1994), quantile regression (Koenker and Bassett Jr, 1978; He et al., 2013; Lv et al., 2018), among others. Recently, within the framework of FNN, several robust methods have been introduced to address non-Gaussian noise problems, and corresponding convergence rates for learning the function \(g\) have also been established. For instance, Lederer (2020); Shen et al. (2021) and Fan et al. (2022) have explored non-asymptotic error bounds of the estimators that minimizing robust loss functions, such as the least-absolute deviation loss (Bassett Jr and Koenker, 1978), Huber loss (Huber, 1973), Cauchy loss and Tukey's biweight loss (Beaton and Tukey, 1974). Particularly, based on a general robust loss function satisfying a Lipschitz continuity, Farrell et al. (2021) have demonstrated the convergence rate \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{4}\) with the assumption that the response is bounded, which means that heavy-tail error is not applicable. To relax the bounded restriction on the response, Shen et al. (2021b) and Shen et al. 
(2021a) have established the convergence rate \(n^{-\frac{2\beta}{2\beta+d}+1/p}(\log n)^{c}\) under the assumption that the \(p\)-th moment of response is bounded for some \(p>1\). These methods are proposed for improving the robustness, they are sub-optimal in terms of efficiency. This work attempts to provide a loss function which is efficient as well as robust for nonparametric regression estimation within the framework of FNN. It is worth noting that in the least squares (LS) criterion, observations in which the response variable \(Y_{i}\) deviates significantly from the conditional mean \(g(\mathbf{X}_{i})\) play a significant role, which may seem counterintuitive. In fact, when estimating the conditional mean \(g(\cdot)\), observations in which \(Y_{i}\) is closer to \(g(\mathbf{X}_{i})\) are supposed to logically carry more information than those where the response is away from the conditional mean. This can be expressed in terms of probability of observation. Therefore, we propose a loss function based on the estimated likelihood function, which has the form of \[\hat{g}=\operatorname*{arg\,max}_{g:\mathbb{R}^{d}\to\mathbb{R}}\frac{1}{n} \sum_{i=1}^{n}\log\hat{f}(Y_{i}-g(\mathbf{X}_{i})), \tag{3}\] where \(\hat{f}\) is an estimator of the density function of \(\epsilon_{i}=Y_{i}-g(\mathbf{X}_{i})\). We simplify the FNN estimators of \(g(\cdot)\) based on maximizing Estimated log-Likelihood functions (3) by EML-FNN, which is expected to have the desirable optimal properties since we use the density function and leverage the data distribution. In addition, different from the traditional maximum likelihood estimator (MLE) where \(f(\cdot)\) is known, the proposed EML-FNN is flexible as it avoids specifying the error distribution. Moreover, the quasi-likelihood loss (3), which relies on probabilities rather than direct observations as in LSE, contributes the robustness of the proposed EML-FNN. More interesting, in comparison to the MLE where \(f(\cdot)\) is known, the adaptive form via estimating the density \(f(\cdot)\) in (3) proves to be effective in learning the data structure and offers an additional avenue for capturing information. This is supported by our simulation studies. Specifically, Figures 1 to 3 reveal the following results: when \(\varepsilon_{i}\) follows a normal distribution where the LSE is equivalent to the MLE, the EML-FNN performs slightly better than FNN estimators based on Least Square Error (LSE-FNN) for data with a larger sample size (\(n=1024\)). However, when \(\varepsilon_{i}\) deviates from a normal distribution or has heterogeneous variances, the EML-FNN significantly outperforms the LSE-FNN. The enhanced performance may be attributed to the utilization of structural information via the estimated density function \(\hat{f}(\cdot)\). With the explicit form of Nadaraya-Watson kernel estimator for the density function of \(\epsilon_{i}\), we develop a FNN estimation for \(g\) that circumvents estimating the unknown density function, resulting in an objective function that solely involves \(g\). This enables the direct application of existing packages Pytorch (Paszke et al., 2019) and Scikit-learn (Pedregosa et al., 2011) in python, simplifying the computation and programming. We establish the large sample property of \(\hat{g}\) in terms of its excess risk and the minimax rate, which demonstrate that the proposed estimator for \(g\) is equivalent to the one based on (3) when the density function is known. 
As a result, the proposed deep learning approach for \(g\) exhibits the desired optimal properties, such as efficiency (Zhou et al., 2018, 2019). Finally, we employ the proposed method to analyze four real datasets. Table 1 shows that the proposed EML-FNN provides much higher prediction accuracy than the existing methods for each dataset. The paper is structured as follows. In Section 2, we introduce the proposed EML-FNN. In Section 3, we establish the large sample property of \(\hat{g}\) in terms of its excess risk and the minimax rate. Section 4 provides simulation studies to investigate the performance of the proposed method via comparison with competing estimation methods. In Section 5, we apply the proposed method to analyze four real datasets. We conclude the paper with a discussion in Section 6. Technical proofs are included in the Supplementary Material.

## 2 Method

We estimate \(g\) under the framework of FNN. In particular, we set \(\mathcal{G}\) to be a function class consisting of ReLU neural networks, that is, \(\mathcal{G}:=\mathcal{G}_{\mathcal{D},\mathcal{U},\mathcal{W},\mathcal{S},\mathcal{B}}\), where the input data is the predictor \(X\), forming the first layer, and the output is the last layer of the network. Such a network \(\mathcal{G}\) has \(\mathcal{D}\) hidden layers and a total of \((\mathcal{D}+2)\) layers. Denote the width of layer \(j\) by \(d_{j}\), \(j=0,\cdots,\mathcal{D},\mathcal{D}+1\), with \(d_{0}=d\) representing the dimension of the input \(X\) and \(d_{\mathcal{D}+1}=1\) representing the dimension of the response \(Y\). The width \(\mathcal{W}\) is defined as the maximum width among the hidden layers, i.e., \(\mathcal{W}=\max\left(d_{1},...,d_{\mathcal{D}}\right)\). The size \(\mathcal{S}\) is defined as the total number of parameters in the network \(\mathcal{G}\), given by \(\mathcal{S}=\sum_{i=0}^{\mathcal{D}}d_{i+1}\times(d_{i}+1)\). The number of neurons \(\mathcal{U}\) is defined as the total number of computational units in the hidden layers, given by \(\mathcal{U}=\sum_{i=1}^{\mathcal{D}}d_{i}\). Further, we assume every function \(g\in\mathcal{G}\) satisfies \(|g|_{\infty}\leq\mathcal{B}\) with \(\mathcal{B}\) being a positive constant. With \(g\in\mathcal{G}\), \(g\) can be estimated by \[\arg\min_{g\in\mathcal{G}}\left\{\frac{1}{n}\sum_{i=1}^{n}\rho(Y_{i}-g(\boldsymbol{X}_{i}))\right\}, \tag{4}\] where \(\rho(\cdot)\) is a given loss function, for example, the least squares loss \(\rho(t)=t^{2}\), the least absolute criterion \(\rho(t)=|t|\), the Huber loss, the Cauchy loss, Tukey's biweight loss, and so on. The LS-based estimator is efficient only when the error \(\epsilon\) is normally distributed. The estimators based on robust loss functions such as least absolute, Huber, Cauchy and Tukey's biweight are robust but sub-optimal in terms of efficiency. When \(f(\cdot)\) is known, an ideal estimator of \(g\) can be obtained by \[\hat{g}=\arg\min_{g\in\mathcal{G}}\mathcal{R}_{n}(g):=\arg\min_{g\in\mathcal{G}}\left\{\frac{1}{n}\sum_{i=1}^{n}\left(-\log f(Y_{i}-g(\mathbf{X}_{i}))\right)\right\}. \tag{5}\] However, in reality, \(f\) is usually unknown. To ensure that we do not misspecify the distribution and simultaneously obtain an estimator based on the optimal loss, we employ kernel techniques to estimate the density function \(f\).
That is, \[\hat{f}(z)=\frac{1}{n}\sum_{i=1}^{n}\mathcal{K}_{h}(\epsilon_{i},z), \tag{6}\] where \(\mathcal{K}_{h}(y_{1},y_{2})=K(\frac{y_{1}-y_{2}}{h})/h\), \(h\) is a bandwidth and \(K(\cdot)\) is a kernel function. Replacing \(f(\cdot)\) in (5) with \(\hat{f}\), we estimate \(g\) by \[\hat{g}=\arg\min_{g\in\mathcal{G}}\hat{\mathcal{R}}_{n}(g)=\arg\min_{g\in\mathcal{G}}n^{-1}\sum_{i=1}^{n}\left(-\log\hat{f}(Y_{i}-g(\mathbf{X}_{i}))\right). \tag{7}\] That is, \[\hat{g}=\arg\min_{g\in\mathcal{G}}\hat{\mathcal{R}}_{n}(g)=\arg\min_{g\in\mathcal{G}}n^{-1}\sum_{i=1}^{n}\left(-\log\frac{1}{n}\sum_{j=1}^{n}\mathcal{K}_{h}(Y_{j}-g(\mathbf{X}_{j}),Y_{i}-g(\mathbf{X}_{i}))\right). \tag{8}\] Recall that the conventional FNN (Chen et al., 2019; Nakada and Imaizumi, 2019; Schmidt-Hieber, 2020; Kohler and Langer, 2021; Jiao et al., 2021; Kohler et al., 2022; Liu et al., 2022; Fan and Gu, 2022; Yan and Yao, 2023; Bhattacharya et al., 2023) minimizes a least squares objective and is sensitive to the data's distribution type and to outliers, which has led to the development of robust FNNs (Lederer, 2020; Shen et al., 2021; Fan et al., 2022). However, the enhanced robustness comes at the cost of efficiency. In contrast to existing methods, our approach stands out by utilizing an MLE criterion as the objective function, thereby achieving both efficiency and robustness. In particular, efficiency is attained by fully leveraging the data distribution, and robustness is gained because our proposed loss function relies on probabilities rather than direct observations as in the LS. Moreover, the kernel-based estimation approach benefits from the smooth continuity of kernel functions, facilitating gradient calculations and overcoming non-differentiability issues when dealing with densities such as the uniform distribution, mixture distributions, and heteroscedasticity. Finally, the proposed loss function (8) involves \(g\) only. This enables the direct application of the packages Pytorch (Paszke et al., 2019) and Scikit-learn (Pedregosa et al., 2011) in Python, simplifying the computation and programming. The proposed \(\hat{\mathcal{R}}_{n}(g)\) involves a tuning parameter \(h\). According to the properties of kernel approximation, a smaller \(h\) yields a more accurate density approximation but with a larger variance. Fortunately, the summation over individuals mitigates the increased variance caused by a small \(h\). Therefore, when computational feasibility allows, a smaller value of \(h\) is preferred. This conclusion is supported by both our theoretical and numerical results. In practice, we use the Gaussian kernel function and set \(\hat{f}=10^{-5}\) whenever \(\hat{f}<10^{-5}\), because a logarithmic transformation is required in the objective function.

## 3 Large sample properties

In this section, we establish the large sample property of \(\hat{g}\) in terms of its excess risk, which is defined as the difference between the risks of \(g\) and \(g^{*}\): \[\mathcal{R}(g)-\mathcal{R}(g^{*})=\mathbb{E}\left(-\log f(Y_{i}-g(\mathbf{X}_{i}))\right)-\mathbb{E}\left(-\log f(Y_{i}-g^{*}(\mathbf{X}_{i}))\right),\] where \(g^{*}\) is defined as \[g^{*}:=\arg\min_{g}\mathcal{R}(g)=\arg\min_{g}\mathbb{E}\left(-\log f\left(Y_{i}-g(\mathbf{X}_{i})\right)\right).\] The minimizer is taken over the entire space, which implies that \(g^{*}\) does not necessarily belong to the set \(\mathcal{G}\). We further define \(g^{*}_{\mathcal{G}}:=\arg\min_{g\in\mathcal{G}}\mathcal{R}(g)\) in the set \(\mathcal{G}\).
Denote \(f^{(r)}(\cdot)\) to be the \(r\)th derivative of \(f\), and \(f_{\mathbf{x}}(\cdot)\) to be the density function of the covariates \(\mathbf{X}\), which is supported on a bounded set; for simplicity, we assume this bounded set to be \([0,1]^{d}\). In the rest of the paper, the symbol \(c\) denotes a positive constant which may vary across different contexts. The following conditions are required for establishing the rate of the excess risk: 1. Kernel: Let \(U_{r}=\int K(t)t^{r}dt\) and \(v_{r}=\int K^{2}(t)t^{r}dt\). Assume the kernel function \(K(\cdot)\) has a bounded second derivative, \(U_{0}=1\) and \(U_{1}=0\). 2. Bandwidth: \(h\to 0\) and \(nh\to\infty\) as \(n\to\infty\). 3. Density function \(f\): Assume the density function \(f(\cdot)\) has a continuous second derivative and satisfies \(f(\epsilon)>c>0\) for any \(\epsilon\) belonging to the support set of \(f\). 4. Function class for \(g\) and \(g^{*}\): For any function \(g\in\mathcal{G}\) and the true function \(g^{*}\), we assume \(\|g\|_{\infty}<\mathcal{B}\) and \(\|g^{*}\|_{\infty}<\mathcal{B}\). Condition (C1) is a mild condition on the kernel function, which is easily satisfied when the kernel function is a symmetric density function. Condition (C2) is the most commonly used assumption for the bandwidth. Condition (C3) requires a lower bound on the density function to avoid tail-related problems. The simulation studies, in which the lower-bound condition is not met for any of the four error distributions, demonstrate that the proposed estimator maintains its effectiveness even in scenarios where the condition does not hold. Condition (C4) is a boundedness condition on the function class \(\mathcal{G}\) and the true function \(g^{*}\), which is commonly used in Shen et al. (2019); Lu et al. (2021); Chen et al. (2019); Yarotsky (2017). It is noteworthy that, in cases where an explicit characterization of the approximation error of the function class \(\mathcal{G}\) to \(g^{*}\) becomes necessary, an additional condition will be introduced concerning the class to which \(g^{*}\) belongs. This is demonstrated in Corollary 1. Define \(\mathcal{G}|_{\mathbf{x}}:=\{g(\mathbf{x}_{1}),g(\mathbf{x}_{2}),\cdots,g(\mathbf{x}_{n}):g\in\mathcal{G}\}\) for a given sequence \(\mathbf{x}=(\mathbf{x}_{1},\cdots,\mathbf{x}_{n})\) and denote \(\mathcal{N}_{2n}(\delta,\|\cdot\|_{\infty},\mathcal{G}|_{\mathbf{x}})\) to be the covering number of \(\mathcal{G}|_{\mathbf{x}}\) under the norm \(\|\cdot\|_{\infty}\) with radius \(\delta\). Let \(A\preceq B\) represent \(A\leq cB\) for a positive constant \(c\). In the following Theorems 1 and 2, we present the excess risk of the estimator under the true density function and under the estimated density function, respectively, to see how much the proposed estimator differs from the oracle MLE estimator, which is defined as \[\hat{g}_{oracle}=\arg\min_{g\in\mathcal{G}}\left\{\frac{1}{n}\sum_{i=1}^{n}\big{(}-\log f(Y_{i}-g(\mathbf{X}_{i}))\big{)}\right\}.
\tag{9}\]

**Theorem 1**.: _Under Conditions (C3) and (C4), we have that, as \(n\to\infty\),_ \[\mathbb{E}\left(\mathcal{R}(\hat{g}_{oracle})-\mathcal{R}(g^{*})\right)\preceq\frac{\log\mathcal{N}_{2n}(n^{-1},\|\cdot\|_{\infty},\mathcal{G}|_{\mathbf{x}})}{n}+\big{(}\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*})\big{)}.\]

Recall that the excess risk of the LS estimator takes the form \(\frac{\mathcal{B}^{2}\log 2\mathcal{N}_{2n}(n^{-1},|\cdot|_{\infty},\mathcal{G}|_{\mathbf{x}})(\log n)^{c}}{n}+\big{(}\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*})\big{)}\) for some positive constant \(c\), under the condition of a bounded response (Gyorfi et al., 2002; Farrell et al., 2021) or a bounded \(p\)-th moment (Kohler and Langer, 2021; Kohler et al., 2022). For the robust loss considered in Shen et al. (2021), the excess risk has the form \(\frac{\lambda_{L}\mathcal{B}\log 2\mathcal{N}_{2n}(n^{-1},|\cdot|_{\infty},\mathcal{G}|_{\mathbf{x}})(\log n)^{c}}{n^{1-1/p}}+\big{(}\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*})\big{)}\), where \(p\) represents the bounded \(p\)-th moment of the outcome, and \(\lambda_{L}\) represents the Lipschitz coefficient of the robust loss function. Clearly, the oracle estimator \(\hat{g}_{oracle}\) presents a slightly more favorable excess risk bound than the LS estimator, as it lacks the \((\log n)^{c}\) multiplier. Additionally, our estimator converges faster than the robust estimators in estimation error: a rate of \((\log n)^{c}/n^{1-1/p}\) for the robust estimators versus a rate of \(1/n\) for our estimator. It is important to highlight that, unlike the requirement of a Lipschitz condition for the robust loss, we instead invoke the lower-bound condition (C3) on the density function. The introduction of a lower bound on the density function is helpful for the stability of our estimator. On the other hand, by leveraging the inherent benefits of the density function, our proposed estimator exemplifies a harmonious blend of robustness and efficiency that is crucial for practical applications.

**Theorem 2**.: _For the proposed estimator \(\hat{g}\), under conditions (C1)-(C4), we have_ \[\mathbb{E}\left(\mathcal{R}(\hat{g})-\mathcal{R}(g^{*})\right)\preceq\left(\frac{\log\mathcal{N}_{2n}(\frac{1}{n},\|\cdot\|_{\infty},\mathcal{G}|_{\mathbf{x}})}{n}\right)+\left(\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*})\right)+\left(\|g^{*}_{\mathcal{G}}-g^{*}\|_{\infty}^{2}+h^{2}\right).\]

Theorems 1 and 2 show that the upper bounds of the excess risk for both \(\hat{g}_{oracle}\) and \(\hat{g}\) encompass two common terms: \(\frac{\log\mathcal{N}_{2n}(\frac{1}{n},\|\cdot\|_{\infty},\mathcal{G}|_{\mathbf{x}})}{n}\) and \(\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*})\), which represent the estimation error of \(\hat{g}\) evaluated at the true density function \(f\) and the approximation bias of the FNN space towards the true function \(g^{*}\), respectively. The disparity in excess risks between \(\hat{g}_{oracle}\) and \(\hat{g}\) is encapsulated in \(\|g^{*}_{\mathcal{G}}-g^{*}\|_{\infty}^{2}+h^{2}\), which describes the error introduced by substituting \(f\) with its kernel estimator \(\hat{f}\). This error implies that utilizing the kernel estimator \(\hat{f}\) in lieu of \(f\) does not introduce additional variance.
However, it does lead to a significant approximation bias when a larger value of \(h\) is used, thus advocating the preference for a smaller value of \(h\) to mitigate this bias. In particular, the bias is negligible if \(h^{2}\preceq\frac{\log\mathcal{N}_{2n}(\frac{1}{n},\|\cdot\|_{\infty},\mathcal{G}|_{\mathbf{x}})}{n}\) and the FNN function closely approximates the true function \(g^{*}\). The former can be satisfied by taking a small \(h\), and the latter holds due to the powerful function-fitting ability of the FNN. The simulation studies in Section 4 further confirm this conclusion. With the discussion above, we hence investigate the efficiency of the proposed estimator via that of the oracle estimator \(\hat{g}_{oracle}\). For simplicity, we assume \(g^{*}=g^{*}_{\mathcal{G}}\), that is, the true function belongs to the FNN space. Recall that for \(g\in\mathcal{G}\), we have \[g(x)=\mathbf{W}_{\mathcal{D}}^{\top}\sigma\left(\mathbf{W}_{\mathcal{D}-1}^{\top}\sigma(\mathbf{W}_{\mathcal{D}-2}^{\top}\sigma(\mathbf{W}_{\mathcal{D}-3}\cdots\sigma(\mathbf{W}_{0}^{\top}\mathbf{X}+\mathbf{a}_{0}))+\mathbf{a}_{\mathcal{D}-2})+\mathbf{a}_{\mathcal{D}-1}\right)+a_{\mathcal{D}},\] where \(\sigma(\cdot)\) is a given activation function and \(\mathbf{W}_{r},\mathbf{a}_{r},r=0,\cdots,\mathcal{D}\) are parameters. Then, we can write \(g(\mathbf{x})=g(\mathbf{x};\mathbf{\theta})\) with \(\mathbf{\theta}\) collecting the parameters \(\mathbf{W}_{r},\mathbf{a}_{r},r=0,\cdots,\mathcal{D}\). Denote \(g^{*}(\mathbf{x})=g(\mathbf{x};\mathbf{\theta}^{*})\). We can obtain that \(\hat{g}_{oracle}(\mathbf{x})=g(\mathbf{x};\hat{\mathbf{\theta}}_{o})\) with \(\hat{\mathbf{\theta}}_{o}\) satisfying \(\hat{\mathbf{\theta}}_{o}=\arg\min_{\mathbf{\theta}}\left\{\frac{1}{n}\sum_{i=1}^{n}\big{(}-\log f(Y_{i}-g(\mathbf{X}_{i};\mathbf{\theta}))\big{)}\right\}.\) If \(\mathbb{E}\left[\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)^{\top}\right]\) is positive definite around \(\mathbf{\theta}^{*}\) and \(\mathbb{E}(\hat{\mathbf{\theta}}_{o})=\mathbf{\theta}^{*}\), we have \(\mathbb{E}(\hat{\mathbf{\theta}}_{o}\hat{\mathbf{\theta}}_{o}^{\top})=n^{-1}\left\{\mathbb{E}\left[\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)^{\top}|_{\mathbf{\theta}=\mathbf{\theta}^{*}}\right]\right\}^{-1}\) (Onzon, 2011). Then for any unbiased estimator \(\check{\mathbf{\theta}}\) such that \(\mathbb{E}(\check{\mathbf{\theta}})=\mathbf{\theta}^{*}\), the multivariate Cramér-Rao lower bound implies that \[\mathbb{E}(\check{\mathbf{\theta}}\check{\mathbf{\theta}}^{\top})\succeq n^{-1}\left\{\mathbb{E}\left[\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)\left(\frac{d\log f(Y-g(\mathbf{X};\mathbf{\theta}))}{d\mathbf{\theta}}\right)^{\top}|_{\mathbf{\theta}=\mathbf{\theta}^{*}}\right]\right\}^{-1}=\mathbb{E}(\hat{\mathbf{\theta}}_{o}\hat{\mathbf{\theta}}_{o}^{\top}),\] which implies that \(\operatorname{Var}(\check{\mathbf{\theta}})\succeq\operatorname{Var}(\hat{\mathbf{\theta}}_{o})\), where \(A\succeq B\) means that \(A-B\) is a positive semi-definite matrix. Combining this with the delta method, it holds that \(\operatorname{Var}(\check{g}):=\operatorname{Var}(g(\mathbf{x};\check{\mathbf{\theta}}))\geq\operatorname{Var}(\hat{g}_{oracle})\).
From this perspective, we can characterize \(\hat{g}_{oracle}\) as an efficient estimator, while \(\hat{g}\) also possesses such efficiency under certain straightforward conditions, such as \(h^{2}\preceq\frac{\mathcal{S}}{n}\), where \(\mathcal{S}\) is the length of \(\mathbf{\theta}\). Now, we further explore how the excess risk depends on the FNN structure, as well as on the function class to which \(g^{*}\) belongs. Let \(\beta=s+r\), \(r\in(0,1]\) and \(s=\lfloor\beta\rfloor\in\mathbb{N}_{0}\), where \(\lfloor\beta\rfloor\) denotes the largest integer strictly smaller than \(\beta\) and \(\mathbb{N}_{0}\) denotes the set of non-negative integers. For a finite constant \(B_{0}>0\), the Hölder class \(\mathcal{H}_{\beta}([0,1]^{d},B_{0})\) is defined as \[\mathcal{H}_{\beta}([0,1]^{d},B_{0})=\{g:[0,1]^{d}\mapsto\mathbb{R},\max_{\|\alpha\|_{1}<s}\|\partial^{\alpha}g\|_{\infty}\leq B_{0},\max_{\|\alpha\|_{1}=s}\sup_{x\neq y}\frac{|\partial^{\alpha}g(x)-\partial^{\alpha}g(y)|}{\|x-y\|_{2}^{r}}\leq B_{0}\}\] where \(\partial^{\alpha}=\partial^{\alpha_{1}}\cdots\partial^{\alpha_{d}}\) with \(\alpha=(\alpha_{1},\cdots,\alpha_{d})^{T}\in\mathbb{N}_{0}^{d}\) and \(\|\alpha\|_{1}=\sum_{i=1}^{d}\alpha_{i}\). Denote \(\lceil a\rceil\) to be the smallest integer no less than \(a\) and \(\mathbb{N}^{+}\) to be the set of positive integers. Based on Lemma 1 of Jiao et al. (2021) for the approximation error in terms of FNN structures and Lemma 2 of Bartlett et al. (2019) for bounding the covering number, we can conclude the following Corollary 1 from Theorem 2:

**Corollary 1**.: _Given Hölder smooth functions \(g^{*}\in\mathcal{H}_{\beta}([0,1]^{d},B_{0})\), for any \(D\in\mathbb{N}^{+}\), \(W\in\mathbb{N}^{+}\), under the conditions of Theorem 2, Lemma 1 in Jiao et al. (2021) and Lemma 2 in Bartlett et al. (2019), if the FNN with a ReLU activation function has width \(\mathcal{W}=C_{3}(\lfloor\beta\rfloor+1)^{2}d^{\lfloor\beta\rfloor+1}W\left\lceil\log_{2}(8W)\right\rceil\) and depth \(\mathcal{D}=C_{4}(\lfloor\beta\rfloor+1)^{2}D\left\lceil\log_{2}(8D)\right\rceil\), then_ \[\mathbb{E}\left(\mathcal{R}(\hat{g})-\mathcal{R}(g^{*})\right)\preceq\frac{\mathcal{S}\mathcal{D}\log(\mathcal{S})}{n}+h^{2}+(WD)^{-4\beta/d}.\]

In Corollary 1, the first term comes from the covering number of \(\mathcal{G}\), which is bounded via its VC dimension, \(\log\mathcal{N}_{2n}(\frac{1}{n},|\cdot|_{\infty},\mathcal{G}|_{\mathbf{x}})=O(\mathcal{S}\mathcal{D}\log(\mathcal{S}))\) (Bartlett et al., 2019), where \(\mathcal{S}\) and \(\mathcal{D}\) are the total number of parameters and hidden layers, respectively. The third term follows from the approximation results of Jiao et al. (2021) that \(\left\|g^{*}-g^{*}_{\mathcal{G}}\right\|_{\infty}\leq 18B_{0}(\lfloor\beta\rfloor+1)^{2}d^{\lfloor\beta\rfloor+\max\{\beta,1\}/2}(WD)^{-2\beta/d}\) and \(\mathbb{E}(\mathcal{R}(g^{*}_{\mathcal{G}})-\mathcal{R}(g^{*}))\simeq\|g^{*}_{\mathcal{G}}-g^{*}\|_{\infty}^{2}\), where \(A\simeq B\) represents \(A\preceq B\) and \(B\preceq A\). Given \(\mathcal{S}=\mathcal{O}(n^{\frac{d}{2\beta+d}}\log n)\) and \(\mathcal{D}=\log n\), following Corollary 1 and \(\mathcal{S}=O(\mathcal{W}^{2}\mathcal{D})\), it holds that \(\mathbb{E}\left(\mathcal{R}(\hat{g})-\mathcal{R}(g^{*})\right)\preceq n^{-\frac{2\beta}{2\beta+d}}(\log n)^{3}+h^{2}+n^{-\frac{2\beta}{2\beta+d}}\). Hence, we have the following Corollary 2.
**Corollary 2**.: _Under the conditions in Corollary 1, if \(h^{2}=O(n^{-\frac{2\beta}{2\beta+d}})\), then_ \[\mathbb{E}\left(\mathcal{R}(\hat{g})-\mathcal{R}(g^{*})\right)\preceq n^{-\frac{2\beta}{2\beta+d}}(\log n)^{3},\] _which is comparable to \(n^{-\frac{2\beta}{2\beta+d}}\), the lower bound of the minimax learning rate of \(g^{*}\) (Stone, 1982), i.e., \(\min_{\check{g}}\max_{g^{*}\in\mathcal{H}_{\beta}([0,1]^{d},B_{0})}\mathbb{E}\left[\int_{[0,1]^{d}}(\check{g}(\mathbf{x})-g^{*}(\mathbf{x}))^{2}f_{\mathbf{x}}(\mathbf{x})d\mathbf{x}\right]\succeq n^{-\frac{2\beta}{2\beta+d}}\), where \(\check{g}\) is an estimator of \(g^{*}\) based on the data set \(S\) and the expectation is taken with respect to the randomness of \(S\)._

It is interesting to compare several established convergence rates under the framework of FNN. In particular, using the LS loss for \(g\in\mathcal{H}_{\beta}([0,1]^{d},B_{0})\), Chen et al. (2019); Nakada and Imaizumi (2019); Schmidt-Hieber (2020); Jiao et al. (2021); Liu et al. (2022); Bhattacharya et al. (2023) and Yan and Yao (2023) have obtained an upper bound on the minimax learning rate of \(g\) at \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{s}\), which is nearly minimax optimal (Donoho et al., 1995; Stone, 1982). Using a Lipschitz continuous loss function, Farrell et al. (2021) have obtained the convergence rate \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{4}\) under a bounded response condition. Shen et al. (2021b) and Shen et al. (2021a) have obtained the convergence rate \(n^{-\frac{2\beta}{2\beta+d}+1/p}(\log n)^{c}\) under the assumption of a bounded \(p\)-th moment for some \(p>1\), which allows a heavy-tailed response \(Y\). The severity of the heavy tail decreases as \(p\) increases. In particular, if the response \(Y\) is sub-exponentially distributed (\(p=\infty\)), the convergence rate achieves the near-optimal minimax rate. Obviously, the proposed EML-FNN also enjoys a nearly minimax optimal rate under condition (C3), the lower-bound restriction on the density function. It seems that, to achieve the optimal convergence rate, a boundedness condition on the tail probability may be essential. In fact, a similar condition also appears for quantile regression: under the assumption that the conditional density of \(Y\) given \(\mathbf{X}\) around the \(\tau\)-th quantile is bounded from below, Padilla et al. (2020) have obtained the convergence rate \(n^{-\frac{2\beta}{2\beta+d}}(\log n)^{2}\) for a quantile regression model under the framework of FNN.

## 4 Simulation study

We investigate the performance of the proposed EML-FNN (abbreviated as EML) by comparing it with the least squares estimator (LSE) and several robust methods, with \(g\) approximated by an FNN. We consider four commonly used robust methods: (1) the least absolute deviation method (LAD) with the loss function \(\rho(x)=|x|\); (2) the Huber method with the loss function \(\rho(x)=0.5x^{2}I(|x|\leq\zeta)+(\zeta\,|x|-\zeta^{2}/2)I(|x|>\zeta)\) at \(\zeta=1.345\); (3) the Cauchy method with the loss function \(\rho(x)=\log\{1+\kappa^{2}x^{2}\}\) at \(\kappa=1\); (4) Tukey's biweight method with the loss function \(\rho(x)=t^{2}[1-\{1-(x/t)^{2}\}^{3}]I(|x|\leq t)/6+t^{2}I(|x|>t)/6\) at \(t=4.685\). We also investigate the effect of the bandwidth on our method in Section 4.3. The feedforward network architecture, initial values, and training data are the same for all methods involved. The computations were implemented via the packages Pytorch (Paszke et al., 2019) and Scikit-learn (Pedregosa et al., 2011) in Python.
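As a concrete illustration of how the criterion in (8) can be coded, the following is a minimal PyTorch sketch of the estimated-likelihood loss with a Gaussian kernel and the \(10^{-5}\) floor on \(\hat{f}\) described in Section 2. This is our own illustrative sketch rather than the released EML-FNN package; the function name `eml_loss` and the bandwidth value in the usage comment are placeholders.

```python
import math
import torch

def eml_loss(residuals: torch.Tensor, h: float, floor: float = 1e-5) -> torch.Tensor:
    """Negative estimated log-likelihood of the residuals r_i = Y_i - g(X_i),
    i.e., the empirical criterion in (8) with a Gaussian kernel of bandwidth h."""
    r = residuals.reshape(-1, 1)                               # shape (n, 1)
    diff = (r - r.T) / h                                       # pairwise (r_j - r_i) / h
    kern = torch.exp(-0.5 * diff ** 2) / (h * math.sqrt(2 * math.pi))
    f_hat = kern.mean(dim=0)                                   # \hat f(r_i) = n^{-1} sum_j K_h(r_j, r_i)
    f_hat = torch.clamp(f_hat, min=floor)                      # floor at 1e-5 before taking the log
    return -torch.log(f_hat).mean()

# Typical training step for a network `net` mapping X of shape (n, d) to a scalar:
#   loss = eml_loss(y - net(x).squeeze(-1), h=0.2)
#   loss.backward(); optimizer.step()
```

Because the same residuals enter both the evaluation points and the kernel sum, gradients flow through \(g\) in both places, exactly as in (8).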
Specifically, we use the network _Net-d5-w256_ with ReLU activation functions, which comprises 3 hidden layers, resulting in a network depth of 5 with the corresponding network widths \((d,256,256,256,1)\). We use the Adam optimization algorithm (Kingma and Ba, 2014) with a learning rate of 0.0003, with the network parameters initialized from a uniform distribution (He et al., 2015). The coefficients used for calculating the running averages of the gradient and the squared gradient are \(\beta=(0.9,0.99)\). We set the training batch size equal to the size of the training data \(n\), and train the network for at least 1000 epochs using a dropout rate of 0.01, until the training loss converges or reaches a satisfactory level. To enhance flexibility and simplicity, we adopt a varying bandwidth \(h(\epsilon_{i})=|\max(\epsilon_{i}(v))-\min(\epsilon_{i}(v))|\), where the set \(\epsilon_{i}(v)\) is the neighborhood of \(\epsilon_{i}\) encompassing a proportion \(v\) of the total sample (Loftsgaarden and Quesenberry, 1965). The selection of the bandwidth is then translated into selecting a value for \(v\) from the interval \((0,1]\); the constrained interval simplifies the process of bandwidth selection. We evaluate the performance of \(\hat{g}\) by the bias, standard deviation (SD) and root mean square error (RMSE), defined as \(bias=\left[\frac{1}{n_{grid}}\sum_{i=1}^{n_{grid}}(E\widehat{g}(z_{i})-g(z_{i}))^{2}\right]^{\frac{1}{2}}\), \(SD=\left[\frac{1}{n_{grid}}\sum_{i=1}^{n_{grid}}E(\widehat{g}(z_{i})-E\widehat{g}(z_{i}))^{2}\right]^{\frac{1}{2}}\), and \(RMSE=\sqrt{bias^{2}+SD^{2}}\), where \(z_{i}\) (\(i=1,...,n_{grid}\)) are grid points on which \(g(\cdot)\) is evaluated, which are first randomly generated from the distribution of \(\mathbf{X}\) and then held fixed, \(n_{grid}=2048\) is the number of grid points, and \(E\widehat{g}(z_{i})\) is approximated by its sample mean based on 100 replications.

### Data generation

Denote \(\mathbf{X}_{i}=(X_{i1},\cdots,X_{id})^{\top}\) with each component of \(\mathbf{X}_{i}\) being _i.i.d._ generated from a uniform distribution \(U(0,1)\). We consider three target functions: (1) \(g_{5}(\mathbf{X}_{i})=x_{i1}^{3}+x_{i2}^{2}+x_{i3}+|x_{i4}|+\cos(x_{i5})\); (2) \(g_{10}(\mathbf{X}_{i})=x_{i1}^{3}+x_{i2}^{2}+x_{i3}+|x_{i4}|+\cos(x_{i5})+\sin(x_{i6})+e^{x_{i7}}+\log(1+x_{i8})+x_{i9}^{\frac{1}{2}}+x_{i10}^{\frac{1}{3}}\); and (3) \(g_{20}(\mathbf{X}_{i})=x_{i1}^{5}+x_{i2}^{4}+x_{i3}^{3}+x_{i4}^{2}+x_{i5}+|x_{i6}|+x_{i7}^{\frac{1}{2}}+x_{i8}^{\frac{1}{3}}+x_{i9}^{\frac{1}{4}}+x_{i10}^{\frac{1}{5}}+|x_{i11}^{3}|+\cos(x_{i12})+\sin(x_{i13})+\cos(x_{i14}^{2})+\sin(x_{i15}^{2})+e^{x_{i16}}+\log(1+x_{i17})+e^{x_{i18}^{2}}+\log(1+x_{i19}^{2})+\log(1+x_{i20}^{\frac{1}{2}})\), which are \(p=5,10\) and \(20\)-dimensional functions, respectively, where \(x_{ij}=\mathbf{X}_{i}^{\top}\mathbf{\beta}_{j},j=1,...,20\), and \(\mathbf{\beta}_{j}\) is a \(d\)-dimensional vector with \(\mathbf{\beta}_{j}\left[\left((j-1)\times\left\lfloor\frac{d}{p}\right\rfloor+1\right):\left(j\times\left\lfloor\frac{d}{p}\right\rfloor\right)\right]=\frac{\left(\mathbf{\gamma}^{\top},\cdots,\mathbf{\gamma}^{\top}\right)}{\left\lfloor\frac{d}{20p}\right\rfloor\times\left\lVert\mathbf{\gamma}\right\rVert_{1}}\) and the remaining components of \(\mathbf{\beta}_{j}\) equal \(0\), where \(\mathbf{\gamma}=(1,2,\cdots,20)^{\top}\). In short, the non-zero elements of \(\mathbf{\beta}_{j}\) are integer values ranging from \(1\) to \(20\), scaled so that each \(\mathbf{\beta}_{j}\) has unit \(L_{1}\) norm.
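To make the construction of the coefficient vectors \(\mathbf{\beta}_{j}\) and the target \(g_{5}\) (\(p=5\)) concrete, here is a small NumPy sketch. It is our own illustration: the function names `make_betas` and `simulate_g5` are made up, it assumes \(20p\) divides \(d\) (true for all \((n,d)\) configurations used here), and a standard normal error is used as a placeholder since the four error distributions considered are specified in the next paragraph.

```python
import numpy as np

def make_betas(d: int, p: int = 5) -> np.ndarray:
    """beta_1, ..., beta_p: block j holds repeated copies of gamma = (1, ..., 20),
    divided by floor(d / (20 p)) * ||gamma||_1, with zeros elsewhere (0-based indexing)."""
    gamma = np.arange(1, 21, dtype=float)
    block = d // p                        # floor(d / p) nonzero entries per beta_j
    copies = d // (20 * p)                # floor(d / (20 p)) copies of gamma per block
    betas = np.zeros((p, d))
    for j in range(p):
        betas[j, j * block:(j + 1) * block] = np.tile(gamma, copies) / (copies * gamma.sum())
    return betas

def simulate_g5(n: int, d: int, seed: int = 0):
    """Covariates X ~ U(0,1)^d, the target g_5, and responses with N(0,1) placeholder errors."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, d))
    x = X @ make_betas(d).T               # columns are x_{i1}, ..., x_{i5}
    g = x[:, 0] ** 3 + x[:, 1] ** 2 + x[:, 2] + np.abs(x[:, 3]) + np.cos(x[:, 4])
    return X, g, g + rng.standard_normal(n)
```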
We consider the following four distributions for the error \(\epsilon_{i}\): (I) Standard normal distribution: \(\epsilon_{i}\sim\mathcal{N}(0,1)\); (II) Mixture Gaussian distribution: \(\epsilon_{i}\sim 0.7\mathcal{N}(0,1)+0.3\mathcal{N}(0,5)\); (III) Student-t distribution: \(\epsilon_{i}\sim t(2)\); and (IV) Heteroscedasticity: \(\epsilon_{i}\sim\mathcal{N}(0,3X_{i1}+4X_{i2})\). We then generate \(Y_{i}\) by \(Y_{i}=g_{p}(\mathbf{X}_{i})+\varepsilon_{i}\) with \(p=5,10,20\), respectively. We set \(n=256,1024\) and consider \(d=100,200,400,500,600,800\) for the three target functions and four error distributions, respectively. All simulation results, based on \(100\) replications, are presented in Figures 1 to 3.

### Numerical Results

From Figures 1 to 3, it is evident that the proposed EML consistently and significantly outperforms the robust-loss-based methods in terms of bias, SD, and RMSE. When the errors follow the normal distribution, the LSE is optimal. In this case, the proposed EML performs comparably to the LSE, and both outperform the robust-based methods. This indicates that the loss caused by estimating the density function can be ignored, which aligns with the theoretical findings in Theorem 2. Upon closer examination, we can see that EML even slightly outperforms the LSE for normal data as the sample size increases, for instance, when \(n=1024\). This observation implies that the ability of the proposed EML to learn the data structure may become more pronounced as the sample size grows. For non-normal and heteroscedastic settings, the LSE performs the worst among all the methods, and the proposed EML significantly outperforms the LSE. Figures 1 to 3 also show that the performance of all the methods improves with increasing sample sizes or decreasing dimensions.

Figure 1: The bar chart of the bias and standard deviation of \(g_{5}\) using six methods for four error distributions with sample sizes \(n=256,1024\) and input dimensions \(d=100,500\), respectively.

Figure 2: The bar chart of the bias and standard deviation of \(g_{10}\) using six methods for four error distributions with sample sizes \(n=256,1024\) and input dimensions \(d=200,600\), respectively.

Figure 3: The bar chart of the bias and standard deviation of \(g_{20}\) using six methods for four error distributions with sample sizes \(n=256,1024\) and input dimensions \(d=400,800\), respectively.

### Effect of the bandwidth

Now, we examine the effect of the bandwidth on the proposed method. In Figures 4 and 5, we present the bias, SD, and prediction error (PE) of the EML-FNN estimator when the bandwidths vary from \(0.2\) to \(0.8\) for \(g_{5}\) under the four error distributions, given \((n,d)=(1024,500)\). PE is defined as \(PE=\frac{1}{t}\sum_{i=1}^{t}\|\hat{g}(X_{i}^{test})-Y_{i}^{test}\|^{2}\), where \(\{(X_{i}^{test},Y_{i}^{test})\}_{i=1}^{t}\) represents the test data, which share the same distribution as the training data. From Figures 4 and 5, we can see that a smaller bandwidth provides a better estimator in terms of bias, SD, and PE, and that the proposed EML estimator is robust to variations in bandwidth within a certain range approaching zero. These findings are consistent with the theoretical result presented in Theorem 2, which indicates that a small bandwidth is favored and that the extra risk is independent of the bandwidth if the bandwidth is appropriately small. Additionally, the comparison of Figures 4 and 5 reveals that the PE is more stable than the bias and SD as the bandwidth changes.
Figure 5: The bar chart of the mean prediction error (PE) of the proposed EML-FNN for \(g_{5}\) under four error distributions with \((n,d)=(1024,500)\), as the bandwidths vary from 0.2 to 0.8.

## 5 Real data example

We applied our proposed EML-FNN and other competing methods to analyze four real datasets based on the model (1) using the observations \((\mathbf{X}_{i},Y_{i})_{i=1}^{n}\).

1. Boston House Price Dataset. It is available in the scikit-learn library (Pedregosa et al., 2011) and encompasses a total of \(n=506\) observations. The purpose of the analysis is to predict the house price based on the 13 input variables \(\mathbf{X}_{i}\), such as urban crime rates, nitric oxide levels, average number of rooms in a dwelling, weighted distance to central areas, and average owner-occupied house prices, among others. Following Kong and Xia (2012) and Zhou et al. (2019), we employ the logarithm of the median price of owner-occupied residences in units of $1,000 as our response \(Y_{i}\).
2. QSAR aquatic toxicity Dataset. The dataset was provided by Cassotti et al. (2014) and was used to develop quantitative regression QSAR models for predicting acute aquatic toxicity towards Daphnia Magna. It consists of a total of \(n=546\) observations, each with 8 molecular descriptors serving as covariates \(\mathbf{X}_{i}\), including PSA(Tot) (Molecular properties), SAacc (Molecular properties), H-050 (Atom-centred fragments), MLOGP (Molecular properties), RDCHI (Connectivity indices), GATS1p (2D autocorrelations), nN (Constitutional indices), and C-040 (Atom-centred fragments). The response variable \(Y_{i}\) is the acute aquatic toxicity, specifically the LC50, which is defined as the concentration causing death in 50% of the test D. magna over a test duration of 48 hours.
3. QSAR fish toxicity Dataset. Another version of the dataset for quantitative regression QSAR models was provided by Cassotti et al. (2015). This dataset includes 908 observations, each with 6 input variables (\(\mathbf{X}_{i}\)) consisting of molecular descriptors: MLOGP (molecular properties), CIC0 (information indices), GATS1i (2D autocorrelations), NdssC (atom-type counts), NdsCH (atom-type counts), and SM1_Dz(Z) (2D matrix-based descriptors). The response variable \(Y_{i}\) is the LC50, which is the concentration that causes death in 50% of the test fish over a test duration of 96 hours.
4. Temperature forecast Dataset. The dataset was provided by Cho et al. (2020) and aims at correcting the bias of the next-day maximum and minimum air temperature forecasts from the LDAPS model operated by the Korea Meteorological Administration over Seoul, South Korea. The data consist of summer observations spanning 2013 to 2017. The input data \(\mathbf{X}_{i}\) are largely composed of predictions from the LDAPS model for the subsequent day, in-situ records of present-day maximum and minimum temperatures, and five geographic auxiliary variables. In this dataset, two outputs (\(Y_{i}\)) are featured: next-day maximum and minimum air temperatures.

We preprocessed all the datasets by applying Z-score normalization to each predictor variable. Inspired by transfer learning, we employed the widely used fine-tuning technique to simplify the computation. We initiated the process by training a single network model based on, for example, the Cauchy loss function, employing the methodology outlined in Section 4. Subsequently, we leveraged this trained model as a foundation to train all other models with a learning rate of 0.00003.
All four datasets were randomly split into training and test sets with a ratio of 4:1 to calculate the PE. The entire procedure was repeated 50 times, and the average PE was calculated and presented in Table 1.

\begin{table} \begin{tabular}{c|c|c|c} \hline & **Boston** & **Aquatic Toxicity** & **Fish Toxicity** \\ & (Pedregosa et al., 2011) & (Cassotti et al., 2014) & (Cassotti et al., 2015) \\ \hline **LS** & 0.1045 & 1.2812 & 2.0153 \\ **LAD** & 0.1054 & 1.2184 & 2.142 \\ **Huber** & 0.1155 & 1.2003 & 2.1403 \\ **Cauchy** & 0.1192 & 1.2697 & 2.1179 \\ **Tukey’s biweight** & 0.1153 & 1.3148 & 2.1436 \\ **EML** & **0.0833** & **1.1497** & **1.8918** \\ \hline & **Temperature Forecast** & & \\ & (Cho et al., 2020) & & \\ \hline **LS** & 10.9303 & 5.5969 & \\ **LAD** & 10.6615 & 5.71 & \\ **Huber** & 10.6054 & 5.3211 & \\ **Cauchy** & 11.4592 & 6.0396 & \\ **Tukey’s biweight** & 11.2879 & 5.2332 & \\ **EML** & **4.4085** & **2.2196** & \\ \hline \end{tabular} \end{table} Table 1: Mean prediction error for the four real datasets (for the Temperature Forecast dataset, the two columns correspond to its two responses).

The results in Table 1 clearly demonstrate the significant superiority of our approach over the other competing methods, with a remarkable improvement in prediction accuracy across all four datasets. Particularly noteworthy is the outstanding performance achieved when applying our proposed EML technique to the Temperature Forecast dataset, where the improvement in prediction accuracy reaches up to 50%. To understand the underlying reasons behind this improvement, we plot in Figure 6 the Q-Q plots of the estimated error distribution for each of the four real datasets.

Figure 6: The Q-Q plot of the estimated density function for four real datasets.

From the Q-Q plots in Figure 6, we can see that the distribution of Boston house prices is quite close to a normal distribution. The toxicity data also exhibit a relatively close resemblance to normality, characterized by a limited number of outliers. In contrast, the temperature data diverge substantially from the normal distribution, with a notable prevalence of extreme values. Based on these findings, we can conclude that the prediction performances illustrated in Table 1 are linked to the degree to which the respective distributions adhere to normality. Furthermore, from Table 1 and Figure 6, we can also see that all the methods exhibit enhanced predictive accuracy when handling datasets that are closer to a normal distribution. This observation further highlights the influence of distributional characteristics on the resulting estimator and emphasizes the importance of incorporating distribution information into the analysis.

## 6 Concluding Remarks

The paper presents an innovative approach to nonparametric regression using FNN. This approach is characterized by its efficiency in both estimation and computation, its adaptability to diverse data distributions, and its robustness in the presence of noise and uncertainty. The key contributions are as follows: (1) Estimation efficiency: The method introduces a novel loss function that incorporates not only observed data but also potentially implicit information about the data structure. By integrating this hidden information, the loss function transforms into an estimated maximum likelihood function, resulting in desirable properties such as efficiency. (2) Distribution-free: The method is independent of data distribution assumptions.
Consequently, the approach adeptly handles data with varying distributional characteristics, such as heavy tails, multimodal distributions, and heterogeneity. (3) Probabilistic Robustness: The loss function is formulated through a probabilistic framework. This probabilistic approach effectively reduces the impact of substantial noise and outliers within the data, thereby enhancing its robustness. (4) Kernel-Based Smoothness: The method leverages the inherent smoothness of kernel functions. This enables the calculation of gradients and addresses challenges related to non-differentiability when dealing with densities such as uniform distributions, mixture distributions, and cases of heteroscedasticity. (5) Computational efficiency: The proposed loss function exclusively involves the regression function \(g\). This design facilitates the straightforward utilization of existing software packages, simplifying the computation and programming. In summary, the method's capacity to accommodate various data distributions without the need for distributional assumptions renders it versatile and applicable to a wide range of real-world scenarios. By utilizing a reasonably small bandwidth, the proposed estimator is proved to be equivalent to the maximum likelihood estimator (MLE) obtained when the density function is known. Furthermore, it nearly attains the minimax optimal rate, with only an additional logarithmic factor. Its exceptional performance is further exemplified through comprehensive simulation studies and its successful application to four distinct real-world datasets. There are several directions for future research. First, it might be possible to extend our method to more complicated models, such as the generalized regression model for a discrete response. Second, practical scenarios often involve multiple responses that exhibit correlations, as seen in the Temperature Forecast dataset's maximum and minimum air temperatures. By further modeling inter-response correlations, predictive capabilities could be enhanced. Lastly, it remains our responsibility to consistently enhance the associated software packages, ensuring seamless application. Despite having introduced an efficient and user-friendly package named EML-FNN, continued optimization and refinement are necessary.
2309.06807
Bayesian uncertainty-weighted loss for improved generalisability on polyp segmentation task
While several previous studies have devised methods for segmentation of polyps, most of these methods are not rigorously assessed on multi-center datasets. Variability due to appearance of polyps from one center to another, difference in endoscopic instrument grades, and acquisition quality result in methods with good performance on in-distribution test data, and poor performance on out-of-distribution or underrepresented samples. Unfair models have serious implications and pose a critical challenge to clinical applications. We adapt an implicit bias mitigation method which leverages Bayesian predictive uncertainties during training to encourage the model to focus on underrepresented sample regions. We demonstrate the potential of this approach to improve generalisability without sacrificing state-of-the-art performance on a challenging multi-center polyp segmentation dataset (PolypGen) with different centers and image modalities.
Rebecca S. Stone, Pedro E. Chavarrias-Solano, Andrew J. Bulpitt, David C. Hogg, Sharib Ali
2023-09-13T08:54:22Z
http://arxiv.org/abs/2309.06807v2
# Bayesian uncertainty-weighted loss for improved generalisability on polyp segmentation task

###### Abstract

While several previous studies have devised methods for segmentation of polyps, most of these methods are not rigorously assessed on multi-center datasets. Variability due to the appearance of polyps from one center to another, differences in endoscopic instrument grades, and acquisition quality result in methods with good performance on in-distribution test data and poor performance on out-of-distribution or underrepresented samples. Unfair models have serious implications and pose a critical challenge to clinical applications. We adapt an implicit bias mitigation method which leverages Bayesian epistemic uncertainties during training to encourage the model to focus on underrepresented sample regions. We demonstrate the potential of this approach to improve generalisability without sacrificing state-of-the-art performance on a challenging multi-center polyp segmentation dataset (PolypGen) with different centers and image modalities.

## 1 Introduction

Colorectal cancer (CRC) is the third most common cancer worldwide [27], and early screening and removal of precancerous lesions (colorectal adenomas such as "polyps") is associated with longer survival rates. While surgical removal of polyps (polypectomy) is a standard procedure during colonoscopy, detecting polyps and precisely delineating them, especially sessile serrated adenomas/polyps, is extremely challenging. Over the past decade, advanced computer-aided methods have been developed, and most recently machine learning (ML) methods have been widely developed by several groups. However, the translation of these technologies to clinical settings has still not been fully achieved. One of the main reasons is the generalisability issue of ML methods [2]. Most techniques are built and adapted on carefully curated datasets which may not match the scenes that naturally occur during colonoscopy. Recent literature demonstrates how intelligent models can be systematically unfair and biased against certain subgroups of populations. In medical imaging, the problem is prevalent across various image modalities and target tasks; for example, models trained for lung disease prediction [25], retinal diagnosis [6], cardiac MR segmentation [23], and skin lesion detection [1, 17] are all subject to biased performance against one or a combination of underrepresented gender, age, socio-economic, and ethnic subgroups. Even under the assumption of an ideal sampling environment, a perfectly balanced dataset does not ensure unbiased performance, as relative quantities are not solely responsible for bias [31, 19]. This, and the scarcity of literature exploring bias mitigation for polyp segmentation in particular, strongly motivate the need for the development and evaluation of mitigation methods which work on naturally occurring, diverse colonoscopy datasets such as PolypGen [3].

## 2 Related work

Convolutional neural networks have recently driven the development of data-driven deep learning approaches to polyp segmentation. These methods [18, 34] are widely adapted from the encoder-decoder U-Net [24] architecture. Moreover, to address the problem of different polyp sizes, multi-scale feature pruning methods, such as atrous spatial pyramid pooling in DeepLabV3 [8] or high-resolution feature fusion networks like HRNet [28], have been used by several groups for improved polyp segmentation.
For example, MSRFNet [29] uses feature fusion networks between different resolution stages. Recent work on generalisability assessment found that methods trained on specific centers do not tend to generalise well to unseen center data or to different naturally occurring modalities such as sequence colonoscopy data [2]. These performance gaps were reported to be large (drops of nearly 20%). Out-of-distribution (OOD) generalisation and bias mitigation are challenging, open problems in the computer vision research community. While in the bias problem formulation models wrongly correlate one or more spurious (non-core) features with the target task, the out-of-distribution problem states that test data are drawn from a distribution different from that of the training data. Some degree of overlap between the two distributions exists in the latter formulation, which likely includes the core features. Regardless of the perspective, the two problems have clear similarities, and both result in unfair models which struggle to generalise for certain sub-populations. In the literature, many works focus on OOD detection, through normal or modified softmax outputs [13], sample uncertainty thresholds from Bayesian, ensemble, or other models [20, 14, 7], and distance measures in the feature latent space [12]. Other approaches tackle the more difficult problem of algorithmic mitigation through disentangled representation learning, architectural and learning methods, and methods which optimise for OOD generalisability directly [26]. Similarly, several categories of bias mitigation methods exist. Some methods rely on two or more models, one encouraged to learn the biased correlations of the majority, and the other penalised for learning the correlations of the first [21, 16]. Other approaches modify the objective loss functions to reward learning core rather than spurious features [33, 22], or neutralise representations to remove learned spurious correlations [10]. Others use data augmentation [6], or explore implicit versions of up-weighting or re-sampling underrepresented samples by discovering sparse areas of the feature space [4] or dynamically identifying samples more likely to be underrepresented [30]. De-biasing methods leveraging Bayesian model uncertainties [15, 5, 30] provide the added benefit of uncertainty estimates, which are useful in clinical applications for model interpretability and for building user confidence. To tackle the generalisability problem for polyp segmentation, we consider the diversity of features in a multi-centre polyp dataset [3]. Our contributions are: 1) adapting the implicit bias mitigation strategy of [30] from a classification to a segmentation task; and 2) evaluating the suitability of this approach on three separate test sets which have been shown to pose challenging generalisation problems. Our experiments demonstrate that our method is comparable to, and in many cases even improves upon, the baseline state-of-the-art segmentation method, while decreasing the performance discrepancies between different test splits.

## 3 Method

The encoder-decoder architecture for semantic segmentation has been widely explored in medical image analysis. In our approach we use DeepLabV3 [9] as the baseline model, which has SOTA performance on the PolypGen dataset [3]. We then apply a probabilistic model assuming a Gaussian prior on all trainable weights (both encoder and decoder), which are updated to the posterior using the training dataset.
For the Bayesian network with parameters \(\boldsymbol{\theta}\), posterior \(p(\boldsymbol{\theta}\mid D)\), training data with ground truth segmentation masks \(D=(X,Y)\), and sample \(x_{i}\), the predictive posterior distribution for a given ground truth segmentation mask \(y_{i}\) can be written as: \[p(y_{i}\mid D,x_{i})=\int p(y_{i}\mid\boldsymbol{\theta},x_{i})p(\boldsymbol{\theta}\mid D)d\boldsymbol{\theta} \tag{1}\] While Monte-Carlo dropout [11] at test time is a popular approach to approximating this intractable integral, we choose stochastic gradient Markov chain Monte Carlo sampling (SG-MCMC [32]) for a better posterior. The stochastic gradient over mini-batches includes a noise term approximating the gradient over the whole training distribution. Furthermore, the cyclical learning rate schedule introduced in [35], known as cyclical SG-MCMC or cSG-MCMC, allows for faster convergence and better exploration of the multimodal distributions prevalent in deep neural networks. Larger learning step phases provide a warm restart to the subsequent smaller steps in the sampling phases. The final estimated posterior of the Bayesian network, \(\boldsymbol{\Theta}=\{\boldsymbol{\theta}_{1},...,\boldsymbol{\theta}_{M}\}\), consists of \(M\) moments sampled from the posterior during the sampling phases of each learning cycle. With the functional model \(\boldsymbol{\Phi}\) representing the neural network, the approximate predictive mean \(\mu_{i}\) for one sample \(x_{i}\) is: \[\mu_{i}\approx\frac{1}{M}\sum_{m=1}^{M}\boldsymbol{\Phi}_{\theta_{m}}(x_{i}) \tag{2}\] We can derive a segmentation prediction mask \(\hat{y}_{i}\) from \(\mu_{i}\) by taking the maximum output between the foreground and background channels. The epistemic uncertainty mask corresponding to this prediction (Equation 3) represents the _model uncertainty_ for the predicted segmentation mask, i.e., the variance in the predictive distribution for that sample. \[\sigma_{i}\approx\frac{1}{M}\sqrt{\sum_{m=1}^{M}\left(\boldsymbol{\Phi}_{\theta_{m}}(x_{i})-\mu_{i}\right)^{2}} \tag{3}\] We adopt the epistemic uncertainty-weighted sample loss [30], which identifies high-uncertainty sample regions during training. It scales the pixel-wise contribution of these regions to the loss computation via a simple weighting function (Equation 4); this unreduced cross-entropy loss is then averaged over each image and batch (see Fig. 1). \[\hat{L}(\hat{y}_{i},y_{i})=L_{CE}(\hat{y}_{i},y_{i})*(1.0+\sigma_{i,y_{i}})^{\kappa} \tag{4}\] The shift by a constant (1.0) normalises the values, ensuring that the lowest-uncertainty samples are never irrelevant to the loss term. \(\kappa\) is a tunable debiasing parameter: \(\kappa=1\) corresponds to a normal weighting, whereas \(\kappa\rightarrow\infty\) increases the importance of high-uncertainty regions. As too large a \(\kappa\) results in degraded performance due to overfitting, the optimal value is determined by validation metrics.

Figure 1: Pixel-wise weighting of cross entropy (CE) loss contribution based on epistemic uncertainty maps for each training sample; the model is encouraged to focus on regions for which it is more uncertain.

## 4 Experiments and results

### Dataset and experimental setup

PolypGen [3] is an expert-curated polyp segmentation dataset comprising both single frames and sequence frames (frames sampled at every 10 frames from video) from over 300 unique patients across six different medical centers.
The natural data collection format is video, from which single frames and sequence data are hand-selected. The single frames are clearer and of better quality, with a polyp in each frame, including polyps of various sizes (10k to 40k pixels), and potentially containing additional artifacts such as light reflections, blue dye, partial views of instruments, and anatomies such as colon linings, mucosa covered with stool, and air bubbles (Fig. 2). The sequence frames are more challenging and contain more negative samples without a polyp and more severe artifacts, which are a natural occurrence in colonoscopy. Our training set includes 1449 single frames from five centers (C1 to C5), and we evaluate on the three test sets used for generalisability assessment in the literature [2, 3]. The first test dataset has 88 single frames from an unseen center C6 (C6-SIN), and the second has 432 frames from sequence data also from the unseen center C6 (C6-SEQ). Here, the first test set (C6-SIN) comprises hand-selected images from the colonoscopy videos, while the second test set (C6-SEQ) includes short sequences (every \(10^{th}\) frame of video) mimicking the natural occurrence of the procedure. The third test dataset includes 124 frames, but from the seen centers C1 - C5; however, these are more challenging as they contain both positive and negative samples with different levels of corruption that are not as present in the curated single-frame training set. As neither C6 samples nor sequence data are present in the training data, these test sets present a challenging generalisability problem.

Figure 2: Samples from the EndoCV2021 dataset; from (_top_) C1-5 single frames and (_bottom_) C1-5-SEQ; (_top_) highlights the data distribution of each center (C1-C5), which consists of curated frames with well-defined polyps; (_bottom_) demonstrates the variability of sequential data due to the presence of artifacts, occlusions, and polyps with different morphology.

Footnote 1: C1-5-SEQ and C6-SEQ data are referred to as DATA3 and DATA4, respectively, in [2].

Training was carried out on several IBM Power 9 dual-CPU nodes with 4 NVIDIA V100 GPUs. Validation metrics were used to determine optimal models for all experiments, with hyper-parameters chosen via grid search. Perhaps due to some frames containing very large polyps with high uncertainties, we found that the gradients of Bayesian models with the uncertainty-weighted loss (BayDeepLabV3+Unc) occasionally exploded during the second learning cycle, and clipping the absolute gradients at 1.0 for all weights prevented this issue. All Bayesian DeepLabV3+ (BayDeepLabV3+) models had 2 cycles, a cycle length of 550 epochs, noise control parameter \(\alpha=0.9\), and an initial learning rate of 0.1. For BayDeepLabV3+Unc, we found optimal results with the de-biasing tuning parameter \(\kappa=3\). Posterior estimates for BayDeepLabV3+ and BayDeepLabV3+Unc included 6 and 4 samples per cycle, respectively.

### Results

We use the state-of-the-art deterministic model and its checkpoints to evaluate on the three test sets, and compare against the baseline Bayesian model BayDeepLabV3+ and BayDeepLabV3+Unc with the uncertainty-weighted loss.
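Before turning to the numbers, the following is a minimal PyTorch-style sketch of how the predictive mean (Equation 2), the epistemic uncertainty map (Equation 3), and the uncertainty-weighted loss (Equation 4) fit together. This is our own illustration rather than the authors' released implementation: the function names `predictive_stats` and `uncertainty_weighted_ce` are made up, the choice of softmax outputs for the posterior samples is an assumption, and how the samples are collected (e.g., during cSG-MCMC sampling phases) is omitted.

```python
import torch
import torch.nn.functional as F

def predictive_stats(prob_samples: torch.Tensor):
    """prob_samples: (M, B, 2, H, W) softmax outputs from M posterior weight samples.
    Returns the approximate predictive mean (Eq. 2) and epistemic uncertainty map (Eq. 3)."""
    mu = prob_samples.mean(dim=0)                                    # (B, 2, H, W)
    sigma = torch.sqrt(((prob_samples - mu) ** 2).sum(dim=0)) / prob_samples.shape[0]
    return mu, sigma

def uncertainty_weighted_ce(logits, target, sigma, kappa: float = 3.0):
    """Unreduced pixel-wise CE scaled by (1 + sigma_{i, y_i})^kappa as in Eq. 4,
    then averaged over pixels, images, and the batch."""
    ce = F.cross_entropy(logits, target, reduction="none")           # (B, H, W)
    sigma_y = sigma.gather(1, target.unsqueeze(1)).squeeze(1)        # uncertainty of the GT class
    return (ce * (1.0 + sigma_y) ** kappa).mean()
```

Prediction masks \(\hat{y}_{i}\) are then obtained by taking the arg-max of \(\mu_{i}\) over the foreground and background channels, as described above.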
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Dataset** & **Method** & **JAC** & **Dice** & **F2** & **PPV** & **Recall** & **Accuracy** \\ \hline \multirow{4}{*}{C6-SIN} & SOTA & 0.738\(\pm\)0.3 & 0.806\(\pm\)0.3 & 0.795\(\pm\)0.3 & **0.912\(\pm\)0.2** & 0.793\(\pm\)0.3 & 0.979\(\pm\)0.1 \\ & BayDeepLabV3+ & 0.721\(\pm\)0.3 & 0.790\(\pm\)0.3 & **0.809\(\pm\)0.3** & 0.836\(\pm\)0.2 & **0.843\(\pm\)0.3** & **0.977\(\pm\)0.1** \\ & Ours & **0.740\(\pm\)0.3** & **0.810\(\pm\)0.3** & 0.804\(\pm\)0.3 & 0.903\(\pm\)0.1 & 0.806\(\pm\)0.3 & **0.977\(\pm\)0.1** \\ \hline \multirow{4}{*}{C1-5-SEQ} & SOTA & **0.747\(\pm\)0.3** & **0.819\(\pm\)0.3** & **0.828\(\pm\)0.3** & 0.877\(\pm\)0.2 & 0.852\(\pm\)0.3 & 0.960\(\pm\)0.0 \\ & BayDeepLabV3+ & 0.708\(\pm\)0.3 & 0.778\(\pm\)0.3 & 0.805\(\pm\)0.3 & 0.784\(\pm\)0.3 & **0.885\(\pm\)0.2** & **0.963\(\pm\)0.0** \\ & Ours & 0.741\(\pm\)0.3 & 0.810\(\pm\)0.3 & 0.815\(\pm\)0.3 & **0.888\(\pm\)0.2** & 0.836\(\pm\)0.3 & 0.961\(\pm\)0.0 \\ \hline \multirow{4}{*}{C6-SEQ} & SOTA & 0.608\(\pm\)0.4 & 0.676\(\pm\)0.4 & 0.653\(\pm\)0.4 & 0.845\(\pm\)0.3 & 0.719\(\pm\)0.3 & 0.964\(\pm\)0.1 \\ & BayDeepLabV3+ & 0.622\(\pm\)0.4 & 0.682\(\pm\)0.4 & 0.669\(\pm\)0.4 & 0.802\(\pm\)0.3 & **0.764\(\pm\)0.3** & 0.965\(\pm\)0.1 \\ \cline{1-1} & Ours & **0.637\(\pm\)0.4** & **0.697\(\pm\)0.4** & **0.682\(\pm\)0.4** & **0.858\(\pm\)0.3** & 0.741\(\pm\)0.3 & **0.967\(\pm\)0.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation of the state-of-the-art deterministic DeepLabV3+, BayDeepLabV3+, and our proposed BayDeepLabV3+Unc, showing means and standard deviations across the respective test dataset samples. **First** and _second_ best results for each metric per dataset are highlighted.

We report results for Jaccard index (JAC), Dice coefficient (Dice), F\({}_{\beta}\)-measure with \(\beta\) = 2 (F2), positive predictive value (PPV), recall (Rec), and mean pixel-wise accuracy (Acc). PPV in particular has high clinical value as it indicates a more accurate delineation for the detected polyps. Recall and mean accuracy are less indicative since the majority of frames are background in the segmentation task and these metrics do not account for false positives.

Figure 3: Performance gaps of the three models (state-of-the-art deterministic DeepLabV3+, BayDeepLabV3+, and BayDeepLabV3+Unc) between the three different test sets; _(top)_ comparing performance on single vs. sequence frames from out-of-distribution test set C6 (C6-SIN vs. C6-SEQ), and _(bottom)_ sequence frames from C1 - C5 vs. unseen C6 (C1-5-SEQ vs. C6-SEQ). The subtext above bars indicates the percent decrease in performance gap compared to SOTA; a larger percent decrease and shorter vertical bar length indicate better generalisability.

A larger number of
Our proposed uncertainty-weighted loss achieves better generalisability without sacrificing performance (also see Table 1). We note performance superiority to SOTA especially on C6-SEQ, approximately 3% improvement on Dice. We can also observe slight improvement on PPV for test sets with sequence (both held-out data and unseen centre data). Finally, we note that in clinical applications, the uncertainty maps for samples during inference could be useful for drawing clinicians' attention towards potentially challenging cases, increasing the likelihood of a fairer outcome. ## 5 Conclusion We have motivated the critical problem of model fairness in polyp segmentation on a multi-center dataset, and modified a Bayesian bias mitigation method to our task. The results on three challenging test sets show strong potential for improving generalisability while maintaining competitive performance across all metrics. Furthermore, the proposed mitigation method is implicit, not requiring comprehensive knowledge of biases or out-of-distribution features in the training data. This is of particular importance in the medical community given the sensitivity and privacy issues limiting collection of annotations and metadata. Our findings are highly relevant to the understudied problem of generalisation across high variability colonoscopy images, and we anticipate future work will include comparisons with other methods to improve generalisability and an extension to the approach. We also anticipate having access to additional test data for more in-depth analysis of the results. ###### Acknowledgements. R. S. Stone is supported by an Ezra Rabin scholarship.
2301.13396
Study of Optical Networks, 5G, Artificial Intelligence and Their Applications
This paper discusses the application of artificial intelligence (AI) technology in optical communication networks and 5G. It primarily introduces representative applications of AI technology and potential risks of AI technology failure caused by the openness of optical communication networks, and proposes some coping strategies, mainly including modeling AI systems through modularization and miniaturization, combining with traditional classical network modeling and planning methods, and improving the effectiveness and interpretability of AI technology. At the same time, it proposes response strategies based on network protection for the possible failure and attack of AI technology.
Quanda Zhang, Qi Zhang
2023-01-31T04:06:18Z
http://arxiv.org/abs/2301.13396v1
# Study of Optical Networks, 5G, Artificial Intelligence and Their Applications ###### Abstract This paper discusses the application of artificial intelligence (AI) technology in optical communication networks and 5G. It primarily introduces representative applications of AI technology and potential risks of AI technology failure caused by the openness of optical communication networks, and proposes some coping strategies, mainly including modeling AI systems through modularization and miniaturization, combining with traditional classical network modeling and planning methods, and improving the effectiveness and interpretability of AI technology. At the same time, it proposes response strategies based on network protection for the possible failure and attack of AI technology. AI, 5G, Optical Networks ## I Introduction Artificial intelligence (AI) technology has been used in many fields for a long time, but for many years it did not receive much attention. This changed after AlphaGo defeated leading Chinese and Korean Go players: AI became a research hotspot, and researchers began applying the technology to a wide range of domains, including optical communication networks. Over the past two years, at least 16 conference topics at the Optical Fiber Communication Conference (OFC) in the United States and the European Conference on Optical Communication (ECOC) have focused on AI or machine learning (ML) technology. In this paper, AI and ML are treated as the same class of technology; moreover, although AI covers a broad range of methods, the AI referred to in this article is mainly neural network technology. AI technology has received widespread attention mainly for two reasons. First, it is relatively easy to get started with and to use. It models a system in a black-box fashion: by learning from a large number of samples, the black box connects its neurons and assigns connection weights by itself, without requiring the user to understand why the neurons are connected in a particular way or why they carry their current weights. Users only need to provide enough training samples and increase the number of neurons and hidden layers to improve the prediction accuracy of the AI model. Second, after the AlphaGo matches AI has been almost deified; "artificial intelligence" is now a household term, and in academic circles papers labeled as AI also seem easier to publish. This has led to a current phenomenon in which AI is applied to almost every problem, whether or not it is actually suitable. AI has been very successful on some problems, such as Go and certain image and speech recognition scenarios mentioned above, but success in one field or on certain problems does not make AI a universal method. This paper discusses the application of AI technology in optical communication networks, including its applicability, and raises the potential risks of using AI technology together with some coping strategies.
## II AI Applications in Optical Networks AI technology has been widely applied in the optical communication network literature [1, 2, 3, 4, 5, 6, 7], and a great deal of research can be found in this area. This paper introduces several representative applications of AI technology in optical communication networks. 1) At the receiving end, digital signal processing combined with AI techniques can effectively improve the detection sensitivity of optical signals, improve the performance of the optical fiber transmission system, and increase the spectrum utilization efficiency of the network [7, 8]. 2) An optical network contains a large number of end-to-end optical channels. Taking the relevant parameters of these channels (including transmission rate, modulation format, number of optical fiber links, number of optical amplifiers and their gain, etc.) as inputs, and the signal quality of transmission (QoT) detected at the receiving end as outputs, extensive training makes it possible to predict the QoT of different end-to-end optical channels in the network; QoT is often expressed as the optical signal-to-noise ratio (OSNR) of the channel, and its accurate prediction allows the configured OSNR margin of optical channels to be reduced, thereby improving the spectrum utilization efficiency of the network [9, 10] (an illustrative sketch of such a data-driven QoT predictor is given below, after Section III). 3) By continuously learning from fault events in the optical network, with faults and their causes used as inputs and outputs, AI can accurately analyze and diagnose the causes of faults and provide early warning of future failures [3]. 4) Combined with network security requirements, AI technology can also be used for early warning and identification of network attacks at the optical layer [11]. ## III AI Applications in 5G Communication AI and ML are being utilized in 5G and mmWave communication [12, 13, 14, 15, 16, 17, 18] to enhance performance, reduce costs, and boost efficiency. Applications of AI in this field include network optimization, predictive maintenance, self-organizing networks, traffic prediction, security, resource allocation, network slicing, edge computing, interference management, and spectrum management. These technologies are still in the early stages of development but have the potential to significantly improve the performance, efficiency, and cost-effectiveness of these networks. As AI technologies continue to evolve and become more widely adopted, they will likely play an increasingly important role in the development and deployment of 5G and mmWave communication systems. However, it is also crucial to consider potential risks and challenges such as privacy, security, and ethical considerations. At the same time, applying the same "black-box" approach to very different application scenarios tends to make method innovation and the analysis of underlying mechanisms slack. A very typical example is the following. Because deep learning can effectively recognize certain image patterns, some researchers have applied this technology to the identification of lesions in different parts of the human body [19, 20]. Based on the same method and process, images of different body parts can be fed in one after another, producing a large number of so-called "research results" and theses.
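As a purely illustrative example of the data-driven QoT/OSNR prediction described in item 2) of Section II, the sketch below trains a small neural network on synthetic channel parameters. The feature set, the toy formula used to generate OSNR targets, and the network size are assumptions made for demonstration and are not taken from the cited works.

```python
# Illustrative only: a toy data-driven QoT (OSNR) predictor.
# The synthetic "channel parameters" and the crude OSNR formula used to
# generate training targets are assumptions for demonstration purposes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_channels = 5000
# Hypothetical features: [symbol rate (GBd), modulation order, number of fiber
#                         spans, number of amplifiers, mean amplifier gain (dB)]
X = np.column_stack([
    rng.uniform(25, 70, n_channels),
    rng.choice([4, 16, 64], n_channels),
    rng.integers(1, 20, n_channels),
    rng.integers(1, 20, n_channels),
    rng.uniform(15, 25, n_channels),
])
# Toy target: OSNR degrades with the number of spans/amplifiers (plus noise).
y = 35.0 - 0.8 * X[:, 2] - 0.3 * X[:, 3] + rng.normal(0, 0.5, n_channels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("Held-out R^2:", model.score(X_te, y_te))
```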
Returning to the medical imaging example above: from the perspective of training students and conducting scientific research, the research skills and professionalism that students actually acquire in such projects are very limited. The practical work amounts to collecting the relevant image data, writing a small amount of Python code, and finally handing the training task over to the graphics processing unit (GPU) to complete, without any in-depth thinking about the methods and mechanisms behind the specific research questions. This is clearly not conducive to genuine and effective innovation, and it remains impossible (in fact, currently impossible) to grasp what is going on inside the black box.
2309.06813
A Strong Blend in the Morning: Studying the Circumgalactic Medium Before Cosmic Noon with Strong, Blended Lyman-$α$ Forest Systems
We study of the properties of a new class of circumgalactic medium absorbers identified in the Lyman-$\alpha$ forest: "Strong, Blended Lyman-$\alpha$" (or SBLA) absorption systems. We study SBLAs at $2.4<z<3.1$ in SDSS-IV/eBOSS spectra by their strong extended Lyman-$\alpha$ absorption complexes covering 138 $\,\,{\rm km}\,{\rm s}^{-1}$ with an integrated $\log (N_{HI}/$cm$^{-2}) =16.04\substack{+0.05 \\ -0.06}$ and Doppler parameter $b=18.1 \substack{+0.7 \\ -0.4}\,\,{\rm km}\,{\rm s}^{-1}$. Clustering with the Lyman-$\alpha$ forest provides a large-scale structure bias of $b = 2.34\pm0.06$ and halo mass estimate of $M_h \approx 10^{12}{\rm h^{-1}M_{sol}}$ for our SBLA sample. We measure the ensemble mean column densities of 22 metal features in the SBLA composite spectrum and find that no single-population multiphase model for them is viable. We therefore explore the underlying SBLA population by forward modelling the SBLA absorption distribution. Based on covariance measurements and favoured populations we find that $\approx 25$\% of our SBLAs have stronger metals. Using silicon only we find that our strong metal SBLAs trace gas with a $\log(n_H / $cm$^{-3}) > -2.40$ for $T=10^{3.5}$K and show gas clumping on $<210$ parsec scales. We fit multiphase models to this strong sub-population and find a low ionization phase with $n_H=1$cm$^{-3}$, $T=10^{3.5}$K and $[X/H]=0.8$, an intermediate ionization phase with $\log(n_H / $cm$^{-3}) = -3.05$, $T=10^{3.5}$K and $[X/H]=-0.8$, and a poorly constrained higher ionization phase. We find that the low ionization phase favours cold, dense super-solar metallicity gas with a clumping scale of just 0.009 parsecs.
Sean Morrison, Debopam Som, Matthew M. Pieri, Ignasi Pérez-Ràfols, Michael Blomqvist
2023-09-13T09:04:03Z
http://arxiv.org/abs/2309.06813v2
# A Strong Blend in the Morning: ###### Abstract We study of the properties of a new class of circumgalactic medium absorbers identified in the Lyman-\(\alpha\) forest: "Strong, Blended Lyman-\(\alpha\)" (or SBLA) absorption systems. We study SBLAs at \(2.4<z<3.1\) in SDSS-IV/eBOSS spectra by their strong extended Lyman-\(\alpha\) absorption complexes covering \(138\) km s\({}^{-1}\) with an integrated \(\log(N_{HI}/\mathrm{cm}^{-2})=16.04^{+2.08}_{-0.08}\) and \(b=18.1^{+0.27}_{-1.44}\) km s\({}^{-1}\). Clustering with the Lyman-\(\alpha\) forest provides a large-scale structure bias of \(b=2.34\pm 0.06\) and halo mass estimate of \(M_{h}\approx 10^{12}\)h\({}^{-1}\)M\({}_{\sun}\) for our SBLA sample. We measure the ensemble mean column densities of 22 metal features in the SBLA composite spectrum and find that no single-population multiphase model for them is viable. We therefore explore the underlying SBLA population by forward modelling the SBLA absorption distribution. Based on covariance measurements and favoured populations we find that \(\approx 25\%\) of our SBLAs have stronger metals. Using silicon only we find that our strong metal SBLAs trace gas with a \(\log(n_{H}/\mathrm{cm}^{-3})>-2.45\) for \(T=10^{3.5}\)K and show gas clumping on \(<255\) parsec scales. We fit multiphase models to this strong sub-population and find a low ionization phase with \(n_{H}=1\mathrm{cm}^{-3}\), \(T=10^{3.5}\)K and \([X/H]=0.8\), an intermediate ionization phase with \(\log(n_{H}/\mathrm{cm}^{-3})=-3.35\), \(T=10^{3.5}\)K and \([X/H]=-1.1\), and a poorly constrained higher ionization phase. We find that the low ionization phase traces cold, dense super-solar metallicity gas with a clumping scale of just 0.009 parsecs. keywords: intergalactic medium - quasars: absorption lines - galaxies: formation - galaxies: evolution - galaxies: high-redshift ## 1 Introduction The history of the universe can be thought of as an evolution through a series of distinct epochs; the hot Big Bang, the dark ages, the epoch of the first stars, hydrogen reionization, the galaxy formative era reaching a crescendo when the star formation rate peaks at \(z\approx 2\)(Madau & Dickinson, 2014), and finally a gradual decline in star formation activity (driven in-part by dark energy driving the expansion of the universe) leading to the mature galaxies we see today. The star formation rate peak is often known as 'cosmic noon'. The period leading up to that epoch (which we might call the 'cosmic morning') is one of the most active periods in the history of the universe. This is the epoch where gas is actively accreting onto galaxies and fuelling imminent star formation. It is also the epoch where galaxies increasingly respond to star formation and eject outflows into their surrounding environments. The zone where accretion and outflows occur is known as the 'circumgalactic medium' (or CGM), in general regions outside galaxies are known as the 'intergalactic medium' (or IGM). The cosmic morning is also notable as the epoch where key UV atomic transitions are redshifted into the optical window allowing us to study them from the ground-based observatories in great detail. In particular, the richly informative Lyman-\(\alpha\) (Ly\(\alpha\)) forest is well-studied at \(z>2\), typically towards background quasars (Gunn & Peterson, 1965; Lynds, 1971). This leads to samples of Lyman-\(\alpha\) forest spectra going from a few hundred at \(z<2\) to a few hundred thousand at \(z>2\). 
This combination of high-observability and high-information-content is encouraging for developing an understanding of galaxy formation, however, progress has been held back by the fact that at these high redshifts galaxies are faint and so have been observed in much smaller numbers than the active galactic nuclei hosting quasars. Wide-area surveys of galaxies at \(z>2\) are on their way (e.g. HETDEX, Hill et al., 2008 and PFS, Takada et al., 2014) but in the meantime and in complement to them, we can study galaxies in absorption. The most widely known and accepted way of doing this is to study damped Lyman-\(\alpha\) systems (or DLAs; Wolfe et al., 2005), which are systems with a column density \(N_{HI}>10^{20.3}\)cm\({}^{-2}\) such that ionizing photons cannot penetrate them. These systems are typically easy to identify in absorption through their wide damping wings. A wider category of systems (including DLAs) that do not allow the passage of ionizing photons (i.e. self-shielded) are named Lyman limit systems (or LLSs), which have column densities \(N_{HI}>10^{17.2}\)cm\({}^{-2}\). Partial Lyman limit systems with \(10^{16.2}\)cm\({}^{-2}<N_{HI}<10^{17.2}\)cm\({}^{-2}\) absorb a sufficient fraction of ionizing photons and modify ionization fractions of species they host (though the boundary lower of this group is somewhat ill-defined). DLAs are thought to have a particularly strong connection to galaxies since the densities inferred are approximately sufficient to provoke star formation (e.g. Rafelski et al., 2011). LLSs are less clear, sometimes they are thought to be closely associated with galaxies and in other cases they are though to trace cold streams of inflowing gas (e.g. Fumagalli et al., 2011). Self-shielded systems cover a small covering fraction of the CGM (typically defined as regions within the virial radius of a galaxy hosting dark matter halo). The overwhelming majority of CGM regions are not detectable as Lyman limit systems but are optically thin with \(10^{14}\)cm\({}^{-2}\leq N_{HI}\lesssim 10^{16}\)cm\({}^{-2}\)(e.g. Fumagalli et al., 2011 and Hummels et al., 2019). Conversely, many of these strong optically thin systems are not CGM systems but instead probe diffuse IGM gas. In other words, these systems are critically important tracers of the CGM but their CGM/IGM classification is challenging. Furthermore given that lines with \(N_{HI}\gtrsim 10^{14}\)cm\({}^{-2}\) are on the flat part of the curve of growth (e.g. Charlton and Churchill, 2000) and therefore suffer from degeneracy between column density and line broadening even the column density itself is a non-trivial measurement. We explore a wider sample of CGM systems that are not optically thick to ionizing photons, nor do they require a challenging estimation of column density for confirmation. The sample requires only that the absorption in the Lyman-\(\alpha\) forest be strong and blended. This population has already been studied in Pieri et al. (2010) and followed up in Pieri et al. (2014) through spectral stacking. Weretrum to this sample with a refined error analysis of the stacked spectra, a formalised measurement of column densities, halo mass constraints and more extensive interpretation, in particular modelling of the underlying metal populations in the stack of spectra. 
There are various observational studies of the CGM that provide gas details such as thermal properties, density, metallicity, sometimes with respect to a galactic disk, often as a function of impact parameter to the likely closest galaxy (e.g. Steidel et al., 2010; Bouche et al., 2013; Werk et al., 2014; Augustin et al., 2018; Qu et al., 2022). SINFONI and MUSE integral field unit data have provided a particular boost to the detail and statistics of CGM observations (e.g. Peroux et al., 2016; Fumagalli et al., 2014; Fossati et al., 2021). Despite the exquisite detail offered by these datasets, an unbiased, large statistical sample of spectra is needed in order to develop a global picture of the CGM. Obtaining such samples with this level of detail remains a distant prospect. Hence, we take a brute force approach. We identify CGM regions as a general population with diverse gas properties studied simultaneously. These samples number in the 10s or even 100s of thousands recovered from SDSS spectra of \(z>2\) quasars. The selection function is simple (see Section 3) but the challenge resides in the interpretation of this rich but mixed sample. Complexity exists not only in the unresolved phases, but also in the diversity of systems selected. In previous work Pieri et al. (2010, 2014); Morrison et al. (2021); Yang et al. (2022) the focus has been to interpret the multi-phase properties of a hypothetical mean system that is measured with high precision in the composite spectrum of the ensemble. We revisit these measurements, and go further to study the underlying populations of metals features: both their individual expected populations and the degree of covariance between them. We focus in particular on a strong population of metals that we infer and find signal of metal rich, high-density, cold gas clumping on remarkably small-scales. Much remains to be explored but we offer a framework for studying the CGM in the largest Lyman-\(\alpha\) forest samples. In light of the improved understanding outlined here, we define these absorption systems (initially discovered in Pieri et al., 2010) as a new class of CGM absorption systems defined by the both absorption strength and clustering on \(\sim 100\) km s\({}^{-1}\) scales, and we name them "Strong, Blended Lyman-\(\alpha\)' or SBLA absorption systems. Over the coming decade quasar surveys at \(z>2\) will grow and will increasing be accompanied by galaxy surveys at the same redshifts, making this statistical population analysis an increasingly powerful tool. This publication is structured as follows. We begin by describing the dataset (including quasar continuum fitting to isolate the foreground transmission spectrum). In Section 3 we describe various ways of selecting SBLAs for different purity and absorption strength before settling on a analysis sample in subsequent sections. We then review the stacking methodology in Section 4 and follow this in Section 5 with a comprehensive end-to-end error analysis of the resulting composite spectrum of SBLAs. In Section 6 we present large-scale structure biases for SBLAs and inferences regarding their halo masses. In Section 7 we begin to explore our results with measurements in the composite H i and metal column densities, the sensitivity to physical conditions and the covariance between metal lines. We then go on to model and constrain the underlying absorber populations and explore the properties of the strong metal population in Section 8. 
We follow up with an extensive discussion Section 9 and conclusions. We also provide appendices on the correlation function methodology used to measure structure bias (Appendix A), details on the error analysis (Appendix B), SBLAs studied in high-resolution spectra (Appendix C), and finally measurements of the covariance between our metal features (Appendix D). ## 2 Data SDSS-IV (Blanton et al., 2017) carried out three spectroscopic surveys using the 2.5-meter Sloan telescope (Gunn et al., 2006) in New Mexico. These surveys included APOGEE-2 (an infrared survey of the Milky Way Stars), Extended Baryon Oscillation Spectroscopic Survey (eBOSS; a optical cosmological survey of quasars and galaxies) and MaNGA (an optical IFU survey of \(\sim\)10,000 nearby galaxies), eBOSS, an extension of the SDSS-III (Eisenstein et al., 2011; Dawson et al., 2013) BOSS survey, utilizes the BOSS spectrograph. The BOSS instrument (Smee et al., 2013) employs a twin spectrograph design with each spectrograph separating incoming light into a blue and a red camera. The resulting spectral coverage is over 3450A - 10,400A with a resolving power (\(\lambda/\Delta\lambda\)) ranging between \(\sim\) 1650 (near the blue end) to \(\sim\) 2500 (near the red end). We discard regions with a 100 pixel boxcar smoothed signal-to-noise ratio (S/N)\(<\) 1, in order to exclude from our analysis regions of sharply falling S/N at the blue extreme of SDSS-IV quasar spectra. Pixels flagged by the pipeline, pixels around bright sky lines and the observed Galactic Ca ii H&K absorption lines were also masked throughout our stacking analysis. We use a high redshift quasar sample derived from the final data release of eBOSS quasars (Lyke et al., 2020, hereafter DR16Q) from SDSS-IV Data Release 16 (Ahumada et al., 2020). The spectra of objects targeted as quasars are reduced and calibrated by the SDSS spectroscopic pipeline (Bolton et al., 2012) which also classifies and determines the redshifts of sources automatically. Unlike the quasar catalogues from DR12Q (Paris et al., 2017) and earlier, the additional quasars in DR16Q are primarily selected via the automated pipeline, with a small visually inspected sample for validation. Ensuring the availability of enough Ly\(\alpha\) forest pixels required for an accurate continuum estimate restricts the minimum redshift of our quasar sample to \(z_{em}\geq 2.15\). We also discard DR16Q quasars with median Ly\(\alpha\) forest S/N \(<\) 0.2 pixel\({}^{-1}\) or median S/N \(<\) 0.5 pixel\({}^{-1}\) over 1268 A - 1380 A given the difficulty in the accurate estimation of continua for very noisy spectra. Since the presence of BAL troughs contaminate the Ly\(\alpha\) forest with intrinsic quasar absorption and likely affects continuum estimation, we discard quasars flagged as BAL quasars in DR16Q. Pixels which were flagged by the pipeline as problematic during the extraction, flux calibration or sky subtraction process were excluded from our analysis. Spectra of DR16Q quasars with more than 20% pixels within 1216 \(<\)\(\lambda_{rest}\)\(<\) 1600 A or in the Ly\(\alpha\) forest region flagged to be unreliable by the pipeline were discarded. DLAs and their wings (where the estimated flux decrement is \(>\) 5%) in our data were masked using the DLA catalogue internal to the DR16Q catalogue, presented in Chabanier et al. (2022) and based on the Parks et al. (2018) convolutional neural network deep learning algorithm designed to identify and characterise DLAs. 
Spectra with more than one DLAs are entirely discarded throughout our analysis. Further steps are taken to prepare the data for the selection of Ly\(\alpha\) systems to be stacked and the spectra to be stacked themselves. Steps taken for the calculation of forest correlation functions are explained separately in Section 6) ### Preparation for Lyman-\(\alpha\) absorber selection We take two approaches for the normalisation of the quasar continua in our stacking analysis. For SBLA detection we follow the method described in Lee et al. (2013) over 1040 - 1185 A in the rest frame. The modified version of the MF-PCA technique presented in Lee et al. (2012) fits the 1216 \(<\)\(\lambda_{rest}\)\(<\) 1600 A region of a quasar spectrum with PCA templates providing a prediction for the continuum shape in the Ly\(\alpha\) forest. The predicted continuum is then re-scaled to match the expected evolution of the Ly\(\alpha\) forest mean flux from Faucher-Giguere et al. (2008). The above definition of the forest region avoids contamination from higher order Lyman series lines and conservatively excludes the quasar proximity zone. We discard any spectrum for which the estimated continuum turns out to be negative. Metal absorption lines are masked using a \(3\sigma\) iterative flagging of outlier pixels redward of the Ly\(\alpha\) forest from a spline fit of the continua. With all the cuts mentioned above, we are left with an analysis sample of 198,405 quasars with a redshift distribution shown in Figure 1 along with the distribution of all \(z\geq 2\) quasars from DR16Q. ### Preparation of spectra to be stacked The mean-flux regulated PCA continua described above provide robust estimates of the Ly\(\alpha\) forest absorption and are therefore well-suited for the search and selection of SBLAs for stacking. However, these continua are limited to \(<\)1600 A in the quasar rest frame and present discontinuities due to the mean-flux regulation process. For spectra to be stacked, we required wide wavelength coverage without discontinuities and so we use spline fitting. We split each spectrum into 25A chunks over the entire observed spectral range and calculate the median flux in each spectral chunk before fitting a cubic spline to these flux nodes. Pixels falling 1\(\sigma\) below the fit within the Ly\(\alpha\) forest or 2\(\sigma\) below outside the forest are then rejected and the flux nodes are recalculated followed by a re-evaluation of the spline fit. This absorption-rejection is iterated until convergence to estimate the quasar continuum. The cubic spline fitting breaks down in regions with large gradients in the continuum, usually near the centres of strong quasar emission lines. We, therefore, mask data around the peaks of emission features commonly seen in quasar spectra before the continuum fitting is performed. In addition, as sharp edges (caused by missing data as a result of masking the emission peaks) can induce instability in the fits using the smooth cubic spline function, we discard a buffer region around the emission line masks. The extents of the masked region (\(\lambda_{mask}\)) and the corresponding buffer (\(\pm\lambda_{buffer}\)), in the quasar rest frame, depend on the typical strength of the emission line concerned and are listed in Table 1 along with the rest frame locations of the emission line centres. ## 3 Selection of strong, blended Lyman \(\alpha\) forest absorption systems When analysing the absorption in the Ly\(\alpha\) forest, typically two approaches are taken. 
One may treat the forest as a series of discrete identifiable systems, each of which can be fit with a Voigt profile in order to derive its column density and thermal and/or turbulent broadening. Alternatively one may treat the forest as a continually fluctuating gas density field and therefore take each pixel in the spectrum and infer a measurement of gas opacity (the so-called 'fluctuating Gunn-Peterson approximation'). For the former, the assumption is that the gas can be resolved into a discrete set of clouds, which is physically incorrect for the Ly\(\alpha\) forest as a whole but a useful approximation in some conditions. For the latter, it is assumed that line broadening effects are subdominant to the complex density structure as a function of redshift in the Ly\(\alpha\) forest. Figure 1: Redshift distribution of the 198,405 quasars in our initial sample is shown in black. The thick grey solid curve represents the distribution of all \(z\geq 2\) quasars from DR16Q. Also shown are the 4 samples of SBLAs FS0 (light green dashed line), FS1 (light red dashed dotted line), FS2 (orange dotted line), and P30 (dashed double dotted line) as discussed in Section 3. In this work, we take the second approach, selecting absorption systems based on the measured flux transmission in a spectral bin in the forest, \(F_{\rm Ly\alpha}\). The absorbers in our sample are selected from wavelengths of 1040 A \(<\lambda<\)1185 A in the quasar rest frame. This range was chosen to eliminate the selection of Ly\(\beta\) absorbers and exclude regions of elevated continuum fitting noise from Ly\(\beta\) and O vi emission lines at the blue limit, and absorbers proximate to the quasar (within 7563 \(\,\rm km\,s^{-1}\)) at the red limit. We follow the method of Pieri et al. (2014) (hereafter P14) to generate their three strongest absorber samples, which they argued select CGM systems with varying purity. We limit ourselves to \(2.4<z_{abs}<3.1\) to retain sample homogeneity with varying wavelength. Without this limit there would be limited sample overlap across the composite spectrum (the blue end of the composite would measure exclusively higher redshift SBLAs and the red end would measure exclusively lower redshift SBLAs). Specifically, P14 chose this redshift range to allow simultaneous measurement of both the full Lyman series and Mg ii absorption. We take the main samples explored in P14 using a signal-to-noise per pixel \(>3\) over a 100 pixel boxcar. Of the 198,405 quasars available, 68,525 quasars had forest regions of the quality necessary for the recovery of Strong, Blended Lyman-\(\alpha\) absorbers. These samples are: FS0 with \(-0.05\leq F_{\rm Ly\alpha}<0.05\), FS1 with \(0.05\leq F_{\rm Ly\alpha}<0.15\), and FS2 with \(0.15\leq F_{\rm Ly\alpha}<0.25\). The numbers of systems identified are given in Table 2. This is approximately quadruple the number of SBLAs with respect to P14 (though they were not given this name at the time). We also consider samples defined by their purity as discussed below. All remaining absorbers (after the flagging discussed in the previous section) are assumed to arise due to the Ly\(\alpha\) transition with \(2.4<z<3.1\), and are available for selection. Given the strength of the absorbers selected here this is a fair assumption, and in cases where it is not true the effect is easily controlled for (e.g. 'the shadow Si iii' features discussed in P14). The spectral resolution of the BOSS spectrograph varies from \(R=1560\) at 3700A to \(R=2270\) at 6000A.
For chosen redshift range the resolution at the wavelength of the selected Ly\(\alpha\) absorption is \(R\approx 1800\) and this is therefore our effective spectral resolution throughout this work. This equates 167 \(\,\rm km\,s^{-1}\) or 2.4 pixels in the native SDSS wavelength solution. This allows us to rebin the spectra by a factor of 2 before selection of our Ly\(\alpha\) absorbers to reduce noise and improve our selection of absorbers. It has the added benefit of precluding double-counting of absorbers within a single resolution element. This results in the selection of absorbers on velocity scales of \(\sim 138\)\(\,\rm km\,s^{-1}\). Given that Lyman-\(\alpha\) absorbers have a median Doppler parameter of \(b\approx 30\)\(\,\rm km\,s^{-1}\) (and \(\sigma=10\)\(\,\rm km\,s^{-1}\); Hu et al. 1995; Rudie et al. 2012) our absorber selection is both a function of absorber strength and absorber blending. More detail is provided on the meaning of this selection function in P14. One of the key results of P14 was that regions of the Ly\(\alpha\) forest with transmission less than 25% in bins of 138 \(\,\rm km\,s^{-1}\) are typically associated with the CGM of Lyman break galaxies (using Keck HIRES and VLT UVES spectra with nearby Lyman break galaxies). The metal properties in the composite spectra were strongly supportive of this picture. We further reinforce this picture with improved metal measurements, constraints on their halo mass both from large-scale clustering and arguments regarding halo circular velocities Section 6. Given the weight of evidence that these systems represent a previously unclassified sample of galaxies in absorption, we chose to explicitly define them as a new class and name them "Strong, Blended Lyman-\(\alpha\)" (SBLAs) forest absorption systems. The preferred definition here is a noiseless transmitted Ly\(\alpha\) flux \(F_{\rm Ly\alpha}<0.25\) over bins of 138 \(\,\rm km\,s^{-1}\) for consistency with this Lyman break galaxy comparison and comparison with P14. Refinement of this class of SBLAs and/or alternative classes of SBLAs are possible with modifications of transmission or velocity scale. In the arguments that follow, statements regarding purity refer specifically to the successful recovery of this idealised SBLA class. As pointed out in Section 2, DLAs from DR16Q (presented in Chabanier et al. 2022) are masked in our selection of SBLAs, however, no catalogue of Lyman limit systems (LLS) are available and are therefore potentially among the SBLA sample. As P14 discussed at length, even if one assumes that all LLS are selected (which is not a given) no more than 3.7% of SBLAs should be a LLS. SBLAs are much more numerous and this is not surprising in light of simulations (e.g. Hummels et al. 2019) showing that the covering fraction of LLS (including DLA) is small compared to regions of integrated column density \(\approx 10^{16}\rm cm^{-2}\) we find here. The presence of even small numbers of Lyman limit systems can be impactful for our ionization corrections, however, and we return to this topic Section 7.4 and Section 8. ### Using Ly\(\alpha\) Mocks to Characterise Sample Purity The FS0 sample provides higher purity SBLA selection than FS1 or FS2 (P14). However, we note that there exists sets of systems that do not meet these requirements but have equivalent or better purity compared to subsets of FS0 systems with limiting S/N or flux. 
For example, systems with \(F_{\rm Ly\alpha}=0.06\) and S/N/A \(=10\) will have a higher SBLA purity than systems with \(F_{\rm Ly\alpha}=0.04\) and S/N/A \(\approx 3\), even though the latter meets the requirements for sample FS0 and the former does not. We have therefore explored the optimal combination of inter-dependent S/N and flux transmission thresholds to obtain a desired limiting purity. We used the official SDSS BOSS Lyman-\(\alpha\) forest mock data-set produced for DR11 (Bautista et al. 2015) without the addition of DLAs (which are masked in our analysis) and metal absorption lines (which are rare, particularly for the strong, blended absorption studied here). \begin{table} \begin{tabular}{l c c c} \hline Emission Line & \(\lambda_{rest}\) & \(\lambda_{mask}\) & \(\pm\lambda_{buffer}\) \\ & (Å) & (Å) & (Å) \\ \hline Ly\(\beta\) & 1033.03 & 1023 \(-\) 1041 & 5 \\ Ly\(\alpha\) & 1215.67 & 1204 \(-\) 1240 & 10 \\ O i & 1305.42 & 1298 \(-\) 1312 & 5 \\ Si iv & 1396.76 & 1387 \(-\) 1407 & 10 \\ C iv & 1549.06 & 1533 \(-\) 1558 & 10 \\ He ii & 1637.84 & 1630 \(-\) 1645 & 5 \\ C iii & 1908.73 & 1890 \(-\) 1919 & 10 \\ Mg ii & 2798.75 & 2788 \(-\) 2811 & 5 \\ \hline \end{tabular} \end{table} Table 1: Emission line masks and buffer regions used in cubic-spline continuum estimation. All wavelengths listed are in quasar rest frame. The signal-to-noise was calculated using a 100-pixel boxcar smoothing of the unbinned data (replicating the selection function in the data), and then was rebinned to match the resolution used in our selection function. We then compared the observed (noise-in) Ly\(\alpha\) flux transmission in the models with the true (noiseless) flux transmission of these systems in various ranges of observed flux transmission and S/N. The purity is the fraction of selected systems that meets the SBLA definition of true (noiseless) flux transmission \(F_{\rm Ly\alpha}<0.25\). We then accept ranges that meet a given purity requirement. We estimated the purity for a grid of S/N/A \(>0.4\) (with a step size of 0.2) and \(-0.05\leq F<0.35\) (with a step size of 0.05). The flux and S/N/A of the selected lines in the real data are compared to this grid to give an estimate of the purity of the selection. By building samples in this way we are not limited to the high signal-to-noise data used in P14. Though we focus on FS0 for consistency with P14, we demonstrate here how expanded samples can be prepared. Using this approach, we propose three additional samples defined by their limiting SBLA purities. Noting that the mean purity of the FS0 sample is \(\approx 90\%\), we produce a sample of 90% minimum purity, which we label P90. We do indeed obtain a more optimal sample with both higher mean purity and nearly double the number of SBLAs with sample P90 compared to FS0. We further produce samples with minimum purity of 75% and 30%, labelled P75 and P30 respectively. The numbers and resulting mean purity from these mock tests are shown in Table 2. These tests indicate that around 200,000 SBLAs at \(2.4<z<3.1\) are present in the data. Our companion paper, Perez-Rafols et al. (2023), uses a version of our P30 sample without a redshift limit to measure large-scale structure clustering. This provided us with 742,832 SBLAs. Assuming that our inferred purity for P30 is correct for this sample also, we obtain around half a million true SBLAs in our most inclusive sample. This is more than an order of magnitude more CGM systems than our DLA sample.
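To illustrate the purity estimation just described, the sketch below builds a purity grid from mock absorbers and flags observed systems whose (flux, S/N) cell reaches a required minimum purity. The array names, grid edges and helper functions are placeholders consistent with the text, not the actual analysis code.

```python
# Schematic purity-grid estimation from mock spectra (illustrative only).
# Inputs (placeholders): obs_flux, snr, true_flux are 1D arrays for mock
# absorbers selected exactly as in the data (138 km/s bins).
import numpy as np

def purity_grid(obs_flux, snr, true_flux,
                flux_edges=np.arange(-0.05, 0.351, 0.05),
                snr_edges=np.arange(0.4, 20.01, 0.2),
                true_thresh=0.25):
    """Fraction of mock systems per (flux, S/N) cell whose true flux < 0.25."""
    grid = np.full((len(flux_edges) - 1, len(snr_edges) - 1), np.nan)
    for i in range(len(flux_edges) - 1):
        for j in range(len(snr_edges) - 1):
            sel = ((obs_flux >= flux_edges[i]) & (obs_flux < flux_edges[i + 1]) &
                   (snr >= snr_edges[j]) & (snr < snr_edges[j + 1]))
            if sel.sum() > 0:
                grid[i, j] = np.mean(true_flux[sel] < true_thresh)
    return grid

def select_by_purity(obs_flux, snr, grid, flux_edges, snr_edges, min_purity=0.90):
    """Keep observed systems whose (flux, S/N) cell purity exceeds min_purity."""
    i = np.clip(np.digitize(obs_flux, flux_edges) - 1, 0, grid.shape[0] - 1)
    j = np.clip(np.digitize(snr, snr_edges) - 1, 0, grid.shape[1] - 1)
    cell_purity = np.nan_to_num(grid[i, j], nan=0.0)
    return cell_purity >= min_purity
```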
## 4 Stacking Procedure We follow the method originally set out in Pieri et al. (2010) (hereafter P10) and further elaborated in P14 for building composite spectra of Ly\(\alpha\) forest absorbers through the process of stacking SDSS spectra. For every selected Ly\(\alpha\) absorber with redshift \(z_{\alpha}\) the entire continuum fitted quasar spectrum is treated initially as if it were the spectrum of that system alone. In practise, one produces a rest frame spectrum of that absorber by dividing the wavelength array by \((1+z_{\alpha})\). This is done many times for many different selected absorbers (sometimes using the quasar spectra more than once). This ensemble of SBLA rest frame spectra constitutes the stack of spectra to be analysed. Typically one collapses this stack to a single value at every wavelength using some statistic. In P10 and P14 two statistics were applied; the median and the arithmetic mean (though in some circumstances the geometric mean may be the more suitable choice). In Section 8 below we will explore what we can learn from the full population of absorbers and relax the implicit assumption that all systems in a given sample are the same. In this work we will focus on the arithmetic mean with no S/N weighting for reasons which will become clear in Section 8. Stating this explicitly, we calculate the mean of the stack of spectra (or'mean stacked spectrum') as \[F_{S}(\lambda_{r})=\sum_{i=1}^{n}F_{i}(\lambda_{r})/n \tag{1}\] where \(\lambda_{r}\) indicates the wavelength in the rest frame of the SBLA system selected and the set of \(i=1,n\) indicates SBLAs that contribute a measurement at the specified rest frame wavelength. Following the method of P10 and P14, in order to calculate the arithmetic mean, we sigma clip the high and low 3% of the stack of spectra to reduce our sensitivity to outliers. We also allow that the overwhelming majority of the absorption in the spectra are not associated with the selected SBLAs. These unassociated absorbers do not generate any absorption features correlated with our selected Ly\(\alpha\), but they do have an impact on the mean stacked spectrum. When a mean stacked spectrum is calculated, a broad suppression of transmitted flux is seen (see Figure 2). Since this absorption is not associated with the selected systems, it is therefore undesirable in the pursuit of a composite absorption spectrum of the selected systems. The stacked flux departs from unity even in regions where Lyman-series and metals features are not expected despite the fact that each spectrum was continuum normalised before being stacked Figure 2. These broad flux variations result mainly from the smoothly varying average contribution of uncorrelated absorption. The artefacts of the stacking procedure are unwanted in a composite spectrum of the selected systems but vary smoothly enough that one can distinguish them from absorption features of interest. Since they are analogous to quasar continua, P10 gave these artefacts in the stacked spectra the name 'pseudo-continua'. They argued that the effect of this contamination in the mean stacked spectrum can be best approximated by an additive factor in flux decrement. This is because quasar absorption lines are narrower than the SDSS resolution element and hence would typically be separable lines in perfect spectral resolution. 
These uncorrelated absorbers are present on either side of the feature of interest and it is reasonable to assume that they will continue through the feature of interest contributing to the absorption in every pixel of the feature on average without typically occupying the same true, resolved redshift range. In this regime each contributing absorber makes an additive contributions to the flux decrement in a pixel. The alternative regime where absorption is additive in opacity, leads to a multiplicative correction, but weak absorption features (such as those we measure here ) are insensitive to the choice of a multiplicative or additive correction. In light of these two factors we continue under the approximation of additive contaminating absorption. \begin{table} \begin{tabular}{l c c c c} \hline Sample & F\({}_{\rm lower}\) & F\({}_{\rm upper}\) & \(<\)Purity(\%)\(>\) & Number of SBLAs \\ \hline \hline F50 & -0.05 & 0.05 & 89 & 42,210 \\ F51 & 0.05 & 0.15 & 81 & 86,938 \\ F52 & 0.15 & 0.25 & 55 & 141,544 \\ P30 & -0.05 & 0.25\({}^{ab}\) & 63 & 335,259 \\ P75 & -0.05 & 0.25\({}^{aa}\) & 90 & 124,955 \\ P90 & -0.05 & 0.25\({}^{aa}\) & 97 & 74,660 \\ \hline \end{tabular} \({}^{a}\) Hard limit. True maximum is a function of of S/N tuned for desired minimum purity. \({}^{b}\) Redshift limited version of Pérez-Rafols et al. (2023) sample. \end{table} Table 2: Possible \(2.4<z<3.1\) SBLA samples, their flux transmission boundaries (in 138 km s\({}^{-1}\) bins and their purity to true (noiseless) flux transmission of \(F_{\rm Ly\alpha}<0.25\) We therefore arrive at a composite spectrum of SBLAs by correcting the stacked spectrum using \[F_{C}(\lambda_{r})=F_{S}+(1-P), \tag{2}\] where (again) \(F_{S}\) represents the mean stacked flux and \((1-P)\) represents the flux decrement of the 'pseudo-continuum' and can be estimated by fitting a spline through flux nodes representing this pseudo-continuum. To calculate these nodes we first manually select regions of the stacked spectrum in areas where signal from correlated absorption is not seen and/or expected. Then for each such 'pseudo-continuum patch', we define the corresponding node using the mean of flux and wavelength values of all stacked pixels within this patch. In estimating the pseudo-continuum we typically use \(\sim\) 10 A wide "patches" of spectrum. However, smaller continuum patches were used in regions crowded by correlated absorption features, while much wider segments were selected for relatively flat regions of the stacked spectrum. Figure 2 shows the pseudo-continuum along with the regions used to estimate it for the mean stacked spectrum corresponding to FS0. The corresponding composite spectrum is shown in Figure 3. Figure 2: The stacked spectrum of the SBLA system sample FS0 (systems selected with flux in the range \(-0.05\leq F<0.05\) (FS0) is plotted with solid blue curve. The stacked spectrum show broad continuum variations resulting from uncorrelated absorption. The overlaid orange curve represents this pseudo-continuum. The regions used to estimate the pseudo-continuum are shown as green shaded regions withing vertical green dashed lines. ## 5 Improved estimations of measurement uncertainty In this work, we explore a more inclusive treatment of measurement uncertainty than P10 and P14 allowing more reliable fits and more quantitative model comparison. We will initially summarise the previous method in order expand on our more precise error estimations. 
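Before turning to the error analysis, a schematic sketch of the stacking and pseudo-continuum correction of Section 4 (Equations 1 and 2) may be helpful: a trimmed arithmetic mean is taken at each rest-frame wavelength, and a cubic spline through nodes in hand-chosen feature-free patches supplies the additive correction. The arrays and patch list are placeholders, not the pipeline itself.

```python
# Schematic stacking + pseudo-continuum correction (illustrative only).
# `stack` is assumed to be a 2D array (n_systems, n_wavelengths) of continuum-
# normalised spectra shifted to the SBLA rest frame; `wave` is the common
# rest-frame wavelength grid; `patches` is a list of (lo, hi) feature-free
# wavelength ranges (in increasing order), as in Figure 2.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import trim_mean

def mean_stacked_spectrum(stack, clip_frac=0.03):
    # Arithmetic mean after clipping the highest and lowest 3% of values
    # at each wavelength (Equation 1 plus the sigma clipping described).
    return trim_mean(stack, proportiontocut=clip_frac, axis=0)

def pseudo_continuum(wave, f_s, patches):
    # One node per patch: mean wavelength and mean stacked flux in the patch.
    nodes_w, nodes_f = [], []
    for lo, hi in patches:
        sel = (wave >= lo) & (wave <= hi)
        nodes_w.append(wave[sel].mean())
        nodes_f.append(f_s[sel].mean())
    return CubicSpline(nodes_w, nodes_f)(wave)

def composite_spectrum(wave, stack, patches):
    f_s = mean_stacked_spectrum(stack)
    p = pseudo_continuum(wave, f_s, patches)
    # Additive correction of the uncorrelated-absorption decrement (Equation 2):
    # F_C = F_S + (1 - P).
    return f_s + (1.0 - p)
```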
### Quick bootstrap method In P10 and P14 the errors were estimated for the stacked spectrum alone, i.e. prior to the pseudo-continuum normalisation step above. In taking this approach, they did not formally include the potential contribution to the uncertainty of the pseudo-continuum normalisation. Instead they took the conservative choice to scale the errors generated by the bootstrap method by a factor of root-2 assuming that pseudo-continuum fitting introduced an equal contribution to the uncertainty of the final composite spectrum. Errors in the stacked spectrum were estimated by bootstrapping the stack of spectra. At every wavelength bin in the stack, 100 bootstrap realisations were produced and the error was calculated as the standard deviation of the means calculated from those random realisations. This was performed independently for each bin. In the process of exploring improved estimates of uncertainty in the composite spectrum of Ly\(\alpha\) forest systems, we have learned that 100 realisations is not a sufficient number for precision error estimates. Based on these convergence tests we advocate generating 1,000 realisations to have high confidence of accuracy. See Appendix B for more detail on this choice. ### End-to-end bootstrap method In this work we wish to relax the assumption of P14 that pseudo-continuum fitting introduces an uncertainty to the composite spectrum equal to, but uncorrelated with, the combination of other sources of error. In order to do this, we seek to estimate the errors from the telescope all the way to final data analysis step of producing a composite spectrum. In order to build an end-to-end error estimation framework we begin by bootstrapping the sample of SBLAs and their accompanying spectra. For each random realisation of the sample, we construct a realisation of the stacked spectrum following the same approach as that in the quick bootstrap method. The key difference is that we do not simply calculate an uncertainty in the stacked spectrum and propagate it forward analytically through the pseudo-continuum normalisation to the composite spectrum. Instead we include this process in the bootstrap analysis by performing the pseudo-continuum fit and normalisation upon each realisation. The patches used to fit the pseudo-continuum of our observed stacked spectrum (as described in Section 4) were applied to each of Figure 3: Composite spectrum of the SBLA system sample F80 (systems selected with flux between \(-0.05\leq F<0.05\)) produced using the arithmetic mean statistic. Error bars are shown in blue. Vertical dashed lines indicate metal lines identified and dotted vertical lines denote the locations of the Lyman series. Note the scale of the y-axis in each panel: this is our lowest S/N composite spectrum and yet we measure absorption features with depth as small as 0.0005. the bootstrap realisations to obtain spline nodes for a unique pseudo-continuum per realisation. This created an ensemble of 1,000 bootstrapped realisations of the (pseudo-continuum normalised) composite spectrum, \((F_{C}^{c})_{i}\), where \(i\) denotes the \(i\)th bootstrap realisation at every wavelength. Finally, the error in the composite flux \(\sigma_{F_{C}}\) is estimated to be the standard deviation of the ensemble \((F_{C}^{c})_{i}\) at every wavelength. The resulting uncertainties in the composite flux derived using the end-to-end error estimation method are shown in Figure 3 using blue error bars. 
Figure 4 illustrates the end-to-end error estimation mechanism taking a region of the stack around the Si ii \(\lambda 1260\) absorption signal. The stack is shown in the top panel of the figure along with a pair of continuum patches on either sides of the absorption feature as well as the pseudo-continuum estimate. This panel also marks the locations of three pixels chosen as example to illustrate the method: the pixel at the centre of the absorption feature and the pixels at the middle of the continuum patches on the 'blue' and'red' side of the feature. The panels in the bottom row of Figure 4 show the distribution of realisations for the stacked spectrum (open histogram) and composite spectrum (filled histogram). For convenience of comparison, each distribution is plotted with respect to that distribution's mean (i.e \(f_{pix,i}=(\tilde{F_{C}})_{i}-\langle\tilde{F_{C}}\rangle\) or \(f_{pix,i}=(\tilde{F_{S}})_{i}-\langle\tilde{F_{S}}\rangle\)). The wavelength for each distribution is indicated by the vertical dot-dash line of matching colour in the top panel. The interval described by the standard deviation of each distribution is indicated using vertical solid lines for the stacked spectrum (\(\pm\sigma_{F_{S}}\)) and vertical dashed lines for the composite spectrum (\(\pm\sigma_{F_{S}}\)). We can further compare the uncertainty derived for the composite spectrum and the stacked spectrum through the ratio \(\epsilon=\sigma_{F_{C}}/\sigma_{F_{S}}\)) as a function of wavelength. An \(\epsilon>1\) indicates that uncertainty is increased by the pseudo-continuum fitting, whereas \(\epsilon<1\) indicates that pseudo-continuum fitting is suppressing variance. We again take the example of Si ii 1260A and show \(\epsilon\) as a function of wavelength in Figure 5. As illustrated for Si ii 1260A, line absorption features show an additional uncertainty and the regions between them show variance suppression. The latter is to be expected because the pseudo-continuum fitting suppresses large-scale deviations in uncorrelated absorption by erasing low order modes in the spectra. One the other hand the absorption features themselves are free to deviate and show the increased uncertainty of Figure 4: Illustration of the end-to-end error estimation mechanism using a regions of the F80 stack around the Si ii \(\lambda 1260\) absorption feature. **Top row:** The stacked spectrum around the absorption feature centred at 1260Å is shown using a black curve. The shaded grey regions represent a pair of continuum patches on either sides of the feature. The pseudo-continuum is also shown using the orange dashed curve. The green, blue and red vertical lines mark the locations of three pixels chosen for illustration: the pixel at the centre of the absorption feature and the pixels at the midpoints of the continuum patches located on the left and right of the feature, respectively. **Bottom row:** Each panel shows the distributions of the stacked and composite flux across all the realisations at one of the pixels marked in the upper panel. The wavelength of each distribution is indicated by their colour and the colour of the dot-dash line in the top panel. The distributions are shown on a linearly shifted flux scale so that the mean of each distribution corresponds to \(f_{pix}=0\). 
The stacked flux distribution is shown using a open histogram while the composite flux distribution is shown using a shaded histogram and their corresponding standard deviations are shown using vertical solid and dashed lines, respectively. interest. The value of \(\epsilon\) for every measured metal line measurement bin is given in Table 4. The pseudo-continuum normalisation process does increase the uncertainty at the absorption locations, but the increase is smaller than the 41% increase implied by the root-2 assumption of P14. Only C iii(977A) and Si ii(1190A) show a great than 10% increase in errors and so overall a more accurate (but less conservative) error estimate would have been to be neglect the contribution of pseudo-continuum fitting. We note, however, that the degree of noise suppression in feature free regions and the degree of noise inflation at absorption feature centres are both dependent on the placement and width of patches are used to generate spline nodes (shown in Figure 2). Therefore we advise caution if using quick bootstraps with these \(\epsilon\) measurements as correction factors, if precise error estimates are needed. The placement of these patches may change if absorption features are broader/narrower than the results presented here, leading to changes in \(\epsilon\). ## 6 Measurement of the Sbla halo mass We cross-correlate the main FS0 sample of SBLAs with the Ly\(\alpha\) forest in order to measure large-scale structure bias, and constrain SBLA halo mass. The Ly\(\alpha\) forest is prepared in a distinct way for this analysis using the standard method developed for correlation function analyses, as outlined in our companion paper (Perez-Rafols et al., 2023, hereafter PR22). We summarise the data preparation briefly in Appendix A and refer the reader to that paper for a detailed discussion. Figure 6 shows the measured cross-correlation and the best-fit model. The best fit has \(\chi^{2}=5060.602\) for 4904 degrees of freedom (probability \(p=0.058\)). The best-fit value of the SBLA bias parameter is \[b_{\rm SBLA}=2.34\pm 0.06, \tag{3}\] where the quoted uncertainty only includes the stochastic errors. The recovered \(b_{\rm SBLA}\) value is consistent with that found by PR22. If all SBLAs were sited on halos of a single mass, this mass would be \(\sim 7.8\times 10^{11}\rm h^{-1}M_{\sun}\). However, SBLAs are likely found in halos with a range of masses. Following what Perez-Rafols et al. (2018) proposed for DLAs (see their equations 15 and 16 and their figure 8), a plausible distribution of the SBLA cross-section, \(\Sigma\) (\(M_{h}\)), is a power law in halo mass, starting with some minimal halo mass: \[\Sigma\left(M_{h}\right)=\Sigma_{0}\left(\frac{M_{h}}{M_{\rm min}}\right)^{- \alpha}\ \left(M_{h}>M_{\rm min}\right). \tag{4}\] Using this cross-section, the mean halo mass is computed as \[\overline{M_{h}}=\frac{\int_{M_{\rm min}}^{\infty}n(M)\Sigma(M)MdM}{\int_{M_{ \rm min}}^{\infty}n(M)\Sigma(M)dM}\, \tag{5}\] where \(n\left(M\right)\) is the number density of halos for a given mass. For plausible values of \(\alpha=0.50\), \(0.75\) and \(1.00\) this yields a mean mass of \(1.3\times 10^{12}\rm h^{-1}M_{\sun}\), \(9.4\times 10^{11}\rm h^{-1}M_{\sun}\), and \(7.6\times 10^{11}\rm h^{-1}M_{\sun}\) respectively. We note that a detailed study of this cross-section using simulations is necessary to make more accurate mass estimates, but our finding indicate that SBLAs reside in halos of mass \(\approx 10^{12}\rm h^{-1}M_{\sun}\). 
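To make the halo-mass inference concrete, the sketch below numerically evaluates the cross-section-weighted mean halo mass of Equations 4 and 5. The Schechter-like form used for n(M) is a toy stand-in (a proper halo mass function at z ≈ 2.4 is required for the real calculation), and the minimum mass, slope and normalisation are placeholders.

```python
# Illustrative evaluation of the cross-section-weighted mean halo mass
# (Equations 4 and 5). The toy halo mass function below is NOT the one used
# in the paper; swap in a proper n(M) at the relevant redshift for real use.
import numpy as np
from scipy.integrate import trapezoid

def n_of_M(M, M_star=1e13, slope=-1.9):
    """Toy Schechter-like halo mass function (arbitrary normalisation)."""
    return (M / M_star) ** slope * np.exp(-M / M_star)

def mean_halo_mass(M_min, alpha, M_max=1e16, n_pts=4000):
    """<M_h> for Sigma(M) proportional to (M/M_min)^(-alpha) above M_min."""
    lnM = np.linspace(np.log(M_min), np.log(M_max), n_pts)
    M = np.exp(lnM)
    w = n_of_M(M) * (M / M_min) ** (-alpha) * M   # n * Sigma, times M for d(lnM)
    num = trapezoid(w * M, lnM)                   # integral of n * Sigma * M dM
    den = trapezoid(w, lnM)                       # integral of n * Sigma dM
    return num / den

for alpha in (0.50, 0.75, 1.00):
    print(f"alpha = {alpha:.2f}: <M_h> ~ {mean_halo_mass(1e11, alpha):.2e} (toy units)")
```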
It is informative to compare this with order of magnitude estimates of the halo mass derived by assuming that the width of the SBLA line blend is driven by the circular velocity of virialised halo gas undergoing collapse. This connection between halo circular velocity, halo virial mass and galaxy populations has been well-explored (e.g. Thoul & Weinberg, 1996). Specifically we apply the relationship between maximal circular velocity and halo mass modelled by Zehavi et al. (2019). Using these relations, we infer that a circular velocity of 138 km s\({}^{-1}\) at \(z\sim 2.4\) leads to a halo mass estimate of \(M_{h}\sim 3\times 10^{11}\rm h^{-1}M_{\sun}\). This value is broadly consistent with our findings from SBLA clustering, supporting our assumption that blending scale is associated with halo circular velocity and so halo mass. This may shed some light on the reason why SBLAs are CGM regions.

Figure 5: The ratio, \(\epsilon\), between the \(1\sigma\) error in the composite flux (\(\sigma_{F_{\rm{C}}}\)) and that of the stacked flux (\(\sigma_{F_{\rm{S}}}\)) for the FS0 sample is plotted over the region around the Si ii \(\lambda\)1260 feature. The shaded grey regions represent a pair of continuum patches on either side of the feature. The vertical lines correspond to the locations of the pixels marked in Figure 4.

Figure 6: Cross-correlation function averaged over the full angular range \(0<|\mu|<1\) for the fitting range \(10<r<80\ h^{-1}\)Mpc. The solid line shows the best-fit model.

## 7 Average SBLA absorption properties

As one can see in Figure 3, absorption signal is measurable in the composite spectrum from a wide range of transitions: Lyman-series lines (Ly\(\alpha\)-Ly\(\theta\)) and metal lines (O i, O vi, C ii, C iii, C iv, Si ii, Si iii, Si iv, N v, Fe ii, Al ii, Al iii, and Mg ii), but care must be taken to measure them in a way that is self-consistent and without bias. Although these features appear to be absorption lines, they are in fact a complex mix of effects that precludes the naive application of standard absorption line analysis methods appropriate for individual spectrum studies. P14 demonstrated that the main difference in interpretation of the 3 potentially CGM dependent samples (which we have named FS0, FS1 and FS2) was the purity of CGM selection in light of spectral noise given the large excess of pixels with higher transmission that might pollute the sample. Since FS0 has the lowest transmission, it is the purest of these samples. Hence, in this work directed at understanding CGM properties, we focus on interpreting FS0 sample properties. Throughout this work we only present lines measured with 5\(\sigma\) significance or greater. N v, for example, fails to meet this requirement and is not included in the measurements presented below.

### Line-of-sight integration scale

There are two approaches to the measurement of absorption features seen in the composite spectra (as identified in P14): the measurement of the full profile of the feature and the measurement of the central pixel (or more accurately resolution element). In order to understand this choice, it is necessary to reflect, briefly, on the elements that give rise to the shape and strength of the features.
The signal present for every absorption feature is a combination of * the absorption signal directly associated with the selected Ly\(\alpha\) absorption, * possible associated absorption complexes extending over larger velocities (typically associated with gas flows, often with many components), and * sensitivity to large-scale structure (including redshift-space distortions) reflected in the well-documented (e.g Chabanier et al., 2019) fact that Ly\(\alpha\) forest absorption is clustered, leading to potential clustering in associated absorbers also (e.g Blomqvist et al., 2018). In large-scale structure terminology the first two points are 'one-halo' terms and the last one is a 'two-halo' term. This two-halo effect is clearly visible in the wide wings of the Ly\(\alpha\) absorption feature extending over several thousand \(\,\mathrm{km\,s^{-1}}\). Since the metal features seen are associated with Ly\(\alpha\) every one must present an analogous (albeit weak) signal due to the clustering of SBLA. Although this large-scale structure signal is present in the composite, our stacking analysis is poorly adapted to the measurement of large-scale structure since the signal is degenerate with the pseudo-continuum fitting used, and the preferred measurement framework for this signal is the Ly\(\alpha\) forest power spectrum (McDonald et al., 2006). As outlined in Section 3, the selection of SBLA to be stacked includes clustering and therefore both complexes and large-scale structure. Therefore even the central pixel includes all the above effects to some extent but limiting ourselves to the measurement of the central pixel sets a common velocity integration scale for absorption measurement. In fact, since the resolution of SDSS is 2.4 pixels, the appropriate common velocity scale is two native SDSS pixels. We therefore take the average of the two native pixels with wavelengths closest to the rest frame wavelength of the transition in question as our analysis pixel. This sets the integration scale fixed to 138 \(\,\mathrm{km\,s^{-1}}\). This mirrors the Ly\(\alpha\) selection function bin scale which is also a 2-pixel average (see Section 3). The error estimate for the flux transmission of this double width wavelength bin is taken as the quadrature sum of the uncertainty for the two pixels in question (a conservative approximation that neglects the fact that errors in neighbouring pixels are correlated due to pipeline and analysis steps such as pseudo-continuum fitting). Here after we will use 'central bin' to refer to this 2-pixel average centred around the rest frame wavelength of the transition of interest. In contrast P14 showed that measuring the full profile of the features leads to a different velocity width for every feature indicating either varying sensitivity to these effects or tracing different extended complexes. Critically this means that some absorption must be coming from physically different gas. Since the objective of this work is the formal measurement and interpretation of the systems selected, we limit ourselves to the central analysis pixels at the standard rest frame wavelength of each transition. We note, however, that information is present in the composite spectra on the velocity scale of metal complexes and this demands further study if it can be disentangled from large-scale structure. 
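Since the central-bin statistic described above is used for every metal feature, it is worth spelling out: average the two native SDSS pixels nearest the transition's rest-frame wavelength and combine their errors in quadrature (the conservative choice noted above, which neglects pixel correlations). The sketch below assumes simple wavelength, flux and error arrays for the composite spectrum; the names are illustrative.

```python
import numpy as np

def central_bin(wave, flux, flux_err, lambda_rest):
    """Average the two native pixels closest to the transition's rest-frame
    wavelength (the ~138 km/s 'central bin') and return a conservative error
    taken as the quadrature sum of the two pixel uncertainties."""
    order = np.argsort(np.abs(wave - lambda_rest))
    i, j = order[:2]                                  # two nearest native pixels
    f_bin = 0.5 * (flux[i] + flux[j])
    err_bin = np.hypot(flux_err[i], flux_err[j])      # neglects pixel correlations
    return f_bin, err_bin

# e.g. central_bin(wave, composite_flux, composite_err, 1260.42) for Si II 1260
```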
### Measuring the H i Column density Here we compare Lyman series line measurements in the composite spectrum with a variety of models in order to constrain the column density and Doppler parameter. As we have stressed throughout this work, our SBLA samples are a blend of unresolved lines contributing to a 138 \(\,\mathrm{km\,s^{-1}}\) central bin. As a result a range of H i column densities are present in each SBLA. While the full range of H i columns contribute to the selection, it is reasonable to presume that a high column density subset dominate the signal in the composite. It is, therefore, natural that the further we climb up the Lyman series, the more we converge on a signal driven by this dominant high-column subset. Here we exploit this expected convergence to jointly constrain the integrated dominant H i column density (\(\mathrm{N_{H\textsc{i}}}\)) of lines in the blend and their typical Doppler parameter (\(b\)). In the following, the results are presented as equivalent widths to follow standard practise, but the measurements are in fact central bin flux decrements (\(1-F_{C}\)) multiplied by the wavelength interval corresponding to the 138 \(\,\mathrm{km\,s^{-1}}\) central bin interval. In effect, the equivalent widths presented are the integrated equivalent widths of all lines contributing to that central bin measurement. We build a grid of model1 equivalent widths for the eight strongest Lyman transitions over the range \(13.0\leq\log\mathrm{N_{H\textsc{i}}}(\mathrm{cm^{-2}})\leq 21.0\) with interval \(\delta\log\mathrm{N_{H\textsc{i}}}(\mathrm{cm^{-2}})=0.01\), and \(5.0\leq b\) (\(\,\mathrm{km\,s^{-1}}\)) \(\leq 50.0\) with interval \(\delta b=0.1\)\(\,\mathrm{km\,s^{-1}}\). These models are built for the composite spectrum wavelength solution and include instrumental broadening of 167 \(\,\mathrm{km\,s^{-1}}\). Footnote 1: Produced using VPFIT 10.0 (Carswell & Webb, 2014) In order to measure the dominant H i contribution, we must determine which of the Lyman series lines should be treated as upper limits progressively, starting with Ly\(\alpha\) and moving up the series until a converged single line solution of satisfactory probability is reached. For each line considered as upper limit, if the model prediction lies 1\(\sigma\) equivalent width error above the measured equivalent width, the line contributes to the total \(\chi^{2}\) for the model and one degree of freedom gets added to the number of degrees of freedom for the model. If the model prediction lies below this threshold, it does not contribute to the total \(\chi^{2}\) and the number of degrees of freedom for the model remain unchanged. This process 'punishes' the overproducing models instead of rejecting them. The probability for each model is calculated based on the total \(\chi^{2}\) and the updated number of degrees of freedom. The best-fit model for a given upper-limit assignment scheme is determined by maximising the probability. The best-fit probabilities, \(N\) and \(b\)-values corresponding to the different upper-limit assignment schemes are compared to determine the number of lowest order Lyman lines assigned to upper limits (\(N_{ul}\)) necessary to achieve a converged probability. The convergence for the FS0 sample is shown in Figure 7. The model that corresponds to the convergence is chosen as the best-fit model for the H i column density and Doppler parameter for that sample. 
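A minimal sketch of the 'punishment' scheme described above, assuming a precomputed grid of model equivalent widths: lines designated as upper limits only contribute to the \(\chi^{2}\) (and to the degree-of-freedom count) when the model overshoots the measurement by more than \(1\sigma\). The bookkeeping here is deliberately simplified relative to the full procedure.

```python
import numpy as np
from scipy.stats import chi2

def model_probability(ew_model, ew_obs, ew_err, n_upper):
    """ew_* are arrays ordered from Ly-alpha upward; the first n_upper lines
    are treated as upper limits and only punish over-producing models."""
    chisq, dof = 0.0, 0
    for k, (m, o, s) in enumerate(zip(ew_model, ew_obs, ew_err)):
        if k < n_upper and m <= o + s:
            continue                      # upper limit respected: no penalty
        chisq += ((m - o) / s) ** 2
        dof += 1
    return chi2.sf(chisq, max(dof, 1))

# For each candidate n_upper, maximise this probability over the
# (log N_HI, b) model grid and check whether the best-fit values converge
# as n_upper increases (cf. Figure 7).
```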
Figure 8 shows the measured equivalent widths (\(W\)) normalised by the oscillator strength (\(f\)) and rest frame wavelength (\(\lambda\)) for each transition for the FS0 sample. Also shown is the best-fit model, along with models for the \(1\sigma\) upper and lower confidence intervals on the dominant H i column density. Note that when plotted this way, unsaturated lines would produce a constant \(W/(F\lambda)\), and so the dominant H i population is only beginning to show unsaturated properties for the highest Lyman series transitions measured. Table 3 shows the fit results for this procedure. The differences in measured column densities between FS0, FS1, and FS2 demonstrate that, along with decreasing purity of noiseless \(F_{\rm Ly\alpha}<0.25\), higher transmission bands also select lower column densities. The P90, P75 and P30 samples show a similar trend but show a weaker variation in H i column density along with a weaker decline in mean purity. This combined with the large numbers of systems selected indicates that these purity cuts do indeed provide more optimal SBLA samples. While we chose to focus on FS0 in order to preserve sample continuity for comparison with previous work, we recommend a transition to such optimised selection in future work. This supports the choice taken in Perez-Rafols et al. (2023) to use the P30 sample. ### Average Metal Column Densities Unlike the H i measurement above, metal features in the composite are sufficiently weak that several metal transitions are not necessary to establish a reliable column density. However, the combination of line strength and measurement precision means that the small opacity approximation (that the relationship is linear between equivalent width and column density) is inadequate for our needs. Again given that we lack a large number of metal transitions with a wide dynamic range of transition strengths for each metal species, a suite of model lines (as performed for H i) is not necessary. We instead fit them directly with column density the only free parameter, treating each feature as fully independent or one another. We assume a Doppler parameter value taken from the H i measurement (see below). We fit the mean of the pair of pixels nearest to the transition wavelength with instrumental broadening set to 167 km s\({}^{-1}\) using VPFIT. Since VPFIT was not designed to reliably assess the uncertainty in the column density from a single pixel at time, we pass the upper and lower \(1\sigma\) error envelope through VPFIT for every line to obtain \(N_{min}\) and \(N_{max}\) respectively. The measurements for our main sample (FS0) along are given Table 4. We exclude from our analysis all transitions where there is a significant contribution to the central 138 km s\({}^{-1}\) by the broad wing of a neighbouring feature. In principal, it is possible to fit the superposed features, correct for the profile of the unwanted feature and measure the 138 km s\({}^{-1}\) central core of the desired line, but these blended features are incompatible with the population modelling Figure 8: The best fit H i model (_green solid_ line) and the limiting \(\pm 1\sigma\) allowed models (_orange dashed_ line) compared to Lyman series equivalent width measurements for the FS0 sample. The upper limits reflect the convergence described in the text and illustrated in Figure 7. 
\begin{table} \begin{tabular}{l c c c} \hline \hline Sample & \(\log\mathrm{N_{H\textsc{i}}}\)(cm\({}^{-2}\)) & \(b\)(km s\({}^{-1}\)) & \(N_{ul}\) & Prob \\ \hline \hline FS0 & \(16.04^{+0.06}_{-0.06}\) & \(18.1^{+0.7}_{-0.6}\) & 5 & 0.04 \\ FS1 & \(15.64^{+0.06}_{-0.06}\) & \(12.3^{+0.4}_{-0.4}\) & 3 & 0.6 \\ FS2 & \(15.1^{+0.06}_{-0.07}\) & \(8.5^{+1.0}_{-0.3}\) & 5 & 0.13 \\ P30 & \(15.49^{+0.06}_{-0.01}\) & \(10.8^{+1.4}_{-0.1}\) & 5 & 0.4 \\ P75 & \(15.67^{+0.06}_{-0.03}\) & \(13.5^{+0.3}_{-0.3}\) & 5 & 0.27 \\ P90 & \(15.79^{+0.06}_{-0.07}\) & \(14.6^{+1.0}_{-0.1}\) & 5 & 0.37 \\ \hline \end{tabular} \end{table} Table 3: Inferred H i column densities from Lyman series measurements. Figure 7: Test of H i Lyman series upper limits (starting with Ly \(\alpha\) as an upper limit and progressively adding higher order Lyman lines) for convergence to determine best fit model parameters for the FS0 composite. The shaded bands represent the final best fit parameters for \(\log\mathrm{N_{H\textsc{i}}}\) (top, blue) and \(b\) (middle, red). The probability of (of a higher \(\chi^{2}\)) for each best-fit model, as a function of the number of upper limits, is given in the bottom panel (green). procedure that follows and so are of limited value. Examples of cases where a broad feature wing contaminates the desired feature centre (and are hence discarded) are O i(989A), N iii(990A) and Si ii(990A), and C ii(1036A) and O vi(1037A). On the other hand O i(1302A) and Si ii(1304A)are retained in our analysis despite pertainingly blended in our composite spectrum. The contribution of the Si ii(1304A) feature wing to the central O i analysis bin is 3% of the observed flux decrement. The O i feature wing contributes 6% to the observed flux decrement to the Si ii(1304A) measurement. This is illustrated in Figure 9. In each case spectral error estimate is similar to the size of the contamination. As we shall see in Section 7 the error estimates of the composite are too small for any true model fit and instead the limiting factor is the much larger uncertainty in the population model fits of Section 8. Another consequence of our inability to resolve the individual lines that give rise to our metal features (and our lack of a dynamic range of transition strengths) is that we lack the ability to constrain the Doppler broadening parameter. However, we do have a statistical measurement of the Doppler parameter of systems that dominate the blend selected. This is the value of the Doppler parameter obtained from the H i measurement. While the measurement of narrow lines in wide spectral bins is often insensitive to the choice of Doppler parameter, in our measurements it does matter. The theoretical oversampled line profile is a convolution of the the narrow line and the line spread function. Our choice of 2 spectral bins is much larger than the former but does include the entire line spread function. This means that the choice of Doppler parameter in the model does have an impact. For example, changing the Doppler parameter by 5 km s\({}^{-1}\) generates a change of \(\Delta(logN)\lesssim 0.1\) (the strongest features are closest to this limit, e.g. C iii). Normally this degree of sensitivity would be considered small but in the context of the extremely high precision of the average column density statistic, the choice of using the H i Doppler is a significant assumption. Again we shall see in Section 8 that the population analysis implies larger column density errors. 
### Modelling average metal column densities In order to interpret our measurements of SBLA sample FS0 (both for the ensemble SBLA mean and the population properties in Section 8) we follow the simple framework in P10 and P14. We will review this analytic framework here, and for further details see P14. A key supporting assumption for what follows is that the gas studied follows the optically thin (to ionizing photons) approximation. This assumption is supported by various arguments. First of all, as stated in Section 3 Damped Lyman-\(\alpha\) systems in the DR16Q sample are masked. Secondly, the mean H i column density found (see Section 7.2) is that of optically thin gas. Thirdly, the population analysis (see Section 8) indicates that Ly\(\epsilon\) is homogeneous indicat \begin{table} \begin{tabular}{l c c c c c c c c} \hline Line & Wavelength (Å) & Ionization Potential (eV) & \(F_{C}\) & \(\sigma_{F_{C}}\) & \(\epsilon\) & \(\log{\rm N(cm^{-2})}\) & \(\log{\rm N_{max}(cm^{-2})}\) & \(\log{\rm N_{min}(cm^{-2})}\) \\ \hline OI & 1302.17 & 13.6 & 0.9743 & 0.0011 & 1.084 & 13.470 & 13.449 & 13.489 \\ MgII & 2796.35 & 15.0 & 0.9376 & 0.0031 & 1.034 & 12.450 & 12.424 & 12.474 \\ MgII & 2803.53 & 15.0 & 0.9404 & 0.0031 & 1.043 & 12.729 & 12.703 & 12.754 \\ FeII & 1608.45 & 16.2 & 0.9596 & 0.0007 & 1.020 & 12.509 & 12.433 & 12.573 \\ FeII & 2344.21 & 16.2 & 0.9878 & 0.0009 & 1.042 & 12.499 & 12.467 & 12.530 \\ FeII & 2382.76 & 16.2 & 0.9807 & 0.0009 & 1.032 & 12.252 & 12.231 & 12.272 \\ FeII & 2586.65 & 16.2 & 0.9932 & 0.0013 & 1.041 & 12.415 & 12.321 & 12.493 \\ FeII & 2600.17 & 16.2 & 0.9798 & 0.0014 & 1.031 & 12.361 & 12.329 & 12.390 \\ SiII & 1190.42 & 16.3 & 0.9709 & 0.0010 & 1.147 & 12.780 & 12.765 & 12.795 \\ SiII & 1193.29 & 16.3 & 0.9643 & 0.0010 & 1.165 & 12.574 & 12.561 & 12.586 \\ SiII & 1260.42 & 16.3 & 0.9481 & 0.0012 & 1.082 & 12.422 & 12.411 & 12.433 \\ SiII & 1304.37 & 16.3 & 0.9823 & 0.0010 & 1.076 & 13.044 & 13.017 & 13.069 \\ SiIII & 1526.71 & 16.3 & 0.9780 & 0.0006 & 1.032 & 12.886 & 12.872 & 12.899 \\ AlII & 1670.79 & 18.8 & 0.9740 & 0.0007 & 1.020 & 11.806 & 11.795 & 11.817 \\ CII & 1334.53 & 24.4 & 0.9428 & 0.0010 & 1.019 & 13.410 & 13.401 & 13.418 \\ AlIII & 1854.72 & 28.4 & 0.9904 & 0.0005 & 1.031 & 11.805 & 11.780 & 11.828 \\ AlIII & 1862.79 & 28.4 & 0.9965 & 0.0005 & 1.035 & 11.661 & 11.590 & 11.722 \\ SiIII & 1206.50 & 33.5 & 0.8904 & 0.0010 & 1.057 & 12.690 & 12.685 & 12.696 \\ SiIV & 1393.76 & 45.1 & 0.9367 & 0.0007 & 1.016 & 12.838 & 12.832 & 12.844 \\ CIII & 977.02 & 47.9 & 0.8180 & 0.0025 & 1.259 & 13.444 & 13.434 & 13.455 \\ CIV & 1548.20 & 64.5 & 0.8764 & 0.0008 & 1.029 & 13.586 & 13.582 & 13.590 \\ OVI & 1031.93 & 138.1 & 0.8994 & 0.0014 & 1.084 & 13.799 & 13.792 & 13.807 \\ \hline \end{tabular} \end{table} Table 4: Mean metal columns for the main sample, FS0. Figure 9: The contribution of Si ii(1304Å) to the central bin measurement of O i(1302Å) and vice versa. The _blue_ curve is the fit to the portion of the O feature that is Si ii–free (the blue-side of the profile). The _red_ curve is the fit to the portion of the Si ii feature that is O i–free (the red-side of the profile). The green curve is the joint fit of the full profiles of both features. The full profile fit is only used to measure the contribution to the measurement bin of the neighbouring line. As discussed in Section 7.1, we do not use the full profile measurement of features in this work. ing that the H i population does not deviate significantly from this mean. 
Finally DLAs and Lyman limit systems are not sufficiently numerous to significantly modify our mean results (as discussed in Section 3). However, as we shall see in Section 8 when one delves further into the metal population behind the mean one finds that such small populations can have an important contribution if the absorption is sufficiently strong. Metal lines consistent with such small populations are identified in Section 8 and omitted from further analysis in this work. In order to model the column density of each metal line from the measured the H i column density we need a simple sequence of conversion factors: the neutral hydrogen fraction is needed to obtain the hydrogen column density, the metallicity (and an abundance pattern baseline) is needed to obtain the metal element column density, and finally the metal ionization fraction is needed to obtain the required metal column density. The ionization fractions are provided under the optically thin approximation by runs of the CLOUDY (Ferland et al., 1998) with varying gas density and temperature using a quasar+galaxy UV background model (Haardt & Madau, 2001). For relative elemental abundances, we assume a solar abundance and take the solar abundance pattern of Anders & Grevesse (1989). The UV background and abundance patterns used are significant simplifying assumptions (see Section 9.2 for further discussion). In this work, we focus on the constraining density and temperature from these ionization models with metallicity as an additional free parameter (acting as a overall normalisation of projected metal column densities). We give the gas density in terms of hydrogen atom number density, but this can be converted to gas overdensity by scaling up by 5.04 dex for a standard cosmology at \(z=2.7\). In the process of interpreting these metal features, we take into account that all these features should build up a coherent picture either as a multi-phase medium, or multiple populations of systems or both. By'multiphase' we mean that an individual SBLA in our sample may be associated with multiple phases of gas that are unresolved in our data. Interpreting our average metal properties in a purely multiphase way presumes that all SBLAs stacked are the same. We will initially explore this straw-man model before going on to explore the underlying population, and combined multi-population and multi-phase fits in Section 8. One cannot fit a model to each of the ionisation species in isolation because a fit to one metal column density implies a prediction for another. We illustrate this point in figures 10 and 11. In each panel we attempt to fit one of O vi, Si iii or O i, varying the metallicity to maintain the fit while exploring density and temperature. In Figure 10 we a take reasonable density for each of the 3 species and a reasonable temperature of \(T=10^{4}\)K, and we vary the density around this value. In Figure 11 we vary instead the temperature around these reasonable values. The temperature, \(T=10^{4}\)K, is a standard estimate for a photoionized and photo-heated gas. The central densities are estimates intended to span the range of conditions required without over-production of other species (where possible). Note that the propagated errors associated with the uncertainty in the H i column density are approximately the width of the model lines shown and so can be neglected. In this plot (and all subsequent plots of this section) the measured column densities are those shown in Table 4. 
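The 'simple sequence of conversion factors' just described can be written compactly. In the sketch below the neutral fraction and ionization fraction stand in for values interpolated from a CLOUDY grid, and the numbers in the example call are hypothetical placeholders chosen only to show the bookkeeping.

```python
import numpy as np

def predicted_log_column(log_N_HI, x_HI, metallicity, log_solar_abundance,
                         ion_fraction):
    """N_HI -> N_H via the neutral fraction, N_H -> N_element via the
    metallicity [X/H] and a solar abundance pattern, N_element -> N_ion via
    the species' ionization fraction. All columns in log10(cm^-2)."""
    log_N_H = log_N_HI - np.log10(x_HI)
    log_N_element = log_N_H + log_solar_abundance + metallicity
    return log_N_element + np.log10(ion_fraction)

# Illustrative call (all inputs are placeholders, not fitted values):
print(predicted_log_column(16.04, x_HI=1e-3, metallicity=-1.0,
                           log_solar_abundance=-4.5, ion_fraction=0.1))
```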
Assuming that the gas is multi-phase, contributions to the column density are additive. In other words, one must not significantly over-produce column densities from any given phase, but under-producing is acceptable so long as the short-fall can be made up by other phases (that do not themselves lead to significant over-production of other metal column densities). One can see by eye that two to three phases are sufficient to generate all the ionization species in broad terms.

Figure 10: Metal column density models for the FS0 sample. Model curves are displayed assuming gas is optically thin, the H i columns shown in Table 1, and a solar abundance pattern. Models to fit the column densities of O i (_top panel_), Si iii (_middle panel_), and O vi (_bottom panel_) are shown with varying density. Metallicities are tuned in order to fit the chosen species for a given density and temperature. A preferred value of density is chosen for a fixed temperature (\(10^{4}\)K), attempting to avoid overproducing any species and avoiding unjustified extremes of density. Density is varied around this preferred value (_black line_) in the middle and bottom panels. In the top panel, we are not able to do this since the maximum density is the favoured one (_blue dashed line_) and we are only able to vary the density downwards.

Figure 12 shows the resulting overall model fit from summing these three phases for the reasonable densities and temperatures of figures 10 and 11. While not a full parameter search, it is clear that this multi-phase model produces the general trend required by the data but only with extremely high density and metallicity for the CGM. However, it completely fails to offer the acceptable statistical agreement required by the very small measured uncertainties. One might attempt to generate instead four, five, six or more phases (indeed a plausible physical model would not be discrete at all), but each of our current three phases makes strong predictions for multiple species and the model lacks the freedom to meet the statistical requirements of the data. For instance, producing more Al iii without overproducing Si iii, C ii and Al ii seems implausible. Similarly producing more Si iv without overproducing Si iii or further overproducing C iii seems implausible. Indeed the data is also not self-consistent in this purely multi-phase picture. For example the five Si ii features measured are statistically divergent from one another. A natural solution to this puzzle presents itself; not all SBLAs are alike and treating the composite spectrum as a measurement of a uniform population of lines with multi-phase properties is unrealistic.

### The covariance between SBLA metal features

In order to explore the absorbing population beyond the mean we can study the properties of the stack of spectra used to calculate the mean composite spectrum.
Naturally there is variance in the metal population giving rise to any given feature. In order to exploit these metal populations, we must develop an understanding of whether line strengths vary together. For example, it is expected that Si ii(1260Å) will be covariant with C ii given the predictions of models shown in figures 10 and 11. On the other hand, it is far from clear if Si ii(1260Å) will be similarly covariant with O i, Si iii or even O vi. Insignificant covariance would imply that population variance is negligible. Similar and significant covariance between all species irrespective of ionization potential would indicate that metallicity variation is the main driver for population variation. On the other hand significant differences in covariance of low, medium and high ions with themselves and each other are a sign of more complex multi-population properties. In order to explore this we calculate the covariance of the transmitted flux between our metal features normalised by their line strengths. The procedure used is set out in Appendix D. Figure D2 shows the covariance between pairs of lines measured at line centre normalised to the product of the associated mean absorption signal for each line (corresponding to the flux decrement in the composite spectrum at line centre). This normalisation is performed in order to allow meaningful comparisons of the covariance between lines of different intrinsic strengths.

Figure 11: As in Figure 10, metal column density models are shown for the FS0 sample. Models to fit the column densities of O i (_top panel_), Si iii (_middle panel_), and O vi (_bottom panel_) are shown with varying temperature around the value \(10^{4}\)K corresponding to the preferred values of Figure 10. Metallicities are again varied to provide the best fit to the chosen species for a given density and temperature.

Figure 12: The column densities of metal ionization species, in order of decreasing ionization potential, for the FS0 sample as in Figure 10. The best three models to fit the column densities of O i, Si iii, and O vi are shown. A combined model is shown, reflecting the multiphase scenario where each system stacked has the same properties and three phases of associated gas. By summing the columns from the three models without correction we are assuming that the H i is distributed equally in each phase. Each phase receives a third of the H i column and therefore the metallicity is three times larger than the values shown in the legend for the model.

In general covariance is approximately as large as the absorption strength or up to 4\(\times\) larger. In the top panel of Figure 11 we focus once again on transitions of our 3 indicative species: O i, Si iii, and O vi, for low, medium and high ionization species respectively. We show the trend of covariance with the best-measured carbon lines, the best-measured silicon lines and remaining low ionization species in subsequent panels. We find that high ions are covariant with other ions with little or no signs of ionization potential dependence. Medium ions (Si iv, Si iii, Al iii and to an extent C iii and C ii) also show an increased (albeit weaker) covariance with low ions and no signs of raised covariance with each other. We can conclude that SBLAs are not all alike with respect to their mix of detected metal lines. High ions appear to be relatively homogeneous, low ions appear to be inhomogeneous. Medium ions lie between and their inhomogeneity seems to be linked to the inhomogeneity of low ions.
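The normalised covariance statistic behind these statements is simple to compute once the per-spectrum fluxes at each line centre are tabulated. The sketch below assumes such a table and divides each covariance by the product of the corresponding composite flux decrements so that lines of different strengths can be compared; names are illustrative.

```python
import numpy as np

def normalised_covariance(flux_at_centre, mean_decrement):
    """flux_at_centre: dict {line: 1-D array of fluxes, one per stacked spectrum}.
    mean_decrement: dict {line: flux decrement of the composite at line centre}.
    Returns line names and the covariance matrix normalised by the product
    of mean absorption strengths."""
    names = list(flux_at_centre)
    out = np.zeros((len(names), len(names)))
    for a, la in enumerate(names):
        for b, lb in enumerate(names):
            cov = np.cov(flux_at_centre[la], flux_at_centre[lb])[0, 1]
            out[a, b] = cov / (mean_decrement[la] * mean_decrement[lb])
    return names, out
```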
Low ions generally show high levels of covariance with each other aside from the peculiar low covariance between Mg ii and O i despite their closely related ionization properties. However Section 8 shows that the Mg ii population is poorly constrained and is (marginally) consistent with a separate small self-shielded population. Overall it seems evident from the line covariance alone that more than one population exists in the ensemble of SBLA properties, and that metallicity variation alone is not sufficient to explain it. Overall covariance with low ionization species is high. It is at least as high as the covariance between high ions, between medium ions and between high ions with medium ions. Hence we conclude that the strong population(s) of low ions is also accompanied by strong populations of all species.

## 8 SBLA absorption population

The standard stacking approach of calculating a mean or a median in order to understand the ensemble properties of the sample neglects variation in the ensemble. In this section we seek to explore the underlying properties of the population probed in our fiducial composite spectrum by using the full distribution in the stack at metal line centres. This is a non-trivial matter since the flux transmission distribution provided by this stack of spectra is a mix of different effects. In addition to the metal strength distribution we seek to probe, one can expect contributions from the observing noise (mostly read noise and photon shot noise), contaminating absorption fluctuations (absorption in the spectra not associated with the selected system but nevertheless coincident with them), any smooth residual continuum normalisation errors in the individual quasar spectra, and finally any errors in the subtraction of the overall mean level of uncorrelated absorption (i.e. the pseudo-continuum). It is not possible to pick apart these various effects from the data alone but we may forward-model potential metal populations and compare them with the observed distribution. One could seek to study each effect in detail and generate synthetic spectra, but a much simpler and more robust method presents itself; we use the data itself as the testbed for our population modelling by adding model signal to null data.

### The null sample

The stack of spectra itself provides the ideal signal-free null sample: the close blueward and redward portions of the stack of spectra beyond the full profile of the feature of interest. These proximate portions of the spectral stack represent a close approximation of the effects present at line centre excluding the metal signal of interest. Potential linear variation as a function of wavelength in these effects is dealt with by attempting to mirror as much as possible the null pixels selected on both the blueward and redward sides. These null wavelength bins are drawn from the sample used in pseudo-continuum fitting as shaded in green in Figure 2. We take 8 wavelength bins on the red-side and 8 wavelength bins on the blue-side for all metal lines except Si iii (where the close proximity of the broad Ly\(\alpha\) absorption feature limits us to 4 bins on each side). We then average together the flux transmission in red-blue pairs from closest to furthest to the metal transition in order to generate the usual 138 \(\,\mathrm{km\,s}^{-1}\) integration scale of the central bin and to cancel out linear evolution with wavelength between red and blue. This leaves us with 8 null bins (or 4 nulls for Si iii) for every metal feature central bin.
In all cases the sampling of the null distribution is sufficient to allow the errors in the true measurement to dominate. Finally, before assembling our null pixels we rescale them by any residual offset from the pseudo-continuum in the mean spectrum. As a result the nulls show only dispersion and no zero-point offset from the pseudo-continuum before mock signal is added.

### The population model

We model the populations underlying the average metal absorption signal of each feature independently with two main fitted parameters and two further marginalised parameters. These main parameters generate bimodality in the metal populations, constrained by a prior that the population mean is that given by the unweighted arithmetic mean composite spectrum. In effect this unweighted arithmetic mean provides a flux decrement (\(D_{m}=1-F_{C}\)) 'metal absorption budget' to be allocated in a way such that the ensemble mean is preserved. Specifically our main parameters are:

* \(f_{pop}\), the fraction of systems with strong metal absorption, and
* \(f_{move}\), the proportion of the flux decrement by which to reduce the weak metal absorption population and reallocate to the strong population.

The two parameters combined define the degree of asymmetry between the two populations. We initially attempted to fit with only \(f_{pop}\) and \(f_{move}\) as free parameters but found that two forms of random scatter were required and must be marginalised over. The first is a Gaussian scatter in the strong absorption flux decrements with a standard deviation, \(\sigma_{p}\). The second is a Gaussian random noise added to the entire sample (both strong and weak components) with a standard deviation, \(\sigma_{n}\). This additional noise term is typically small (see Table 5) but appears to be necessary in some cases for an acceptable fit. The addition is a logical one since the pseudo-continuum fitting leads to an asymmetry in the noise properties between the metal measurements and nulls. The null pixels are part of the pseudo-continuum fitting and therefore the mean of the noise distribution is suppressed. This suppression is reinforced by our choice to rescale the zero-point of the nulls. In this way, we chose to generate a random noise in the nulls rather than carry forward a potentially different noise deviation already present in the nulls. Overall, these two normally distributed random variables are sufficiently flexible to account for any scatter in the weak population also, since the sum of two independent normal random variables is also normal. The resulting model is the simplest that provides an acceptable fit to our data. More explicitly, a mock absorption sample is built by taking every null pixel in the ensemble of nulls and applying the model as follows. For strong absorbers the flux decrement applied is \[D^{\prime}_{strong}=D_{m}+\frac{D_{m}f_{move}(1-f_{pop})}{f_{pop}}+\mathcal{G}(0,\sigma_{p})+\mathcal{G}(0,\sigma_{n}) \tag{6}\] whereas the weak absorbers' flux decrement is modelled as \[D^{\prime}_{weak}=D_{m}(1-f_{move})+\mathcal{G}(0,\sigma_{n}) \tag{7}\] where \(\mathcal{G}(0,\sigma)\) denotes a Gaussian random number with zero mean and a standard deviation \(\sigma\). The Gaussian random number that represents scatter in the strong population is bounded such that \(\mathcal{G}(0,\sigma_{p})<D_{m}\) in order to ensure that the strong sample never shows unphysical negative absorption.
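The following is a sketch of how a mock sample can be generated from the null pixels under equations (6) and (7). How the model decrement is combined with the null flux (simple subtraction here) and the exact handling of the bound on the strong-population scatter are assumptions of this sketch rather than a description of the actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

def mock_fluxes(null_flux, D_m, f_pop, f_move, sigma_p, sigma_n):
    """Apply the two-population decrement model to signal-free null pixels."""
    n = null_flux.size
    strong = rng.random(n) < f_pop                          # trial strong fraction
    decrement = np.full(n, D_m * (1.0 - f_move))            # weak population, eq. (7)
    scatter = np.minimum(rng.normal(0.0, sigma_p, n), D_m)  # bounded scatter
    decrement[strong] = (D_m + D_m * f_move * (1.0 - f_pop) / f_pop
                         + scatter[strong])                 # strong population, eq. (6)
    decrement += rng.normal(0.0, sigma_n, n)                # noise added to everything
    return null_flux - decrement

# Averaging the histograms of >=100 such realisations gives the model
# distribution that is chi^2-compared with the measured flux distribution.
```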
In principle this could lead to an asymmetry in the generated Gaussian numbers, a non-conservation of the 'metal budget' and therefore an incorrect mean metal strength for the ensemble. In practice, however, favoured values of \(\sigma_{p}\) are sufficiently small that this regime is not reached. The mock absorption sample combines every null pixel from every member of the stack of spectra. We randomly assign weak or strong absorber status to each pixel (using a uniform random number generator) in line with the trial \(f_{pop}\) and proceed following Equation 6 or 7 as necessary. For every model (specified by our 4 parameters) we compare the flux transmission distribution of the mock sample with the measured flux transmission distribution function for the feature of interest. Despite our large number of null pixels, our model distribution functions can be unstable. Hence we make at least 100 random realisations of these mocks and the model distribution function carried forward is the average of these random realisations. More realisations are produced when it is clear that the flux distribution is more complex or if the favoured models are those with small \(f_{pop}\), which therefore require additional statistics to offset the intrinsically smaller sample size. In each case we compare the averaged simulation histogram with the measured true one, by performing a \(\chi^{2}\) test. An example of this is shown in Figure 13, which compares the distribution function of the flux transmission for the central bin of Si ii (1260) with the distribution of the preferred model. In the development of these mocks and their comparison to data, it became apparent that outliers in the noise distribution lead to high \(\chi^{2}\) values. In order to limit the impact of these outliers we sigma-clip by removing the top and bottom 3% of the distribution from the \(\chi^{2}\) test. This could in principle impair our ability to constrain very small absorbing populations but this is not true in practice. Furthermore, the favoured models are largely unaffected. This suggests that the tails of the distributions are dominated by noise outliers as expected. The range of flux transmission shown in Figure 13 illustrates, for example, the range used in the model comparison. We search parameter space from \(0.01\leq f_{pop}<1\) and \(0.01<f_{move}<0.99\) allowing \(\sigma_{p}\) and \(\sigma_{n}\) to float freely to preferred values in each case following the results of the \(\chi^{2}\) test. We also add grid points in this 4-dimensional parameter space in order to better sample the region with \(\Delta\chi^{2}<12\). We then find the minimum \(\chi^{2}\) in this parameter space and calculate the \(\Delta\chi^{2}\) with respect to this minimum for the entire \(\chi^{2}\) surface. We estimate confidence intervals for our two parameters of interest by marginalising over the other two in order to produce the \(\chi^{2}\) scans shown in Figure 14. Since we are performing a combined fit of the two parameters of interest, the standard deviation (68.3%) confidence interval is provided by the region where \(\Delta\chi^{2}<2.30\). This 1\(\sigma\) interval is marked in Figure 14.

### Population analysis results and measuring the strong metal population

Table 5 shows the resulting favoured model parameters including 1\(\sigma\) confidence intervals for our two parameters of interest and the fit probability.
Since the constraint is statistically and computationally demanding, we limit ourselves to the most constraining transition for each ionization species. We present only species that have generated statistically meaningful parameter constraints for any feature. We study one further parameter, which is a quantity derived from our two parameters of interest. This is the 'boost factor' \[C_{boost}=\frac{f_{move}(1-f_{pop})}{f_{pop}}+1, \tag{8}\] which represents for each feature the level of boost in line strength that must be applied to the flux decrement measured in the composite spectrum in order to generate the metal strength of the strong population favoured by the population model search. Note that the best fit \(C_{boost}\) is derived from the best fit \(f_{pop}\) and \(f_{move}\), while the error estimate in \(C_{boost}\) is the range given by marginalising over the 1\(\sigma\) confidence of \(f_{pop}\) and \(f_{move}\).

Figure 13: An estimate of the probability distribution function of the flux in the stack of spectra corrected for the pseudo-continuum (for consistency with the composite spectrum) at the spectral pixel closest to the rest frame wavelength of Si ii(1260) (_black line_). The _red line_ shows the distribution function of the best fitting model (see Table 5).

### Inferred column densities for the strong metal population

We now have a population analysis fit, as shown in Table 5, and the covariance analysis result in Section 7.5, and so we are able to build up a picture of the dominant strong absorber population with realistic associated measurement errors statistically, even though we make no attempt to recover the sub-population on a case-by-case basis. The population analysis parameter \(C_{boost}\) allows us to infer the typical corrected transmitted flux, \(F_{Corr}\), associated with this strong population for each feature (see Table 6). Since the uncertainty in \(C_{boost}\) is much larger than the uncertainty in \(F\), the error margin in \(C_{boost}\) can be carried forward as the error margin in the flux transmission. This uncertainty is indicated in Table 6 as a minimum and maximum transmitted flux, respectively given by \(F_{Corr,min}\) and \(F_{Corr,max}\). The corrected transmitted fluxes shown in Table 6 for the strong population are averaged across a 138 km s\({}^{-1}\) velocity window and while this information alone doesn't tell us how many individual components exist, we know there must be at least one component that is strong enough to produce a minimum flux at least this low if resolved. We can conclude therefore that all these lines should be statistically significant in high S/N and high resolution data. We don't rule out the possibility that the weak population for any given metal line is detectable, but they are not the focus of this work. The size of the strong populations (indicated by \(f_{pop}\)) is not consistent among all features. Higher ionization lines typically show larger and weaker strong populations. Given the picture, drawn from covariance, that strong higher ions trace a wider range of conditions, this is to be expected. However, it is also true that each feature shows its highest covariance with low ions. The key conclusion of the covariance analysis is that strong low ions appear to be accompanied by medium and high ions. We can therefore treat this sub-population of \(\approx\)25% as being traced by all our fitted metal features and fit a multi-phase model to all these features.
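Equation (8) and the corrected transmission it implies are a two-line computation. As a check of the bookkeeping, plugging the Table 5 values for Si ii(1260) into the sketch below reproduces the corresponding \(F_{Corr}\) in Table 6 to rounding; the function names are illustrative.

```python
def boost_factor(f_pop, f_move):
    """Equation (8): factor by which the composite flux decrement is boosted
    to represent the typical strong-population line strength."""
    return f_move * (1.0 - f_pop) / f_pop + 1.0

def corrected_transmission(F_composite, f_pop, f_move):
    """F_Corr = 1 - C_boost * (1 - F_C) for the strong population."""
    return 1.0 - boost_factor(f_pop, f_move) * (1.0 - F_composite)

print(corrected_transmission(0.9481, f_pop=0.36, f_move=0.42))  # ~0.909
```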
The metal column densities (and their measurement uncertainties) associated with this common strong absorbing population are derived from the corrected transmitted flux (and its error margin), using the same method as set out in Section 7.3. We recompute each column density value as before using this strong absorber corrected flux transmission. The column densities of the strong population features are given as \(N_{strng}\) in Table 6, with associated upper low \begin{table} \begin{tabular}{l c c c c c c} \hline Line & Wavelength & \(F_{Corr}\) & \(F_{Corr,min}\) & \(F_{Corr,max}\) & \(N_{strng}\) & \(N_{strng,max}\) & \(N_{strng,min}\) \\ \hline OI & 1302.17 & 0.8708 & 0.7867 & 0.9214 & 14.287 & 14.653 & 14.008 \\ MgII & 2796.35 & 0.4951 & 0.0000 & 0.8805 & 15.334 & \(\infty\) & 12.798 \\ FeII & 2382.76 & 0.2484 & 0.0000 & 0.8602 & 18.023 & \(\infty\) & 13.248 \\ SiII & 1260.42 & 0.9091 & 0.8878 & 0.9218 & 12.707 & 12.825 & 12.628 \\ AlIII & 1670.79 & 0.7844 & 0.7454 & 0.8832 & 12.991 & 13.165 & 12.557 \\ CII & 1334.53 & 0.8359 & 0.6817 & 0.9097 & 14.005 & 14.748 & 13.645 \\ SiIII & 1206.50 & 0.8676 & 0.8644 & 0.8904 & 12.802 & 12.817 & 12.690 \\ SiIV & 1393.76 & 0.8052 & 0.7463 & 0.8499 & 13.513 & 13.770 & 13.322 \\ CIII & 977.02 & 0.6085 & 0.5550 & 0.6667 & 14.638 & 15.098 & 14.207 \\ CIV & 1548.20 & 0.7530 & 0.7260 & 0.7664 & 14.125 & 14.257 & 14.065 \\ OVI & 1031.93 & 0.8845 & 0.8498 & 0.8994 & 13.879 & 14.041 & 13.799 \\ \hline \end{tabular} \end{table} Table 6: Strong population column densities. \(F_{Corr}\) is the corrected flux transmission for the strong population of lines derived from the population analysis in Section 8.4. \(N_{strng}\) is the integrated metal column density associated with SBLAs with strong metals. Figure 14: Si ii(1260) population model \(\chi^{2}\) scans of both \(f_{pop}\) (_left_) and \(f_{move}\) (_right_) marginalised over all four parameters. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline Line & \(\lambda\) (Å) & \(f_{pop}\) & \(f_{move}\) & \(C_{boost}\) & \(\sigma_{p}\) & \(\sigma_{n}\) & \(\chi^{2}\) & DOF & Prob \\ \hline Ly \(\epsilon\) & 937.803 & 0.91 \({}^{+0.09}_{-0.05}\) & 0.050 \({}^{+0.221}_{-0.030}\) & 0.105 \({}^{+0.087}_{-0.005}\) & 0.000 & 0.002 & 81.309 & 76 - 4 & 0.212 \\ Si ii & 1260.422 & 0.36 \({}^{+0.08}_{-0.18}\) & 0.42 \({}^{+0.11}_{-0.25}\) & 0.175 \({}^{+0.41}_{-0.25}\) & 0.21 & 0.009 & 210.7 & 178 - 4 & 0.030 \\ Si iii & 1206.500 & 0.590 \({}^{+0.00}_{-0.004}\) & 0.298 \({}^{+0.01}_{-0.10}\) & 1.21 \({}^{+0.01}_{-0.01}\) & 0.30 & 0.003 & 214.1 & 183 - 4 & 0.038 \\ Si iv & 1393.760 & 0.202 \({}^{+0.071}_{-0.008}\) & 0.526 \({}^{+0.097}_{-0.099}\) & 3.08 \({}^{+0.21}_{-0.21}\) & 0.12 & 0.037 & 117.2 & 94 - 4 & 0.028 \\ C ii & 1334.532 & 0.26 \({}^{+0.18}_{-0.18}\) & 0.670 \({}^{+0.09}_{-0.20}\) & 2.95 \({}^{+0.17}_{-0.30}\) & 0.15 & 0.037 & 127.5 & 98 - 4 & 0.012 \\ C iii & 977.020 & 0.430 \({}^{+0.080}_{-0.058}\) & 0.870 \({}^{+0.065}_{-0.065}\) & 2.15 \({}^{+0.32}_{-0.32}\) & 0.043 & 0.009 & 253.4 & 130 - 4 & 0.000 \\ C iv & 1548.205 & 0.373 \({}^{+0.038}_{-0.090}\) & 0.593 \({}^{+0.024}_{-0.024}\) & 2.00 \({}^{+0.22}_{-0.11}\) & 0.15 & 0.043 & 150.5 & 126 - 4 & 0.041 \\ Mg ii & 2796.354 & 0.05 \({}^{+0.05}_{-0.03}\) & 0.39 \({}^{+0.17}_{-0.13}\) & 8.1 \({}^{+0.22}_{-0.2}\) & 0.059 & 0.010 & 18. 
& 22 - 4 & 0.444 \\ Fe ii & 2382.764 & 0.010 \({}^{+0.01}_{-0.010}\) & 0.38 \({}^{+0.12}_{-0.12}\) & 3.95 \({}^{+0.20}_{-0.20}\) & 0.000 & 0.028 & 152.6 & 129 - 4 & 0.047 \\ O i & 1302.168 & 0.19 \({}^{+0.11}_{-0.14}\) & 0.96 \({}^{+0.40}_{-0.04}\) & 5.0 \({}^{+0.3}_{-0.73}\) & 0.043 & 0.004 & 84.1 & 81 - 4 & 0.271 \\ O vi & 1031.926 & 0.79 \({}^{+0.06}_{-0.25}\) & 0.55 \({}^{+0.11}_{-0.08}\) & 1.15 \({}^{+0.35}_{-0.15}\) & 0.043 & 0.000 & 446.0 & 258 - 4 & 0.000 \\ Al ii & 1670.789 & 0.045 \({}^{+0.043}_{-0.017}\) & 0.341 \({}^{+0.085}_{-0.088}\) & 8.3 \({}^{+1.5}_{-3.3}\) & 0.14 & 0.022 & 103.4 & 91 - 4 & 0.111 \\ \hline \end{tabular} \end{table} Table 5: Population model fits. We exclude all species where the statistics were insufficient to provide any useful constraint. limiting column densities given by \(N_{strng,max}\) and \(N_{strng,min}\) respectively. ### Modelling the column densities for the strong metal population Now that we have a series of column densities measurements for a single strong population with multiple phases, we are ready to reassess the model comparisons shown in Section 7 and thus test the unusually high densities that our comparisons demand. As explained in the previous section, our metal column density models are dependent on density, temperature, metallicity, the UV background models and abundance pattern. We make standard assumptions for the latter two and explore density and temperature, with metallicity setting the overall normalisation. A challenge of modelling our measurements lies in the production of the lowest ionisation potential species without over-producing others, driving us towards unusually high minimum densities. Thus far we have used this model comparison purely for illustration since no statistical fit to a single mean population was possible. Here we attempt statistical constraints for the dominant strong metal population. We begin with the most conservative choice; we relax the assumption of a solar abundance pattern and explore the minimum density required by multiple species of a single element. This is possible for both carbon and silicon where we have reliable population analysis results for three ionisation species each. Optically thin, photoionized gas is typically heated to \(\sim 10^{4}\)K in hydrodynamic simulations (Rahmati et al., 2016), but it is theoretically possible for it to reach \(T<10^{3.7}\)K in unresolved gas that is sufficiently metal rich. As a result we consider models with temperatures as low as \(10^{3.5}\)K. Figures 15 and 16 illustrate these limits for silicon and carbon respectively. Only allowed models are shown and in each case the metallicity is tuned such that the low ion is only marginally produced, by treating the lower \(1\sigma\) error bar as the target. The density is then allowed to vary such that it remains below the \(1\sigma\) upper error bar of the high and intermediate ions. The minimum density in each figure is given by the red dot-dash line and the density is free to increase up to _and beyond_ the density indicated by the blue dashed line. Given that this is a multiphase model, any short-fall in projected column density for the high and intermediate ions can be made up by other phases of gas with a lower density. 
As one can see from figures 15 and 16, silicon provides the more stringent density limit of \(\log(n_{H}/\mathrm{cm}^{-3})>-1.85\) assuming \(10^{4}\)K gas (equivalent to an overdensity of \(\log(\rho/\bar{\rho})>3.19\)) or \(\log(n_{H}/\mathrm{cm}^{-3})>-2.45\) assuming \(10^{3.5}\)K gas if one allows the temperature to reach the lowest temperature considered here (equivalent to \(\log(\rho/\bar{\rho})>2.59\)). The limit arises from marginally (\(1\sigma\)) producing enough Si ii without marginally (again \(1\sigma\)) overproducing Si iii. Similarly, carbon requires \(\log(n_{H}/\mathrm{cm}^{-3})>-2.95\) assuming \(T=10^{3.5}\)K gas and \(\log(n_{H}/\mathrm{cm}^{-3})>-2.65\) assuming \(T=10^{4}\)K gas. Since the models imply a hydrogen neutral fraction, the total hydrogen column density can be derived. The characteristic gas clumping scale can be obtained from \[l_{c}=N_{H}/n_{H}, \tag{9}\] where \(N_{H}\) is the total hydrogen column density and \(n_{H}\) is the hydrogen density. For silicon this maximum scale is just \(l_{c}=36\) parsecs assuming a gas temperature of \(T=10^{4}\)K and \(l_{c}=255\) parsecs for a gas temperature \(T=10^{3.5}\)K. Our carbon-only limits produce weaker constraints of 1.4 kpc and 2.5 kpc respectively. \(10^{3.5}\)K is rather a low temperature for photoionized gas but as we shall see below, we appear to be forced to allow such low temperatures. We can perform a statistical fit for three gas phases in these dominant strong metal systems by including all species and assuming a solar abundance pattern. We scan through density, temperature and metallicity for two different gas phases: high density and moderate density. As explained below, it was not possible to scan through the third, lower-density phase. Temperature is allowed to vary between \(10^{3.5}\)K and \(10^{4.5}\)K in both phases. In the moderate density phase, density was searched from \(\log(n_{H}/\mathrm{cm}^{-3})=-4.8\) to \(\log(n_{H}/\mathrm{cm}^{-3})=-2.8\). In the high density phase we scan through \(\log(n_{H}/\mathrm{cm}^{-3})=-0.8\) to \(\log(n_{H}/\mathrm{cm}^{-3})=0\). As usual, metallicity is a free parameter that scales up and down the projected metal columns. Extremely small populations may arise due to Lyman limit systems in our sample of SBLAs and require more complex ionization corrections. P14 argued that this contamination is likely to be at the level of \(\lesssim\)1% and no higher than 3.7% of our SBLAs. We conservatively require that any strong population that is statistically consistent with 3.7% contamination should be omitted from our investigation of gas physical conditions. This leads to the rejection of species Mg ii, Al ii and Fe ii from further interpretation using the optically thin to ionizing photons approximation. This is partly a consequence of poor statistical precision and, given more data and more refined population modelling, these species could be included in future analyses.

Figure 15: Constraining the minimum density of metal strong SBLAs using silicon species alone. Silicon column densities are modelled as in Figure 10. The data has been corrected to take into account the column density of the strong metal systems based on the population modelling (including associated model uncertainty). Here we test the minimum density allowed by measurements of Si ii and Si iii. Si iv is also shown for completeness but doesn't constrain the analysis since no model produces it in significant amounts and it is evidently produced by gas in a different phase. We conservatively take the \(1\sigma\) lower error bar of our lowest column density measurement of Si ii as the target and then tune the density to change the slope while renormalising with the metallicity. The _red dash-dot line_ shows the lowest density allowed at the \(1\sigma\)-level. The _top panel_ shows the result for the lowest temperature considered of \(10^{3.5}\)K and the _bottom panel_ shows a more standard photoionized temperature of \(10^{4}\)K.
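Equation (9) and the unit bookkeeping behind the quoted clumping scales are summarised in the minimal sketch below. The neutral fraction is shown as a free input because in the actual analysis it comes from the same CLOUDY models that set the density, so the example call is purely structural and its inputs are placeholders.

```python
CM_PER_PARSEC = 3.086e18

def clumping_scale_pc(log_N_HI, x_HI, log_n_H):
    """l_c = N_H / n_H (equation 9), with N_H = N_HI / x_HI, in parsecs."""
    N_H = 10.0 ** log_N_HI / x_HI      # total hydrogen column [cm^-2]
    n_H = 10.0 ** log_n_H              # hydrogen number density [cm^-3]
    return N_H / n_H / CM_PER_PARSEC

# e.g. clumping_scale_pc(16.04, x_HI=model_neutral_fraction, log_n_H=-1.85)
```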
In the process of performing this fit with three phases, it became apparent that only O vi requires the lowest density phase. With one data point and three unknowns (density, temperature and metallicity), this third phase is unconstrained aside from being (by construction) of lower density. As a result, we proceeded with a two phase fit excluding O vi. Figure 17 provides the best-fit model based on this parameter scan of the strong metal population. The fit is of acceptable statistical quality with a \(\chi^{2}=4.2\) for 7 data points with 6 degrees of freedom taken up by the 6 fitted parameters (i.e. one remaining degree of freedom), equivalent to a probability of statistical consistency between the model and data of 4%. The favoured conditions for these strong absorbers are \(\log(n_{H}/\mathrm{cm^{-3}})=0\), temperature \(10^{3.5}\)K and super-solar metallicities of [X/H] = 0.80. In the intermediate density phase we find \(\log(n_{H}/\mathrm{cm^{-3}})=-3.35\), again a temperature of \(10^{3.5}\)K and metallicity [X/H] = \(-1.1\). As noted, the lowest density phase is required but unconstrained. It will be noted that the favoured density for the dense phase is the limiting value of our parameter scan of \(\log n_{H}=0\). This is driven by the measurement of O i. A higher density in the high density phase may provide a better fit to the O i column density, but only to a limited extent since it will lead to a worse fit to C ii and Si ii. Again we can infer a gas clumping scale by dividing the hydrogen column density by the hydrogen density from this final, joint fit of species that reliably probe diffuse, photoionized gas. Our dense gas phase corresponding to \(n_{H}=1\,\mathrm{cm^{-3}}\) requires a clumping scale of only \(l_{c}=0.009\) parsecs. If we marginalise the density of this dense component and take the \(1\sigma\) minimum value for a 6 parameter fit (\(\Delta\chi^{2}=7.04\)) we obtain a minimum density of \(\log(n_{H}/\mathrm{cm^{-3}})=-0.93\), equivalent to a maximum (\(1\sigma\)) clumping scale of \(l_{c}=0.38\) parsecs. The intermediate density gas is expected to have structure on 15 kpc scales. Once again the low density phase traced by O vi is unconstrained.

## 9 Discussion

P10 and P14 argued that the presence of high density gas (inferred from the relative strength of low ionization species) indicates the presence of cold dense clumps 10s of parsecs in size, embedded in a more diffuse medium. We have reviewed and revisited this claim by improving the methodology, challenging several assumptions and interpreting the results more deeply, while quadrupling the amount of data by updating from SDSS-BOSS DR9 (Dawson et al., 2013; Ahn et al., 2012; Lee et al., 2013) to SDSS-eBOSS DR16. Specifically we have (i) explored the statistical robustness of the mean composite spectrum error estimation, (ii) made robust statistical measurements of H i column density and verified its homogeneity, (iii) improved the robustness of metal column densities, (iv) explored metal line dependence on density and temperature, (v) measured the covariance and populations of metal species, (vi) inferred the properties of the dominant strong metal population, (vii) placed limits on the density derived from a single element (carbon and silicon) for the strong metal population, and (viii) performed a fit to models of density and temperature for the strong metal population.
(vi) inferred the properties of the dominant strong metal population, (vii) placed limits on the density derived from a single element (carbon and silicon) for the strong metal population, and (viii) performed a fit to models of density and temperature for the strong metal population. From silicon alone we find that gas clumping on scales of at most 36 parsecs is required assuming temperatures of at least \(10^{3.5}\)K. However, when we include C iv, C iii, Si iv, Si iii, C ii, Si ii and O i we find that a clumping scale of 0.009 parsecs is favoured (with a 1\(\sigma\) upper limit of 0.38 parsecs) and super-solar metallicities are required. We discuss this chain of reasoning and its weak points further below. Figure 16: Constraining the minimum density of metal strong SBLAs using carbon species alone by following the same procedure used in Figure 15 for silicon. In this case all of C ii, C iii and C iv provide useful limits. Again the _red dash-dot_ line shows the lowest density allowed at the \(1\sigma\)-level. The _top panel_ shows the result for the lowest temperature considered, \(10^{3.5}\)K, and the _bottom panel_ shows a more standard photoionized temperature of \(10^{4}\)K. Figure 17: The results of a parameter search for a two phase fit for metal strong SBLAs limited to species confirmed to arise in optically thin gas (shown as black data points) assuming that the strong populations overlap. The fit probability is 4%. Species showing small populations of only \(f_{pop}\lesssim 5\)% are excluded from the fit since they may arise from a self-shielded contaminating population (Fe ii, Mg ii and Al ii). The measurement of strong O vi is also excluded from the fit since it requires a third (more ionized) phase that is poorly constrained. This is because one can only set a lower limit on density based on the absence of associated C iv (comparing with Figure 10 one may see that this density is \(\log(n_{H})\lesssim-4.3\)). These four species not included in the fit are shown as grey points for completeness. ### Metal populations and the nature of SBLAs Our covariance measurements and population models carry wider implications for the nature of SBLAs than simply gas property measurements. Perhaps some SBLAs probe the CGM (with low ions and medium/high ions) and others probe the metal enriched IGM (showing only medium/high ions). Alternatively perhaps all SBLAs probe the CGM with medium/high ions, and when the line of sight happens to pass through a dense clump, low ions are also seen. The former implies a high impact cross-section to at least one dense clump with covariance being driven by CGM/IGM separation. The latter implies a lower impact cross-section to dense clumps and covariance driven by the lines of sight passing through the CGM with or without intersecting a dense clump. Naturally these two scenarios are not mutually exclusive. This is self-evident since we cannot exclude the possibility that a metal rich IGM surrounding the CGM plays a significant role. Nor can we argue that there is a perfect association between our SBLA samples and CGM regions. This is likely to be a factor in why the high ion covariance is non-zero, but we cannot rule out the possibility that some CGM is relatively diffuse or metal poor (e.g. inflows). In practice the variation in ion strengths must arise due to some combination of SBLA purity, CGM selection purity of SBLAs and the impact cross-section to various phases. The first term is known since we have measured the FS0 sample purity to be 89%.
Neglecting this minor correction, the fractional size of the low ion strong population, \(\approx 30\)%, provides the cross-section to high density phases modulated by the CGM purity. We make this assertion because these low ionization species are not expected to occur in significant quantities outside of the CGM. ### Inferring gas properties from SBLA metals Following on from P10 and P14, we focus on the surprising appearance of low ionisation metal species in forest absorbers that are optically thin to ionising photons. All the metal line measurements are of interest but the low ionization species drive our interpretation towards a quite narrow set of physical conditions. Specifically, the need for high densities and therefore small-scale clumping. Our goal in this work has been to update the measurements of P14 with the final BOSS/eBOSS dataset, to make error estimates more robust and to perform a thorough multi-phase and multi-population analysis of our measurements in order to generate statistically robust constraints. Despite our inclusive error analysis, the error estimates on the metal column densities remain so tight that no single population, multi-phase model is satisfactory. This in combination with an analysis of the metal line covariance has led us to go beyond the study of mean properties in the composite spectrum and explore the full properties of the stack. Hence we forward model the metal absorbing population for each of our metal species using the full stack. The quality of fit provided by the population is largely acceptable, with more complex models unjustified by current data. Exceptions are C iii(977A) and O vi(1032A), both of which offer 0% quality of fit. This is not surprising since these are two of our four strongest metal features. It seems likely that this is a sign that more sophisticated population models are required by larger samples of SBLAs and/or higher signal-to-noise spectra. It is also possible that the metal populations are an exceptionally poor fit in these two cases, however, neither species' strong line fits are critically important for the main results presented in this article. For each of the metal species we obtain (among other quantities) a constraint on the absorbing population size. All species with a population modelling constraint are included in the fit except Al ii, Fe ii and Mg ii since their strong populations are sufficiently small that they could plausibly arise in self-shielded gas (although it is notable that Fe ii and Mg ii column density constraints are statistically consistent with preferred models). Given the measured column density of \(\log(N_{HI}/\mathrm{cm}^{-2})=16.04_{-0.06}^{+.006}\) for the FS0 sample, the lack of any significant inhomogeneity in the H i population, the small potential interloper incidence rate, and our efforts to exclude metal species that show populations consistent with the interloper rate, we robustly conclude that our SBLA analysis is not sensitive to complex self-shielding effects expected for Lyman limit systems, or indeed partial Lyman limit systems. The inferred column density is at the limit where these effects are considered to be negligible and therefore the sample under study can be treated as strong, blended groupings of optically thin Ly\(\alpha\) forest absorbers. 
The measurements of covariance indicate that strong low ion absorption is also associated with strong medium and high ion absorption, so we proceeded with measurements of the properties of these strong metal SBLA systems in various forms. Measurements of carbon-only and silicon-only were made independent of assumptions about abundance patterns, providing lower limits on gas density and so upper limits on the gas clumping scale, but full fits become possible where all elements are included. These fits require three phases: two phases to provide both low and medium ions (defined broadly to include C iv) and one additional unconstrained phase providing only O vi absorption. The derived density of \(n_{H}=1\mathrm{cm}^{-3}\) for the dense phase is notably high even for the CGM (corresponding to an overdensity of \(10^{5}\)). This leads to a measurement of cold dense clumps on 0.009 parsec scales. Even if one considers the 1\(\sigma\) lower limit on density allowed in this fit, the analysis requires sub-parsec scale clumping (0.38 parsecs). Parsec-scales are required by silicon alone but the sub-parsec scales are driven by O i absorption. We cannot dismiss the measurement of O i absorption since no other metal lines contribute significantly to the measured spectral bin. Si ii 1304A is closest but when one fits the full Si ii line profile one sees that the contribution to the O i line centre is negligible (as shown in Figure 9). Note that charge-exchange driving the O i ionization fraction to that of H i (Draine, 2011) does not apply in this case. This effect occurs at the boundaries of H i and H ii regions and as we have discussed, SBLAs are optically thin to H i ionizing photons and no boundary region is present. We must, therefore, conclude that we are probing clumps on scales as low as 1% of a parsec due to our measurement of O i absorption. Small increases in the favoured density above \(n_{H}=1\)cm\({}^{-3}\) are possible since the favoured density lies at the limit of our prior range. Lower temperatures than our prior limit of \(10^{3.5}\)K are also possible but would stretch the limits of plausibility for a photoionized gas. The relationship between density and temperature warrants further investigation in simulations. It should be noted that this work assumes a solar pattern of elemental abundances (taken from Anders & Grevesse, 1989) for the final results in Figure 17. If the relative abundances of oxygen, carbon and silicon differ significantly from solar in SBLAs then our results would require modification. Our carbon and silicon only measurements are, of course, unaffected. Furthermore we assume photoionization reflecting a "quasar + galaxy" UV background following Haardt & Madau (2001). Morrison et al. (2019) and Morrison et al. (2021) demonstrated that large-scale inhomogeneities exist in the UV background at these redshifts on scales of 10s or even 100s of comoving Mpc. Morrison et al. (2021) in particular explored the spatial variation in metal species through large-scale 3D quasar proximity in eBOSS DR16. There we used a mixed CGM sample including the superset of FS0+FS1+FS2 and found 10-20% variations in O vi and C iv absorption on 100 comoving Mpc h\({}^{-1}\) scales with similar variations in Si iv and Si iii also possible but unconstrained. It seems clear that the high ionization species studied here are susceptible to large-scale variation while the low ionization species have not yet been explored.
Questions remain about the potential impact of the local galaxy (or galaxies) associated with these CGM systems. ### Comparison with simulations Wind tunnel simulations indicate that cold clumps of gas should survive entrainment by a hot galactic wind despite concerns that they might be destroyed by Kelvin-Helmholtz instabilities before they can be accelerated (McCourt et al., 2015; Gronke & Oh, 2018; Tan et al., 2023). These simulations are broadly consistent with our findings that such high densities, low temperatures (for a photoionized medium) and small scales are plausible. Indeed many physical effects with characteristic scales of order a parsec are key for the ejection, propagation, entrainment and subsequent accretion of gas in the CGM with important consequences for further galaxy evolution (Hummels et al., 2019; Faucher-Giguere & Oh, 2023 and references therein). For detailed observational predictions, high resolution cosmological simulations are required, yet cosmological simulations do not resolve scales below 10 pc even with zoom-in regions or adaptive refinement (Lochhaas et al., 2023; Rey et al., 2023). CGM scales as small as 18 pc have been studied by Rey et al. (2023) for a single isolated dwarf galaxy although this is currently computationally demanding. They found that increasing resolution does indeed reveal higher densities (\(n_{H}\approx 0.5\)cm\({}^{-3}\)) and more extreme temperatures in the CGM (both \(10^{3.6}\)K and \(10^{6.5}\)K). It is notable that neither temperatures below our minimum prior of \(10^{3.5}\)K nor densities as high as our high density prior of \(n_{H}=1\)cm\({}^{-3}\) were required in this simulation. However, we cannot rule out that more extreme temperatures will be required at the yet higher resolutions needed to probe the 0.01 pc scales inferred by our multiphase, strong population, multi-element fit. Although it seems that no simulations currently exist that reproduce the full range of conditions we infer for SBLAs, they can validate our findings that extremely small scales are a requirement. This can be achieved by simply passing lines of sight through CGM zoom-in simulations, selecting those which meet our H i properties (an H i column of \(\approx 10^{16}\)cm\({}^{-2}\) distributed in components over 138 km s\({}^{-1}\) to generate flux transmission <25%) and comparing with the metal populations we infer. Cosmological simulations can also address the potentially less demanding task of helping us understand the relationship between our selection of strong, blended Ly\(\alpha\) absorption and the galaxies and dark matter halos identified by it. In particular, they can help us learn whether this selection can be optimised to better recover these systems or be modified to identify others. Such tests would greatly enhance our understanding of how the Ly\(\alpha\) forest traces IGM and CGM properties. ### Individual systems and SBLA analogues As explained in Section 7.1, we advise caution in the interpretation of column densities measured in this work. The features measured here are integrated and averaged quantities. Our population analysis seeks to correct for the impact of averaging SBLAs showing weaker metal absorption with SBLAs showing stronger metal absorption, but the integrated nature of our measurements per SBLA is unavoidable. SBLAs themselves arise due to the blending of Ly\(\alpha\) lines over 138 km s\({}^{-1}\) and we cannot rule out that they correspond to multiple close CGM regions of multiple close galaxies ('close' here referring to both impact parameter and redshift).
Furthermore within one CGM region we cannot resolve individual metal lines. We do not measure metals over the full observed feature profile as explained in Section 7.1, but even within the narrower 138 km s\({}^{-1}\) velocity window the measurements are integrated quantities. They cannot be trivially compared to individual metal line components that one might fit in an individual spectrum. If one interpreted the measured signal as arising from single lines the metals would be strong and quite evident in high-resolution and high signal-to-noise studies of individual quasar absorption spectra. Those systems drawn from the strong population we have inferred would be even more evident once one takes into account the associated line strength boost, leading to quite high column densities (\(F_{strng}\) in Table 6), but once again we stress that these are integrated column densities. We illustrate this argument with Appendix C in which we identify SBLAs at \(2.4<z_{abs}<3.1\) in 15 high resolution and high signal-to-noise KODIAQ spectra by taking 138 km s\({}^{-1}\) bins and the noiseless definition of SBLAs (\(-0.05\leq F_{\rm Ly\alpha}<0.25\); where in this work we limit ourselves to \(-0.05\leq F_{\rm Ly\alpha}<0.05\) to prioritise SBLA purity in light of the SDSS noise). Figure 18 shows the distribution of flux transmissions in native Keck HIRES wavelength bins at the position of Si ii (1260A) in the SBLA rest frame. Distributions are also shown for pixels on both the red and blue side of the Si ii feature (selected as usual to be at wavelengths away from lines and on the pseudo-continuum). Error bars show the 75% spread of these null distributions. At the level of what one can discern by eye the Si ii (1260A) pixel distribution could have been drawn from the null distributions. Based on our analysis, around a third of SBLAs should show 'strong' Si ii absorption with an integrated column density of \(N_{strng}=10^{12.7}\)cm\({}^{-2}\). Assuming that this signal is present in association with this KODIAQ SBLA sample, it must be weak enough to not be clearly detected here. In other words, the Si ii absorption signal must be weak and distributed among the native pixels in the 138 km s\({}^{-1}\) SBLA window and not a single narrow Si ii line with \(N=10^{12.7}\)cm\({}^{-2}\). One might reasonably ask, then, what SBLAs should look like in individual spectra of high quality. The inferred column densities may be integrated column densities but the strong metal population should nevertheless be individually significant. However, high confidence individual line identification is not simply a matter of observing a significant absorption line. Lines must also be unambiguously assigned an absorption transition and redshift. This may be a complex task when lines are weak and there are no lines from the same species with which to confirm. It is made more difficult at high redshift where the line density in quasar spectra is generically higher, particularly in the Ly\(\alpha\) forest. O i is particularly challenging since Si ii absorption is expected to be nearby and could be caused by the same galaxy or galaxy group. Our measurement of statistical excess here is robust and unambiguous because all sources of contaminating absorption are included in our error analysis both in the mean composite and the multi-population decomposition. We are aware of what appears to be one strong metal SBLA analogue at \(z>2\) in the literature, published in Nielsen et al. (2022).
Following up on systems in their catalogue of Mg ii absorbers they discovered an associated compact group of galaxies and DLA absorption. Among many interesting structures seen, there is a group of seven H i absorbers with velocities offset blueward from the central velocity by between 350 and 450 \(\,\mathrm{km\,s^{-1}}\). The H i column density of these lines is between \(\approx 10^{13.5}\) and \(\approx 10^{15.8}\mathrm{cm^{-2}}\), with a group total of approximately \(10^{16}\mathrm{cm^{-2}}\). The velocity range of this structure and the resulting integrated column density are consistent with our SBLA sample. In Nielsen et al. (2022) this SBLA seems to have been found because of its association with this wider clustering of strong H i and strong Mg ii. It should be noted that this system would not have been selected by our methods because the SBLA Ly\(\alpha\) absorption is masked by the wide damping wing of the close DLA in the spectrum. Of course SBLAs in groups with DLAs will be missing from our sample in general, but the loss will be minimal because, as mentioned elsewhere (e.g. Section 3), SBLAs are much more numerous than DLAs. Nielsen et al. (2022) measure the H i column densities of these individual lines using higher order Lyman lines. The average metal absorption strengths over a 138 \(\,\mathrm{km\,s^{-1}}\) window are similar to our strong metal population in all the lines which are measured by both studies: Si ii, Si iii, C iii, Si iv, and C iv. Their intermediate metal ion models are also broadly similar to what we find. For low ionization species Nielsen et al. (2022) infer that components are present with solar or super-solar metallicities, high densities (\(-2<\log(n_{H}/\mathrm{cm^{-3}})<-1\)), low temperatures (\(3<\log(T/\mathrm{K})<4.5\)) and sub-parsec gas clouds. They do not infer densities as high as ours nor gas clouds as small, but they do not present detailed O i measurements, which are the main driving factor behind our extreme inferences. They point out that the observed O i column density of the DLA portion of the group is high compared to their model, but they are not able to measure O i for the SBLA (private communication). The analysis of KODIAQ data presented in Lehner et al. (2022) presumably includes SBLAs among their sample, but when they define their sample of strong Lyman-\(\alpha\) absorption systems (or 'SLFS' as they call them) they do not include the blending requirement critical for the SBLA selection and CGM properties that we, P10, P14 and Yang et al. (2022) have seen. Instead their SLFS appear better characterised as IGM systems. However, they do show an example which superficially seems to qualify for SBLA selection, and it appears to be an example of a weak metal system in contrast to the strong metal system case discussed above. Studies of individual low ionization systems in photoionized gas (\(N_{HI}\approx 10^{16}\mathrm{cm^{-2}}\)) are more common at low redshift. Examples of such works are Lehner et al. (2013), Sameer et al. (2021) and Qu et al. (2022). These works also produce a similar picture of multiphase gas showing small clumps (or clouds or shells) on parsec scales with temperatures of around \(10^{4}\mathrm{K}\). Studies such as these (that focus on the detailed properties of individual absorbers) have particular virtues compared to our work, including probing detailed velocity structure and temperature from line widths.
However, they cannot (yet) study the statistical properties of well-defined and unbiased large samples of CGM systems with our wide range of metal species. Our work demonstrates that the Nielsen et al. (2022) SBLA with super-solar metallicity and high densities is not simply an isolated oddity but a member of a population of around 125,000 in current surveys (taking a \(\sim\)25% strong population of the 0.5 million SBLAs expected in eBOSS). Simulators aiming to reproduce the results of these studies can seek to generate gas clouds that reproduce these properties among the clouds in their simulations, whereas simulators can aim to compare the global properties of their CGM systems by simply reproducing our simple selection function. In this sense our statistical work complements the detailed gas properties derived from those observations. ### Comparison with other observations based on stacking We have referred to P10 and P14 throughout this work. They showed evidence of dense, parsec-scale, photoionized gas, and the goal has been to build upon their stacking methods, improve on the exploitation of their composite spectra, and verify their conclusions. There is another study, Yang et al. (2022), that has been inspired to apply these methods to SDSS-IV/eBOSS DR16 data. Our work is different in many respects from that publication. Referring back to the list at the beginning of this section, only point (iv) regarding investigating the density and temperature of gas probed by the composite spectrum is in common between the two papers. In a sense Yang et al. (2022) follows on directly from P14 in that they take a range of composite spectra for different Ly\(\alpha\) absorption strengths and explore more sophisticated ionization models to interpret them. P14 measured both the full profile of the metal features and the core of the absorption profile associated with the 138 \(\,\mathrm{km\,s^{-1}}\) velocity window 'central pixel' matched to the Ly\(\alpha\) selection. The former is a more inclusive integration and therefore generates a higher column density for both metals and H i (see for example the comparison between their table A1 and table A3). Yang et al. (2022) take the full profile approach only, while we take the central pixel approach only. The motivation for our choice is set out in Section 7.1. Yang et al. (2022) will, therefore, naturally present higher metal column densities than ours derived from the composite spectrum. This difference makes direct comparison difficult. There are further complications from differences in analysis choices. We select and stack Ly\(\alpha\) absorbers and their associated spectra in precisely the same way as P14 in bins of flux transmission (and so take advantage of P14's progress on understanding SBLAs with tests on hydrodynamic simulations and comparison with Lyman break galaxy samples). On the other hand Yang et al. (2022) select Ly\(\alpha\) samples in windows of flux transmission contrast (see Appendix A), have a different S/N requirement for selection, apply no strong redshift cut (sacrificing sample homogeneity for statistics in the process) and weight their stack of spectra to compute the composite. On this final point regarding weighting, we do not weight the spectra by S/N because we wish to preserve the equal contribution of every system stacked, which simplifies our population analysis.
We are also conscious of the fact that weighting the stacking by S/N would bias us towards stronger Ly\(\alpha\) absorption in complex in difficult to control ways 2 Footnote 2: Higher S/N for Ly\(\alpha\) selection provides a purer selection of strong Ly\(\,\alpha\). This higher S/N is typically associated with a higher S/N spectrum as a whole (quasar brightness varies and the S/N is highly covariant across each spectrum), therefore the weighting applied at the metal line is a complex mix of weighting towards stronger Ly\(\,\alpha\) systems modulated by any quasar shape change between 1216Å and the metal line placement in the absorber rest frame. With all these caveats in mind, the results of Yang et al. (2022) and our measurements of the mean composite spectrum present broadly the same picture of multiple gas phases in the CGM exhibiting low ionization species tracing at least one high density phase, high ionization species tracing at least one low density phase, and intermediate ionization species probing intermediate densities. They do not go into a detailed error analysis to understand what is allowed statistically, and so did not conclude (as we do) that column densities and their small error estimates force us to go beyond fits to the composite spectrum and study the underlying population behind the mean. When we do this, we appear to disagree with some of the findings of Yang et al. (2022). Our population analysis leads us to rule out a significant higher column density H i sub-population, forces us to higher densities, sub-parsec clumping and lower temperatures for agreement with low ionization species. We are also forced to similarly low temperatures for intermediate/high ionization species (excluding O vi) along with elevated densities and metallicities. In this work we explored a more precise and demanding error analysis method compared to P14 and included not just the statistical errors in the stacking but also absorbed uncertainty in the pseudo-continuum fitting to generate the final composite spectrum. P14 conservatively assumed that the errors in the final step were equal to the rest of the errors combined and scaled their error estimates of the stacked spectra by \(\sqrt{2}\) for the composite spectra. Our end-to-end bootstrap error analysis shows that the pseudo-continuum fitting step contributes weakly to the errors. This is quantified by \(\epsilon\) as shown in Table 4. Assuming that the pseudo-continuum fitting is performed with similar care to this work, this contribution can typically be neglected and the step of pseudo-continuum fitting an entire suite of bootstrapped realisations of the stack can be foregone. This is assuming that the error estimate need only be known to around 10% precision. A notable exception is C iii, for which the error contribution is estimated at 26% due to the challenge of separating it from absorption by Lyman series lines. Overall, we advocate moving beyond studies of the mean (or median) composite spectra alone and in doing so make the need for precise error estimates redundant. Instead we advocate a focus on forward modelling the underlying population, and measuring covariance between the metal features in order to obtain a deeper understanding of the SBLA population studied. ### Future surveys Despite the extreme high signal-to-noise in the composite spectrum presented here, our work demonstrates that more data is needed. 
Our population analysis requires not only high S/N in the composite spectrum but also excellent sampling over the entire SBLA ensemble to build a high S/N measurement of the distribution function of the flux for every metal line studied. Only the metal transitions presented here were sufficiently well-sampled to obtain a population estimate. On the other hand, the distribution functions of some metal transitions are sufficiently well-measured that our 5 parameter fit does not appear to capture the characteristics of the population and a more complex parametrisation is required. More quasar absorption spectra are required to both widen the range of transitions (and species) measurable and help define improved metal populations for a more extensive round of forward modelling. The DESI survey (DESI Collaboration et al., 2016) began in 2021 and is expected to grow to produce around 700,000 \(z>2.1\) quasar spectra. The WEAVE-QSO survey (Jin et al., 2023; Pieri et al., 2016) is expected to begin imminently and will observe around 400,000 \(z>2.1\) quasar spectra. 4MOST (de Jong et al., 2019) is also in preparation and looks set to include \(z>2.1\) quasars among its spectroscopic sample. These surveys will also provide large numbers of the moderate-to-high signal-to-noise (S/N\(\geq 3\)) spectra required to identify SBLAs. These next generation surveys will also provide spectral resolution that is twice (DESI and 4MOST), three-times (WEAVE-QSO LR) or even ten-times (WEAVE-QSO HR) the resolution of BOSS spectra. This will allow us the freedom to treat the velocity scale of the selection blend as a free parameter. In this work, we noted the striking similarity between the inferred halo mass derived from the large-scale 3D clustering of the Ly\(\alpha\) forest with SBLAs and the virial mass inferred by treating the velocity-scale of the blend as the halo circular velocity. This may be a coincidence but if there is some connection it raises the attractive possibility of identifying specific galaxy populations or halo populations from Ly\(\alpha\) absorption blends/groups alone. This warrants further study using next generation surveys and simulations with accurate small-scale IGM and CGM Ly\(\alpha\) clustering. The diversity of environmental properties for IGM/CGM gas studied in the Ly\(\alpha\) forest is also expected to grow substantially in the coming years. Maps of the cosmic web are expected using IGM tomography applied to data from WEAVE-QSO (Kraljic et al., 2022), DESI, PFS (Takada et al., 2014; Greene et al., 2022) and, further in the future, MOSAIC (Japeli et al., 2019) and a potential DESI-II survey, allowing us to study SBLA properties in filaments, sheets and voids of structure. Furthermore, large \(z>2\) galaxy surveys associated with these facilities are expected over the coming years, allowing us to study gas properties near confirmed galaxies with known impact parameters and galaxy properties. These surveys promise to shed new light on the formative epoch of galaxy formation in the build-up towards cosmic noon. ## 10 Conclusions In this work we have sought to establish the potential of Strong, Blended Lyman-\(\alpha\), or SBLA, absorption systems for the study of the CGM. Here we define "strong" as a flux transmission of less than 25% and "blended" as absorption averaged over bins of 138 km s\({}^{-1}\).
We build on the work of P14 in various ways, such that we conclude a new widespread class of circumgalactic system must be defined, and we explore the properties of these CGM systems. Specifically we find: 1. SBLA samples can be defined in various ways to prioritise sample size or sample purity, though we focus on the main sample of P14 for continuity, which we label FS0. 2. We make the first statistical constraint on the H i column density of the FS0 SBLA sample and find it to be \(\log(N_{HI}/\rm{cm}^{-2})=16.04^{+0.05}_{-0.06}\) with a Doppler parameter of \(b=18.1^{+0.04}_{-0.04}\) km s\({}^{-1}\). This is not an individual line measurement but a constraint on the dominant H i column density in the 138 km s\({}^{-1}\) spectral window, driven by a convergence to a solution ascending the Lyman series. 3. By studying the mean composite of the FS0 sample we find that at least 3 phases of gas are present in SBLAs but that no single multiphase solution can be found that would agree with the tight error bars, and so a multiphase _and_ multi-population model is needed. 4. We explore the SBLA population by forward-modelling trial populations using portions of the stack of spectra without correlated absorption as a null test-bed. In doing this we find good agreement with a bi-modal population, and we exclude from further study metal transitions which are consistent with populations small enough to plausibly arise from rare Lyman limit system interlopers. 5. We find that low ionization metals (traced by optically thin gas) are present in a quarter of SBLAs while higher ionization metal species are typically more common in SBLAs (present in 40-80% of cases). We also find that H i shows a high degree of homogeneity as measured from the Ly\(\epsilon\) population. 6. We study the covariance between our metal features and find that metal species are significantly covariant with one another spanning all ionization potentials. In general low ions show a high excess covariance with one another, moderate excess covariance with intermediate ions and a mild excess covariance with high ions. This is consistent with the picture presented by the population analysis where low ions appear 25% of the time and tend to appear together, while other ions are more common in SBLAs. It also indicates that when SBLAs are strong in low ions, they are strong in all metal ions, and so this defines a sub-class of metal strong SBLAs. 7. By conservatively focusing only on the silicon species Si iv, Si iii, and Si ii we find that densities in metal strong SBLAs of at least \(\log(n_{H}/\mathrm{cm}^{-3})>-2.45\) are required assuming \(>10^{3.5}\)K. This corresponds to gas clumping on scales of \(<25\) parsecs. 8. Focusing conservatively only on the carbon species C iv, C iii, and C ii we find that densities in metal strong SBLAs of at least \(\log(n_{H}/\mathrm{cm}^{-3})>-2.95\) are required assuming \(>10^{3.5}\)K. This corresponds to gas clumping on scales of \(<2.5\) kpc. 9. We fit a mixture of three gas phases to all metal lines associated with the metal strong SBLA sub-population (excluding species that could arise due to self-shielding). The highest ionization phase is required by O vi but is unconstrained. The intermediate ionization and low ionization phases both require our minimum temperature of \(T=10^{3.5}\)K. The intermediate ionization model shows a density of \(\log(n_{H}/\mathrm{cm}^{-3})=-3.35\) (equivalent to 15 kpc clumping) with metallicity \([X/H]=-1.1\).
The favoured low ionization phase model has a density of \(n_{H}=1\mathrm{cm}^{-3}\) corresponding to scales of only 0.009 parsecs and metallicity \([X/H]=0.8\). The minimum allowed density for this phase is \(\log n_{H}>-0.93\) (at 1\(\sigma\)) corresponding to a clumping of 0.38 parsecs. These extreme and yet common CGM conditions required further study in simulations. ## Acknowledgements We thank KG Lee for his continuum fitting code that was used in a modified form to produce the continua used in this work. We thank Ben Oppenheimer supplying the ionization tables and providing helpful discussions. We also thank Nikki Nielsen for her useful comments about this work. This work was supported by the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French Government program, managed by the French National Research Agency (ANR), and by ANR under contract ANR-14-ACHN-0021. Some the data presented in this work were obtained from the Keck Observatory Database of Ionized Absorbers toward QSOs (KODIAQ), which was funded through NASA ADAP grant NNX10AE84G. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU ) University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatorio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. ## Data availability Catalogues and derived data products from this article are available at [https://archive.lam.fr/GECO/SBLA-eBOSS](https://archive.lam.fr/GECO/SBLA-eBOSS) The data underlying this article were accessed from SDSS-IV DR16 ([https://www.sdss.org/dr16/](https://www.sdss.org/dr16/)) and Keck Observatory Database of Ionized Absorption toward Quasars (KODIAQ; [https://koa.ipac.caltech.edu/applications/KODIAQ](https://koa.ipac.caltech.edu/applications/KODIAQ)).
2301.13382
Numeracy from Literacy: Data Science as an Emergent Skill from Large Language Models
Large language models (LLM) such as OpenAI's ChatGPT and GPT-3 offer unique testbeds for exploring the translation challenges of turning literacy into numeracy. Previous publicly-available transformer models from eighteen months prior and 1000 times smaller failed to provide basic arithmetic. The statistical analysis of four complex datasets described here combines arithmetic manipulations that cannot be memorized or encoded by simple rules. The work examines whether next-token prediction succeeds from sentence completion into the realm of actual numerical understanding. For example, the work highlights cases for descriptive statistics on in-memory datasets that the LLM initially loads from memory or generates randomly using python libraries. The resulting exploratory data analysis showcases the model's capabilities to group by or pivot categorical sums, infer feature importance, derive correlations, and predict unseen test cases using linear regression. To extend the model's testable range, the research deletes and appends random rows such that recall alone cannot explain emergent numeracy.
David Noever, Forrest McKee
2023-01-31T03:14:57Z
http://arxiv.org/abs/2301.13382v1
# Numeracy from Literacy: Data Science as an Emergent Skill from Large Language Models ###### Abstract Large language models (LLM) such as OpenAI's ChatGPT and GPT-3 offer unique testbeds for exploring the translation challenges of turning literacy into numeracy. Previous publicly-available transformer models from eighteen months prior and 1000 times smaller failed to provide basic arithmetic. The statistical analysis of four complex datasets described here combines arithmetic manipulations that cannot be memorized or encoded by simple rules. The work examines whether next-token prediction succeeds from sentence completion into the realm of actual numerical understanding. For example, the work highlights cases for descriptive statistics on in-memory datasets that the LLM initially loads from memory or generates randomly using python libraries. The resulting exploratory data analysis showcases the model's capabilities to group by or pivot categorical sums, infer feature importance, derive correlations, and predict unseen test cases using linear regression. To extend the model's testable range, the research deletes and appends random rows such that recall alone cannot explain emergent numeracy. Exploratory Data Analysis, Transformers, Text Generation, Generative Pre-trained Transformers, GPT ## 1 Introduction Three promising and challenging AI technologies are benchmarks for community research progress: autonomous driving, personal assistant, and chatbots [1]. OpenAI's ChatGPT combined the personal assistant with a chat interface in their late November 2022 public release [2-8]. Prompt or conversational customization of the ChatGPT API reveals the depth of its encyclopedic knowledge [7], somewhat akin to an effective Google advanced search or dynamically created Wikipedia entry [9]. Previous researchers have noted emergent features [10-19] beyond what a search engine, spidering indexer, or community-sourced compilation like Wikipedia might answer complex questions. This paper proposes several tasks that require ChatGPT to reason [10,20]. While traditional challenge problems presented to large language models like "2+2=" have previously not satisfied any reasoning tests [21], the latest generation seems to display what the AI community might categorize as emergent properties [15-17]. For instance, previous work highlighted ChatGPT's capability to mimic complex computer operating systems as if a hacker interacted with text commands [22-24]. As an API interface, ChatGPT could serve as a dynamic honeypot with realistic responses [23]. The present work extends this "out-of-the-box" simulation capability to role-play the data scientist or knowledge assistant as they perform exploratory data analysis [15]. ChatGPT's latest release (19JAN2023) incorporates a basic understanding of benchmark machine learning datasets like iris [25-26], Titanic survival [27-28], and Boston housing [29] without explicit programming. Some critical tests of the LLM's reasoning or knowledge [30-35] include random re-sampling of available datasets and _de novo_ generation from scratch. The present work examines whether ChatGPT possesses built-in knowledge of classic data science case studies like iris [25-26], Boston housing [29], and Titanic [27-28]. Without the built-in capability to load data, the large language models simulate user interactions [22-24], including coded Python that edits the datasets and removes memorized responses from the model's responses. 
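As an illustration of the dataset edits described above, the following minimal Python sketch perturbs a local copy of the Titanic table so that summary statistics can no longer be answered from memorized tutorial output. The specific row counts and random seed are arbitrary choices for illustration, not the prompts used in this study.

```python
# Minimal sketch of the row-level perturbation idea, assuming a local copy of
# the Titanic table; this is an illustration, not the authors' exact prompts.
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split

df = sns.load_dataset("titanic")  # 891 rows of passenger records

# Delete a few random rows and append shuffled duplicates so that summary
# statistics no longer match any memorized tutorial output.
rng_seed = 42
df = df.drop(df.sample(n=10, random_state=rng_seed).index)
df = pd.concat([df, df.sample(n=5, random_state=rng_seed)], ignore_index=True)

# A random train-test split further ensures answers must be recomputed.
train, test = train_test_split(df, test_size=0.2, random_state=rng_seed)
print(train["survived"].mean(), test["survived"].mean())
```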
Once the modified data receives prompts and queries, the LLM delivers emergent answers [15-17] based on its capability to perform arithmetic calculations or generate display code. Finally, each case presents word problem categories, such as "what demographic group most likely did not survive the Titanic crash? [27-28]". The paper presents a systematic version of exploratory data analysis (EDA) with linguistics models. The models generate python code to execute in a Jupyter notebook or answer questions related to identifying correlations, trends, outliers, and missing values as one might anticipate as typical data science pre-processing routines or follow-up summaries based on post-processing results [37] (Appendices A-E). The goal is to identify how well an LLM adapts to previously unseen datasets and generates plausible hints for extending the EDA [38]. Where possible, we generalize the results to either synthetic data that could not appear in the training data or to data slices that offer unique challenges from the well-known cases [36]. For example, by adding or subtracting random rows of well-known datasets, we force the LLM to demonstrate whether it can creatively tailor its responses to prompts in ways that differ from a simple search engine reply [37]. ## 2 Methods We organize the paper around the exploratory data analysis shown in Appendices A through E, as summarized in Table 1. Appendix A establishes basic numeracy skills, including date-time manipulation and word problems. Appendices B-D examine three tabular datasets where ChatGPT includes the data frame as part of its training data but receives customizations that make it impossible for the LLM to recall what previous steps or outcomes might apply. For instance, we use a random train-test split to the Titanic dataset to force the LLM to describe through its emergent statistical skills rather than return the standard answers that internet tutorials present. One motivation for this approach stems from the failure of earlier LLMs to perform basic arithmetic based on the next token prediction methods. Appendix F adds a further test of ChatGPT's data science skills by creating a randomized insurance claim dataset valid for the session only and would not appear in any historical training from the internet archive. \begin{tabular}{l|l|l} \hline **A**. & **B**. & **C**. \\ **A**. & **Basic Statistics** & **Arithmetic, Date-Time Manipulation,** & **Prompt-driven single values** \\ **and** & **Unit Conversions, Word Problems,** & \\ **B**. & **ChatGPT Iris** & **Descriptive statistics, missing and** & **IRIS dataset, Petal and Sepal** \\ **Dataset** & **duplicate value identification, variable** & **Width, and Length for Species** \\ **Interactions** & **correlations, factor analysis and feature** & **Identification (Some knowledge** \\ & **importance, plot code generation using libraries (seaborn, plotly), outlier identification, dataset augmentation,** & \\ \hline **C**. 
& **ChatGPT Titanic** & **Descriptive statistics, data frame** & **Titanic Survival Dataset based** \\ **Dataset** & **operations such as drop columns, missing values, composite column creation,** & **on passenger list demographics** \\ & **python function generation and execution in place, random test-train split, feature importance, pivot tables, and factor summation** & **(Some knowledge embedded but** \\ \hline \end{tabular} ## 3 Results For all five tests, the main result supports the hypothesis that the latest LLMs have reached sufficient scale to handle complex statistical questions. As proposed by the builders of GPT-3, these models offer public access to "zero-shot" or "few-shot" learning capabilities when scaled to sufficient parametric size. The model encodes enough general knowledge to answer mathematical questions with plausible answers, even when presented with only a few (or no) examples of formatted requirements or context. Because ChatGPT provides memory within a given session to at least 8,000 tokens (25 pages), the model's coherence and relevance present new inquiries in data science. One might call this quality "emergent" because no rules or computational layers are explicitly defined. The following sections outline the general characteristics of the four datasets presented (Iris, Titanic, Boston Housing, synthetic) along with a chain of statistical calculations selected to highlight date-time manipulations, approximations, and word problems. It is worth noting that ChatGPT provides self-contained datasets to test, which proves critical to complete any analysis. As an LLM frozen in time (2021) without any buffer or storage, the traditional steps needed to upload or present data fail. But having encountered the three well-known examples and one synthetic one, the model keeps track of each manipulation such that if a data row disappears, the resulting median or count changes accordingly. ### Descriptive Statistics As illustrated in Appendix A, the model can add large numbers, reduce answers to N significant digits, identify divisors, and perform an order of magnitude calculation with unit conversions. When asked for the day of the week from history, the model correctly identifies the day from 60 years prior. While not remarkable from a lookup table or internet search, the model only generates the correct result using next-token prediction and language training. To highlight the model's capacity for manipulating extensive, multi-stage calculations, we prompt for the number of minutes in a decade, the number of inches between the Eiffel Tower and London Bridge, and the number of people who could fit on the island of Manhattan. ChatGPT answers incorrectly to identify the time zone that corresponds to six hours ahead of US Eastern (EST) (False: Greenwich GMT\(+6\) or Bangladesh). When instructed that the model responded incorrectly, ChatGPT shows a Universal Time formula UTC-5 as EST, followed by UTC-5\(+6\)\(=\)UTC\(+1\), or Central European Time (CET). ChatGPT's capabilities to self-correct provide a novel user interface for redefining a precise question-and-answer sequence. For example, asking the model to do distance calculations between two cities in small units like inches seems to raise the need for further explanation: What's the point of knowing urban-scale dimensions in such small increments? When pressed in follow-up inquiries, the response showcases the conversion of units (miles to inches) but begins with an incorrect distance (3,500 miles rather than 212 miles). 
While the math is correct, the more specific initial conditions are flawed. When asked a more eccentric estimation problem (the number of people standing shoulder to shoulder who could fit in Manhattan a densely packed single layer), ChatGPT responds with the correct initial condition for the area calculation (22.96 square miles). If a person requires 2 square feet, the model fails to convert square miles to feet (ChatGPT: 8.9 million people vs. 318 million people in a 636 million sq foot area). The model qualifies its answer as a safety, logistical, and health problem based on crowd control, then further amends its calculation to exclude parks, buildings, or non-built-up areas. As noted previously, ChatGPT has access to structured and organized datasets. LLMs can perform the four basic software operations expected for databases: Create, Read, Update, and Delete (CRUD). In Appendix B, the iris dataset describes the classification challenge to identify one of three flower species by its distinct petal and sepal dimensions. For this multi-class clustering, the model answers that there are no duplicates or missing values for the 50 examples of each class. When prompted to mimic a python interpreter (as a Jupyter notebook), the model responds with the expected output given a prompt in code alone. For example, using "data.corr()" as the prompt produces the correct python output for the iris data. We prompt the model to produce graphical code given a desired figure output (such as histograms, heatmaps, boxplots, pair plots, scatter, and distribution plots). Rather than a language-only model producing the requested figures directly, ChatGPT responds with the python libraries (plotly, seaborn, matplotlib) and codes, which run in a separate interpreter to give the graphs shown in Appendices B-D. When asked for interpretations based on the exploratory charts, the model responds with a description, such as a box-and-whiskers plot showing the quartiles and statistical outliers. ChatGPT does not limit its response to general code commentary for box plots but identifies the given dataset's variables and highlights conclusions for each class. While GitHub or internet tutorials might support ChatGPT training for this EDA, we alter the expected output by adding or deleting data frame rows to avoid the memorized response. This way, the emergent capabilities for performing statistical inference get isolated from the baseline training inputs. ### Coding and Plots Appendices B-E focus on the four data science tasks to exercise ChatGPT's capabilities for python code generation. Because the LLM offers no graphical output, the problem set transforms from the previous tasks to coding solutions to test using Jupyter. Both Codex and copilot have offered coding assistance since August 2021. In Appendix B, ChatGPT shows the output of exploratory data analysis as displayed by python code for outliers, histograms, and distribution plots for the iris dataset. In Appendix C, we ask the LLM to modify the Titanic dataset in preparation for categorical analysis and survivorship demographics. The raw data (891 rows x 12 columns) offers irrelevant predictive variables ("PassengerID"), which we drop, then let ChatGPT pick up with the finer manipulation of the modified data. The sequence of steps matter along the path to generating a final working dataset ready for machine learning. 
In the prompt, for instance, one can define python functions that recode the embarkation points, ticket prices, and passenger class with mappings from symbols to full names. One further can bin age into five maturity categories between infant and elderly and distribute the passenger ages into ten-year brackets. A further partition transforms the gender and marital status into categoricals. Once ChatGPT gets the python functions, the running of dataset modifications provides an in-memory style of output for further analysis. It is worth noting that these steps illustrate how a language model serves as a computational interface to perform complex statistical actions, pivot groupings, and train-test splits that could not appear in the model's original corpus. Once the unique Titanic data is created and plotted, ChatGPT can answer demographic questions about survivorship: third-class male passengers proved least likely to live through the crash and rescue. In Appendix D, we perform essential machine learning (ML) steps that drop highly correlated variables and split the Boston housing data into train and test sets. We applied linear regression models from sci-kit learn python libraries and asked for root mean square error (RMSE) results. The initial prompts without context led to coding suggestions but refused to perform calculations on an arbitrary train-test split. However, when prompted to act as a Jupyter notebook, the code output renders actual numerical RMSE and correlation coefficients (R-squared) values. We created an example row to test the model and asked for a linear model prediction. A series of word problems round out the Appendix E example, such that based on the data, the model highlights low-crime areas or numbers of rooms. In a plausible real estate setting, the LLM answers with a data-driven response that a combination of many rooms and the lowest price might satisfy a buyer. To our knowledge, this output seems unique to this scale of LLM in public access, both as a data science platform but also as capable of performing as an ML algorithm and predict on unseen inputs. It is worth noting that previous models from the last few years, like GPT-2, failed on simple addition questions. ### Emergent Understanding Appendix E establishes that a randomized dataset was created using the python library Faker to synthesize an anonymous insurance claim dataset that could not be repeated in previous LLM inputs. This library makes categorical and numerical variables to include names, addresses, companies, claim reasons, and claim confidentiality levels. For the final mock dataset created, 200 rows and nine columns make up the in-memory capability of ChatGPT. When asked to reason over the nine columns, the LLM recognizes that 6 or 8 variables are categorical and that for the remaining two numerical categories, a median value emerges as the (randomized) claim amount of $1498.5. This number appears differently every time the conversation commences, such that the net amount sums the medical, travel, phone, and unknown reasons are segmented. The minimum possible value in this example would equal one, and the maximum (medical) claim would equal 2300. While this sample of 200 values over many trials should converge to approximately 1650, the resulting language model performs a reasonable approximation in building the anonymized dataset for insurance claim values. 
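A minimal sketch of the Titanic recoding and survivorship pivot described in this section is shown below. It uses the seaborn copy of the dataset, and the bin edges and labels are illustrative assumptions rather than the exact functions supplied to ChatGPT in the appendices.

```python
# Illustrative sketch of the Titanic age binning and survivorship pivot;
# column names follow the seaborn copy of the dataset and the bin edges
# are assumptions, not the prompts used in the paper.
import pandas as pd
import seaborn as sns

df = sns.load_dataset("titanic")

# Bin age into broad maturity categories.
df["age_group"] = pd.cut(df["age"],
                         bins=[0, 2, 12, 18, 60, 100],
                         labels=["infant", "child", "teen", "adult", "elderly"])

# Group-by / pivot of survival rate by class and sex.
pivot = df.pivot_table(values="survived", index="class",
                       columns="sex", aggfunc="mean")
print(pivot.round(2))  # third-class males show the lowest survival rate
```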
A current search engine (Google) value for: "Give me a random value between 1 and 2300" yields a link tree that samples 11.8 million examples on the internet but does not answer the arithmetic question specifically. The referral engine links to calculators. ## 4 Discussion The present work selects these demonstrations to illustrate the data science capabilities of large language models. The result extends previous efforts that highlight the computational capacity of language as an expressive and generative transformer. There exist few obvious precursors to evolving numeracy from literacy. As recently as 18 months prior, the most advanced linguistic models could not perform elementary addition. One consequence of the emergent skill that exceeds the expectations of a python coder would include the comprehensive explanation of word problem challenges. So not only does ChatGPT produce code from surveying Github, but it also reaches a natural (and relatively safe) conclusion based on the output of running sample code. While previous work has demonstrated this "fake storefront" or "Hollywood stage" effect in ChatGPT when assuming different operating systems, honeypots, or characters in a play, the role of data scientist provides a novel representation to evolve exploratory analysis. In the classic triad of iris, Titanic, and Boston housing, the work demonstrates that standard operations like pivoting, statistical observation, and anomaly detection suggest legitimate linguistic operations to supplement arithmetic understanding. Like young children, the LLM has some capacity for reasoning across symbolic abstraction (numeracy) and linguistic interpretation (literacy). An obvious extension of this work would combine the symbolic and literate to translate word problems in multiple languages with complex alphabets like Chinese, Cyrillic, or Arabic. In this way, one might imagine the union of symbolic AI with its more brute-force cousin as a trained transformer capable of compressing and representing the sum of human knowledge into "next token" predictions. ## 5 Conclusions In conclusion, the present work demonstrates large language models like ChatGPT carry a built-in capacity for performing numerical work using a basic linguistic representation and (attention-based) weights across a vast (40TB) dataset of human knowledge. Presumably, no single branch of its 175 billion parameters encodes a given dataset like Titanic or Boston housing, but even without the capability to upload the data, the model knows and illustrates complex manipulations. If presented with 8000 tokens (around 25 pages) of a novel dataset, one can presume that ad hoc and de novo data science becomes possible within an otherwise numerically challenged token-centric model by appending it as a data frame. The work surveys basic and advanced operations, including CRUD, which makes a dataset otherwise impossible to memorize but amenable to a linguistics model that can summarize and coherently alter what it stores in memory. While ChatGPT set out to demonstrate the first chat interface that could survive both "safely and naturally" in the wild, what the scale of its operation may eventually reveal is emergent qualities that either are too complex for human traceability and validation or that survive in some over-fit quality from a few key transformer branches in the maze of internet databases for training. 
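The following sketch shows one way such a synthetic claim table could be generated with Faker. The column names, category lists and uniform draw of claim amounts are assumptions for illustration (only the 200-row size and the 1 to 2300 amount range are taken from the text above), so the resulting medians will not match the session values quoted here.

```python
# Sketch of a synthetic insurance-claim table in the spirit of Appendix E.
# Column choices and value ranges are illustrative assumptions.
import random
import pandas as pd
from faker import Faker

fake = Faker()
Faker.seed(0)
random.seed(0)

rows = []
for claim_id in range(1, 201):  # 200 synthetic claims
    rows.append({
        "claim_id": claim_id,
        "name": fake.name(),
        "address": fake.address().replace("\n", ", "),
        "company": fake.company(),
        "claim_reason": random.choice(["medical", "travel", "phone", "unknown"]),
        "confidentiality": random.choice(["public", "internal", "restricted"]),
        "claim_amount": random.randint(1, 2300),
    })

df = pd.DataFrame(rows)
print(df["claim_amount"].median())  # changes with the seed, as in the chat sessions
```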
This work scratches the surface of what automated data science and exploratory analysis might evolve into given a language model that can calculate, infer and predict. ## Acknowledgments The authors thank the PeopleTec Technical Fellows program for encouragement and project assistance. The authors thank the researchers at OpenAI for developing large language models and allowing public access to ChatGPT.
2309.04962
Scalar fields around a loop quantum gravity black hole in de Sitter spacetime: Quasinormal modes, late-time tails and strong cosmic censorship
Loop quantum gravity, as one branch of quantum gravity, holds the potential to explore the fundamental nature of black holes. Recently, according to the quantum Oppenheimer-Snyder model in loop quantum cosmology, a novel loop quantum corrected black hole in de Sitter spacetime has been discovered. Here, we first investigate the corresponding quasinormal modes and late-time behavior of massless neutral scalar field perturbations based on such a quantum-modified black hole in de Sitter spacetime. The frequency and time domain analysis of the lowest-lying quasinormal modes is derived by Prony method, Matrix method as well as WKB approximation. The influences of loop quantum correction, the black hole mass ratio, and the cosmological constant on the quasinormal frequencies are studied in detail. The late-time behaviors of quantum-modified black holes possess an exponential decay, which is mainly determined not only by the multipole number but also by the cosmological constant. The impact of loop quantum correction on the late-time tail is negligible, but it has a significant impact on damping oscillation. To explore spacetime singularities, we examine the validity of strong cosmic censorship for a near-extremal quantum-modified black hole in de Sitter spacetime. As a result, it is found that the strong cosmic censorship is destroyed as the black hole approaches the near-extremal limit, but the violation becomes weaker as the cosmological constant and the loop quantum correction increase.
Cai-Ying Shao, Cong Zhang, Wei Zhang, Cheng-Gang Shao
2023-09-10T08:32:49Z
http://arxiv.org/abs/2309.04962v2
# Strong cosmic censorship for a black hole in loop quantum gravity ###### Abstract A satisfactory gravitational theory is essentially expected to deal with the problem of spacetime singularities. Loop quantum gravity, as one branch of quantum gravity, has the potential to explore the nature of black holes. Recently, according to the quantum Oppenheimer-Snyder model in loop quantum cosmology, a novel loop quantum corrected black hole in de Sitter spacetime has been discovered. Here, we focus on examining the strong cosmic censorship (SCC) based on such a quantum-modified black hole by considering a massless neutral scalar field perturbation. As a result, we find that the SCC is destroyed as the black hole approaches the near-extremal limit. Notably, the critical value of the black hole mass ratio for such a violation increases with the cosmological constant. This implies that the cosmological constant plays an important role in moderating the violation of the SCC. ## I Introduction Spacetime singularities, characterized by infinite curvature or density, have been a subject of great interest and curiosity in the fields of gravitation theory and relativistic astrophysics. According to the singularity theorems proved by Hawking and Penrose, the existence of singularities is unavoidable in generic gravitational collapses. The presence of singularities poses profound challenges to our understanding of the universe within the context of classical general relativity. One specific concern is the existence of naked singularities, which are singularities that are not hidden within a black hole event horizon and thus could be observed by outside observers, breaking down the predictive power of classical general relativity. In order to alleviate such a loss of predictability, Penrose proposed the cosmic censorship conjectures [1]. One is called the weak cosmic censorship conjecture (WCC), which asserts that any spacetime singularity formed in a generic gravitational collapse should be covered by a black hole horizon. It is obvious that the WCC guarantees the predictive power of classical general relativity only in the spacetime region outside of the black hole. The predictability of classical general relativity inside of the black hole is further restored by the other conjecture, named the strong cosmic censorship conjecture (SCC), which claims, colloquially, that timelike singularities are not allowed; equivalently, it can be formulated as the more rigorous mathematical statement that the Cauchy horizon inside of the black hole is unstable under generic perturbations and thus inextendible. This is the case for Kerr and Reissner-Nordstrom black holes, where the would-be timelike singularity does not lead to a violation of the SCC because the Cauchy horizon becomes singular and inextendible due to the exponential blueshift effect of the perturbations along it. Indeed, in asymptotically flat spacetimes, the SCC is always valid except for accelerating black holes [2; 3]. However, the validity of the SCC becomes more complicated in asymptotically de Sitter spacetimes. A positive cosmological constant leads to an exponential decay of the external perturbations, which can compete with the aforementioned blueshift effect along the Cauchy horizon [4; 5]. Thus the validity of the SCC depends on which effect wins the competition.
To be more specific, the SCC has recently been found violated in the nearly extremal charged Reissner-Nordstrom de Sitter (RNdS) black hole by the scalar field [6; 7; 8; 9; 10], the fermionic field [11; 12; 13], and the gravito-electromagnetic field [14]. In addition, for the rotating Kerr de Sitter black hole, the SCC is respected by bosonic field perturbations [15; 16], but violated by fermionic field perturbations [17]. For the Kerr-Newman de Sitter black hole, the SCC is violated by both the scalar and fermionic fields [18]. Last but not least, it is noteworthy that other factors, such as the smoothness of initial data, nonlinear effects, dark matter and dark energy, space-time dimensions, and quantum effects of the perturbation fields, could also impact the validity of the SCC [19; 20; 21; 22; 23; 24]. On the other hand, the presence of singularities in classical general relativity highlights the necessity for a theory of quantum gravity (QG) that combines the principles of quantum mechanics and general relativity. Among the various approaches to QG, loop quantum gravity (LQG) has shown great promise, with significant advancements made (see, e.g., [25; 26; 27; 28; 29; 30; 31] and the references therein). By applying the procedure of loop quantization to spherically symmetric black holes, one has gained many insights into the quantum nature of black holes [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42], where the singularity of the Schwarzschild black hole is believed to be resolved through the effects of LQG, as it should be, although the specific details of how this resolution occurs are scheme dependent. In particular, with the quantum Oppenheimer-Snyder model in loop quantum cosmology, a new quantum black hole model has been derived most recently [39], where the Schwarzschild singularity is resolved by a transition region that contains an inner horizon. As a result, the global structure of such a quantum black hole model resembles that of the charged Reissner-Nordstrom black hole. In this sense, the SCC is still potentially plagued by the emergence of the inner Cauchy horizon if one immerses this quantum-modified black hole in de Sitter space. The purpose of this paper is to examine whether the SCC holds for such a quantum-modified black hole in de Sitter space. To this end, we first follow the same procedure developed in [39] to derive the modified metric of the loop quantum black hole in de Sitter space in the next section. Then we present the dynamics of a neutral massless scalar perturbation and derive Christodoulou's formulation of the SCC in terms of quasinormal modes in Sec. III. With the above preparation, we use different numerical methods to calculate the quasinormal modes and explore the validity of the SCC in Sec. IV. Finally, the concluding remarks are presented in the last section. ## II The loop quantum gravity corrected geometry of the black hole in de Sitter space Let us follow the procedure introduced in [39] to get the quantum-modified spacetime by considering the quantum Oppenheimer-Snyder model. In this model, the entire spacetime is divided into two regions. One region comprises a pressureless dust ball with a constant density, and the other region is a vacuum outside the dust ball. In the region with dust, we introduce coordinates \((\tau,\tilde{r},\theta,\phi)\) with \(0<\tilde{r}<\tilde{r}_{0}\), adapted to the symmetry of the dust ball.
Then, the metric of the ball takes the form \[ds_{\rm in}^{2}=-d\tau^{2}+a(\tau)^{2}(d\tilde{r}^{2}+d\Omega^{2}), \tag{1}\] where \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\). The dynamics of the scale factor \(a(\tau)\) is governed by the LQC modified Friedmann equation \[H^{2}=\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\rho(1-\frac{\rho}{\rho_{c}})+\frac{\Lambda}{3},\quad\rho=\frac{M}{\frac{4}{3}\pi\tilde{r}_{0}^{3}a^{3}}, \tag{2}\] where the deformation parameter \(\rho_{c}\) denotes the critical density defined as \(\rho_{c}=\sqrt{3}/(32\pi^{2}\gamma^{3}G^{2}\hbar)\) with the Barbero-Immirzi parameter \(\gamma\). \(M\) is the mass of the ball with radius \(a(\tau)\tilde{r}_{0}\). It should be noted that the current work adds a cosmological constant term to the modified Friedmann equation, different from the initial model considered in [39]. Eq.(2) reverts to the usual Friedmann equation in the classical regime where \(\rho\ll\rho_{c}\). However, in the quantum regime where \(\rho\) is comparable with \(\rho_{c}\), so that the spacetime curvature becomes Planckian, the deformation term will prevent the matter density \(\rho(\tau)\) from reaching infinity, which thus prevents the formation of the singularity. Indeed, according to Eq.(2), at the moment \(\tau_{b}\) with \(\rho(\tau_{b})=\rho_{c}\left[1+\sqrt{1+\Lambda/(2\pi G\rho_{c})}\right]/2\), one has \(H=0\), which signifies a change of the dynamics of the ball from the collapsing phase to the expanding phase at \(\tau_{b}\). In the outside region of the dust ball, we assume the spacetime to be spherically symmetric and static, as done in [39]. We can use the coordinates \((t,r,\theta,\phi)\) to describe this region, which are adapted to the symmetry of the spacetime. In these coordinates, the metric of the outside region reads \[ds_{\rm out}^{2}=-f(r)dt^{2}+g(r)^{-1}dr^{2}+r^{2}d\Omega^{2}, \tag{3}\] where \(f(r)\) and \(g(r)\) are two unknown functions to be determined. In order to determine the unknown functions \(f(r)\) and \(g(r)\), we need to find the innermost boundary of the outside region, which is glued with the dust ball surface. The junction condition for the gluing requires that the reduced 3-metrics and the extrinsic curvatures along the gluing surfaces obtained from the 4-metrics \(ds_{\rm in}^{2}\) and \(ds_{\rm out}^{2}\) respectively are continuous. It should be noted that the worldline \(\tau\mapsto(\tau,\tilde{r}_{0},\theta,\phi)\) of each particle on the surface of the dust ball is a timelike geodesic without rotation. This implies that the innermost surface of the outside region is also composed of the congruence of freely falling timelike geodesics associated with the metric \(ds_{\rm out}^{2}\). Moreover, let \(\tau\mapsto(t(\tau),r(\tau),\theta,\phi)\) be a geodesic in the innermost surface of the outside region, with \(\tau\) being the length of the geodesic. Then, the surfaces are glued by the identification \((\tau,\tilde{r}_{0},\theta,\phi)\sim(t(\tau),r(\tau),\theta,\phi)\). Such a junction condition simplifies the calculation. So far, we have built our model and sketched the calculation to get the metric of the outside region by the junction condition. Then, just following the procedure shown in [39], we get \[f(r)=g(r)=1-\left(\frac{2GM}{r}+\frac{\Lambda r^{2}}{3}-\frac{\alpha G^{2}M^{2}}{r^{4}}\left(1+\frac{\Lambda r^{3}}{6GM}\right)^{2}\right) \tag{4}\] where \(\alpha=16\sqrt{3}\pi\gamma^{3}G\hbar\), proportional to the Planck area, is the quantum deformation parameter.
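As a quick numerical illustration (not part of the paper's own analysis), the sketch below evaluates the metric function \(f(r)\) of Eq. (4) and locates its positive roots, which correspond to the horizons discussed next. It uses the units \(G=\hbar=1\) and \(\gamma=0.2375\) adopted below; the sample values of \(M\) and \(\Lambda\), the scan range, and the grid resolution are arbitrary illustrative choices.

```python
# Minimal sketch: locate the positive roots of f(r) in Eq. (4) by scanning for
# sign changes and refining them with Brent's method. Units: G = hbar = 1,
# gamma = 0.2375. Only roots outside the bounce radius r_b are physical horizons.
import numpy as np
from scipy.optimize import brentq

gamma = 0.2375
alpha = 16 * np.sqrt(3) * np.pi * gamma**3   # quantum deformation parameter

def f(r, M, Lam):
    """Metric function of the quantum-corrected black hole in de Sitter space."""
    return 1.0 - (2.0 * M / r + Lam * r**2 / 3.0
                  - alpha * M**2 / r**4 * (1.0 + Lam * r**3 / (6.0 * M))**2)

def horizons(M, Lam, r_min=1e-3, r_max=None, n=200000):
    """Return the positive roots of f on [r_min, r_max], smallest first."""
    if r_max is None:
        r_max = 2.0 * np.sqrt(3.0 / Lam)     # safely beyond the de Sitter scale
    r = np.linspace(r_min, r_max, n)
    fr = f(r, M, Lam)
    sign_changes = np.where(np.sign(fr[:-1]) * np.sign(fr[1:]) < 0)[0]
    return [brentq(f, r[i], r[i + 1], args=(M, Lam)) for i in sign_changes]

# Example: for sufficiently large M the three largest roots are r_i < r_h < r_c.
print(horizons(M=1.0, Lam=0.1))
```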
It should be noted that the metric (4) is valid only for \(r>r_{b}\), with \(r_{b}\) denoting the minimal radius of the dust ball, at which the bounce occurs [39]. For convenience, we set \(G=\hbar=1\) and \(\gamma=0.2375\) in the remainder of this paper. In Fig. 1, we plot \(f(r)\) as a function of \(r\) for \(\Lambda=0.1\) and several values of \(M\). As shown in the figure, for \(M\) larger than some extremal value \(M_{\rm Ext}\), the metric function \(f(r)\) has three roots, corresponding to the three horizons of the black hole. They are respectively the Cauchy horizon \(r_{i}\), the event horizon \(r_{h}\) and the cosmological horizon \(r_{c}\), with \(r_{i}<r_{h}<r_{c}\). If one decreases the mass of the black hole for the given cosmological constant, the Cauchy and the event horizons gradually approach each other. When the Cauchy horizon coincides with the event horizon, the mass reaches the extremal value, which is denoted as \(M_{\rm Ext}\). For \(M<M_{\rm Ext}\), the event horizon disappears, resulting in a naked singularity. This case is thus prohibited by the WCC. Accordingly, our focus here is only on black holes with three horizons. ## III Quasinormal modes and strong cosmic censorship Now, we consider a massless neutral scalar field perturbation in the above background. The equation of motion in such a curved spacetime is governed by the following Klein-Gordon equation: \[\Box\Phi=0. \tag{5}\] Owing to the spherical symmetry of the spacetime, the scalar field can be expanded as \[\Phi=\frac{\phi(r)}{r}Y_{lm}(\theta,\varphi)e^{-i\omega t}, \tag{6}\] where \(Y_{lm}(\theta,\varphi)\) is the spherical harmonic function. By plugging it into the Klein-Gordon equation, the master equation in the radial part reads \[\left(\frac{d^{2}}{dr_{*}^{2}}+\omega^{2}-V_{eff}(r)\right)\phi(r)=0, \tag{7}\] where the effective potential is given by \[V_{eff}(r)=f(r)\left[\frac{l(l+1)}{r^{2}}+\frac{f^{\prime}(r)}{r}\right]. \tag{8}\] Here \(r_{*}\) is the tortoise coordinate, defined by \(dr_{*}=\frac{dr}{f(r)}\). Physically, there exist only purely ingoing waves near the event horizon and purely outgoing waves near the cosmological horizon [43]. Thus the boundary conditions are imposed as \[\phi(r)\approx e^{-i\omega r_{*}}\left(r\to r_{h}\right),\quad\phi(r)\approx e^{i\omega r_{*}}\left(r\to r_{c}\right). \tag{9}\] Then, the discrete quasinormal frequencies can be derived by solving the equation of motion with the above boundary conditions. On the other hand, if one imposes a purely ingoing wave near the event horizon, the solution of the equation of motion contains both outgoing and ingoing waves near the Cauchy horizon, which can be expressed as \[\phi_{ingoing}\approx e^{-i\omega u}(r-r_{i})^{\frac{i\omega}{\kappa_{i}}},\quad\phi_{outgoing}\approx e^{-i\omega u}, \tag{10}\] where \(u\) is the outgoing coordinate defined as \(u=t-r_{*}\) and \(\kappa_{i}\) is the surface gravity of the Cauchy horizon, defined as \(\kappa_{i}=\left|\frac{1}{2}f^{\prime}\left(r_{i}\right)\right|\). Obviously, the ingoing wave has a non-smooth radial dependence, which results in potentially non-smooth behavior of the energy-momentum tensor of the scalar field. Commonly, the violation of the SCC implies that the weak solution can be extended beyond the Cauchy horizon. In other words, the energy-momentum tensor, which involves the square of the first derivative of the scalar field, must be integrable at the Cauchy horizon, which requires [6] \[\beta=-\frac{\text{Im}\omega}{\kappa_{i}}>\frac{1}{2}.
\tag{11}\] for all the quasinormal modes. On the contrary, as long as one finds a lowest-lying quasinormal mode with the criterion \(\beta\leq\frac{1}{2}\), the SCC is preserved. Hence, in order to check the validity of the SCC, we exclusively focus on the lowest-lying quasinormal modes in the remainder of this paper. ## IV Numerical methods and relevant results In this section, we will use two numerical methods to accurately calculate the lowest-lying quasinormal modes and present some relevant results. Presently, many numerical computations of quasinormal modes have been developed with high precision [44; 45; 46]. Here, we introduce the finite difference method [47] to obtain the numerical evolution of the scalar field and then extract the quasinormal spectrum from the data samples with the Prony method [48]. In order to check the correctness of our results, we also employ the matrix method [49]. Besides that, the WKB approximation [50; 51; 52] is also employed for the quasinormal modes with large \(l\). Note that there are three distinct families of relevant quasinormal modes, namely, the near-extremal modes with \(l=0\), the de Sitter modes with \(l=1\), and the photon sphere modes with large \(l\). In what follows, we are going to explore the neutral massless scalar field with these three families of modes. First, it is necessary to perform a coordinate transformation to the double null coordinates, which are defined as \(u=t-r_{*}\) and \(v=t+r_{*}\). Accordingly, the Klein-Gordon equation can be expressed as \[-4\frac{\partial^{2}\phi}{\partial u\partial v}=V_{\text{eff}}(r(u,v))\phi. \tag{12}\] According to the finite difference scheme, the data at \(N\) can be obtained from \(W\), \(E\), and \(S\), such that the above equation of motion gives rise to \[\phi_{N}=\phi_{W}+\phi_{E}-\phi_{S}-\Delta u\Delta vV_{\text{eff}}(r(u,v))\frac{\phi_{W}+\phi_{E}}{8}, \tag{13}\] where the indices \(N,W,E,S\) denote grid-points, respectively corresponding to the points \(N\equiv(u+\Delta,v+\Delta)\), \(W\equiv(u,v+\Delta)\), \(E\equiv(u+\Delta,v)\), and \(S\equiv(u,v)\), with \(\Delta\) the step width in \((u,v)\). The time-domain profile is then obtained once one provides the specific initial conditions \[\phi(u,0)=0,\quad\phi(0,v)=e^{-\frac{(v-v_{c})^{2}}{2\sigma^{2}}}, \tag{14}\] where \(v_{c}\) and \(\sigma\) correspond to the center and width of the Gaussian wave packet. The resulting temporal evolution \(\phi(t,r_{*})\) can be obtained by sampling the late-time data at equal time intervals. Next, to extract the quasinormal modes from the temporal evolution data, the Prony method is a very useful tool; as an extension of the Fourier decomposition, it is widely used in signal processing and data analysis. The late-time signal at a certain \(r_{*}\) is composed of a set of quasinormal modes, which can be expanded as \[\phi(t)=\sum_{j=1}^{p}C_{j}e^{-i\omega_{j}t}. \tag{15}\] The time interval of the time-domain profile is between \(t_{0}\) and \(t=t_{0}+qh\), where \(h\) is the time interval of each point. The number of samples \(q\) is an integer satisfying \(q=2p\). For convenience, every sample is labeled by an integer \(n\). According to the above formula, the time-domain data at any time can be expressed as \[x_{n}=\sum_{j=1}^{p}\tilde{C}_{j}z_{j}^{n}, \tag{16}\] where \(x_{n}=\phi\left(t_{0}+nh\right),z_{j}=e^{-i\omega_{j}h},\tilde{C}_{j}=C_{j}e^{-i\omega_{j}t_{0}}\).
In order to find \(z_{j}\), it is necessary to introduce a polynomial function \[A(z)=\prod_{j=1}^{p}\left(z-z_{j}\right)=\sum_{i=0}^{p}\alpha_{i}z^{p-i}, \tag{17}\] with \(\alpha_{0}=1\). Obviously, for any integer \(j\) from 1 to \(p\), \(A(z_{j})=0\). Thus, it is easy to obtain the sum \[\sum_{i=0}^{p}\alpha_{i}x_{j-i}=\sum_{i=0}^{p}\alpha_{i}\sum_{k=1}^{p}\tilde{C}_{k}z_{k}^{j-i}=\sum_{k=1}^{p}\tilde{C}_{k}z_{k}^{j-p}A\left(z_{k}\right)=0. \tag{18}\] Considering \(\alpha_{0}=1\), the above equation can be rewritten as \[\sum_{i=1}^{p}\alpha_{i}x_{j-i}=-x_{j}. \tag{19}\] Thus, we get \(p\) equations by taking \(j\) from \(p+1\) to \(q\), from which the \(\alpha_{i}\) can be solved. After substituting the \(\alpha_{i}\) into Eq. (17), the \(z_{j}\) can be derived easily. Then the quasinormal modes are obtained with the relation \(\omega_{j}=\frac{i}{h}\ln\left(z_{j}\right)\). The coefficients \(\tilde{C}_{j}\) can also be found according to Eq. (16). As a comparison, we further resort to the matrix method to ensure the accuracy of the numerical results. By introducing a reasonable wave function with a new variable \(Y(y)\) and changing the equation of motion into a regular form on the interval \([0,1]\), Eq. (7) can be transformed into a matrix equation of the form \(\Gamma(\omega)\mathcal{Y}=0\) with the boundary condition \(Y(0)=Y(1)=0\), where \(\Gamma(\omega)\) is a matrix and \(\mathcal{Y}\) a vector given by \(\mathcal{Y}_{i}=Y(y_{i})\). The quasinormal modes can be determined by solving the nonlinear algebraic equation \(\det(\Gamma(\omega))=0\). In Tabs. I and II, we present low-lying quasinormal modes for the massless neutral scalar field obtained from both the Prony method and the matrix method. As shown in Tabs. I and II, the numerical results derived by both methods are consistent with each other, with a discrepancy within 5 percent, which demonstrates the reliability of our numerical calculations. Moreover, we also employ the WKB approximation to calculate low-lying quasinormal modes with large \(l\) and find that the corresponding results agree with those of the other methods. It is noted that as the mass of the black hole approaches the extremal limit, \(\frac{-\mathrm{Im}(\omega)}{\kappa_{i}}\) becomes larger and larger. This indicates that the SCC might be violated in the near-extremal regime. As a demonstration, Fig. 2 presents the variation of \(\beta\) with the black hole mass ratio \(M_{\mathrm{Ext}}/M\) for different cosmological constants at fixed \(l\). As expected, when the cosmological constant is fixed, the SCC is only violated once the mass ratio exceeds a certain critical value. In addition, the critical value of the mass ratio for the violation of the SCC increases with the cosmological constant. To test this further, we also plot the variation of \(\beta\) with the cosmological constant \(\Lambda\) for different black hole mass ratios \(M_{\mathrm{Ext}}/M\) in Fig. 3. It is noted that the larger the cosmological constant is, the harder it is to violate the SCC. It seems that the cosmological constant plays an important role in recovering the SCC. Furthermore, the critical value of \(\Lambda\) that rescues the SCC becomes larger with the increase of the mass ratio. Finally, to display the behavior of \(\beta\) more intuitively, we present the density plots of \(\beta\) in the \(\frac{M_{\mathrm{Ext}}}{M}-\Lambda\) plane in Fig. 4. The critical threshold \(\beta=1/2\) is marked as a solid red line. Only in the region above this line can the SCC be violated.
As one can see, the dashed black line \(\frac{M_{\mathrm{Ext}}}{M}=0.99\) illustrates that the SCC is violated for a smaller cosmological constant but respected when the cosmological constant is large enough. Nevertheless, the SCC is always violated provided the black hole is sufficiently close to the extremal limit. The critical value of the black hole mass ratio for the violation increases with the cosmological constant. To a certain degree, the cosmological constant can therefore moderate the violation of the SCC. Figure 2: The lowest-lying quasinormal modes with the frequency \(\beta=\frac{-\mathrm{Im}(\omega)}{\kappa_{i}}\) as a function of the black hole mass ratio \(M_{\mathrm{Ext}}/M\), where the dotted magenta horizontal line represents the threshold value \(\beta=\frac{1}{2}\) and the dotted cyan vertical line denotes the critical value of the mass ratio for the violation of the SCC. Figure 3: The lowest-lying quasinormal modes with the frequency \(\beta=\frac{-\mathrm{Im}(\omega)}{\kappa_{i}}\) as a function of the cosmological constant \(\Lambda\), where the dotted magenta horizontal line represents the threshold value \(\beta=\frac{1}{2}\) and the dotted cyan vertical line denotes the critical value of \(\Lambda\) for the restoration of the SCC. ## V Concluding remarks In this paper, we consider the perturbation of a massless neutral scalar field on top of a loop quantum gravity corrected black hole in de Sitter spacetime. We obtain the low-lying quasinormal modes by employing the Prony method and the matrix method. Based on these, we further explore the validity of the SCC under such a perturbation. As a result, the SCC is always violated when the black hole approaches the extremal one. It is found that the larger the cosmological constant is, the harder it is to violate the SCC, which implies that the cosmological constant plays an important role in alleviating such a violation. We conclude our paper by pointing out a potential tension between the SCC and LQG. If the SCC is valid, it means that the Cauchy horizon becomes singular and inextendible. But the effects of LQG are supposed to resolve any potential singularity. If this is the case, the Cauchy horizon should remain smooth and extendible in LQG even in the presence of our scalar field, so that predictive power is lost beyond the Cauchy horizon, invalidating the SCC even in the regime where we have found it to hold in this paper. Such a tension may be resolved by shifting our perspective. For instance, although the emergent Cauchy horizon in full LQG would then be smooth and extendible, with the accompanying loss of predictive power beyond it, such a loss might simply be the classical manifestation of quantum uncertainty. In this sense, the SCC should be discarded. To gain a deeper understanding of this issue, one should explore what the emergent classical geometry really looks like in loop quantum gravity coupled to the quantum scalar field. But this is beyond the scope of this paper and is expected to be reported elsewhere. ## Acknowledgements This work is supported by the National Key R&D Program of China under Grant No.2021YFC2203001 and Grant No.2022YFC2204602, the Natural Science Foundation of China Grant No.11925503, Grant No.12075026, and Grant No. 12275022.
2307.16612
Light, Reliable Spanners
A \emph{$\nu$-reliable spanner} of a metric space $(X,d)$, is a (dominating) graph $H$, such that for any possible failure set $B\subseteq X$, there is a set $B^+$ just slightly larger $|B^+|\le(1+\nu)\cdot|B|$, and all distances between pairs in $X\setminus B^+$ are (approximately) preserved in $H\setminus B$. Recently, there have been several works on sparse reliable spanners in various settings, but so far, the weight of such spanners has not been analyzed at all. In this work, we initiate the study of \emph{light} reliable spanners, whose weight is proportional to that of the Minimum Spanning Tree (MST) of $X$. We first observe that unlike sparsity, the lightness of any deterministic reliable spanner is huge, even for the metric of the simple path graph. Therefore, randomness must be used: an \emph{oblivious} reliable spanner is a distribution over spanners, and the bound on $|B^+|$ holds in expectation. We devise an oblivious $\nu$-reliable $(2+\frac{2}{k-1})$-spanner for any $k$-HST, whose lightness is $\approx \nu^{-2}$. We demonstrate a matching $\Omega(\nu^{-2})$ lower bound on the lightness (for any finite stretch). We also note that any stretch below 2 must incur linear lightness. For general metrics, doubling metrics, and metrics arising from minor-free graphs, we construct {\em light} tree covers, in which every tree is a $k$-HST of low weight. Combining these covers with our results for $k$-HSTs, we obtain oblivious reliable light spanners for these metric spaces, with nearly optimal parameters. In particular, for doubling metrics we get an oblivious $\nu$-reliable $(1+\varepsilon)$-spanner with lightness $\varepsilon^{-O({\rm ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)$, which is best possible (up to lower order terms).
Arnold Filtser, Yuval Gitlitz, Ofer Neiman
2023-07-31T12:39:18Z
http://arxiv.org/abs/2307.16612v1
# Light, Reliable Spanners ###### Abstract A _\(\nu\)-reliable spanner_ of a metric space \((X,d)\), is a (dominating) graph \(H\), such that for any possible failure set \(B\subseteq X\), there is a set \(B^{+}\) just slightly larger \(|B^{+}|\leq(1+\nu)\cdot|B|\), and all distances between pairs in \(X\setminus B^{+}\) are (approximately) preserved in \(H\setminus B\). Recently, there have been several works on sparse reliable spanners in various settings, but so far, the weight of such spanners has not been analyzed at all. In this work, we initiate the study of _light_ reliable spanners, whose weight is proportional to that of the Minimum Spanning Tree (MST) of \(X\). We first observe that unlike sparsity, the lightness of any deterministic reliable spanner is huge, even for the metric of the simple path graph. Therefore, randomness must be used: an _oblivious_ reliable spanner is a distribution over spanners, and the bound on \(|B^{+}|\) holds in expectation. We devise an oblivious \(\nu\)-reliable \((2+\frac{2}{k-1})\)-spanner for any \(k\)-HST, whose lightness is \(\approx\nu^{-2}\). We demonstrate a matching \(\Omega(\nu^{-2})\) lower bound on the lightness (for any finite stretch). We also note that any stretch below 2 must incur linear lightness. For general metrics, doubling metrics, and metrics arising from minor-free graphs, we construct {\em light} tree covers, in which every tree is a \(k\)-HST of low weight. Combining these covers with our results for \(k\)-HSTs, we obtain oblivious reliable light spanners for these metric spaces, with nearly optimal parameters. In particular, for doubling metrics we get an oblivious \(\nu\)-reliable \((1+\varepsilon)\)-spanner with lightness \(\varepsilon^{-O(\mathrm{ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)\), which is best possible (up to lower order terms). ###### Contents * 1 Introduction * 1.1 Our Results * 1.2 Technical Overview * 1.3 Related Work * 1.4 Organization * 2 Preliminaries * 3 Light Reliable Spanner for \(k\)-HSTs * 3.1 Decomposition of \(T\) to Heavy Paths * 3.2 Construction * 3.3 Analysis * 3.4 Improved Stretch for Small Max Degree HST * 4 Pairwise Partition Cover for Minor Free Graphs * 5 From Pairwise Partition Cover to Light \(k\)-HST Cover * 5.1 \(k\)-HST Cover for Doubling Metrics. * 6 Reliable Spanners for Metric Spaces * 6.1 Doubling Metrics * 6.2 General Metric Spaces * 6.3 Minor Free Graphs * 6.4 Doubling Metric of High Dimension * 6.5 General Ultrametric * 7 Light Reliable Spanner for the Path Graph * 7.1 Construction * 7.2 Analysis * 8 Improved Light Reliable Spanners for Minor-free Graphs * 9 Lower Bounds * 9.1 Lower bound for deterministic light reliable spanners * 9.2 Lower Bound for HST * 9.3 Lower Bound for the Unweighted Path * A A Helpful Lemma * B Light Reliable \(O(\log n)\)-Spanner ## 1 Introduction Given a metric space \((X,d_{X})\), a \(t\)-_spanner_ is a graph \(H\) over \(X\) such that for every \(x,y\in X\), \(d_{X}(x,y)\leq d_{H}(x,y)\leq t\cdot d_{X}(x,y)\), where \(d_{H}\) is the shortest path metric in \(H\). 1 The parameter \(t\) is often referred to as the _stretch_. In essence, the purpose of spanners is to represent the distance metric using a sparse graph. Spanners were introduced by Peleg and Schaffer [14], and found numerous applications throughout computer science. For a more systematic study, we refer to the book [13] and survey [1]. In many cases, the goal is to minimize the total weight of the spanner and not just the number of edges.
E.g., when constructing a road network, the cost is better measured by the total length of paved roads, as opposed to their number. This parameter of interest is formalized as the _lightness_ of a spanner, which is the ratio between the weight of the spanner (sum of all edge weights), and the weight of the Minimum Spanning Tree (MST) of \(X\): \(\frac{w(H)}{w(\text{MST})}\). Note that the MST has the minimal weight of any connected graph, and thus of any spanner with finite stretch. So the lightness is simply a "normalized" notion of weight. Footnote 1: Often in the literature, the input metric is the shortest path metric of a graph, and a spanner is required to be a subgraph of the input graph. Here we study metric spanners where there is no such requirement. Light spanners have been thoroughly studied. It is known that general \(n\)-point metric spaces admit a \((2k-1)(1+\varepsilon)\) spanner (for \(k\in\mathbb{N}\), \(\varepsilon\in(0,1)\)) with \(O(n^{1+1/k})\) edges and lightness \(O(\varepsilon^{-1}\cdot n^{1/k})\)[12, 13] (see also [1, 1, 14, 15]). Every \(n\)-point metric space with doubling dimension2 ddim admits a \((1+\varepsilon)\)-spanner with \(n\cdot\varepsilon^{-O(\text{ddim})}\) edges and lightness \(\varepsilon^{-O(\text{ddim})}\)[1] (see also [1, 14]). Finally, the shortest path metric of a graph excluding a fixed minor admits a (sub-graph, which already implies sparsity) \((1+\varepsilon)\)-spanner with lightness \(\tilde{O}(\varepsilon^{-3})\)[1]. Footnote 2: A metric space \((X,d)\) has doubling dimension ddim if every ball of radius \(2r\) can be covered by \(2^{\text{ddim}}\) balls of radius \(r\). The \(d\)-dimensional Euclidean space has doubling dimension \(\Theta(d)\). A highly desirable property of a spanner is the ability to withstand massive node failures. To this end, Bose _et al._ [1] introduced the notion of a _reliable spanner_. 3 Here, given a set of failed nodes \(B\subseteq X\), the residual spanner \(H\setminus B\) is a \(t\)-spanner for \(X\setminus B^{+}\), where \(B^{+}\supseteq B\) is a set slightly larger than \(B\). For the case of points in \(d\)-dimensional Euclidean space, for constant \(d\), Bose _et al._ [1] constructed an \(O(1)\)-spanner such that \(|B^{+}|\leq O(|B|^{2})\). Later, Buchin, Har-Peled, and Olah [1] constructed a \((1+\varepsilon)\) reliable spanner with \(n\cdot\varepsilon^{-O(d)}\cdot\nu^{-6}\cdot\tilde{O}(\log n)\) edges, guaranteeing that for every set of failed nodes \(B\), \(|B^{+}|\leq(1+\nu)\cdot|B|\). This result was generalized to metric spaces with doubling dimension ddim by Filtser and Le [11]. Footnote 3: For a comprehensive discussion with the related notion of fault-tolerant spanners, see Section 1.3. While reliable spanners for Euclidean and doubling metrics admit sparsity which is comparable to their non-reliable counterparts, the situation is very different for other metric families. Indeed, Har-Peled _et al._ [12] showed that every reliable \(k\)-spanner of the simple uniform metric (which is also a tree metric) must have \(\Omega(n^{1+1/k})\) edges. Nevertheless, it is possible to construct _oblivious_ reliable spanners for other metric spaces with good parameters, where the bound on the size of \(B^{+}\) is only in expectation.
**Definition 1** (Reliable spanner).: _A weighted graph \(H\) over point set \(X\) is a deterministic \(\nu\)-reliable \(t\)-spanner of a metric space \((X,d_{X})\) if \(d_{H}\) dominates4\(d_{X}\), and for every set \(B\subseteq X\) of points, called an attack set, there is a set \(B^{+}\supseteq B\), called a faulty extension of \(B\), such that: (1) \(|B^{+}|\leq(1+\nu)|B|\). (2) For every \(x,y\notin B^{+}\), \(d_{H[X\setminus B]}(x,y)\leq t\cdot d_{X}(x,y)\)._ Footnote 4: Metric space \((X,d_{H})\) dominates metric space \((X,d_{X})\) if \(\forall u,v\in X\), \(d_{X}(u,v)\leq d_{H}(u,v)\). _An oblivious \(\nu\)-reliable \(t\)-spanner is a distribution \(\mathcal{D}\) over dominating graphs \(H\), such that for every attack set \(B\subseteq X\) and \(H\in\operatorname{supp}(\mathcal{D})\), there exist a superset \(B^{+}_{H}\supseteq B\) such that, for every \(x,y\notin B^{+}_{H}\), \(d_{H[X\setminus B]}(x,y)\leq t\cdot d_{X}(x,y)\), and \(\mathbb{E}_{H\sim\mathcal{D}}\left[|B^{+}_{H}|\right]\leq(1+\nu)|B|\). We say that the oblivious spanner \(\mathcal{D}\) has \(m\) edges and lightness \(\phi\) if every \(H\in\operatorname{supp}(\mathcal{D})\) has at most \(m\) edges and lightness at most \(\phi\)._ For general \(n\)-point metrics, Filtser and Le [11] (improving over [10]) constructed an oblivious \(\nu\)-reliable \(8k+\varepsilon\)-spanner with \(\tilde{O}(n^{1+\frac{1}{k}}\cdot\varepsilon^{-2})\cdot\nu^{-1}\) edges. For the shortest path metric of graph excluding a fixed minor, there is oblivious \(\nu\)-reliable \((2+\varepsilon)\)-spanner with \(\varepsilon^{-2}\cdot\nu^{-1}\cdot\tilde{O}(n)\) edges, while every oblivious reliable spanner with stretch \(t<2\) requires \(\Omega(n^{2})\) edges [11]. For Euclidean and doubling metrics, oblivious \(\nu\)-reliable \((1+\varepsilon)\)-spanners can be constructed with only \(n\cdot\varepsilon^{-O(d)}\cdot\tilde{O}(\nu^{-1}\cdot\log^{2}\log n)\) edges [1, 11]. But what about lightness? No previous work attempted to construct reliable spanners of low total weight, even though it is clearly desirable to construct reliable networks of low total cost. The single most studied metric in the context of reliable spanners is the unweighted path \(P_{n}\). Indeed, most of the previous work [1, 1, 1, 11] focused on constructing various reliable \(1\)-spanners for the path graph, and then generalized them to other metric spaces using _locality sensitive orderings_5. A reliable spanner should have many edges between every two large enough sets, so that they could not be easily disconnected. Consider an attack \(B\) consisting of the middle \(\frac{n}{2}\) vertices on \(P_{n}\). If there are fewer than \(\frac{n}{8}\) crossing edges from left to right, then an attack \(B^{\prime}\supseteq B\) that also contains one endpoint per crossing edge will disconnect two sets of size \(\frac{n}{8}\). Therefore a linear number of vertices should be added to \(B^{\prime+}\). We conclude that every deterministic reliable spanner (for any finite stretch) must have lightness \(\Omega(n)\) (see Theorem 19 for a formal proof). Thus, all hope lies in oblivious reliable spanners. However, even here any two large sets must be well connected. Previous oblivious reliable spanners for \(P_{n}\) all had unacceptable polynomial lightness. Footnote 5: Locality sensitive ordering is a generic tool that “reduces” metric spaces into the line, by devising a collection of orderings such that every two points are “nearby” in one of the orderings, see [1, 11].
As reliable spanners for \(P_{n}\) are the main building blocks for reliable spanners for other metric spaces, all previous constructions have inherent polynomial lightness.6 Footnote 6: The only previous work that did not reduce to \(P_{n}\) is by Har-Peled _et al._ [10], who reduced to uniform metrics. Nevertheless, their approach on \(P_{n}\) will have stretch \(3\), and lightness \(\Omega(n)\). ### Our Results The results of this paper are summarized in Table 1. Our results on light reliable spanners for various metric families are based on constructing such spanners for \(k\)-HSTs; this lies in contrast to previous results on sparse reliable spanners, which were mostly based on reliable spanners for the path graph. Roughly speaking, previous works on reliable spanners show us that the "cost" of making a spanner \(\nu\)-reliable is often a \(\nu^{-1}\) factor in its size. Our results in this paper offer a similar view for light spanners: here the "cost" of reliability is a factor of \(\nu^{-2}\) in the lightness. That is, an \(\Omega(\nu^{-2})\) factor must be paid in the most basic cases (path graph, HST), while in more interesting and complicated metric families, we essentially match the best non-reliable light spanner constructions, up to this \(\nu^{-2}\) factor (and in some cases, such as minor-free graphs, an unavoidable constant increase in the stretch). For brevity, in the discussion that follows we omit the bounds on the size of our spanners (which can be found in Table 1). \(k\)-HSTs. We devise an oblivious \(\nu\)-reliable \(2+\frac{O(1)}{k}\)-spanner for any \(k\)-HST (see Definition 2), whose lightness is \(\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\) (see Theorem 1). It is implicitly shown in [12, Observation 1] that with stretch smaller than \(2\), the lightness must be \(\Omega(n)\). So when \(k\) is large, our stretch bound is nearly optimal.7 We also show that the lightness must be at least \(\Omega(\nu^{-2})\), regardless of the stretch, thus nearly matching our upper bound. Footnote 7: We also have a similar result for every \(k\geq 1\), with stretch \(2+\varepsilon\) and lightness \(\tilde{O}(\varepsilon^{-2}\cdot\nu^{-1}\cdot\log\log n)^{2}\). Light \(k\)-HST Covers. To obtain additional results for other metric families, following [12], we use the notion of _tree covers_, in which every tree is a \(k\)-HST (see Definition 3). We design these covers for metrics admitting a pairwise partition cover scheme (see Definition 4), such that each \(k\)-HST in the cover has lightness \(O(k\cdot\log n)\). \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Family & Stretch & Lightness & Size & Ref \\ \hline \multirow{2}{*}{Doubling ddim} & \(1+\varepsilon\) & \(\varepsilon^{-O(\operatorname{ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)\) & \(n\cdot\varepsilon^{-O(\operatorname{ddim})}\cdot\tilde{O}(\nu^{-2})\cdot*\) & Cor. 8 \\ \cline{2-5} & ddim & \(\tilde{O}(\log n\cdot\nu^{-2})\cdot\operatorname{ddim}^{O(1)}\) & \(n\cdot\tilde{O}\left(\nu^{-2}\right)\cdot\operatorname{ddim}^{O(1)}\cdot*\) & Cor. 14 \\ \hline \multirow{2}{*}{General Metric} & \(12t+\varepsilon\) & \(n^{1/t}\cdot\tilde{O}(\nu^{-2}\cdot\varepsilon^{-4})\cdot\log^{O(1)}n\) & \(\tilde{O}\left(n^{1+1/t}\cdot\nu^{-2}\cdot\varepsilon^{-3}\right)\) & Cor. 10 \\ \cline{2-5} & \(O(\log n)\) & \(\tilde{O}(\nu^{-2}\cdot\log^{4}n)\) & \(n\cdot\tilde{O}\left(\nu^{-2}\cdot\log^{3}n\right)\) & Cor. 22 \\ \hline Minor-Free & \(2+\varepsilon\) & \(\tilde{O}(\nu^{-2}\cdot\varepsilon^{-7}\cdot\log^{8}n)\) & \(\tilde{O}(n\cdot\nu^{-2}\cdot\varepsilon^{-6})\) & Thm.
18 \\ \hline Tree & \(<2\) & \(\Omega(n)\) & \(\Omega(n^{2})\) & [12] \\ \hline Weighted Path & \(1\) & \(\nu^{-2}\cdot\tilde{O}(\log n)\) & \(n\cdot\tilde{O}(\nu^{-1})\cdot*\) & Cor. 17 \\ \hline Unweighted & \(<\infty\) & \(\Omega(\nu^{-2}\cdot\log(\nu\cdot n))\) & - & Thm. 21 \\ Path & \(<\infty\) & \(\Omega(n)\) (deterministic) & - & Thm. 19 \\ \hline HST & \(2+\varepsilon\) & \(\tilde{O}(\varepsilon^{-4}\cdot\nu^{-2})\cdot*\) & \(n\cdot\tilde{O}\left(\varepsilon^{-3}\cdot\nu^{-2}\right)\cdot*\) & Thm. 15 \\ \cline{2-5} (ultrametric) & \(<\infty\) & \(\Omega(\nu^{-2})\) & - & Thm. 20 \\ \hline \end{tabular} \end{table} Table 1: Our results for constructing light \(\nu\)-reliable spanners for various metric spaces. All the results in the table (other than the one specified as deterministic) are for oblivious reliable spanners. Stretch \(<\infty\) stands for the requirement that all the points in \(X\setminus B^{+}\) belong to the same connected component in \(H\setminus B\). \(*\) stands for \(\operatorname{poly}(\log\log n)\) factors. General Metrics. For any metric space, by building a light \(k\)-HST cover, and applying our oblivious reliable spanner for every \(k\)-HST in the cover, we obtain an oblivious \(\nu\)-reliable \(O(k)\)-spanner with lightness \(\tilde{O}(\nu^{-2}\cdot n^{1/k})\). Note that up to a constant in the stretch (and lower order terms), this result is optimal, even omitting the reliability requirement. Doubling Metrics. For any metric with doubling dimension \(\operatorname{ddim}\),2 and \(\varepsilon\in(0,1)\), we devise an oblivious \(\nu\)-reliable \((1+\varepsilon)\)-spanner with lightness \(\varepsilon^{-O(\operatorname{ddim})}\cdot\tilde{O}\left(\nu^{-2}\cdot\log n\right)\). This result is tight up to second order terms. Indeed, it is folklore that any \((1+\varepsilon)\)-spanner for doubling metrics must have lightness \(\varepsilon^{-\Omega(\operatorname{ddim})}\) (see e.g., [1]). In Theorem 21, we show that every oblivious \(\nu\)-reliable spanner (for any finite stretch) for the shortest path metric of the unweighted path graph (which has \(\operatorname{ddim}\) 1) must have lightness \(\Omega(\nu^{-2}\cdot\log(\nu n))\). This dependence on \(n\) in the lower bound is somewhat surprising, and does not appear in the closely related fault-tolerant spanners for doubling metrics (see Section 1.3 for further details). ### Technical Overview From a high level, our construction of light reliable spanners for various graph families has the following structure. * We first devise light reliable spanners for \(k\)-HSTs. * We construct _light_ tree covers for the relevant family, where all the trees in the cover are \(k\)-HSTs. * The final step is to sample a reliable spanner for each tree in the cover, and take as a final spanner the union of these spanners. In what follows we elaborate more on the main ideas and techniques for each of those steps. #### 1.2.1 Reliable Light Spanner for \(k\)-HSTs Let \(T\) be the tree representing the \(k\)-HST (see Definition 2). Our construction consists of a collection of randomly chosen bi-cliques: For every node \(x\in T\) we choose at random a set \(Z_{x}\) of \(\ell\approx\nu^{-1}\) vertices from the leaves of the subtree rooted at \(x\) (denoted \(L(x)\)).
Then, for every \(x\in T\) with children \(x_{1},\ldots,x_{t}\), add to the spanner \(H\) all edges in \(Z_{x}\times Z_{x_{j}}\) for every \(j=1,\ldots,t\). Fix a pair of leaves \(u,v\in T\), let \(x=\operatorname{lca}(u,v)\), and let \(x_{i}\) (resp., \(x_{j}\)) be the child of \(x\) whose subtree contains \(u\) (resp., \(v\)). The idea behind finding a spanner path between \(u,v\) is as follows. We will connect both \(u,v\) to a certain chosen leaf \(x^{\prime}\in Z_{x}\). To this end, we first connect recursively \(u\) to a \(u^{\prime}\in Z_{x_{i}}\) and \(v\) to \(v^{\prime}\in Z_{x_{j}}\). Now, if the sets \(Z_{x},Z_{x_{i}},Z_{x_{j}}\) contain leaves \(x^{\prime},u^{\prime},v^{\prime}\) (respectively) that survive the attack \(B\), and the \(u-u^{\prime}\) and \(v-v^{\prime}\) connections succeed recursively, then we can complete the \(u-v\) path. That path will consist of the two "long" bi-clique edges \(\{u^{\prime},x^{\prime}\},\{x^{\prime},v^{\prime}\}\), and the recursive \(u-u^{\prime}\) and \(v-v^{\prime}\) paths. Note that since \(u,u^{\prime}\in L(x_{i})\), \(d_{T}(u,u^{\prime})\leq d_{T}(u,v)/k\) (and similarly \(d_{T}(v,v^{\prime})\leq d_{T}(u,v)/k\)), so we can show inductively that the total distance taken by these recursive paths is only \(O(d_{T}(u,v)/k)\). See Figure 1 for an illustration of a path in \(H\) between two vertices \(u,v\). Having established what is needed for finding a spanner path, we say that a leaf is _safe_ if all its ancestors \(x\) in \(T\) have that \(Z_{x}\) is not fully included in \(B\). The failure set \(B^{+}\) consists of \(B\) and all leaves that are not safe. A subtle issue is that a vertex may have a linear number of ancestors, and we will need \(\ell\) to be at least logarithmic to ensure good probability for success in all of them. To avoid this, we use the following approach. For any node \(x\) that has a "heavy" child \(y\) (that is, \(L(y)\) is almost as large as \(L(x)\)), we use the sample \(Z_{y}\) for \(x\), instead of sampling \(Z_{x}\). This way, any leaf will have only logarithmically many ancestors that are not heavy parents, which dramatically reduces the sample size needed for success in all ancestors. For the reliability analysis, we first separate out leaves that have an ancestor \(x\) for which a very large \(1-\nu\) fraction of the vertices in \(L(x)\) fall in the attack set \(B\). These leaves are immediately taken as failed, but there can be only \(\approx\nu|B|\) such leaves. For the other leaves, a delicate technical analysis follows to show that only a small number \(\approx\nu\cdot|B|\) of new vertices are expected to join \(B^{+}\). Note that if some node has a heavy child, we take the child's sample, so some care is needed in the analysis to account for this - roughly speaking, the definition of "heavy" must depend on the reliability parameter \(\nu\), in order to ensure sufficiently small failure probability. Improved stretch for bounded degree HSTs. In case the \(k\)-HST has bounded degree \(\delta\), we can alter the construction slightly, and for every \(x\) with children \(x_{1},\ldots,x_{s}\), also add all edges in \(Z_{x_{i}}\times Z_{x_{j}}\) for every \(1\leq i<j\leq s\). While this alternative increases the lightness and size by a factor of \(\delta\), the stretch improves to \(1+\frac{O(1)}{k}\), since we only use one long edge. This variation will be useful for the class of doubling metrics. Figure 1: _Illustration of the construction of the spanner for a \(k\)-HST. For each internal node \(x\) we sample a subset \(Z_{x}\) of leaves from \(L(x)\), and connect all of \(Z_{x}\) to \(Z_{x^{\prime}}\) for every child \(x^{\prime}\) of \(x\). The path from \(u\) to \(v\) will first go from \(u\) to a surviving vertex in \(Z_{x_{i}}\) (using recursion), from there to surviving vertices in \(Z_{x}\) and \(Z_{x_{j}}\), and finally to \(v\) (again by recursion)._
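To make the random bi-clique construction concrete, here is a minimal Python sketch of the basic variant described above (without the heavy-child optimization or the bounded-degree refinement); the tree representation, function names, and the tiny example HST are illustrative and not taken from the paper.

```python
# Minimal sketch of the random bi-clique spanner for a k-HST (basic variant).
# Each internal node x samples a set Z_x of up to ell leaves of its subtree,
# and Z_x is connected by a bi-clique to Z_c for every child c of x.
import random
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    point: Optional[str] = None                   # leaf label (None for internal nodes)
    children: List["Node"] = field(default_factory=list)

def leaves(x: Node) -> List[Node]:
    return [x] if not x.children else [l for c in x.children for l in leaves(c)]

def hst_spanner_edges(root: Node, ell: int, rng: random.Random) -> set:
    """Return spanner edges as frozensets of leaf labels."""
    edges = set()

    def recurse(x: Node) -> List[Node]:
        if not x.children:
            return [x]                             # a leaf's sample is itself
        L = leaves(x)
        Z_x = rng.sample(L, min(ell, len(L)))      # random sample Z_x of leaves of x
        for child in x.children:
            Z_c = recurse(child)                   # the child's own sample Z_c
            for u in Z_x:                          # bi-clique Z_x x Z_c
                for v in Z_c:
                    if u.point != v.point:
                        edges.add(frozenset((u.point, v.point)))
        return Z_x

    recurse(root)
    return edges

# Tiny usage example: a two-level HST over leaves a..f with ell = 2.
t = Node(children=[Node(children=[Node("a"), Node("b"), Node("c")]),
                   Node(children=[Node("d"), Node("e"), Node("f")])])
print(hst_spanner_edges(t, ell=2, rng=random.Random(0)))
```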
#### 1.2.2 Reliable Spanners via Light \(k\)-HST Covers A \((\tau,\rho)\)-tree cover of a metric space \((X,d)\) is a collection of \(\tau\) dominating trees, such that for every pair \(u,v\in X\), there exists a tree \(T\) in the cover with \(d_{T}(u,v)\leq\rho\cdot d(u,v)\). If \((X,d)\) is any metric that admits a \((\tau,\rho)\)-tree cover in which all trees are \(k\)-HSTs of weight at most \(O(l\cdot w(MST(X)))\), then we can devise an oblivious reliable spanner for \(X\) as follows. Sample an oblivious light \(\nu/\tau\)-reliable spanner \(H_{T}\) for each tree \(T\), and define \(H=\bigcup_{T}H_{T}\) as their union. We define \(B^{+}\) as the union of all the failure sets \(B^{+}_{T}\) over all tree spanners. Since in every \(\nu/\tau\)-reliable spanner of a tree only \(\nu/\tau\cdot|B|\) additional vertices fail in expectation, the total expected number of additional failures is at most \(\nu\cdot|B|\), as required. Now, if a pair \(u,v\) did not fail, there is a \(k\)-HST \(T\) in which \(d_{T}(u,v)\leq\rho\cdot d(u,v)\), and thus \(H\) has stretch at most \(\rho\cdot(2+\frac{O(1)}{k})\) for such a pair. Light \(k\)-HST Covers using Pairwise Partition Cover Schemes. A \((\tau,\rho,\varepsilon,\Delta)\)-Pairwise Partition Cover for a metric space \((X,d)\) is a collection of \(\tau\) partitions, where each cluster in each partition has diameter at most \(\Delta\), and every pair \(u,v\in X\) with \(\frac{\Delta}{2\rho}\leq d(u,v)\leq\frac{\Delta}{\rho}\) is _padded_ in at least one cluster \(C\) of a partition. This means that the cluster \(C\) contains \(u,v\), and also the balls of radius \(\varepsilon\Delta\) around them (see Definition 4). If \((X,d)\) admits such a cover for every \(\Delta\), we say it has a Pairwise Partition Cover Scheme (PPCS). In [12], PPCS were shown for general metrics and doubling metrics. In this paper, for any parameter \(0<\varepsilon<1/6\), we devise a \(\left(\frac{\log n}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon\right)\)-PPCS for minor-free graphs. In [12] it was shown that one can obtain a \(k\)-HST cover from a PPCS, in such a way that every cluster of diameter \(\Delta\) in the PPCS corresponds to an internal node \(x\) of one of the \(k\)-HSTs, with label \(\Gamma_{x}=\Delta\). For our purposes, we want every \(k\)-HST in the cover to be light. To this end, we augment the reduction of [12] by a feature that allows us to bound the lightness of the resulting \(k\)-HST. The idea is to use _nets_ (see Definition 5). A basic observation for a \(\Delta\)-net \(\mathcal{N}\) of a metric space \((X,d)\) is that \(w(MST(X))\geq\Omega(|\mathcal{N}|\cdot\Delta)\). On the other hand, the weight of a \(k\)-HST \(T\) is roughly \(\sum_{x\in T}k\cdot\Gamma_{x}\) (every node pays for the edge to its parent in \(T\)). So as long as the number of internal nodes with label \(\Delta\) is bounded by \(|\mathcal{N}|\), the \(k\)-HST will be rather light.
Now, given some partition with diameter bound \(\Delta\), we take a \(\approx\varepsilon\Delta\)-net \(\mathcal{N}\), and break all clusters that do not contain a net point. Then the points in the broken clusters are joined to a nearby remaining cluster. Since the net is dense enough, each cluster that was used for padding remains intact, while the number of clusters is bounded by \(|\mathcal{N}|\). This enables us to bound the weight of the \(k\)-HST accordingly. #### 1.2.3 Reliable Light Spanner for Minor-free Graphs with \(2+\varepsilon\) stretch In the special case of minor-free graphs, the framework described above will lose a factor of \(2\) in the stretch in two places. The first is due to the padding of the PPCS, and the second is due to the reliable spanners for the \(k\)-HSTs. While each of these losses is unavoidable,10 we can still exploit a certain property of our PPCS for minor-free graphs to improve the stretch to a near-optimal \(2+\varepsilon\). Footnote 10: Stretch 2 for HST is necessary: consider the uniform metric; every spanner with fewer than \(\binom{n}{2}\) edges has stretch 2. Every PPCS for minor-free graphs must have either \(\rho\geq 2\) or \(\tau=\Omega(n)\): Fix \(\rho<2\), and consider the unweighted star graph. There are \(n-1\) leaf-center pairs, while a single partition can satisfy at most a single pair. In our previous approach, suppose vertices \(u,v\) are padded in some cluster \(C\) of the PPCS, with diameter at most \(\Delta\). Then in the \(k\)-HST cover, we will have some tree with an internal node \(x\) corresponding to \(C\), whose label is \(\Gamma_{x}=\Delta\). The way we construct the spanner path between \(u,v\) is via some chosen leaf \(z\) in \(L(x)\), and as both \(d(u,z)\), \(d(v,z)\) can be as large as \(\Delta\), we lose a factor of \(2\) here. The main observation behind overcoming this loss is that in our PPCS for minor-free graphs, each cluster \(C\) is a ball around some center \(x\), and whenever a pair \(u,v\) is padded, then \(x\) is very close to the shortest \(u-v\) path, meaning that \(d(u,x)+d(v,x)\leq(1+\varepsilon)\cdot d(u,v)\). While we cannot guarantee that \(x\), or a vertex close to \(x\), will survive the attack \(B\), we can still use this to improve the stretch guarantee. Suppose that \(Z_{x}\) contains a surviving leaf \(z\) which is closer to \(x\) than both \(u\) and \(v\); then \[d(u,z)+d(z,v)\leq(d(u,x)+d(x,z))+(d(z,x)+d(x,v))\leq 2(d(u,x)+d(x,v))\leq 2(1+ \varepsilon)\cdot d(u,v)\;.\] So, instead of sampling a set \(Z_{x}\) of leaves at random from \(L(x)\), we create a bias towards vertices closer to the center \(x\). Concretely, order the leaves of \(L(x)\) by their distance to \(x\); we would like the probability of the \(j\)-th leaf of \(L(x)\) joining \(Z_{x}\) to be \(\approx\frac{1}{j}\). This way, the expected size of \(Z_{x}\) is still small, and if not too many vertices in the appropriate prefix of \(L(x)\) are in \(B\), then there is a good probability that such a \(z\in Z_{x}\) exists. However, as it turns out, this requirement is too strict, since every internal node \(x\) will force us to move vertices to \(B^{+}\) that fail due to many vertices of \(B\) in its induced ordering. To avoid this hurdle, we use a _global_ ordering for all internal nodes - a carefully chosen preorder of \(T\) - and prove that the induced order on \(L(x)\) is a good enough approximation of distances to \(x\) (specifically, up to an additive factor of \(\approx\Gamma_{x}/k\)).
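As a schematic illustration only, the biased sampling above can be mimicked by including the \(j\)-th closest leaf independently with probability about \(c/j\); the normalization \(c\) and the use of exact distances (rather than the preorder-based approximation used in the paper) are simplifying assumptions of this sketch.

```python
# Schematic sketch of distance-biased sampling of Z_x (simplified; the paper
# uses a global preorder of T instead of exact distances to the center x).
import random

def biased_sample(leaves_by_distance, c=2.0, rng=random):
    """Include the j-th closest leaf (1-indexed) with probability min(1, c/j).
    The expected sample size is roughly c * (1 + ln(n / c)) for n leaves."""
    return [leaf for j, leaf in enumerate(leaves_by_distance, start=1)
            if rng.random() < min(1.0, c / j)]

# Usage: leaves of L(x) sorted by (approximate) distance to the center x.
print(biased_sample([f"v{j}" for j in range(1, 21)], c=2.0, rng=random.Random(1)))
```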
#### 1.2.4 Reliable Light Spanner for the Path Graph There were several constructions of reliable spanners for \(P_{n}\) in previous works [1, 1, 2], but none of them could provide a meaningful bound on the lightness. For instance, the first step in the construction of [1] was to connect the first \(n/2\) vertices to the last \(n/2\) vertices via a bipartite expander graph. In particular, the total weight of just this step is \(\Omega(n^{2})\). The method of [11] is to sample \(\approx\nu^{-1}\) vertices as star centers, and connect all other vertices to each center. This construction also clearly isn't light, as the total weight of even one such star is \(\Omega(n^{2})\). Our construction of an oblivious light \(\nu\)-reliable spanner for (weighted) \(P_{n}\) is similar to the approach taken by [1]. It starts by sampling a laminar collection of subsets \([n]=V_{0}\supseteq V_{1}\supseteq V_{2}\supseteq\cdots\supseteq V_{\log n}\), where \(V_{i}\) contains \(\frac{n}{2^{i}}\) points in expectation. However, the construction of [1] used long range edges: from vertices in \(V_{i}\) to the nearest \(\approx 2^{i/2}\) other vertices in \(V_{i}\), and thus its lightness is polynomial in \(n\).11 Footnote 11: To see why the lightness is polynomial, consider just the level \(i=\frac{2}{3}\log n\), then \(|V_{i}|\approx n^{1/3}\), but also the number of connected neighbors is \(2^{i/2}=n^{1/3}\), so all \(\approx n^{2/3}\) edges between vertices in \(V_{i}\) are added. The average length of these edges is linear in \(n\), so the lightness is \(\Omega(n^{2/3})\). To ensure bounded lightness, we take a more local approach, and each point \(a\in V_{i}\) adds edges to only the nearest \(\ell\approx\nu^{-1}\) points in \(V_{i}\) and \(V_{i+1}\) on both its left and right sides. We remark that the connections to the next level are crucial in order to avoid additional logarithmic factors (since unlike [1], we cannot use the exponentially far away vertices, which would have provided high probability for connection of every vertex to the next level). The lightness follows as each edge \(e\) of \(P\) is expected to be "covered" \(\ell^{2}\) times, in each of the \(\log n\) levels. The reliability analysis of our spanner uses the notion of _shadow_, introduced by [1]. For the path \(P_{n}\), roughly speaking, a vertex \(u\) is outside the \(\alpha\)-shadow of an attack \(B\), if in all intervals containing \(u\), there is at most an \(\alpha\) fraction of failed vertices (in \(B\)). The reliability argument goes as follows: a vertex \(a\in[n]\setminus B\) fails and joins \(B^{+}\) only if there exists a level \(i\) in which all its connections to \(V_{i+1}\) fail. That is, its \(\ell\) closest vertices in \(V_{i+1}\) are in \(B\). But as points are chosen to \(V_{i+1}\) independently of \(B\), this is an unlikely event, whose probability can be bounded as a function of the largest \(\alpha\)-shadow that does not contain \(a\). To obtain our tight bound, we need a delicate case-analysis for the different regimes of \(\alpha\)-shadows. The stretch analysis is a refinement of the stairway approach of [1]. A nice feature is that each pair in \([n]\setminus B^{+}\) will have a shortest path of at most \(\log n\) hops in the spanner \(H\). ### Related Work Light fault-tolerant spanners. Levcopoulos _et al._ [10] introduced the notion of an \(f\)-fault-tolerant spanner, where it is guaranteed that for every set \(F\) of at most \(f\) faulty nodes, \(H\setminus F\) is a \(t\)-spanner of \(X\setminus F\).
However, the parameter \(f\) has to be specified in advance, and both sparsity and lightness of the spanner must depend polynomially on \(f\). Thus, unlike reliable spanners, it is impossible to construct sparse and light fault-tolerant spanners that can withstand scenarios where, say, half of the nodes fail. Czumaj and Zhao [11] constructed \(f\)-fault-tolerant spanners for points in constant-dimensional Euclidean space with optimal \(O(f^{2})\) lightness (improving over the \(2^{O(f)}\) lightness of [10]). This result was very recently generalized to doubling spaces by Le, Solomon, and Than [12], who obtain \(O(f^{2})\) lightness (improving over the \(O(f^{2}\log n)\) lightness of [11], and the \(O(f^{2}+f\log n)\) lightness of [13]). Abam _et al._ [1] introduced the notion of _region_ fault-tolerant spanners for the Euclidean plane. They showed that one can construct a \(t\)-spanner with \(O(n\log n)\) edges in such a way that if points belonging to a convex region are deleted, the residual graph is still a spanner for the remaining points. More on Light spanners.Light spanners were constructed for high dimensional Euclidean and doubling spaces (in a similar context to our Corollary 14) [14, 10]. Subset light spanners were studied for planar and minor-free graphs [11, 12, 13, 14], where the goal is to maintain distances only between a subset of terminals (and the lightness is defined w.r.t. the minimum Steiner tree). Bartal _et al._ constructed light prioritized and scaling spanners [1], where only a small fraction of the vertex pairs suffer from large distortion. Recently, Le and Solomon conducted a systematic study of efficient constructions of light spanners [10] (see also [10, 12]). Finally, light spanners were efficiently constructed in the LOCAL [13] and CONGEST [15] distributed models. ### Organization After a few preliminaries in section 2, we show our reliable spanner for \(k\)-HSTs in section 3. In section 4 we show how to devise PPCS for minor-free graphs, and in section 5 we show how to construct light \(k\)-HST covers based on PPCS. In section 6 we combine the results of all previous sections, and derive our results on light reliable spanners for various metric spaces. We show our reliable spanner for the path graph in section 7. In section 8 we devise a reliable spanner for minor-free graphs with improved stretch, and finally, in section 9 we exhibit our lower bounds for the path graph and for ultrametrics. ## 2 Preliminaries All logarithms (unless explicitly stated otherwise) are in base 2. We use \(\tilde{O}\) notation to hide poly-logarithmic factors. That is, \(\tilde{O}(s)=O(s)\cdot\log^{O(1)}(s)\). For a weighted graph \(G=(V,E)\), denote the distance between \(u,v\in V\) by \(d_{G}(u,v)\). When \(G\) is clear from context, we might write \(d(u,v)\). For a metric space \((X,d)\), we denote the ball of \(v\in X\) of radius \(\Delta\geq 0\) by \(B(v,\Delta)=\{u\in X\ :\ d(u,v)\leq\Delta\}\). The diameter of a cluster \(C\subseteq X\) is the maximum pairwise distance: \(\operatorname{diam}(C)=\max_{u,v\in C}d(u,v)\). Let \([n]\) denote the set \(\{1,\ldots,n\}\), and for integers \(a\leq b\) let \([a:b]\) denote \(\{a,\ldots,b\}\), and \([a:b)\) denote \(\{a,...,b-1\}\). We next define ultrametrics and HSTs.
**Definition 2**.: _A metric \((X,d)\) is called an ultrametric if it satisfies a strong form of the triangle inequality_ \[\forall x,y,z\in X,\ d(x,z)\leq\max\{d(x,y),d(y,z)\}\.\] _Equivalently [1], if there exists a bijection \(\varphi\) from \(X\) to the leaves of a rooted tree \(T\) in which:_ 1. _Each node_ \(v\in T\) _is associated with a label_ \(\Gamma_{v}\) _such that_ \(\Gamma_{v}=0\) _if and only if_ \(v\) _is a leaf, and if_ \(u\) _is a child of_ \(v\) _in_ \(T\) _then_ \(\Gamma_{v}\geq\Gamma_{u}\)_._ 2. \(d(x,y)=\Gamma_{\operatorname{lca}(\varphi(x),\varphi(y))}\) _where_ \(\operatorname{lca}(u,v)\) _is the least common ancestor of_ \(u,v\) _in_ \(T\)_._ _For \(k\geq 1\), a \(k\)-hierarchical well-separated tree (\(k\)-HST) is an ultrametric \(T\) that also satisfies that whenever \(u\) is a child of \(v\) in \(T\), then \(\Gamma_{v}\geq k\cdot\Gamma_{u}\)._ **Definition 3** (ultrametric cover).: _A \((\tau,\rho)\)-ultrametric cover for a metric space \((X,d)\) is a collection of at most \(\tau\) dominating\({}^{4}\) ultrametrics \(\mathcal{U}=\{(U_{i},d_{U_{i}})\}_{i=1}^{\tau}\) over \(X\), such that for every \(x,y\in X\) there is an ultrametric \(U_{i}\) for which \(d_{U_{i}}(x,y)\leq\rho\cdot d_{X}(x,y)\)._ _The cover is called \(l\)-light, if the weight of every ultrametric \(U_{i}\) is at most \(l\cdot w(MST(X))\)._ **Definition 4** (Pairwise Partition Cover Scheme).: _A collection of partitions \(\mathbb{P}=\{\mathcal{P}_{1},\ldots,\mathcal{P}_{s}\}\) is a \((\tau,\rho,\varepsilon,\Delta)\)-pairwise partition cover if (a) \(s\leq\tau\), (b) every partition \(\mathcal{P}_{i}\) is \(\Delta\)-bounded (that is, \(\forall C\in\mathcal{P}_{i}\), \(\operatorname{diam}(C)\leq\Delta\)), and (c) for every pair \(x,y\) such that \(\frac{\Delta}{2\rho}\leq d(x,y)\leq\frac{\Delta}{\rho}\), there is a cluster \(C\) in one of the partitions \(\mathcal{P}_{i}\) such that \(C\) contains both closed balls \(B(x,\varepsilon\Delta),B(y,\varepsilon\Delta)\). A space \((X,d)\) admits a \((\tau,\rho,\varepsilon)\)-pairwise partition cover scheme (PPCS) if for every \(\Delta>0\), it admits a \((\tau,\rho,\varepsilon,\Delta)\)-pairwise partition cover._ **Definition 5** (\(\Delta\)-net).: _For \(\Delta>0\) and a metric space \((X,d)\), a \(\Delta\)-net is a set \(\mathcal{N}\subseteq X\) such that:_ 1. _Packing: For every_ \(u,v\in\mathcal{N},d(u,v)>\Delta\)__ 2. _Covering: For every_ \(x\in X\)_, there exists_ \(u\in\mathcal{N}\) _satisfying_ \(d(x,u)\leq\Delta\)_._ It is well known that a simple greedy algorithm can find a \(\Delta\)-net. **Definition 6**.: _A metric space \((X,d)\) has doubling dimension \(\mathrm{ddim}\), if for every \(r>0\), every ball of radius \(2r\) can be covered by \(2^{\mathrm{ddim}}\) balls of radius \(r\). A family of metrics is called doubling, if all the metrics in the family have uniformly bounded doubling dimension._ By applying the definition iteratively, we get the following simple lemma. **Lemma 7** (Packing Lemma).: _If \((X,d)\) has doubling dimension \(\mathrm{ddim}\), and \(\mathcal{N}\) is a \(\Delta\)-net, then for any \(R>1\), a ball of radius \(R\cdot\Delta\) contains at most \((2R)^{\mathrm{ddim}}\) net points._ The proof uses the fact that a ball of radius \(\Delta/2\) cannot contain two net points of \(\mathcal{N}\). The following lemma is an extension of [10, Lemma 2], which shows that it suffices to bound the expected size and lightness of an oblivious \(\nu\)-reliable spanner in order to obtain worst-case guarantees.
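To make the remark about \(\Delta\)-nets concrete, here is a minimal sketch of the greedy construction (our own illustration; representing the metric as a `dist` function is an assumption): scan the points in arbitrary order and keep a point only if it is farther than \(\Delta\) from every point kept so far. Kept points are pairwise more than \(\Delta\) apart (packing), and every discarded point is within \(\Delta\) of a kept one (covering).

```python
def greedy_delta_net(points, dist, delta):
    """Return a Delta-net of `points` under the metric `dist`.

    Packing: any two returned net points are at distance > delta.
    Covering: every input point is within distance <= delta of some net point.
    """
    net = []
    for x in points:
        # keep x only if it is not already covered by the current net
        if all(dist(x, y) > delta for y in net):
            net.append(x)
    return net

# Toy usage on points of the real line with the absolute-value metric.
if __name__ == "__main__":
    pts = [0.0, 0.4, 1.1, 2.0, 2.3, 5.0]
    print(greedy_delta_net(pts, lambda a, b: abs(a - b), delta=1.0))
    # -> [0.0, 1.1, 2.3, 5.0]
```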
**Lemma 8**.: _Suppose that \((X,d)\) admits an oblivious \(\nu\)-reliable spanner \(\mathcal{D}\) with expected size \(m\) and expected lightness \(\phi\), then \((X,d)\) admits an oblivious \(3\nu\)-reliable spanner \(\mathcal{D}^{\prime}\) with size \(3\cdot m\) and lightness \(3\cdot\phi\)._ Proof.: We define \(\mathcal{D}^{\prime}\) by conditioning on the event \(A=\{(|H|\leq 3m)\wedge(w(H)\leq 3\phi\cdot w(MST(X)))\}\). Observe that \(\Pr[|H|>3m]\leq 1/3\) and also \(\Pr[w(H)>3\phi\cdot w(MST(X))]\leq 1/3\), both by Markov's inequality. Therefore \(\Pr[A]\geq 1/3\). For any attack \(B\subseteq X\), \[\mathbb{E}_{H\sim\mathcal{D}^{\prime}}[|B_{H}^{+}\setminus B|] = \mathbb{E}_{H\sim\mathcal{D}}[|B_{H}^{+}\setminus B|\ \mid A]\] \[= \sum_{H\in\mathrm{supp}(\mathcal{D})}|B_{H}^{+}\setminus B|\cdot\frac{\Pr[H\cap A]}{\Pr[A]}\] \[\leq \frac{1}{\Pr[A]}\cdot\sum_{H\in\mathrm{supp}(\mathcal{D})}|B_{H}^{+}\setminus B|\cdot\Pr[H]\] \[\leq \frac{\nu\cdot|B|}{\Pr[A]}\leq 3\nu\cdot|B|\.\] ## 3 Light Reliable Spanner for \(k\)-Hsts In this section we devise a light reliable spanner for the family of \(k\)-HSTs (see Definition 2). Let \(T\) be the tree corresponding to the given \(k\)-HST; we refer to its leaves as vertices, and to the internal nodes as nodes. Each node has an arbitrary order on its children. For a node \(x\) we denote by \(L(x)\) the set of leaves in the subtree rooted at \(x\), and by \(L=[n]\) the set of all leaves. For an internal node \(x\) in \(T\), let \(\deg(x)\) denote the number of children of \(x\). We will assume that \(\deg(x)\geq 2\) (as degree \(1\) nodes are never the least common ancestor, and thus can be contracted). Our goal is to prove the following theorem. **Theorem 1**.: _For any parameters \(\nu\in(0,1/6)\) and \(k>1\), every \(k\)-HST \(T\) admits an oblivious \(\nu\)-reliable \((2+\frac{2}{k-1})\)-spanner of size \(n\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\) and lightness \(\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\)._ ### Decomposition of \(T\) to Heavy Paths We apply the following decomposition of \(T\) into paths, reminiscent of the heavy-path decomposition [12]. Each node \(x\in T\) is given a tag, initially \(\sigma_{x}=|L(x)|\), and we set \(D=\emptyset\). Go over the nodes of \(T\) in preorder, and when visiting node \(x\) with children \(x_{1},\ldots,x_{t}\): If there is \(1\leq j\leq t\) such that \(\sigma_{x_{j}}>(1-\nu/2)\sigma_{x}\), set \(\sigma_{x_{j}}=\sigma_{x}\) and add the edge \(\{x,x_{j}\}\) to \(D\). For example, if \(T\) contains a path \((y_{1},y_{2},\ldots,y_{q})\) where \(y_{1}\) is the closest vertex to the root, and \(|L(y_{q})|>(1-\nu/2)|L(y_{2})|\) while \(|L(y_{2})|<(1-\nu/2)|L(y_{1})|\), then it will hold that \(\sigma_{y_{1}}\neq\sigma_{y_{2}}=\sigma_{y_{3}}=\cdots=\sigma_{y_{q}}=|L(y_{2})|\). We claim that \(\sigma_{x}\geq|L(x)|\) for every node \(x\in T\), because we either have equality or \(x\) inherits the original tag of one of its ancestors. As \(1-\nu/2>1/2\), there cannot be two different children of \(x\) with more than \(|L(x)|/2\) leaves in their subtree, hence there can be at most one child \(x_{j}\) for which an edge is added to \(D\). So indeed \(D\) is a decomposition of \(T\) into heavy paths (some paths can be singletons). Denote by \(\mathcal{Q}\) this collection of paths, and for each \(Q\in\mathcal{Q}\), let \(f(Q)\) be the lowest vertex (farthest from the root) on \(Q\). We overload this notation, and define \(f(x)=f(Q)\), where \(Q\) is the heavy path containing \(x\).
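To fix ideas, the tagging procedure just described can be written as a short recursive routine (a minimal sketch under our own tree representation, which is an assumption: `children[x]` lists the children of node `x`, with an empty list for leaves, and every node appears as a key). It propagates the tag \(\sigma\) to at most one heavy child per node and records the heavy edges \(D\).

```python
def heavy_path_decomposition(children, root, nu):
    """Tag every node with sigma and collect the heavy edges D.

    Returns (sigma, D) where sigma[x] >= |L(x)| for every node x, and D is
    the list of heavy edges (x, x_j) along which the tag of x was inherited.
    """
    leaves = {}                                 # leaves[x] = |L(x)|
    def count(x):
        leaves[x] = 1 if not children[x] else sum(count(c) for c in children[x])
        return leaves[x]
    count(root)

    sigma = {x: leaves[x] for x in children}    # initial tags sigma_x = |L(x)|
    D = []
    def visit(x):                               # preorder traversal of T
        for c in children[x]:
            if sigma[c] > (1 - nu / 2) * sigma[x]:
                sigma[c] = sigma[x]             # c inherits the tag of x
                D.append((x, c))
            visit(c)
    visit(root)
    return sigma, D
```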
Let \(F=\{f(Q)\}_{Q\in\mathcal{Q}}\) be the set of lowest vertices over all paths. **Claim 9**.: _Each root-to-leaf path \(W\) intersects at most \(O(\nu^{-1}\log n)\) paths in \(\mathcal{Q}\)._ Proof.: Fix a path \(Q\in\mathcal{Q}\). Note that all nodes in \(Q\) have the same tag \(\sigma_{Q}\). Whenever the path \(W\) leaves \(Q\), it will go to some node \(y\) with \(\sigma_{y}\leq(1-\nu/2)\sigma_{Q}\). The root has tag \(n\), so after leaving \(2\nu^{-1}\ln n\) heavy paths, the tag will be at most \[n\cdot(1-\nu/2)^{2\nu^{-1}\ln n}<n\cdot e^{-\ln n}=1\,\] since the tag of any internal node \(x\) is at least \(|L(x)|\), we must have reached a leaf. ### Construction For each node \(y\in F\), we independently sample uniformly at random a set \(Z_{y}\) of \(\ell=c\cdot\nu^{-1}\cdot\ln\left(\frac{\ln n}{\nu}\right)\) vertices from \(L(y)\), where \(c\) is a constant to be determined later. If there are less than \(\ell\) vertices in \(L(y)\), take \(Z_{y}=L(y)\). For each internal node \(x\) in \(T\) with children \(x_{1},\ldots,x_{t}\), and for every \(1\leq j\leq t\), we add the edges \(\{\{y,z\}\ :\ y\in Z_{f(x)},z\in Z_{f(x_{j})}\}\) to the spanner \(H\). Defining the set \(B^{+}\).Consider an attack \(B\). We say that an internal node \(x\in T\) is _good_ if \(Z_{f(x)}\setminus B\neq\emptyset\). A leaf \(u\) is _safe_ if for every ancestor \(x\) of \(u\), \(x\) is good. In other words, a leaf is safe if every ancestor \(x\) sampled a leaf to \(Z_{f(x)}\) which is not in \(B\). Define \(B^{+}\) as the set of all leaves which are not safe. ### Analysis Size Analysis.For each internal node \(x\) in \(F\) and each child \(x_{j}\) of \(x\), we added the bi-clique \(Z_{x}\times Z_{x_{j}}\), which contains at most \(\ell^{2}\) edges. Since the sum of degrees of internal nodes in \(T\) is \(O(n)\) (recall that all degrees are at least 2), the total number of edges added to \(H\) is \(O(n\cdot\ell^{2})=n\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\). Weight Analysis.First, we claim that the weight of the MST for the leaves of \(T\) is equal to \[\sum_{x\in T}(\deg(x)-1)\cdot\Gamma_{x}. \tag{1}\] This can be verified by running Boruvka's algorithm, say.12 Every internal node \(x\) in \(F\), adds at most \(\ell^{2}\cdot\deg(x)\) edges of weight at most \(\Gamma_{x}\) to the spanner. The total weight is thus Footnote 12: In Boruvka’s algorithm, we start with all vertices as singleton components. In each iteration, every component adds to the MST the edge of smallest weight leaving it (breaking ties consistently). For a \(k\)-HST, we use a small variation – only components which are the deepest leaves in the HST participate in the current iteration. We claim that the connected components after the \(j\)-th iteration correspond to nodes of height \(j\) above the leaves. Thus, in the \(j\)-th iteration, any node \(x\) of height \(j\) will add \(\deg(x)-1\) edges with weight \(\Gamma_{x}\) each, that connect the components corresponding to its children. \[\sum_{x\in F}\deg(x)\cdot\ell^{2}\cdot\Gamma_{x}=O(w(MST)\cdot\ell^{2})=w( MST)\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2})\.\] Stretch Analysis.The stretch analysis is based on the following lemma. **Lemma 10**.: _Let \(u\notin B^{+}\) be any safe leaf. Then for any ancestor \(x\) of \(u\) and any \(v\in Z_{f(x)}\setminus B\), the spanner \(H\) contains a path from \(u\) to \(v\) of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x}\) that is disjoint from \(B\)._ Proof.: The proof is by induction on \(|L(x)|\). 
The base case is when \(x=u\); then \(L(u)=\{u\}\) and the statement holds trivially. Let \(x\) be an ancestor of \(u\), and take any vertex \(v\in Z_{f(x)}\setminus B\). We need to find a path in \(H\) of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x}\) from \(u\) to \(v\) that is disjoint from \(B\). Let \(x_{u}\) be the child of \(x\) whose subtree contains \(u\). Since \(u\) is safe, we know that \(Z_{f(x_{u})}\setminus B\neq\emptyset\), so take any vertex \(u^{\prime}\in Z_{f(x_{u})}\setminus B\). By the induction hypothesis on \(x_{u}\), there is a path \(P^{\prime}\) in \(H\) from \(u\) to \(u^{\prime}\) of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{u}}\) disjoint from \(B\) (note that indeed \(|L(x_{u})|<|L(x)|\), as all internal nodes have degree at least 2). Recall that in the construction step for \(x\), we added all edges from \(Z_{f(x)}\) to \(Z_{f(x_{u})}\), in particular the edge \(\{u^{\prime},v\}\in H\). Note that \(v\notin B\), that \(u^{\prime},v\in L(x)\) and therefore \(d_{T}(u^{\prime},v)\leq\Gamma_{x}\), and as \(T\) is a \(k\)-HST we have that \(\Gamma_{x_{u}}\leq\frac{\Gamma_{x}}{k}\). It follows that the path \(P=P^{\prime}\circ\{u^{\prime},v\}\) from \(u\) to \(v\) in \(H\) is disjoint from \(B\), and has length at most \[\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{u}}+\Gamma_{x}\leq\left(\frac{1+\frac{1}{k-1}}{k}\right)\cdot\Gamma_{x}+\Gamma_{x}=\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x}\] Fix a pair of leaves \(u,v\notin B^{+}\), and let \(x=\operatorname{lca}(u,v)\). Since both are safe, \(Z_{f(x)}\setminus B\neq\emptyset\); pick any \(z\in Z_{f(x)}\setminus B\). By Lemma 10 there are paths in \(H\) from \(u\) to \(z\) and from \(v\) to \(z\), both disjoint from \(B\), of combined length at most \[2\cdot\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x}=\left(2+\frac{2}{k-1}\right)\cdot d_{T}(u,v)\.\] Reliability Analysis.For every \(x\in T\), denote by \(B^{(x)}\) the set of all vertices \(u\in L(x)\setminus B\) such that there is an ancestor \(z\) of \(u\) in the subtree rooted at \(x\) for which \(Z_{f(z)}\subseteq B\). In other words, those are the leaves (outside \(B\)) that are not safe due to a bad ancestor in the subtree rooted at \(x\). We say that a node \(x\in T\) is _brutally attacked_ if \(|B\cap L(x)|\geq(1-\nu)\cdot|L(x)|\), that is, at least a \(1-\nu\) fraction of the descendant leaves of \(x\) are in the attack \(B\). Denote by \(B^{(x)}_{1}\subseteq B^{(x)}\) the set of vertices \(u\in L(x)\setminus B\) that have a brutally attacked ancestor \(y\) in the subtree rooted at \(x\). Denote by \(B^{(x)}_{2}=B^{(x)}\setminus B^{(x)}_{1}\) the rest of the vertices in \(B^{(x)}\). We next argue that the number of vertices added to \(B^{+}\) (in the worst case) due to brutally attacked nodes is bounded by \(O(\nu)\cdot|B|\). Let \(A_{\text{ba}}\) be the set of nodes of \(T\) which are brutally attacked and are maximal w.r.t. the order induced by \(T\). That is, \(x\in A_{\text{ba}}\) if and only if \(x\) is brutally attacked, while for every ancestor \(y\) of \(x\), \(y\) is not brutally attacked. Clearly, for every \(x\in A_{\text{ba}}\) it holds that \(|B^{(x)}_{1}|\leq|L(x)\setminus B|\leq\nu\cdot|L(x)|\leq\frac{\nu}{1-\nu}\cdot|L(x)\cap B|\).
In total, for the root \(r\) of \(T\) it holds that \[|B^{(r)}_{1}|=\sum_{x\in A_{\text{ba}}}|B^{(x)}_{1}|\leq\sum_{x\in A_{\text{ba }}}\frac{\nu}{1-\nu}\cdot|L(x)\cap B|\leq\frac{\nu}{1-\nu}\cdot|B|\leq 2\nu \cdot|B|\.\] Next we bound the damage done (in expectation) due to non brutally attacked nodes. Denote \(\beta=\frac{1}{\ln\ln n}\). We will prove for any node \(x\in T\) which is not a heavy child, by induction on \(|L(x)|\) that \[\mathbb{E}[|B^{(x)}_{2}|]\leq\max\left\{0,\nu\cdot\beta\cdot\ln\ln(|L(x)|) \cdot|B\cap L(x)|\right\}. \tag{2}\] The base case where \(|L(x)|\leq\nu^{-1}\) holds trivially as \(B^{(x)}_{2}=\emptyset\). Indeed, consider a descendent leaf \(v\notin B\) of \(x\). For every ancestor internal node \(y\) of \(v\), which is a descendent of \(x\), it holds that \(f(y)=y\) (\(y\) does not have heavy children as \(|L(y)|-1=(1-\frac{1}{|L(y)|})\cdot|L(y)|<(1-\frac{\nu}{2})\cdot|L(y)|\)). In particular \(v\in Z_{f(y)}\setminus B\). It follows that \(v\notin B^{(x)}_{2}\), and thus \(B^{(x)}_{2}=\emptyset\). In general, let \(x\in T\) be an inner node, which is not a heavy child. Denote \(m=|L(x)|>\nu^{-1}\). \(x\) is the first vertex in a heavy path \(Q=(x=y_{1},y_{2},...,y_{s})\in\mathcal{Q}\). Let \(x_{1},\ldots,x_{t}\) be the children of all the nodes in \(Q\). Observe that none of \(x_{1},\ldots,x_{t}\) is a heavy child, and that \(L(x_{1}),\ldots,L(x_{t})\) is a partition of \(L(x)\). The main observation is that all the vertices in \(Q\) use the same sample \(Z_{f(x)}\), so a leaf \(u\) is in \(B^{(x)}_{2}\) if at least one the following holds: 1. \(u\in B^{(x_{j})}_{2}\) for some \(1\leq j\leq t\), or 2. \(Z_{f(x)}\subseteq B\). We conclude that \[\mathbb{E}[|B_{2}^{(x)}|]\leq\sum_{j=1}^{t}\mathbb{E}[|B_{2}^{(x_{j})}|]+|L(x)| \cdot\Pr[Z_{f(x)}\subseteq B]. \tag{3}\] In what follows we bound each of the two summands. For the first, we use the induction hypothesis on \(x_{j}\) (clearly \(|L(x_{j})|<m=|L(x)|\)), to get that \[\mathbb{E}\left[\left|B_{2}^{(x_{j})}\right|\right]\leq\max\left\{0,\nu\cdot \beta\cdot\ln\ln(|L(x_{j})|)\cdot|B\cap L(x_{j})|\right\}\.\] By definition of a heavy path, for every \(1\leq j\leq t\), \(|L(x_{j})|\leq(1-\nu/2)\cdot\sigma_{Q}=(1-\nu/2)\cdot m\). It holds that \((1-\frac{\nu}{2})\cdot m\geq(1-\frac{\nu}{2})\cdot\nu^{-1}\geq\nu^{-1}-\frac{ 1}{2}\geq 5.5\), and in particular, \(\ln\ln\left((1-\frac{\nu}{2})\cdot m\right)>0\). It follows that \[\sum_{j=1}^{t}\mathbb{E}[|B_{2}^{(x_{j})}|] \leq\sum_{j=1}^{t}\nu\cdot\beta\cdot\ln\ln\left(\left(1-\frac{ \nu}{2}\right)\cdot m\right)\cdot|B\cap L(x_{j})| \tag{4}\] \[=\nu\cdot\beta\cdot\ln\ln\left(\left(1-\frac{\nu}{2}\right)\cdot m \right)\cdot|B\cap L(x)|\.\] For the second summand, we now analyze the probability of the event \(Z_{f(x)}\subseteq B\). If \(|B\cap L(x)|\geq(1-\nu)\cdot|L(x)|\), then \(x\) is brutally attacked and thus \(B_{2}^{(x)}=\emptyset\) and (2) holds. We thus can assume \(|B\cap L(x)|<(1-\nu)\cdot|L(x)|\). By the heavy path decomposition, it holds that \(|L(f(x))|>(1-\frac{\nu}{2})\cdot m\). In the case that \(|L(f(x))|\leq\ell\) we take \(Z_{f(x)}=L(f(x))\), and as \(|L(f(x))|>(1-\frac{\nu}{2})\cdot m>(1-\nu)m>|B\cap L(x)|\), there must be a vertex in \(Z_{f(x)}\setminus B\). In particular, \(\Pr\left[Z_{f(x)}\subseteq B\right]=0\). Otherwise, we have that \(|L(f(x))|>\ell\). 
As \(Z_{f(x)}\) is chosen from \(L(f(x))\) independently of \(B\), by Lemma 33, the probability that all of the \(\ell\) vertices in \(Z_{f(x)}\) are chosen from \(B\cap L(f(x))\) is at most \[\Pr\left[Z_{f(x)}\subseteq B\right] =\frac{\binom{|B\cap L(f(x))|}{\ell}}{\binom{|L(f(x))|}{\ell}} \leq O(\sqrt{\ell})\cdot\left(\frac{|B\cap L(f(x))|}{|L(f(x))|}\right)^{\ell}\] \[\leq O(\sqrt{\ell})\cdot\left(\frac{1-\nu}{1-\frac{\nu}{2}}\right) ^{\ell-1}\cdot\frac{|B\cap L(f(x))|}{m}\] \[\stackrel{{(*)}}{{\leq}}\frac{\nu^{2}\cdot\beta}{4 \cdot\ln n}\cdot\frac{|B\cap L(f(x))|}{m}\leq\frac{\nu^{2}\cdot\beta}{4\cdot \ln m}\cdot\frac{|B\cap L(x)|}{m}\, \tag{5}\] where the inequality \({}^{(*)}\) uses that \(\frac{1-\nu}{1-\frac{\nu}{2}}\leq 1-\frac{\nu}{2}\leq e^{-\nu/2}\), and taking a large enough constant \(c\) in the definition of \(\ell\). By plugging (4) and (5) into (3) we conclude that, \[\mathbb{E}\left[\left|B_{2}^{(x)}\right|\right] \leq\sum_{j=1}^{t}\mathbb{E}[|B_{2}^{(x_{j})}|]+m\cdot\Pr[Z_{f(x) }\subseteq B]\] \[\leq\nu\cdot\beta\cdot\ln\ln\left(\left(1-\frac{\nu}{2}\right) \cdot m\right)\cdot|B\cap L(x)|+\frac{\nu^{2}\cdot\beta}{4\cdot\ln m}\cdot|B \cap L(x)|\] \[\stackrel{{(**)}}{{\leq}}\nu\cdot\beta\cdot\ln\ln m \cdot|B\cap L(x)|\ \,\] which concludes the proof of (2), and thus the induction step. It remains to validate \({}^{(**)}\): \[\ln\ln m-\ln\ln\left((1-\frac{\nu}{2})\cdot m\right) =\ln\frac{\ln m}{\ln\left((1-\frac{\nu}{2})\cdot m\right)}\geq\ln \frac{\ln m}{\ln m-\ln(1+\frac{\nu}{2})}\] \[\geq\ln\left(1+\frac{\ln(1+\frac{\nu}{2})}{\ln m}\right)\geq\frac {\ln(1+\frac{\nu}{2})}{2\ln m}\geq\frac{\nu}{4\ln m}\,\] using \(\ln(1+x)\geq\frac{x}{2}\) for \(0<x<1\). Finally, by applying (2) on the root \(r\) of \(T\), we get that \[\mathbb{E}[|B^{+}\setminus B|]=\mathbb{E}[|B^{(r)}_{1}|+|B^{(r)}_{2}|]\leq(2 \nu+\nu\cdot\beta\cdot\ln\ln n)\cdot|B|=3\nu\cdot|B|\.\] Theorem 1 follows by rescaling \(\nu\) by a factor of \(3\). ### Improved Stretch for Small Max Degree HST In this subsection we slightly modify Theorem 1 to obtain a spanner with stretch \((1+\frac{2}{k-1})\), while increasing the lightness and sparsity to be linear in the maximum degree of the HST. Later, we will use Theorem 2 to construct an oblivious light \((1+\varepsilon)\)-reliable spanner for doubling metrics. **Theorem 2**.: _Consider a \(k\)-HST \(T\) of maximum degree \(\delta\). For any parameters \(\nu\in(0,1/6)\) and \(k>1\), \(T\) admits an oblivious \(\nu\)-reliable \((1+\frac{2}{k-1})\)-spanner of size \(n\cdot\delta\cdot\tilde{O}\left(\nu^{-1}\cdot\log\log n\right)^{2}\) and lightness \(\delta\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\)._ Proof.: The construction will follow the exact same lines of Theorem 1 with a small tweak. We will use the heavy path decomposition \(\mathcal{Q}\), and for every node \(y\in F\), we will sample a set \(Z_{y}\) of size \(\ell\) from \(L(y)\). The set \(B^{+}\) (and the definition of safe), remain exactly the same. The only difference is in the definition of bi-cliques. Specifically, for each internal node \(x=x_{0}\) in \(T\) with children \(x_{1},\ldots,x_{t}\), for every \(0\leq j<j^{\prime}\leq t\), we add the edges \(\{\{y,z\}\ :\ y\in Z_{f(x_{j})},z\in Z_{f(x_{j^{\prime}})}\}\) to the spanner \(H\). That is, in addition to adding edges from \(Z_{f(x)}\) (the sample set of \(x\)) to all the other sampled sets (of the children of \(x\)), we also add all the edges between the two sets \(Z_{f(x_{j})},Z_{f(x_{j^{\prime}})}\) of every pair of children of \(x\). 
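For concreteness, the following sketch (our own illustration, reusing the `heavy_path_decomposition` helper sketched earlier; the exact sample-size formula and the assumption that node labels are hashable and comparable are ours) builds the edge set just described. With `siblings=False` it adds only the bi-cliques \(Z_{f(x)}\times Z_{f(x_{j})}\) of Theorem 1, and with `siblings=True` it also adds the bi-cliques between the samples of every pair of children, as in the present proof.

```python
import math, random
from itertools import combinations

def build_hst_spanner(children, root, nu, c=2.0, siblings=False):
    """Sample Z_{f(y)} for every heavy-path bottom y and add the bi-cliques.

    Returns a set of spanner edges between leaves (pairs ordered by label).
    """
    sigma, D = heavy_path_decomposition(children, root, nu)
    heavy_child = dict(D)                       # node -> its heavy child, if any

    def f(x):                                   # bottom of the heavy path of x
        while x in heavy_child:
            x = heavy_child[x]
        return x

    def leaves_below(x):
        return [x] if not children[x] else [l for ch in children[x] for l in leaves_below(ch)]

    n = len(leaves_below(root))
    # ell ~ c * nu^{-1} * ln(ln n / nu), guarded so the logarithm stays positive
    ell = max(1, math.ceil(c / nu * math.log(max(math.e, math.log(max(2, n)) / nu))))

    Z = {}
    def sample(y):
        if y not in Z:
            L = leaves_below(y)
            Z[y] = L if len(L) <= ell else random.sample(L, ell)
        return Z[y]

    H = set()
    for x in children:
        if not children[x]:                     # skip leaves
            continue
        groups = [sample(f(x))] + [sample(f(ch)) for ch in children[x]]
        pairs = combinations(range(len(groups)), 2) if siblings \
            else ((0, j) for j in range(1, len(groups)))
        for a, b in pairs:                      # add the bi-clique groups[a] x groups[b]
            for y in groups[a]:
                for z in groups[b]:
                    if y != z:
                        H.add((min(y, z), max(y, z)))
    return H
```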
As \(B^{+}\) is defined in the exact same way, for every attack \(B\) we have \(\mathbb{E}[|B^{+}|]\leq(1+\nu)\cdot|B|\). For the size analysis, consider an internal node \(x\) of degree \(\deg(x)\leq\delta\); we add at most \(\ell^{2}\cdot\binom{\deg(x)+1}{2}\leq\ell^{2}\cdot\delta\cdot\deg(x)\) edges. In total, the size of the spanner is bounded by \(n\cdot\ell^{2}\cdot\delta=n\cdot\delta\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\). For the lightness analysis, the total weight added due to an internal node \(x\) of degree \(\deg(x)\leq\delta\) is at most \(\ell^{2}\cdot\delta\cdot\deg(x)\cdot\Gamma_{x}\). Thus, the total weight added due to the bi-cliques is \(\sum_{x\in T}\deg(x)\cdot\ell^{2}\cdot\delta\cdot\Gamma_{x}=\delta\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\cdot w(MST)\). It remains to analyze the stretch. The argument is similar to Theorem 1, where the main difference is that a \(u-v\) path will use only a single edge at the highest level (instead of two). Note that since we only add additional edges to \(H\) in this variant, Lemma 10 still holds. Fix a pair of leaves \(u,v\notin B^{+}\), and let \(x=\operatorname{lca}(u,v)\). Let \(x_{u}\) (resp., \(x_{v}\)) be the child of \(x\) whose subtree contains \(u\) (resp., \(v\)). Since both \(u,v\) are safe, \(Z_{f(x_{u})}\setminus B\neq\emptyset\) and \(Z_{f(x_{v})}\setminus B\neq\emptyset\), so pick any \(u^{\prime}\in Z_{f(x_{u})}\setminus B\) and \(v^{\prime}\in Z_{f(x_{v})}\setminus B\). By the construction step for \(x\), we added all edges in \(Z_{f(x_{u})}\times Z_{f(x_{v})}\), in particular, \(\{u^{\prime},v^{\prime}\}\in H\). Note that \(d_{T}(u^{\prime},v^{\prime})\leq\Gamma_{x}\), since both are in \(L(x)\). By Lemma 10 there is a path \(P_{u}\) (resp., \(P_{v}\)) in \(H\) from \(u\) to \(u^{\prime}\) (resp., \(v\) to \(v^{\prime}\)), which is disjoint from \(B\), and of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{u}}\) (resp., \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{v}}\)). Since \(T\) is a \(k\)-HST we have that \(\Gamma_{x_{u}},\Gamma_{x_{v}}\leq\frac{\Gamma_{x}}{k}\), therefore the path \(P=P_{u}\circ\{u^{\prime},v^{\prime}\}\circ P_{v}\) is a \(u-v\) path in \(H\), disjoint from \(B\), and has total length at most \[2\cdot\left(1+\frac{1}{k-1}\right)\cdot\frac{\Gamma_{x}}{k}+\Gamma_{x}=\left(1+\frac{2}{k-1}\right)\cdot d_{T}(u,v)\.\] ## 4 Pairwise Partition Cover for Minor Free Graphs In this section we construct a _Pairwise Partition Cover Scheme_ (PPCS, recall Definition 4) for metrics arising from shortest paths of graphs excluding a fixed minor. The main building block in the construction of our PPCS is the so-called Shortest Path Decomposition (SPD) introduced by [1]. Roughly speaking, this is a recursive decomposition of the graph into shortest paths, and the measure of interest is the depth of the recursion, as captured by the following definition. **Definition 11** (SPDdepth).: _A graph has SPDdepth \(1\) if and only if it is a (weighted) path. A graph \(G\) has SPDdepth \(k\geq 2\) if there exists a shortest path \(P\), such that deleting \(P\) from the graph \(G\) results in a graph whose connected components all have SPDdepth at most \(k-1\)._ It is shown in [1] that \(n\)-vertex graphs excluding a fixed minor have SPDdepth \(O(\log n)\) (this follows by using the balanced separator consisting of \(O(1)\) shortest paths, by [1]).
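Definition 11 is naturally recursive, and the following sketch makes the recursion explicit (our own illustration; `pick_path` is an assumed oracle returning the vertex set of a shortest path of the current graph, e.g. one of the \(O(1)\) separator paths mentioned above). The returned recursion depth is an upper bound on the SPDdepth of the input graph.

```python
def spd_depth(vertices, edges, pick_path):
    """Recursively peel off shortest paths; the empty graph has depth 0."""
    if not vertices:
        return 0
    path = set(pick_path(vertices, edges))
    remaining = set(vertices) - path
    rem_edges = {(u, v) for (u, v) in edges if u in remaining and v in remaining}
    depth = 0
    for comp in connected_components(remaining, rem_edges):
        comp_edges = {(u, v) for (u, v) in rem_edges if u in comp and v in comp}
        depth = max(depth, spd_depth(comp, comp_edges, pick_path))
    return 1 + depth

def connected_components(vertices, edges):
    """Simple DFS-based connected components over a set of undirected edges."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            seen.add(v)
            stack.extend(adj[v] - comp)
        comps.append(comp)
    return comps
```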
We will prove the following lemma: **Lemma 12**.: _For any parameter \(0<\varepsilon<1/6\), any graph \(G=(V,E)\) with SPDdepth \(k\) admits a \(\left(\frac{k}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon\right)\)-PPCS._ In particular, as graphs excluding a fixed minor have SPDdepth \(O(\log n)\), we obtain the following corollary. **Corollary 3**.: _For any parameter \(\varepsilon<1/6\), every graph \(G=(V,E)\) that excludes a fixed minor admits a \(\left(\frac{O(\log n)}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon\right)\)-PPCS._ Proof of Lemma 12.: We will assume for simplicity (and w.l.o.g.) that \(\varepsilon^{-1}\) is an integer. Fix \(\Delta>0\). We will prove by induction on the SPDdepth, that graphs with SPDdepth \(k\) admit a \(\left(\frac{k}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon,\Delta\right)\)-PPC, assuming all graphs with SPDdepth less than \(k\) admit a \(\left(\frac{k-1}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon,\Delta\right)\)-PPC. For the base case, we think of a graph with SPDdepth \(0\) as the empty graph, where there is nothing to prove. Let \(G=(V,E)\) be a connected graph with SPDdepth \(k\); denote by \(d(u,v)\) the shortest path distance between \(u,v\in V\), and let \(P\) be a shortest path in \(G\) such that every connected component in \(G\backslash P\) has SPDdepth at most \(k-1\). Construction.The basic idea is quite simple: we use the \(\frac{k-1}{\varepsilon}\) partitions for the connected components of \(G\setminus P\), and create \(\frac{1}{\varepsilon}\) new partitions, whose goal is to provide padding for pairs \(u,v\) such that \(P\) intersects the shortest \(u-v\) path, or the balls \(B_{G}(u,\varepsilon\Delta),B_{G}(v,\varepsilon\Delta)\). We start by defining the new partitions \(\mathcal{P}_{new}=\{\mathcal{P}_{1},\ldots,\mathcal{P}_{\varepsilon^{-1}}\}\). Let \(\mathcal{N}=\{z_{1},\ldots,z_{l}\}\subseteq P\) be an \(\varepsilon\Delta\)-net for \(P\) (recall Definition 5). Fix one endpoint of \(P\), and assume that \((z_{1},\ldots,z_{l})\) are sorted by their distance to this endpoint of \(P\). For every \(i\in\{0,1,\ldots,\varepsilon^{-1}-1\}\), let \(\mathcal{N}_{i}=\{z_{j}\ :\ j\equiv i\mod\varepsilon^{-1}\}\). For every \(z_{p},z_{q}\in\mathcal{N}_{i}\) with \(1\leq p<q\leq l\), we have that \[d(z_{p},z_{q})=\sum_{j=p}^{q-1}d(z_{j},z_{j+1})>(q-p)\varepsilon\Delta\geq\Delta\.\] The equality holds as \(P\) is a shortest path, the first inequality holds since the distance between net points is larger than \(\varepsilon\Delta\), and the last inequality by definition of \(\mathcal{N}_{i}\). Thus, the balls \(B(z_{p},\Delta/2),B(z_{q},\Delta/2)\) are disjoint. For every \(0\leq i\leq\varepsilon^{-1}-1\), we set \(\mathcal{P}_{i}\) to contain the clusters \(\{B(z,\Delta/2)\}_{z\in\mathcal{N}_{i}}\), and add the rest of the vertices (those that are not contained in any of these balls) as singleton clusters. Let \(G_{1},\ldots,G_{t}\) be the connected components of \(G\backslash P\), where \(t\) is the number of connected components. For every \(1\leq j\leq t\), we apply the induction hypothesis on \(G_{j}\), which yields a \(\big{(}\frac{k-1}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon,\Delta\big{)}\)-PPC for \(G_{j}\). This is a collection \(\mathcal{F}^{(j)}=\{\mathcal{P}_{1}^{(j)},\ldots,\mathcal{P}_{\varepsilon^{-1}(k-1)}^{(j)}\}\) of \(\varepsilon^{-1}(k-1)\) partitions.
For every \(1\leq i\leq\varepsilon^{-1}(k-1)\), we construct a partition \(\mathcal{H}_{i}\) for \(G\), by taking \(\cup_{j=1}^{t}\mathcal{P}_{i}^{(j)}\), and adding the remaining vertices (note these are the vertices of \(P\)) as singleton clusters. We return \(\mathcal{F}=\{\mathcal{P}_{i}\}_{i=0}^{\varepsilon^{-1}-1}\cup\{\mathcal{H}_{i}\}_{1\leq i\leq\varepsilon^{-1}(k-1)}\) as the PPC for \(G\). It remains to show that \(\mathcal{F}\) is indeed a \(\big{(}\frac{k}{\varepsilon},\frac{2}{1-6\varepsilon},\varepsilon,\Delta\big{)}\)-PPC. Correctness.First observe that \(\mathcal{F}\) is a set of partitions: for \(0\leq i\leq\varepsilon^{-1}-1\), \(\mathcal{P}_{i}\) is a partition by definition, while for \(1\leq i\leq\varepsilon^{-1}\cdot(k-1)\), \(\mathcal{H}_{i}\) is a partition since the connected components are pairwise disjoint. The number of partitions is \(\varepsilon^{-1}+\varepsilon^{-1}(k-1)=\varepsilon^{-1}\cdot k\) as required. Diameter bound.Note that every partition in \(\mathcal{F}\) is \(\Delta\)-bounded, because every cluster is either a ball of radius \(\Delta/2\), a singleton, or a cluster in a \(\Delta\)-bounded partition \(\mathcal{H}_{i}\). Padding property.Let \(u,v\in V\), and denote by \(P_{uv}\) the shortest \(u-v\) path in \(G\), and by \(B_{u}=B(u,\varepsilon\Delta)\), \(B_{v}=B(v,\varepsilon\Delta)\). If \(\Delta>0\) is such that \(\frac{(1-6\varepsilon)\Delta}{4}\leq d(u,v)\leq\frac{(1-6\varepsilon)\Delta}{2}\), then we need to show that at least one of the partitions in \(\mathcal{F}\) contains a cluster \(C\) such that both \(B_{u},B_{v}\) are contained in \(C\). Suppose first that \(P\) is disjoint from \(P_{uv}\cup B_{u}\cup B_{v}\). In this case, there exists a connected component \(G_{j}\) in \(G\backslash P\), such that \(B_{u}\cup B_{v}\cup P_{uv}\subseteq G_{j}\), and therefore \(d_{G_{j}}(u,v)=d(u,v)\). Thus, by the induction hypothesis, there is a cluster \(C\) in \(\mathcal{F}^{(j)}\) which contains both \(B_{u},B_{v}\), and this cluster also belongs to one of the partitions \(\mathcal{H}_{i}\), and thus to \(\mathcal{F}\). (While in general, distances in \(G_{j}\) can be larger than those of \(G\), the balls \(B_{u},B_{v}\) and \(P_{uv}\) remain exactly the same, as they are disjoint from \(P\).) Consider now the case (see Figure 2 (a)) where \(P\) intersects \(P_{uv}\). Let \(x\in P\cap P_{uv}\) be an (arbitrary) vertex in the intersection. By the covering property of nets, there exists \(z\in\mathcal{N}\) such that \(d(x,z)\leq\varepsilon\Delta\). We bound the distance from any \(y\in B_{u}\) to \(z\) by the triangle inequality, \[d(z,y) \leq d(z,x)+d(x,u)+d(u,y)\] \[\leq d(z,x)+d(v,u)+d(u,y)\] \[\leq\varepsilon\Delta+\frac{(1-6\varepsilon)\Delta}{2}+\varepsilon\Delta\leq\Delta/2.\] Thus, the cluster \(C=B(z,\Delta/2)\) satisfies \(B_{u}\subseteq C\) and by a symmetric argument \(B_{v}\subseteq C\), as required. The remaining case is that \(P\) intersects \(B_{u}\) or \(B_{v}\). Assume w.l.o.g. \(P\) intersects \(B_{v}\), and let \(x\in P\cap B_{v}\) (see Figure 2 (b)). As before, there exists \(z\in\mathcal{N}\) such that \(d(x,z)\leq\varepsilon\Delta\). Let \(y\in B_{u}\). By the triangle inequality \[d_{G}(z,y) \leq d_{G}(z,x)+d_{G}(x,v)+d_{G}(v,u)+d_{G}(u,y)\] \[\leq\varepsilon\Delta+\varepsilon\Delta+\frac{(1-6\varepsilon)\Delta}{2}+\varepsilon\Delta\leq\Delta/2,\] hence \(B_{u}\subseteq C:=B(z,\Delta/2)\). The argument for \(B_{v}\) is simpler, as \(x\in B_{v}\). So both balls are in the same cluster \(C\), as required.
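The \(\varepsilon^{-1}\) new partitions built around the removed path admit a very short description in code (a minimal sketch, our own illustration; `dist` is an assumed shortest-path distance oracle for \(G\), and `path_net` is the \(\varepsilon\Delta\)-net \(z_{1},\ldots,z_{l}\) of \(P\), already sorted along the path). The partitions \(\mathcal{H}_{i}\) obtained from the recursion on \(G\setminus P\) are not shown.

```python
def path_partitions(vertices, dist, path_net, delta, eps_inv):
    """Build the eps_inv new Delta-bounded partitions around the removed path.

    Partition i keeps the balls B(z, Delta/2) for net points whose position in
    `path_net` is congruent to i modulo eps_inv, plus singletons for every
    uncovered vertex.  A partition is returned as a list of clusters (lists).
    """
    partitions = []
    for i in range(eps_inv):
        N_i = [z for j, z in enumerate(path_net) if j % eps_inv == i]
        clusters = {z: [] for z in N_i}
        rest = []
        for v in vertices:
            # the balls around points of N_i are pairwise disjoint,
            # so v lands in at most one of them
            home = next((z for z in N_i if dist(v, z) <= delta / 2), None)
            (clusters[home] if home is not None else rest).append(v)
        partitions.append([c for c in clusters.values() if c] + [[v] for v in rest])
    return partitions
```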
Figure 2: _Illustration of the proof of Lemma 12, where we show that if \(P\) (colored red) intersects either \(P_{uv}\) (figure (a)) or any of \(B_{u},B_{v}\) (figure (b)), then both \(B_{u},B_{v}\) are in \(B(z,\Delta/2)\)._ ## 5 From Pairwise Partition Cover to Light \(k\)-Hst Cover In this section we devise a light \(k\)-HST cover (see Definition 3) from a Pairwise Partition Cover Scheme (PPCS, see Definition 4). The framework essentially follows that of [12], except that we also need to guarantee a bound on the lightness of each tree in the cover. To this end, we ensure that each cluster contains a net point (recall Definition 5). The following simple claim, which lower bounds the MST weight with respect to a net, is proven in [13, Claim 1]. **Claim 13** ([13]).: _Let \(\mathcal{N}\) be a \(\Delta\)-net of a metric space \((X,d)\). Then \(|\mathcal{N}|\leq\left\lceil\frac{2}{\Delta}\cdot w(\mathrm{MST}(X))\right\rceil\)._ The main result of this section is captured by the following theorem. **Theorem 4**.: _Fix any integer \(\tau\geq 1\), and parameters \(\rho\geq 1\) and \(0<\varepsilon<1/12\). Suppose that a given metric space \((X,d)\) admits a \((\tau,\rho,\varepsilon)\)-PPCS, then for any \(k\geq\frac{8\rho}{\varepsilon}\), \((X,d)\) admits a \(O(k\log n)\)-light \(\left(O(\frac{\tau}{\varepsilon}\log k),\rho(1+3\varepsilon)\right)\)-\(k\)-HST cover._ Assume w.l.o.g. that the minimal distance in \((X,d)\) is \(1\), and let \(\Phi\) be the maximal distance. Fix a real number \(1\leq l\leq k\), and for \(-1\leq i\leq\log_{k}\Phi\), let \(\Delta_{i}(l)=l\cdot k^{i}\) (for brevity we will omit \(l\) when it is clear from context), and let \(\mathcal{N}_{i}\) be an \(\frac{\varepsilon\Delta_{i}}{4}\)-net. The following lemma shows how to change a collection of pairwise partition covers, so that it becomes hierarchical and each cluster contains a net point. **Lemma 14**.: _Fix a real number \(1\leq l\leq k\). For each integer \(-1\leq i\leq\log_{k}\Phi\), let \(\{\mathcal{P}^{i}_{1},\ldots,\mathcal{P}^{i}_{\tau}\}\) be a \((\tau,\rho,\varepsilon,\Delta_{i})\)-pairwise partition cover. Then there exists a collection of \((\tau,(1+\varepsilon)\rho,0,(1+\varepsilon)\Delta_{i})\)-pairwise partition covers \(\{\tilde{\mathcal{P}}^{i}_{1},\ldots,\tilde{\mathcal{P}}^{i}_{\tau}\}_{i=-1}^{\log_{k}\Phi}\) that satisfies the following two properties:_ 1. _For every_ \(-1\leq i\leq\log_{k}\Phi\) _and_ \(1\leq j\leq\tau\)_,_ \(|\tilde{\mathcal{P}}^{i}_{j}|\leq|\mathcal{N}_{i}|\)_._ 2. _For every_ \(1\leq j\leq\tau\)_, the partitions_ \(\{\tilde{\mathcal{P}}^{i}_{j}\}_{i\geq-1}\) _are hierarchical (that is, for each_ \(0\leq i\leq\log_{k}\Phi\)_, every cluster of_ \(\tilde{\mathcal{P}}^{i-1}_{j}\) _is contained in a cluster of_ \(\tilde{\mathcal{P}}^{i}_{j}\)_)._ Proof.: Fix \(j\in[\tau]\). We show how to construct \(\{\tilde{\mathcal{P}}^{i}_{j}\}_{i\geq-1}\) by induction on \(i\). For \(i=-1\), since \(\Delta_{-1}=l/k\leq 1\), there is no padding requirement, and we may take the trivial partition into singletons. Assume that for some \(0\leq i\leq\log_{k}\Phi\), we constructed \(\tilde{\mathcal{P}}^{i-1}_{j}\) that satisfies both properties, and we will show how to construct \(\tilde{\mathcal{P}}^{i}_{j}\). Start with the partition \(\mathcal{P}^{i}_{j}\). The first change will force every cluster to contain a net point. For each cluster \(C\in\mathcal{P}^{i}_{j}\), if \(C\cap\mathcal{N}_{i}=\emptyset\), we remove \(C\) from \(\mathcal{P}^{i}_{j}\).
Then for every \(v\in C\) we add \(v\) to the cluster in \(\mathcal{P}^{i}_{j}\) containing the nearest net point in \(\mathcal{N}_{i}\) to \(v\). This creates a partition \(\hat{P}^{i}_{j}\). Now every cluster contains at least one net point, therefore \(|\hat{P}^{i}_{j}|\leq|\mathcal{N}_{i}|\). Also observe that the new cluster of \(v\) will not be removed. The second change will guarantee the hierarchical property. For each cluster \(C^{\prime}\in\tilde{\mathcal{P}}^{i-1}_{j}\), move all the vertices of \(C^{\prime}\) to some cluster \(C\in\hat{P}^{i}_{j}\) which intersects \(C^{\prime}\). Call the resulting partition \(\tilde{\mathcal{P}}^{i}_{j}\), which satisfies the second property by construction. Observe that it is no longer true that every cluster of \(\tilde{\mathcal{P}}^{i}_{j}\) contains a net point (it could have moved in the second change). Nevertheless, the number of clusters in \(\tilde{\mathcal{P}}^{i}_{j}\) did not change. It remains to show that \(\{\tilde{\mathcal{P}}^{i}_{1},\ldots\tilde{\mathcal{P}}^{i}_{\tau}\}\) is indeed a \((\tau,(1+\varepsilon)\rho,0,(1+\varepsilon)\Delta_{i})\)-pairwise partition cover. Diameter bound.We start by showing that each cluster \(\tilde{C}\in\tilde{\mathcal{P}}^{i}_{j}\) has diameter at most \((1+\varepsilon)\Delta_{i}\), by induction on \(i\). The base case \(i=-1\) is trivial since every cluster has diameter \(0\). Assume the claim holds for \(i-1\) and we will prove it for \(i\). Let \(C\in\mathcal{P}^{i}_{j}\) be the cluster before the updates leading to \(\tilde{C}\). In the first change we may have moved vertices from other clusters (those without a net point) to \(C\), creating the cluster \(\hat{C}\). By the covering property of nets, these vertices are at distance most \(\frac{\varepsilon\Delta_{i}}{4}\) from some net point in \(C\). For any \(u\in\hat{C}\), let \(r_{u}\in C\) be the closest point to \(u\) in \(C\) (not necessarily a net point). Then for any \(u,v\in\hat{C}\), \[d(u,v)\leq d(u,r_{u})+d(r_{u},r_{v})+d(r_{v},v)\leq\frac{\varepsilon\Delta_{i }}{4}+\operatorname{diam}(C)+\frac{\varepsilon\Delta_{i}}{4}=\operatorname{ diam}(C)+\frac{\varepsilon\Delta_{i}}{2}. \tag{6}\] In particular, \(\operatorname{diam}(\hat{C})\leq\operatorname{diam}(C)+\frac{\varepsilon \Delta_{i}}{2}\). In the second change, we may have added to \(\hat{C}\) entire clusters \(C^{\prime}\in\tilde{\mathcal{P}}^{i-1}_{j}\) which intersect it, creating \(\tilde{C}\) (note that we may have also removed points from \(C\), but this surely will not increase the diameter). The diameter of each \(C^{\prime}\) is at most \((1+\varepsilon)\Delta_{i-1}\) by the induction hypothesis. Hence, by a similar argument to above, \[\operatorname{diam}(\tilde{C})\leq\operatorname{diam}(\hat{C})+2 \operatorname{diam}(C^{\prime})\leq\operatorname{diam}(\hat{C})+2(1+ \varepsilon)\Delta_{i-1}\.\] Recall that \(k\geq 8\rho/\varepsilon\geq(1+\varepsilon)4/\varepsilon\), and so \(2(1+\varepsilon)\Delta_{i-1}=2(1+\varepsilon)\Delta_{i}/k\leq\varepsilon \Delta_{i}/2\). 
We conclude that \[\operatorname{diam}(\tilde{C})\leq\operatorname{diam}(\hat{C})+\frac{\varepsilon\Delta_{i}}{2}\leq\operatorname{diam}(C)+2\cdot\frac{\varepsilon\Delta_{i}}{2}\leq(1+\varepsilon)\cdot\Delta_{i}\.\] Padding property.It remains to show that for \(u,v\in X\), if there exists \(-1\leq i\leq\log_{k}\Phi\) such that \(\frac{\Delta_{i}}{2\rho}=\frac{(1+\varepsilon)\Delta_{i}}{2(1+\varepsilon)\rho}\leq d(u,v)\leq\frac{(1+\varepsilon)\Delta_{i}}{(1+\varepsilon)\rho}=\frac{\Delta_{i}}{\rho}\), then both \(u,v\) are contained in a single cluster in at least one of the partitions \(\{\tilde{\mathcal{P}}^{i}_{1},...,\tilde{\mathcal{P}}^{i}_{\tau}\}\). By the padding property of \(\{\mathcal{P}^{i}_{1},...,\mathcal{P}^{i}_{\tau}\}\), there exists \(1\leq j\leq\tau\) and a cluster \(C\in\mathcal{P}^{i}_{j}\), such that \(B(u,\varepsilon\Delta_{i}),B(v,\varepsilon\Delta_{i})\subseteq C\). We argue that \(u,v\in\tilde{C}\) for the cluster \(\tilde{C}\in\tilde{\mathcal{P}}^{i}_{j}\) created from \(C\) by our construction. By the covering property of nets, there is a net point of \(\mathcal{N}_{i}\) in \(B(u,\varepsilon\Delta_{i})\subseteq C\), thus \(C\) was not removed in the first change, and there is a corresponding cluster \(\hat{C}\in\hat{P}^{i}_{j}\) (note that \(C\subseteq\hat{C}\)). Let \(\tilde{C}_{u},\tilde{C}_{v}\in\tilde{\mathcal{P}}^{i-1}_{j}\) be the clusters containing \(u,v\) respectively. The diameter of \(\tilde{C}_{u},\tilde{C}_{v}\) is bounded by \((1+\varepsilon)\Delta_{i-1}=(1+\varepsilon)\cdot\frac{\Delta_{i}}{k}\leq\frac{(1+\varepsilon)\varepsilon}{8\rho}\Delta_{i}<\varepsilon\Delta_{i}\). Thus, these clusters are contained in \(B(u,\varepsilon\Delta_{i}),B(v,\varepsilon\Delta_{i})\) respectively, and therefore also in \(\hat{C}\). So after the second change, \(u,v\) do not move to any other cluster, and are both in \(\tilde{C}\). This concludes the proof that \(\{\tilde{\mathcal{P}}^{i}_{1},\ldots,\tilde{\mathcal{P}}^{i}_{\tau}\}\) is a \((\tau,(1+\varepsilon)\rho,0,(1+\varepsilon)\Delta_{i})\)-pairwise partition cover. We are now ready to prove the main theorem of this section. Proof of Theorem 4.: Fix \(l\in\{(1+\varepsilon)^{c}\ :\ c\in[0,\log_{1+\varepsilon}k]\}\). Since \((X,d)\) admits a PPCS, for every integer \(i\geq-1\) there exists a \((\tau,\rho,\varepsilon,\Delta_{i})\)-pairwise partition cover \(\{\mathcal{P}^{i}_{1},\ldots,\mathcal{P}^{i}_{\tau}\}\). Apply Lemma 14 to obtain a \((\tau,(1+\varepsilon)\rho,0,(1+\varepsilon)\Delta_{i})\)-pairwise partition cover \(\{\tilde{\mathcal{P}}^{i}_{1},\ldots,\tilde{\mathcal{P}}^{i}_{\tau}\}\) that satisfies both properties described in the lemma. For every \(j\in[\tau]\) we construct a single \(k\)-HST \(T\) from the collection of partitions \(\{\tilde{\mathcal{P}}^{i}_{j}\}_{-1\leq i\leq\log_{k}\Phi}\). There is a bijection from the nodes of \(T\) to the clusters of the partitions. The leaves of \(T\) correspond to the singleton clusters of \(\tilde{\mathcal{P}}^{-1}_{j}\). For each \(0\leq i\leq\log_{k}\Phi\), and each cluster \(C\in\tilde{\mathcal{P}}^{i}_{j}\), create a node \(x=x(C)\) with label \(\Gamma_{x}=(1+\varepsilon)\cdot\Delta_{i}\), and connect \(x\) to all the nodes corresponding to the clusters \(\{C^{\prime}\subseteq C\ :\ C^{\prime}\in\tilde{\mathcal{P}}^{i-1}_{j}\}\) (here we use the fact that this pairwise partition cover is hierarchical).
Since the label of every such \(C^{\prime}\) is \((1+\varepsilon)\cdot\Delta_{i-1}=(1+\varepsilon)\cdot\Delta_{i}/k\), and the distance between every two points in \(C\) is at most \((1+\varepsilon)\cdot\Delta_{i}\), this \(T\) is indeed a dominating \(k\)-HST. We construct \(\tau\) of these \(k\)-HSTs for every \(l\), and the collection of all these is our \(k\)-HST cover for \((X,d)\). The number of \(k\)-HSTs is indeed \(\tau\cdot(1+\log_{1+\varepsilon}k)=O(\frac{\tau}{\varepsilon}\cdot\log k)\), as required. It remains to bound the lightness of each \(T\), and argue about the stretch of this cover. Lightness bound.Now we show that for any \(k\)-HST \(T\) created as above, its lightness is \(O(k\log n)\). Recall that the weight of \(T\) is \(\sum_{x\in T}(\deg(x)-1)\cdot\Gamma_{x}\) (see equation (1)). For any \(0\leq i\leq\log_{k}\Phi\), by construction the sum of degrees of nodes corresponding to clusters of \(\tilde{\mathcal{P}}^{i}_{j}\) is exactly equal to \(|\tilde{\mathcal{P}}^{i-1}_{j}|\). By the first property of the lemma we have that \(|\tilde{\mathcal{P}}^{i-1}_{j}|\leq|\mathcal{N}_{i-1}|\), so \[w(T) = \sum_{x\in T}(\deg(x)-1)\cdot\Gamma_{x}\] \[\leq \sum_{i=0}^{\log_{k}\Phi}|\tilde{\mathcal{P}}^{i-1}_{j}|\cdot \Delta_{i}\] \[\leq k\cdot\sum_{i=0}^{\log_{k}\Phi}|\mathcal{N}_{i-1}|\cdot\Delta_{ i-1}\] Denote \(W=w(MST(X))\). If \(\Phi\geq n^{3}\), we bound separately the lower terms in the sum, \[k\cdot\sum_{i=0}^{\log_{k}(\Phi/n^{3})}|\mathcal{N}_{i-1}|\cdot \Delta_{i-1} \leq k\cdot\sum_{i=0}^{\log_{k}(\Phi/n^{3})}n\cdot l\cdot k^{i}\] \[\leq 2n\cdot k^{2}\cdot(\Phi/n^{3})\] \[\leq 2W\,\] using that \(l\leq k\) and \(\Phi\leq W\). For the remaining terms, we have by Claim 13 that \(|\mathcal{N}_{i}|\cdot\Delta_{i}=O(W)\), therefore \[k\cdot\sum_{i=\max\{0,\log_{k}(\Phi/n^{3})\}}^{\log_{k}\Phi}| \mathcal{N}_{i-1}|\cdot\Delta_{i-1} \leq k\cdot\sum_{i=\max\{0,\log_{k}(\Phi/n^{3})\}}^{\log_{k}\Phi}O(W)\] \[= O(k\cdot\log n\cdot W)\,\] so the lightness of each tree is indeed \(O(k\log n)\). Stretch bound.Fix any \(u,v\in X\), and let \(D=\rho\cdot(1+\varepsilon)\cdot d(u,v)\). Let \(i=\lfloor\log_{k}D\rfloor\), and note that \(k^{i}\leq D<k^{i+1}\), so there exists integer \(0\leq c\leq\log_{1+\varepsilon}k\) such that \(l\cdot k^{i}\leq D<(1+\varepsilon)\cdot l\cdot k^{i}\) (recall that \(l=(1+\varepsilon)^{c}\)). With these choices of \(l\) and \(i\) we get that \[\frac{\Delta_{i}}{2\rho}\leq\frac{\Delta_{i}}{\rho\cdot(1+\varepsilon)}\leq d (u,v)\leq\frac{\Delta_{i}}{\rho}.\] By the padding property of \(\{\tilde{\mathcal{P}}^{i}_{j}\}_{1\leq j\leq\tau}\), there exists \(j\in[\tau]\) and a cluster \(C\in\tilde{\mathcal{P}}^{i}_{j}\) such that \(u,v\in C\). So in the \(k\)-HST \(T\) created from \(\tilde{\mathcal{P}}^{i}_{j}\), there is a node \(x\) corresponding to \(C\) with \(\Gamma_{x}=(1+\varepsilon)\Delta_{i}\), and so \[d_{T}(u,v)\leq(1+\varepsilon)\Delta_{i}\leq\rho\cdot(1+\varepsilon)^{2}\cdot d (u,v)\leq\rho\cdot(1+3\varepsilon)\cdot d(u,v)\.\] ### \(k\)-Hst Cover for Doubling Metrics. The following lemma asserts that in our construction of \(k\)-HST cover described above, every tree has bounded degree. **Lemma 15**.: _If a metric space \((X,d)\) has doubling dimension \(\mathrm{ddim}\), then every \(T\) in the \(k\)-HST cover of Theorem 4 has maximum degree \(O(k/\varepsilon)^{\mathrm{ddim}}\)._ Proof.: Let \(x\in T\) be any node with children \(x_{1},...,x_{t}\). 
The node \(x\) corresponds to a cluster \(\tilde{C}\in\tilde{\mathcal{P}}^{i}_{j}\), and its children to clusters \(\tilde{C}_{1},\ldots,\tilde{C}_{t}\in\tilde{\mathcal{P}}^{i-1}_{j}\) contained in \(\tilde{C}\). Recall that in the partition \(\hat{\mathcal{P}}^{i-1}_{j}\), every cluster contains a net point from an \(\varepsilon\Delta_{i-1}/4\)-net \(\mathcal{N}_{i-1}\). Since every cluster of \(\tilde{\mathcal{P}}^{i-1}_{j}\) was a cluster of \(\hat{\mathcal{P}}^{i-1}_{j}\), the clusters \(\tilde{C}_{1},\ldots,\tilde{C}_{t}\) correspond to different net points. The maximal distance between any two such net points is \[\mathrm{diam}(\tilde{C})+2\varepsilon\Delta_{i-1}/4<2\Delta_{i}\,\] so all these net points are contained in a ball of radius \(2\Delta_{i}\). Since \(\Delta_{i-1}=\Delta_{i}/k\), by the packing lemma (Lemma 7) we conclude that \(t\leq O(k/\varepsilon)^{\mathrm{ddim}}\). Filtser and Le [11] constructed a PPCS for doubling metrics: **Lemma 16** ([11]).: _Every metric space \((X,d)\) with doubling dimension \(\mathrm{ddim}\) admits an \((\varepsilon^{-O(\mathrm{ddim})},1+\varepsilon,\varepsilon)\)-pairwise partition cover scheme for any \(\varepsilon\in(0,1/16)\)._ By applying Theorem 4 (and using Lemma 15), we conclude **Corollary 5**.: _For any \(\varepsilon\in(0,1/16)\), every \(n\)-point metric space \((X,d)\) with doubling dimension \(\mathrm{ddim}\) admits an \(O(\varepsilon^{-1}\cdot\log n)\)-light \((\varepsilon^{-O(\mathrm{ddim})},1+\varepsilon)\)-\(\frac{16}{\varepsilon}\)-HST cover, furthermore, the maximum degree of any tree in the cover is \(\varepsilon^{-O(\mathrm{ddim})}\)._ Proof.: Using Lemma 16, consider a \((\varepsilon^{-O(\mathrm{ddim})},1+\varepsilon,\varepsilon)\)-PPCS for \(X\). Fix \(k=\frac{16}{\varepsilon}\). By Theorem 4, \(X\) admits a \(O(\varepsilon^{-1}\cdot\log n)\)-light \((\varepsilon^{-O(\mathrm{ddim})},1+O(\varepsilon))\)-\(k\)-HST cover. Furthermore, by Lemma 15, every HST in the cover has maximum degree \(O(\frac{k}{\varepsilon})^{\mathrm{ddim}}=\varepsilon^{-O(\mathrm{ddim})}\). The corollary follows by rescaling \(\varepsilon\) accordingly. ## 6 Reliable Spanners for Metric Spaces We begin this section by proving a meta theorem, which given a light \(k\)-HST cover, constructs an oblivious light reliable spanner. In the following subsections, we will apply this meta-theorem to obtain the main results of the paper. **Theorem 6** (Light Reliable Spanner from Light HST Cover).: _Consider an \(n\) point metric space \((X,d)\) that admits \(\psi\)-light \((\tau,\rho)\)-\(k\)-HST cover \(\mathcal{T}\), for some \(k>1\). Then for every parameter \(\nu\in(0,1/6)\), \(X\) admits an oblivious \(\nu\)-reliable \((2+\frac{2}{k-1})\cdot\rho\)-spanner of size \(n\cdot\tilde{O}\left(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right))\) and lightness \(\psi\cdot\tilde{O}(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2})\)._ Proof.: For every \(k\)-HST \(T\in\mathcal{T}\), using Theorem 1 we construct a \(\nu^{\prime}\)-reliable spanner \(H_{T}\) for \(T\) for \(\nu^{\prime}=\frac{\nu}{\tau}\). The final spanner we return is \(H=\cup_{T\in\mathcal{T}}H_{T}\). 
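The union construction just described is immediate to phrase as code (a minimal sketch under our own naming, reusing the `build_hst_spanner` routine sketched earlier; each tree of the cover is given failure parameter \(\nu/\tau\), mirroring the proof).

```python
def reliable_spanner_from_cover(cover, nu):
    """Union of per-tree reliable spanners, one for each k-HST in the cover.

    `cover` is a list of (children, root) pairs describing the trees of the
    k-HST cover; the union of the per-tree edge sets is returned.
    """
    tau = len(cover)
    H = set()
    for children, root in cover:
        H |= build_hst_spanner(children, root, nu / tau)
    return H
```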
By Theorem 1, the size of the spanner is \(|H|=\tau\cdot n\cdot\tilde{O}(\nu^{\prime-1}\cdot\log\log n)^{2}=n\cdot\tilde{O}\left(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)\), while the lightness is \[w(H)\leq\sum_{T\in\mathcal{T}}w(H_{T}) \leq\sum_{T\in\mathcal{T}}\tilde{O}(\nu^{\prime-1}\cdot\log\log n)^{2}\cdot w(MST(T))\] \[\leq\psi\cdot\tilde{O}(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2})\cdot w(MST(X))\] Consider an attack \(B\subseteq X\). For every spanner \(H_{T}\), let \(B_{T}^{+}\) be the respective superset, and denote \(B^{+}=\cup_{T\in\mathcal{T}}B_{T}^{+}\). It holds that \[\mathbb{E}\left[\left|B^{+}\setminus B\right|\right]\leq\sum_{T\in\mathcal{T}}\mathbb{E}\left[\left|B_{T}^{+}\setminus B\right|\right]\leq\tau\cdot\nu^{\prime}\cdot|B|=\nu\cdot|B|\.\] Finally, consider a pair of points \(u,v\notin B^{+}\). There is some \(k\)-HST \(T\in\mathcal{T}\) such that \(d_{T}(u,v)\leq\rho\cdot d_{X}(u,v)\). As \(u,v\notin B_{T}^{+}\), it holds that \[d_{H\setminus B}(u,v)\leq d_{H_{T}\setminus B}(u,v)\leq(2+\frac{2}{k-1})\cdot d_{T}(u,v)\leq(2+\frac{2}{k-1})\cdot\rho\cdot d_{X}(u,v)\.\] By using Theorem 2 instead of Theorem 1 in the proof of Theorem 6 (and keeping all the rest intact) we obtain: **Corollary 7**.: _Consider an \(n\)-point metric space \((X,d)\) that admits a \(\psi\)-light \((\tau,\rho)\)-\(k\)-HST cover \(\mathcal{T}\), where all the trees in \(\mathcal{T}\) have maximum degree \(\delta\). Then for every parameter \(\nu\in(0,1/6)\), \(X\) admits an oblivious \(\nu\)-reliable \((1+\frac{2}{k-1})\cdot\rho\)-spanner of size \(n\cdot\delta\cdot\tilde{O}\left(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)\) and lightness \(\psi\cdot\delta\cdot\tilde{O}(\tau^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2})\)._ ### Doubling Metrics By applying Corollary 7 on the HST cover of Corollary 5 (and rescaling \(\varepsilon\)) we obtain: **Corollary 8**.: _For any \(\varepsilon,\nu\in(0,1/16)\), every \(n\)-point metric space \((X,d_{X})\) with doubling dimension \(\mathrm{ddim}\) admits a \(\nu\)-reliable \((1+\varepsilon)\)-spanner with size \(n\cdot\varepsilon^{-O(\mathrm{ddim})}\cdot\tilde{O}(\nu^{-1}\cdot\log\log n)^{2}\), and lightness \(\varepsilon^{-O(\mathrm{ddim})}\cdot\tilde{O}(\nu^{-2}\cdot\log n)\)._ Note that the shortest path metric of the path graph has doubling dimension \(1\). Hence the lower bound of Theorem 21 applies. In particular, for constant \(\mathrm{ddim}\) and \(\varepsilon\), Corollary 8 is tight up to lower order terms. ### General Metric Spaces In this subsection we construct oblivious light reliable spanners for general metric spaces. We begin with the pairwise partition cover of Filtser and Le [11]. **Lemma 17** ([11]).: _Every \(n\)-point metric space \((X,d_{X})\) admits an \((O(n^{1/t}\log n),2t+\varepsilon,\frac{\varepsilon}{2t(2t+\varepsilon)})\)-PPCS for any \(\varepsilon\in[0,1]\) and integer \(t\geq 1\)._ By applying Theorem 4, we conclude **Corollary 9**.: _Every \(n\)-point metric space \((X,d_{X})\) admits a \(O(\varepsilon^{-1}\cdot t^{3}\cdot\log n)\)-light \(\left(n^{1/t}\cdot\log n\cdot\tilde{O}(\frac{t^{2}}{\varepsilon}),2t+\varepsilon\right)\)-\(\frac{200\cdot t^{3}}{\varepsilon}\)-HST cover for any \(\varepsilon\in(0,1/3)\) and integer \(t\geq 1\)._ Proof.: Using Lemma 17, consider a \((O(n^{1/t}\log n),2t+\varepsilon,\frac{\varepsilon}{2t(2t+\varepsilon)})\)-PPCS for \(X\). Fix \(k=\frac{8\cdot(2t+\varepsilon)}{\varepsilon/(2t(2t+\varepsilon))}=\frac{16t\cdot(2t+\varepsilon)^{2}}{\varepsilon}\geq\frac{64t^{3}}{\varepsilon}\).
Note that \(k=O(\varepsilon^{-1}\cdot t^{3})\). By Theorem 4 (note that indeed \(\frac{\varepsilon}{2t(2t+\varepsilon)}<1/12\)), \(X\) admits a \(\phi\)-light \((\tau,\rho)\)-\(k\)-HST cover for \[\phi= O(k\log n)=O(\varepsilon^{-1}\cdot t^{3}\cdot\log n)\] \[\tau= O(\frac{n^{1/t}\log n}{\frac{\varepsilon}{2t(2t+\varepsilon)}} \cdot\log k)=O(n^{1/t}\cdot\varepsilon^{-1}\cdot t^{2}\cdot\log n\cdot\log(t/ \varepsilon))=n^{1/t}\cdot\log n\cdot\tilde{O}(t^{2}/\varepsilon)\] \[\rho= (2t+\varepsilon)(1+\frac{3\varepsilon}{2t(2t+\varepsilon)})=2t+ \varepsilon+\frac{3\varepsilon}{2t}<2t+3\varepsilon\.\] The corollary follows by rescaling \(\varepsilon\) by \(3\), and noting that every \(k\)-HST is also a \(\frac{64t^{3}}{\varepsilon}\)-HST. By applying Theorem 6 on the HST cover from Corollary 9 we obtain: **Corollary 10**.: _For any parameters \(\nu\in(0,1/6)\), \(t\in\mathbb{N}\), \(\varepsilon\in(0,1/2)\), any metric space admits an oblivious \(\nu\)-reliable \((12t+\varepsilon)\)-spanner with size \(\tilde{O}\left(n^{1+1/t}\cdot\nu^{-2}\cdot\varepsilon^{-3}\right)\) and lightness \(n^{1/t}\cdot\tilde{O}(\nu^{-2}\cdot\varepsilon^{-4})\cdot\operatorname{ polylog}(n)\)._ Proof.: We can assume that \(t\leq\log n\), as taking larger \(t\) will not reduce size or lightness. Using Theorem 6 on the \(k\)-HST cover from Corollary 9, we obtain an oblivious \(\nu\)-reliable spanner with stretch \((2+\frac{2}{200\cdot t^{3}/\varepsilon})\cdot(2t+\varepsilon)\leq 4t+3\varepsilon\), size \[n\cdot\tilde{O}\left(\left(n^{1/t}\cdot\log n\cdot\tilde{O}(t^{2}/\varepsilon )\right)^{3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)=\tilde{O}\left(n^{1+3/t }\cdot\nu^{-2}\cdot\varepsilon^{-3}\right)\.\] and lightness \[\varepsilon^{-1}\!\cdot\!t^{3}\!\cdot\!\log n\!\cdot\!\tilde{O}\left(\left(n^ {1/t}\cdot\log n\cdot\tilde{O}(t^{2}/\varepsilon)\right)^{3}\cdot(\nu^{-1} \cdot\log\log n)^{2}\right)=n^{3/t}\!\cdot\!\tilde{O}(\nu^{-2}\!\cdot\! \varepsilon^{-4})\!\cdot\!\operatorname{polylog}(n)\,\] where in the equality we assumed \(\nu,\varepsilon\geq\frac{1}{n}\) (as trivially every spanner has size and lightness \(O(n^{2})\)). The corollary follows by replacing \(t\) with \(3t\) (and scaling \(\varepsilon\) accordingly). For stretch \(t=\log n\), the lightness of Corollary 10 is \(\approx\nu^{-2}\cdot\operatorname{polylog}(n)\), while by Theorem 21, \(\Omega(\nu^{-2}\cdot\log n)\) lightness is necessary (even for preserving only the connectivity of the path metric). In Appendix B (see Corollary 22) we construct a light reliable \(O(\log n)\)-spanner with lightness \(\tilde{O}(\nu^{-2}\cdot\log^{4}n)\). ### Minor Free Graphs In this subsection we use Corollary 9 to obtain a reliable \((4+\varepsilon)\)-spanner for minor free graphs. Later, in Theorem 18 we will improve the stretch to a near optimal \(2+\varepsilon\). Nevertheless, if the goal is to minimize lightness, the result in this subsection is better. By applying Theorem 4 on the PCSS of Corollary 3 we conclude **Corollary 11**.: _Let \(G\) be an \(n\)-vertex graph excluding a fixed minor. For any \(\varepsilon\in(0,1/12)\), \(G\) admits a \(O(\frac{\log n}{\varepsilon})\)-light \(\left(\log n\cdot\tilde{O}(\varepsilon^{-2}),2+\varepsilon\right)\!\cdot\! \frac{32}{\varepsilon}\)-HST cover._ Proof.: Fix \(k=\frac{32}{\varepsilon}\), and apply Theorem 4 on the PCSS of Corollary 3. 
As a result we obtain a \(O(\frac{\log n}{\varepsilon})\)-light \((\tau,\rho)\)-\(k\)-HST cover for \[\tau =O(\frac{\varepsilon^{-1}\log n}{\varepsilon}\log k)=\log n\cdot\tilde{O}(\varepsilon^{-2})\] \[\rho =\frac{2}{1-6\varepsilon}\cdot(1+3\varepsilon)=2+O(\varepsilon)\.\] The corollary follows by rescaling \(\varepsilon\) accordingly (and noting that it will still be a \(\frac{32}{\varepsilon}\)-HST cover). By applying Theorem 6 on the HST cover from Corollary 11 we obtain: **Corollary 12**.: _Let \(G\) be an \(n\)-vertex graph excluding a fixed minor. For any \(\varepsilon,\nu\in(0,1/20)\), \(G\) admits an oblivious \(\nu\)-reliable \((4+\varepsilon)\)-spanner with size \(\tilde{O}\left(n\cdot\varepsilon^{-6}\cdot\nu^{-2}\right)\) and lightness \(\tilde{O}(\varepsilon^{-7}\cdot\log^{4}n\cdot\nu^{-2})\)._ Proof.: Using Theorem 6 upon the \(k\)-HST cover from Corollary 11, we obtain a \(\nu\)-reliable spanner with stretch \((2+\frac{10}{32/\varepsilon})\cdot(2+\varepsilon)<4+3\varepsilon\), size \(n\cdot\tilde{O}\left(\left(\frac{\log n}{\varepsilon^{2}}\right)^{3}\cdot\nu^{-2}\right)=\tilde{O}\left(n\cdot\varepsilon^{-6}\cdot\nu^{-2}\right)\), and lightness \(O(\frac{\log n}{\varepsilon})\cdot\tilde{O}((\varepsilon^{-2}\cdot\log n)^{3}\cdot\nu^{-2})=\tilde{O}(\varepsilon^{-7}\cdot\log^{4}n\cdot\nu^{-2})\). The corollary follows by rescaling \(\varepsilon\) accordingly. ### Doubling Metric of High Dimension Consider a metric space with a moderately large doubling dimension \(\operatorname{ddim}\), e.g. \(\sqrt{\log n}\). The reliable spanner from Corollary 8 has exponential dependence on the dimension in both size and lightness, which might be too large. Nevertheless, such a metric space is much more structured than a general metric space (that has doubling dimension \(O(\log n)\)), and thus we expect to be able to construct better spanners for such graphs (compared to Corollary 10). Such a phenomenon was previously shown for light spanners [11], and for reliable sparse spanners [10]. We begin by observing that a PPCS for such metric spaces follows from the sparse covers of Filtser [10]. **Lemma 18** ([10] implicit).: _Every \(n\)-point metric space \((X,d_{X})\) with doubling dimension \(\operatorname{ddim}\) admits a \((2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t,t,\frac{1}{t})\)-PPCS, for any \(\Omega(1)\leq t\leq\operatorname{ddim}\)._ Proof.: Fix the scale parameter \(\Delta>0\). Filtser [10] constructed a collection \(\mathbb{P}=\{\mathcal{P}_{1},\ldots,\mathcal{P}_{s}\}\) of \(s=2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t\), \(\Delta\)-bounded partitions, such that every ball of radius \(R=\frac{2}{t}\cdot\Delta\) is fully contained in some cluster, in one of the partitions. We argue that \(\mathbb{P}\) is a \((2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t,t,\frac{1}{t})\)-PPCS. Consider two points \(x,y\) such that \(d_{X}(x,y)\leq\frac{1}{2}R=\frac{\Delta}{t}\). There is some partition \(\mathcal{P}_{i}\in\mathbb{P}\), and a cluster \(C\in\mathcal{P}_{i}\) such that \(B_{X}(x,R)\subseteq C\). For every point \(z\in B_{X}(y,\frac{1}{2}R)\), it holds that \(d_{X}(x,z)\leq d_{X}(x,y)+d_{X}(y,z)\leq\frac{1}{2}R+\frac{1}{2}R=R\), implying \(z\in B_{X}(x,R)\), and in particular \(B_{X}(y,\frac{1}{2}\cdot R)\subseteq C\). Similarly \(B_{X}(x,\frac{1}{2}\cdot R)\subseteq C\). It follows that \(\mathbb{P}\) is a \((2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t,t,\frac{1}{t})\)-PPCS as required.
By applying Theorem 4, we conclude **Corollary 13**.: _Every \(n\)-point metric space \((X,d_{X})\) with doubling dimension \(\operatorname{ddim}\) admits an \(O(t^{2}\cdot\log n)\)-light \(\left(2^{O(\frac{\operatorname{ddim}}{t})}\cdot\operatorname{ddim}\cdot t\cdot\log t,t\right)\)-\(\frac{t^{2}}{2}\)-HST cover, for any \(\Omega(1)\leq t\leq\operatorname{ddim}\)._ Proof.: Fix \(k=8t^{2}\), \(\varepsilon=\frac{1}{12}\), and apply Theorem 4 on the PPCS of Lemma 18. As a result we obtain a \(O(t^{2}\cdot\log n)\)-light \((\tau,\rho)\)-\(k\)-HST cover for \[\tau =O(\frac{2^{O(\frac{\mathrm{ddim}}{t})}\cdot\mathrm{ddim}\cdot t}{\varepsilon}\cdot\log k)=2^{O(\frac{\mathrm{ddim}}{t})}\cdot\mathrm{ddim}\cdot t\cdot\log t\] \[\rho =\frac{2}{1-6\varepsilon}\cdot t=4t\.\] The corollary follows by rescaling \(t\) accordingly. By applying Theorem 6 on the HST cover from Corollary 13 we obtain: **Corollary 14**.: _Every \(n\)-point metric space \((X,d_{X})\) with doubling dimension \(\mathrm{ddim}\) admits an oblivious \(\nu\)-reliable \(t\)-spanner with size \(n\cdot\tilde{O}\left(\nu^{-2}\right)\cdot 2^{O(\frac{\mathrm{ddim}}{t})}\cdot\mathrm{poly}(\mathrm{ddim},\log\log n)\) and lightness \(2^{O(\frac{\mathrm{ddim}}{t})}\cdot\tilde{O}(\log n\cdot\nu^{-2})\cdot\mathrm{poly}(\mathrm{ddim})\), for any \(\Omega(1)\leq t\leq\mathrm{ddim}\)._ Proof.: Using Theorem 6 upon the \(k\)-HST cover from Corollary 13, we obtain a \(\nu\)-reliable spanner with stretch \((2+\frac{20}{t^{2}})\cdot t\), size \(n\cdot\tilde{O}\left(\nu^{-2}\right)\cdot 2^{O(\frac{\mathrm{ddim}}{t})}\cdot\mathrm{poly}(\mathrm{ddim},\log\log n)\), and lightness \(2^{O(\frac{\mathrm{ddim}}{t})}\cdot\tilde{O}(\log n\cdot\nu^{-2})\cdot\mathrm{poly}(\mathrm{ddim})\). The corollary follows by rescaling \(t\) accordingly. A particularly interesting choice of parameters is \(t=\mathrm{ddim}\), where we will get an oblivious \(\nu\)-reliable ddim-spanner of size \(n\cdot\tilde{O}\left(\nu^{-2}\right)\cdot\mathrm{poly}(\mathrm{ddim},\log\log n)\), and lightness \(\tilde{O}(\log^{2}n\cdot\nu^{-2})\cdot\mathrm{poly}(\mathrm{ddim})\). ### General Ultrametric A major part of this paper is devoted to constructing light reliable spanners for \(k\)-HST. However, Theorem 1 requires \(k>1\), and the stretch grows as \(k\) is closer to \(1\). What about the general case of \(1\)-HST (a.k.a. ultrametric)? A stretch of \(8\) can be obtained trivially by first embedding the ultrametric into a \(2\)-HST with distortion \(2\) (see [1]). However, we would like to preserve the near optimal stretch of \(2+\varepsilon\). In this subsection we provide an answer to this question. We begin by constructing a \(k\)-HST cover for ultrametrics. **Lemma 19**.: _For every \(\varepsilon\in(0,1)\), every ultrametric admits an \(\varepsilon^{-1}\)-light \(\left(O(\varepsilon^{-1}\log\frac{1}{\varepsilon}),1+\varepsilon\right)\)-\(\frac{1}{\varepsilon}\)-HST cover._ Proof.: Consider a \(1\)-HST \(T\). Fix \(N=\left\lceil\log_{1+\varepsilon}\frac{1}{\varepsilon}\right\rceil=O(\varepsilon^{-1}\log\frac{1}{\varepsilon})\). For every \(i\in\{0,1,\ldots,N\}\), let \(T_{i}\) be the HST \(T\), where we change the label of every internal node \(x\), from \(\Gamma_{x}\) to \((1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}\), for \(j\in\mathbb{Z}\) such that \[(1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j-1}}<\Gamma_{x}\leq(1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}\.\] Finally, contract all the internal nodes that have the same label as their father.
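The relabelling step just described is mechanical, so a small illustration may be useful. The following Python sketch is only illustrative and is not part of the construction above; it assumes a toy representation of an ultrametric as nested dictionaries, where an internal node carries a `label` \(\Gamma_{x}\) and a list of `children`, and a leaf is any node without children. It builds the trees \(T_{0},\ldots,T_{N}\) by rounding every label up to the grid \(\{(1+\varepsilon)^{i}/\varepsilon^{j}:j\in\mathbb{Z}\}\) and contracting internal nodes whose rounded label coincides with that of their father.

```python
import math

def rounded_label(gamma, eps, i):
    # Smallest value of the form (1+eps)^i / eps^j (j an integer) that is >= gamma,
    # i.e. the unique j with (1+eps)^i / eps^(j-1) < gamma <= (1+eps)^i / eps^j.
    base = (1.0 + eps) ** i
    j = math.ceil(math.log(gamma / base) / math.log(1.0 / eps))
    return base * (1.0 / eps) ** j

def relabel(node, eps, i):
    # Copy of the (sub)tree with labels rounded for T_i, contracting a child
    # into its father whenever the two rounded labels coincide.
    if "children" not in node:                      # leaf: keep as is
        return dict(node)
    new = {"label": rounded_label(node["label"], eps, i), "children": []}
    for child in (relabel(c, eps, i) for c in node["children"]):
        if "children" in child and child["label"] == new["label"]:
            new["children"].extend(child["children"])   # contraction step
        else:
            new["children"].append(child)
    return new

def hst_cover(root, eps):
    # The N+1 dominating (1/eps)-HSTs T_0, ..., T_N from the proof of Lemma 19.
    N = math.ceil(math.log(1.0 / eps) / math.log(1.0 + eps))
    return [relabel(root, eps, i) for i in range(N + 1)]
```

The contraction is applied bottom-up, so a chain of internal nodes that collapse to the same grid value merges into a single internal node of the resulting tree.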
As a result, we obtain a dominating \(\frac{1}{\varepsilon}\)-HST \(T_{i}\), where the distance between every two vertices is increased by at most a factor of \(\frac{1}{\varepsilon}\). In particular, \(T_{i}\) has weight at most \(\frac{1}{\varepsilon}\) times larger than \(T\). It remains to show that the distance between every pair of leaves is preserved up to a factor of \(1+\varepsilon\) in one of the \(\frac{1}{\varepsilon}\)-HST's in the cover. Consider a pair \(u,v\) with lca \(x\), and let \(i\in\{0,\ldots,N\}\), \(j\in\mathbb{Z}\) such that \((1+\varepsilon)^{i-1}\cdot\frac{1}{\varepsilon^{j}}<\Gamma_{x}\leq(1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}\). In the HST \(T_{i}\), the label of the lca of \(u,v\) will be changed to \((1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}\), and hence \(d_{T_{i}}(u,v)=(1+\varepsilon)^{i}\cdot\frac{1}{\varepsilon^{j}}<(1+\varepsilon)\cdot\Gamma_{x}=(1+\varepsilon)\cdot d_{T}(u,v)\). By applying Theorem 6 on the HST cover from Lemma 19 (and scaling \(\varepsilon\) accordingly) we obtain: **Theorem 15**.: _For any parameters \(\nu,\varepsilon\in(0,1/12)\), every ultrametric (\(1\)-HST) \(T\) admits an oblivious \(\nu\)-reliable \((2+\varepsilon)\)-spanner of size \(n\cdot\tilde{O}\left(\varepsilon^{-3}\cdot(\nu^{-1}\cdot\log\log n)^{2}\right)\) and lightness \(\tilde{O}(\varepsilon^{-4}\cdot(\nu^{-1}\cdot\log\log n)^{2})\)._ ## 7 Light Reliable Spanner for the Path Graph In this section we present our hop-bounded oblivious reliable \(1\)-spanner for the weighted path graph. Let \(P_{n}=([n],E)\) be a weighted path on \(n\) vertices and let \(\nu\in(0,1)\), \(h\in[\log n]\) be two parameters of the construction. The parameter \(\nu\) is the input reliability parameter, while the parameter \(h\) governs the tradeoff between the hop-bound of the spanner and its size and lightness. As previous works [12, 11] were concerned with the hop parameter (as in some scenarios it governs stretch), we prove Theorem 16 for a general hop parameter \(h\). **Theorem 16**.: _For any parameters \(\nu\in(0,1)\), and \(h\in[\log n]\), any weighted path graph \(P_{n}\) admits an oblivious \(\nu\)-reliable \((2h+1)\)-hop \(1\)-spanner with lightness \(O\left(hn^{2/h}\cdot\left(\frac{\log(h/\nu)}{\nu}\right)^{2}\right)\) and size \(O\left(n^{1+1/h}\cdot\frac{\log(h/\nu)}{\nu}\right)\)._ By setting \(h=\lfloor(\log n-1)/2\rfloor\), we get the following corollary: **Corollary 17**.: _For any weighted path graph \(P_{n}\), and parameter \(\nu\in(0,1)\), there is an oblivious \(\nu\)-reliable, \(\log n\)-hop \(1\)-spanner with lightness \(\tilde{O}(\nu^{-2}\cdot\log n)\) and size \(O\left(\nu^{-1}\cdot n\cdot\log\left(\frac{\log n}{\nu}\right)\right)\)._ ### Construction Let \([n]=V_{0}\supseteq V_{1}\supseteq\cdots\supseteq V_{h}\) be a hierarchy of randomly selected sets, such that for all \(1\leq i\leq h\), every vertex of \(V_{i-1}\) is taken into \(V_{i}\) independently with probability \(p=n^{-1/h}\). Let \(\ell=c\cdot\nu^{-1}\cdot\ln\left(\frac{h}{\nu}\right)\) for some constant \(c\) to be fixed later. Assume w.l.o.g. that \(\ell\) is an integer. For every index \(0\leq i<h\) and \(x\in V_{i}\), let \(x\leq u_{1}<...<u_{\ell}=u\) be the first \(\ell\) vertices of \(V_{i+1}\) that lie to the right of \(x\), and similarly \(x\geq v_{1}>...>v_{\ell}=v\) the first \(\ell\) vertices of \(V_{i+1}\) that lie to the left of \(x\).
If there are less than \(\ell\) such vertices to the right (resp., left), we simply define \(u=v_{n}\) as the last vertex (resp., \(v=v_{1}\) as the first vertex). Now, for every \(y\in[v,u]\cap V_{i}\), add the edge \(\{x,y\}\) to the spanner \(H\). In other words, we connect \(x\in V_{i}\) to every vertex of \(V_{i}\) that is not farther than the first \(\ell\) neighbors of \(x\) in \(V_{i+1}\) (in either direction). Finally, vertices in \(V_{h}\) connect to all other vertices in \(V_{h}\). Denote by \(E_{i}\) the edges we added at step \(i\) to the spanner. ### Analysis Size analysis.Take \(0\leq i<h\), and condition on any fixed choice of \(V_{i}\). Consider any vertex \(x\in V_{i}\), and arrange the vertices of \(V_{i}\) that lie to the right of \(x\) in increasing order. For each such vertex we throw an independent coin with probability \(p\) for success (meaning it goes to \(V_{i+1}\) with this probability). Note that the number of edges \(x\) adds to the right in step \(i\) is essentially the number of coins we throw until the \(\ell\)-th success. (In fact, the number of edges can only be smaller if there are less than \(\ell\) successes when we run out of vertices in \(V_{i}\).) The expected number of trials until we see \(\ell\) successes is \(\ell/p\). The same argument holds for the left side edges. This bound holds for any choice of \(V_{i}\). Note that for \(0\leq i\leq h\), \(E[|V_{i}|]=np^{i}\), so the expected number of edges added in step \(i\) for \(0\leq i<h\) is at most \[np^{i}\cdot 2\ell/p=2np^{i-1}\cdot\ell\,\] and over the first \(h\) steps it is at most \[2n\ell\cdot\sum_{i=0}^{h-1}p^{i-1}=O(n\ell/p)=O(n^{1+1/h}\cdot\ell)\,\] using that \(p=n^{-1/h}\leq 1/2\). For \(i=h\) we add at most \(|V_{h}|^{2}\) edges. In expectation: \[\mathbb{E}[|V_{h}|^{2}]=\sum_{i}\Pr[v_{i}\in V_{h}]+\sum_{i\neq j}\Pr[v_{i},v_ {j}\in V_{h}]=n\cdot p^{h}+n\cdot(n-1)\cdot p^{2h}<2. \tag{7}\] We conclude that the expected size of the spanner is \(O\left(n^{1+1/h}\cdot\ell\right)\). Lightness Analysis.Fix any edge \(\{u,v\}\in E(P_{n})\), we say that a spanner edge \(\{x,y\}\)_crosses_ the edge \(\{u,v\}\) if \(x\leq u\) and \(v\leq y\). Let \(c(u,v)\) denote the number of times \(\{u,v\}\) is crossed. Observe that the weight of each spanner edge is equal to the sum of weights of edges in \(P_{n}\) that it crosses, therefore, the total weight of the spanner is \[\sum_{e\in P(n)}c(e)\cdot w(e)\.\] Thus, it suffices to show that for every edge \(e\in E(P_{n})\): \[\mathbb{E}[c(e)]\leq O(hn^{2/h}\cdot\ell^{2})\.\] To this end, fix an edge \(\{u,v\}\in E(P_{n})\), and an index \(0\leq i<h\). We will bound the expected number of edges in \(E_{i}\) that cross \(\{u,v\}\). Condition on any fixed choice of \(V_{i}\). Note that an edge \(\{x,y\}\) with \(x,y\in V_{i}\), \(x\leq u\) and \(y\geq v\) is added to \(E_{i}\) by \(x\) iff there are less than \(\ell\) vertices of \(V_{i+1}\) in the interval \([x:y)\). Consider the vertices of \(V_{i}\) from \(u\) to the left in decreasing order, and similarly to the above lemma, let \(X\) be a random variable counting the number of coins (with probability for success) we throw until getting \(\ell\) successes. Denote by \(Y\) the symmetric random variable, when considering vertices of \(V_{i}\) from \(v\) to the right, in increasing order. Then observe that at most \(X\cdot Y\) edges of \(E_{i}\) cross \(\{u,v\}\). 
Since \(X,Y\) are independent, we have that \[\mathbb{E}[X\cdot Y]=\mathbb{E}[X]\cdot\mathbb{E}[Y]\leq(\ell/p)^{2}\.\] By (7), the expected number of edges in \(E_{h}\) is bounded by \(2\), so each edge of \(P_{n}\) is expected to be crossed at most twice by edges in \(E_{h}\). Overall, when considering all the \(h+1\) levels, for each \(e\in E(P_{n})\) \[\mathbb{E}[c(e)]\leq O(h\cdot\ell^{2}/p^{2})=O\left(h\cdot n^{2/h}\cdot\ell^{ 2}\right)\,\] We conclude that the expected lightness of the spanner is \(O\left(h\cdot n^{2/h}\cdot\ell^{2}\right)\). Stretch and hop-bound analysis.We say a path \(p=(v_{0},\ldots,v_{k})\) is monotone if it is either monotone increasing: \(v_{0}\leq\cdots\leq v_{k}\), or monotone decreasing: \(v_{0}\geq\cdots\geq v_{k}\). The following definition is crucial for our analysis of which vertices survive an attack \(B\), and which will be added to \(B^{+}\). **Definition 20**.: _We say a monotone increasing (resp. decreasing) path \(p=(v_{0},\ldots,v_{k})\) of the spanner \(H\) is usable for \(v_{0}\) if the following holds._ 1. _For every_ \(0\leq i\leq k\)_,_ \(v_{i}\in V_{i}\)_._ 2. _For every_ \(0\leq i<k\)_, if_ \(v_{i}\neq v_{i+1}\)_, then_ \(\{v_{i},v_{i+1}\}\in E_{i}\)_._ 3. \(v_{k}\) _is connected in_ \(H\) _to all vertices in_ \(V_{k}\cap[v_{k}:n]\) _(resp._ \(V_{k}\cap[1:v_{k}]\)_)_ _We say a vertex \(v\) is safe w.r.t. an attack \(B\subseteq V\), if it has a monotone increasing usable path and a monotone decreasing usable path which are both disjoint from the attack \(B\)._ The following lemma asserts that the spanner contains a shortest path that is not damaged by the attack (also with a bounded number of hops) between safe vertices. **Lemma 21**.: _If \(u,v\in[n]\) are safe w.r.t. an attack \(B\), then the spanner contains a \((2h+1)\)-hop monotone path between \(u,v\) that is disjoint from \(B\)._ Proof.: Assume w.l.o.g. that \(u<v\) and let \((u=u_{0},\ldots,u_{k})\) be a _usable_ monotone increasing path of \(u\) and \((v=v_{0},\ldots,v_{j})\) a monotone decreasing _usable_ path of \(v\). Additionally, assume w.l.o.g. that \(k\leq j\). If \(u_{k}\leq v_{k}\), then by item 3, \(u_{k}\) is connected to every vertex in \([u_{k}:n]\cap V_{k}\), in particular the spanner contains the edge \(\{u_{k},v_{k}\}\). Thus, we may take the monotone path \(u_{0},\ldots,u_{k},v_{k},\ldots,v_{0}\). Otherwise, there exists \(i<k\) s.t. \(u_{i}<v_{i}\) and \(u_{i+1}\geq v_{i+1}\). Recall that by our spanner construction, \(u_{i}\) is also connected to all the vertices \([u_{i}:u_{i+1}]\cap V_{i}\), and \(v_{i}\) is connected to all the vertices \([v_{i+1}:v_{i}]\cap V_{i}\). If \(v_{i}\leq u_{i+1}\) then \(v_{i}\in[u_{i}:u_{i+1}]\), and we may use the monotone path \(u_{0},\ldots,u_{i},v_{i},\ldots,v_{0}\). Else, \(u_{i+1}<v_{i}\), therefore \(u_{i+1}\in[v_{i+1}:v_{i}]\), and as \(u_{i+1}\in V_{i}\) as well, we have the monotone path \(u_{0},\ldots,u_{i+1},v_{i},\ldots,v_{0}\). It remains to bound the number of hops. Note that by item 1, a _usable_ path contains at most \(h\) edges, and every \(u-v\) path we considered here is a concatenation of (a prefix of) two such paths, so the number of edges used is at most \(2h+1\) Reliability analysis.Let \(B\) be an oblivious attack. For any spanner \(H\) in the support of the distribution, the faulty extension \(B^{+}:=B^{+}_{H}\) will consist of \(B\) and all the vertices \(v\) that are not _safe_. Recall that the attack is oblivious to our choice of the random sets \(V_{i}\). 
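Before analysing the failure probability of individual vertices, it may help to see the construction at the start of this section spelled out operationally. The Python sketch below is only illustrative (the function and variable names are ours, and the edge set is returned as plain pairs rather than a graph object): it samples the hierarchy \(V_{0}\supseteq\cdots\supseteq V_{h}\) with per-level survival probability \(p=n^{-1/h}\), connects each \(x\in V_{i}\) to every vertex of \(V_{i}\) lying no farther than its \(\ell\)-th neighbor in \(V_{i+1}\) on either side, and makes the top level \(V_{h}\) a clique.

```python
import math
import random

def sample_path_spanner(n, h, nu, c=12, seed=0):
    # Illustrative sampler for the construction of the hop-bounded path spanner.
    rng = random.Random(seed)
    p = n ** (-1.0 / h)
    ell = max(1, math.ceil((c / nu) * math.log(h / nu)))   # ell = c * ln(h/nu) / nu
    levels = [list(range(1, n + 1))]                       # V_0 = [n]
    for _ in range(h):                                     # V_{i+1} subsampled from V_i
        levels.append([z for z in levels[-1] if rng.random() < p])
    edges = set()
    for i in range(h):
        Vi, Vnext = levels[i], levels[i + 1]
        for x in Vi:
            right = [z for z in Vnext if z >= x][:ell]     # first ell of V_{i+1} to the right
            left = [z for z in reversed(Vnext) if z <= x][:ell]
            u = right[-1] if len(right) == ell else n      # fewer than ell: go to the last vertex
            v = left[-1] if len(left) == ell else 1        # fewer than ell: go to the first vertex
            for y in Vi:                                   # connect x to all of V_i inside [v, u]
                if v <= y <= u and y != x:
                    edges.add((min(x, y), max(x, y)))
    top = levels[h]                                        # V_h forms a clique
    for a in top:
        for b in top:
            if a < b:
                edges.add((a, b))
    return edges
```

The parameter \(\ell\) is exactly what controls reliability here: a vertex only loses its usable monotone path when all of its first \(\ell\) next-level neighbors on one side are attacked, which is the event bounded in Lemma 24 below.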
In the remainder of this section, for each vertex we analyse the probability that it is safe, which will depend on the number of faulty vertices in its neighborhoods, as captured by the notion of _shadow_. **Definition 22** ([2]).: _Let \(P_{n}\) be a path graph and let \(B\) be a subset of its vertices \((B\subseteq[n])\). The left \(\alpha\)-shadow of \(B\) is all the vertices \(b\) such for some \(a\in[n],a\leq b\), \(|[a:b]\cap B|\geq\alpha\cdot|[a:b]|\), denoted by \(\mathcal{S}_{L}(\alpha,B)\). The right \(\alpha\)-shadow \(\mathcal{S}_{R}(\alpha,B)\) is defined symmetrically. The set \(\mathcal{S}_{\alpha}(B)=\mathcal{S}_{L}(\alpha,B)\cup\mathcal{S}_{R}(\alpha,B)\) is called the \(\alpha\)-shadow of \(B\). If \(B\) is clear from context, we may simply write \(\mathcal{S}_{\alpha}\) for the \(\alpha\)-shadow of \(B\)._ **Lemma 23** ([2]).: _For any \(B\subseteq[n]\):_ * _For every_ \(\alpha\in[\frac{2}{3},1)\) _,_ \(|\mathcal{S}_{\alpha}|\leq\frac{|B|}{2\alpha-1}\)_._ * _For every_ \(\alpha\in(0,1)\)_,_ \(|\mathcal{S}_{\alpha}|\leq O\left(\frac{|B|}{\alpha}\right)\)_._ The following lemma provides a quantitative bound, exponential in the parameter \(\ell\), on the failure probability of vertices outside a certain shadow. **Lemma 24**.: _For any \(0<\alpha<1\), if \(x\in[n]\setminus S_{\alpha}\), then_ \[\Pr[x\text{ is not safe}]\leq O(\sqrt{\ell}\cdot h\cdot\alpha^{\ell-1})\.\] Proof.: Note that \(x\notin B\), as otherwise by definition it will be contained in \(S_{\alpha}\) for any \(0\leq\alpha\leq 1\). We will try to construct a usable monotone increasing path for \(x\), \((v_{0},v_{1},...,v_{k})\) for some \(0\leq k\leq h\), that is disjoint from \(B\). Initially set \(v_{0}=x\in V_{0}\setminus B\). Assume we built the path until \(v_{i}\in V_{i}\setminus B\), and now we attempt to find the next vertex \(v_{i+1}\in V_{i+1}\). Consider the first \(\ell\) vertices in \(V_{i+1}\) that lie to the right of \(x\). If there are less than \(\ell\) such neighbors, then observe that there are less than \(\ell\) vertices in \(V_{i+1}\) to the right of \(v_{i}\) as well (as \(v_{i}\geq x\)). In this case, by the spanner construction, \(v_{i}\) connects to all vertices in \(V_{i}\) to its right, and we can set \(k=i\) and stop the process (observe that \(v_{k}\) will satisfy item 3 in the definition of usable path, so indeed we may stop here). Otherwise, if there is a vertex in \(V_{i+1}\setminus B\) among the first \(\ell\) neighbors of \(x\), we may take the first such vertex as \(v_{i+1}\). Note that the path remains monotone: \(v_{i}\leq v_{i+1}\). This is because \(v_{i+1}\in V_{i}\), i.e. it was a valid choice for \(v_{i}\), and we always take the first possible vertex. We conclude that the only case the path-building fails is the event that all these \(\ell\) vertices in \(V_{i+1}\) fall in \(B\). By the virtue of \(x\notin S_{\alpha}\), we have that in any interval \([x:y]\) (for \(y>x\)), at most \(\alpha\) fraction of the vertices are in \(B\). Fix any \(y>x\), and condition on the event that \(y\) is the smallest such that the first \(\ell\) neighbors in \(V_{i+1}\) to the right of \(x\) are in the interval \(I=[x:y]\). Recall that every vertex is sampled to \(V_{i+1}\) obliviously to the attack \(B\). 
Note that the conditioning does create dependencies and change the probability to be in \(V_{i+1}\), but the main observation is, that except for the vertex \(y\in V_{i+1}\), every set of \(\ell-1\) vertices in \([x:y)\) has equal probability to be the remaining \(\ell-1\) vertices of \(V_{i+1}\). Thus, the failure probability at step \(i+1\), which is the probability that these \(\ell\) vertices in \(V_{i+1}\) are all taken from the set \(B\), is at most \[\frac{\binom{|I\cap B|}{\ell-1}}{\binom{|I|}{\ell-1}}\leq\frac{\binom{\alpha|I| }{\ell-1}}{\binom{|I|}{\ell-1}}\leq O(\sqrt{\ell}\cdot\alpha^{\ell-1}). \tag{8}\] The last inequality uses standard approximation of binomial coefficients, see Appendix A for a proof. The lemma follows by noticing that the bound obtained is independent of \(y\), and by taking a union bound over both sides (left and right) of the at most \(h\) steps \(i=0,1,...,h-1\). We will consider two regimes of shadows separately, the first when \(\alpha\) is close to \(1\), and the second for small \(\alpha\). For the first regime, define for each index \(0\leq j\leq\lfloor\log\frac{1}{3\nu}\rfloor\), \(\alpha_{j}=1-2^{j}\cdot\nu\). Note that for any such \(j\), \(\alpha_{j}\geq 2/3\), so by the first item in Lemma 23 we have \[|S_{\alpha_{j}}|\leq\frac{|B|}{2\alpha_{j}-1}=\frac{|B|}{1-2^{j+1}\nu}\leq(1+ 2^{j+2}\nu)|B|\.\] Since all vertices of \(B\) are included in any shadow, it follows that \[|S_{\alpha_{j}}\setminus B|\leq 2^{j+2}\nu|B|. \tag{9}\] For the smaller shadows, by the second item in Lemma 23 we have \[|S_{2^{-j}}|\leq O(2^{j}|B|). \tag{10}\] **Lemma 25**.: \(\mathbb{E}[|B^{+}|]\leq(1+O(\nu))|B|\)_._ Proof.: First, consider the case that \(B=\emptyset\). Note that in this case, every vertex is safe, as it has a monotone increasing and a monotone decreasing usable paths. To see the former: for \(0\leq i<h\), every vertex \(v\in V_{i}\) is either connected to the closest vertex of \(V_{i+1}\) that lie to the right of \(v\), or, if there is no such vertex, then \(v\) is connected to every vertex in \(V_{i}\cap[v:n]\). Thus one can easily build a monotone increasing path. Therefore, in this case \(B^{+}=B=\emptyset\). Notice that \[\mathbb{E}[|B^{+}|]\leq|B|+\sum_{x\in[n]\setminus B}\Pr[x\text{ is not safe}]. \tag{11}\] We analyze Equation (11) by considering vertices in different shadow regimes separately, i.e., \[[n]=S_{\alpha_{0}}+\sum_{j=1}^{\lfloor\log\frac{1}{3\nu}\rfloor}\left(S_{ \alpha_{j}}\setminus S_{\alpha_{j-1}}\right)+S_{1/2}\setminus S_{\alpha_{ \lfloor\log\frac{1}{3\nu}\rfloor}}+\sum_{j=2}^{\log n}S_{2^{-j}}\setminus S_{ 2^{-(j-1)}}\.\] Note that \(S_{1/n}=[n]\), as \(B\neq\emptyset\), so every vertex was accounted for. It holds that \[\mathbb{E}\left[|B^{+}|\right]\leq \underset{(1)}{\underbrace{|\mathcal{S}_{\alpha_{0}}|}}+\underset {(2)}{\underbrace{\sum_{j=1}^{\log\frac{1}{3\nu}}\sum_{x\in\mathcal{S}_{\alpha_{ j}}\backslash\mathcal{S}_{\alpha_{j-1}}}}}\Pr\left[x\in B^{+}\right]}\] \[+\underset{(3)}{\underbrace{\sum_{x\in\mathcal{S}_{\frac{1}{2} \backslash\mathcal{S}_{\alpha_{\lfloor\log\frac{1}{3\nu}\rfloor}}}}}\Pr\left[x \in B^{+}\right]}+\underset{(4)}{\underbrace{\sum_{j=2}^{\log n}\sum_{x\in \mathcal{S}_{2^{-j}}\backslash\mathcal{S}_{2^{-(j-1)}}}}}\Pr\left[x\in B^{+} \right]\.\] We next bound each one of the summands:13 Footnote 13: For convenience we will ignore the \(-1\) in the exponent of \(\alpha\) in lemma 24, it can easily be handled by increasing slightly \(\ell\). 1. 
By Equation (9), \((1)=|S_{\alpha_{0}}|\leq(1+4\nu)\cdot|B|\). 2. Fix \(1\leq j\leq\lfloor\log\frac{1}{3\nu}\rfloor\), and \(x\notin S_{\alpha_{j-1}}\), then by Lemma 24 the probability that \(x\) is not safe is at most \[O(\sqrt{\ell}\cdot h)\cdot(1-2^{j-1}\nu)^{c\cdot\nu^{-1}\cdot\ln(h/\nu)}\leq O(h/\nu)^{2}\cdot e^{-2^{j-1}\cdot c\cdot\ln(h/\nu)}\leq 2^{-2j}\,\] where the last inequality holds for large enough constant \(c\). By Equation (9), \(|S_{\alpha_{j}}\backslash B|\leq 4\nu\cdot 2^{j}|B|\). Summing over all indices \(j\) we conclude \((2)\leq\sum_{j=1}^{\log\frac{1}{3\nu}}4\nu\cdot 2^{j}|B|\cdot 2^{-2j}\leq 4\nu\cdot|B|\). 3. For the transition between large and small shadows, whenever \(x\in S_{1/2}\setminus S_{\alpha_{\lfloor\log\frac{1}{3\nu}\rfloor}}\), since \(\alpha_{\lfloor\log\frac{1}{3\nu}\rfloor}\leq 5/6\) we have that the probability that \(x\) is not safe is at most \[O(h/\nu)^{2}\cdot(5/6)^{c\cdot\nu^{-1}\cdot\ln(h/\nu)}\leq\nu\,\] for large enough \(c\). By Equation (10), \(|\mathcal{S}_{\frac{1}{2}}\setminus B|\leq O(|B|)\), thus \((3)\leq O(\nu|B|)\). 4. For \(2\leq j\leq\log n\) and \(x\notin S_{2^{-(j-1)}}\), by Lemma 24 the probability that \(x\) is not safe is at most \[O(\sqrt{\ell}h)\cdot(2^{-(j-1)})^{c\cdot\nu^{-1}\cdot\ln(h/\nu)}\leq O(h/\nu)^{2}\cdot(\nu/h)^{j\cdot c}\leq 2^{-2j}\cdot\nu\,\] for large enough constant \(c\). By Equation (10), \(|S_{2^{-j}}|\leq O(2^{j}|B|)\). It follows that \((4)\leq\sum_{j=2}^{\log n}O(2^{j}|B|)\cdot 2^{-2j}\cdot\nu=O(\nu)\cdot|B|\). Combining the 4 cases together, we conclude that \(\mathbb{E}\left[\left|B^{+}\right|\right]\leq(1+O(\nu))\cdot|B|\), as required. Proof of Theorem 16.: The bounds on the expected size and lightness of the spanner were shown above, and by Lemma 8, they can be translated to worst-case bounds, incurring only a constant loss. Recall that we set \(B^{+}\) to be all the vertices which are not _safe_. By Lemma 21 we get a shortest path with \(2h+1\) hops for any pair of _safe_ vertices. By Lemma 25, the expected size of \(B^{+}\) is \((1+O(\nu))|B|\); the theorem follows by rescaling \(\nu\) by a constant. ## 8 Improved Light Reliable Spanners for Minor-free Graphs In this section we refine our techniques in order to obtain near optimal stretch for light reliable spanners of minor-free graphs. More generally, we show that a certain property of the Pairwise Partition Cover Scheme (PPCS) allows us to improve the stretch to be almost \(2\), which is near optimal, while increasing the lightness by polylog factors. We begin by formally defining this property, which could be useful for other graph families as well. Throughout this section \(G=(X,E,w)\) is a weighted graph with \(n\) vertices excluding a constant size minor. \(d_{G}\) denotes the shortest path metric in \(G\). That is, \(d_{G}(u,v)\) denotes the minimum weight of a path from \(u\) to \(v\) in \(G\). Centrally-padded PPCS for Minor-free Graphs. The property of PPCS we will exploit is captured by the following definition.
**Definition 26**.: _A \((\tau,\rho,\varepsilon,\Delta)\)-pairwise partition cover \(\mathbb{P}=\{\mathcal{P}_{1},\ldots,\mathcal{P}_{s}\}\) of a metric space \((X,d)\) is called centrally-padded, if every cluster \(C\) in every partition has a designated center \(x\in X\), and for every pair \(u,v\) such that \(\frac{\Delta}{2\rho}\leq d_{G}(u,v)\leq\frac{\Delta}{\rho}\), there is a cluster \(C\) in one of the partitions \(\mathcal{P}_{i}\) such that \(C\) contains both closed balls \(B(u,\varepsilon\Delta),B(v,\varepsilon\Delta)\), and also_ \[d_{G}(u,x)+d_{G}(v,x)\leq(1+32\varepsilon)\cdot d_{G}(u,v). \tag{12}\] The following lemma asserts that our construction of PPCS for minor-free graphs in Section 4 is in fact centrally-padded. **Lemma 27**.: _For any minor-free graph \(G\) with \(n\) vertices and \(0<\varepsilon<1/12\), there exists \(\big{(}O(\varepsilon^{-1}\log n),\frac{2}{1-6\varepsilon},\varepsilon\big{)}\)-PPCS which is centrally-padded._ Proof.: Consider the construction of Lemma 12. Recall that every cluster is a ball centered at a net point, so we naturally define its center as that net point. For any \(u,v\in X\) with \(\frac{(1-6\varepsilon)\Delta}{4}\leq d(u,v)\leq\frac{(1-6\varepsilon)\Delta}{2}\), we found the first shortest path \(P\) in the SPD that intersects \(P_{uv}\) (the shortest \(u-v\) path) or at least one of the balls \(B_{u}=B(u,\varepsilon\Delta)\), \(B_{v}=B(v,\varepsilon\Delta)\) (see Figure 2). We denoted \(x\in P\) as a vertex on that intersection. Then we found a net-point \(z\in\mathcal{N}\) on \(P\) at distance at most \(\varepsilon\Delta\) from \(x\), and consider the cluster \(C=B(z,\Delta/2)\). If \(x\in P_{uv}\) then \[d(z,u)+d(z,v)\leq 2d(z,x)+d(x,u)+d(x,v)\leq 2\varepsilon\Delta+d(u,v)\leq(1+ 16\varepsilon)\cdot d(u,v)\.\] Otherwise, w.l.o.g. \(x\in B_{u}\) and we get that \[d(z,u)+d(z,v)\leq d(z,u)+d(z,u)+d(u,v)\leq 4\varepsilon\Delta+d(u,v)\leq(1+ 32\varepsilon)\cdot d(u,v)\,\] as required. \(k\)-HST Cover.The next step is to compute an \(k\)-HST cover, which is done exactly in the same manner as in Theorem 4, so we get a \(O(k\log n)\)-light \(\left(O(\varepsilon^{-2}\log n\cdot\log k),\frac{2(1+3\varepsilon)}{1-6 \varepsilon}\right)\)-\(k\)-HST cover. The main point is that we will use these \(k\)-HSTs to construct reliable spanners, but the edge weights and the stretch guarantees will be with respect to the original distances in the graph. That is, in some sense we ignore the distances induced by the \(k\)-HSTs, and just use their laminar structure. The property that we will use from the proof of Theorem 4 is the following. * For every pair \(u,v\), there exists a cluster \(C\) of diameter at most \(\Delta\) in the PPCS in which \(u,v\) are centrally-padded, and so \(C\) contains a net point. Thus, there will be a \(k\)-HST in the cover with an internal node \(x\) and label \(\Gamma_{x}=\Delta\) corresponding to \(C\), that contains \(u,v\). We remark that \(L(x)\) is not necessarily equal to \(C\), since we changed \(C\) a bit before making it an internal node of the \(k\)-HST (to guarantee the laminar structure, and a bound on the lightness). The main result of this section is the following theorem. **Theorem 18**.: _Let \(G=(V,E)\) be a graph with \(n\) vertices that excludes a fixed minor. 
Then for any \(0<\varepsilon<1/12\) and \(0<\nu^{\prime}<1\), \(G\) admits an oblivious \(\nu^{\prime}\)-reliable \(2(1+\varepsilon)\)-spanner of size \(\tilde{O}\left(\frac{n}{\varepsilon^{6}\nu^{2}}\right)\) and lightness \(\tilde{O}\left(\frac{\log^{8}n}{\varepsilon^{7}\nu^{\prime 2}}\right)\)._ Let \(k=c^{\prime}/\varepsilon\), for a constant \(c^{\prime}\) to be determined later. We create a \(O(k\log n)\)-light \(\left(\tau,\frac{2(1+3\varepsilon)}{1-6\varepsilon}\right)\)-\(k\)-HST cover for the graph \(G\), with \(\tau=O(\varepsilon^{-2}\log n\cdot\log k)\), as discussed above. Since we desire a \(\nu^{\prime}\)-reliable spanner for \(G\), we will use the parameter \(\nu=\frac{\nu^{\prime}}{5\tau}\) when devising a \(\nu\)-reliable spanner for each \(k\)-HST in the cover. Let \(T\) be one of the \(k\)-HSTs in the cover. Note that every internal node of \(T\) corresponds to a cluster \(C\) in the centrally-padded PPCS, which have a center \(x\). In a hope to avoid confusion, we will refer to \(x\) both as the cluster center, and as the internal node of \(T\). Recall that in Section 3 every internal node chose an arbitrary ordering on its leaves, which was used to define the preorder of \(T\). Here, the order will not be arbitrary. Instead, it will be defined with respect to distances in \(G\). That is, each internal node \(x\) orders its children \(x_{1},\ldots,x_{t}\) (each is a net point in the graph) by their distance to \(x\) (in \(G\)). Then, let \(P\) be the resulting preorder path on the leaves of \(T\). The intuition behind the sampling of the random bi-cliques, is that we want vertices "near" \(x\) to be chosen, since the centrally-padded property gives us a better stretch guarantee going through \(x\), than just \(2\Gamma_{x}\). To this end, let \(L(x)=(v_{1},v_{2},...,v_{s})\) be the ordering given by the restriction of \(P\) to \(L(x)\). We sample each \(v_{j}\) independently to be included in \(Z_{x}\) with probability \(p_{j}=\min\{1,\frac{c\cdot\ln n}{j\cdot\nu}\}\), for a constant \(c\) to be determined later. The edges of the spanner are: For every internal node \(x\in T\) with children \(x_{1},\ldots,x_{t}\), for every \(j=1,\ldots,t\), we add all the edges \(\{\{y,z\}\ :\ y\in Z_{x},z\in Z_{x_{j}}\}\) to the spanner \(H\), weighted according to the distances in the graph \(G\). The final spanner will consist of the union of all \(O(k\log n)\) spanners for all the \(k\)-HSTs in the cover. Safe Leaves.Fix a \(k\)-HST \(T\). Under an attack \(B\), we say that a vertex \(u\) is _safe_ w.r.t. \(B\), if for every ancestor \(x\in T\) of \(u\), \(Z_{x}\setminus B\) contain a vertex \(y\) such that \[d_{G}(x,y)\leq d_{G}(x,u)+2\Gamma_{x}/k. \tag{13}\] In other words, we want that every ancestor \(x\) of \(u\) to have a surviving vertex \(y\) in its sample set, which is not much farther than the distance of \(u\) to the center \(x\). Denote \(B_{T}^{+}\) as all the vertices which are not safe in \(T\) w.r.t. \(B\). The final bad set is defined as \(B^{+}=B\cup\bigcup_{T}B_{T}^{+}\). The following claim will be useful for bounding the size and lightness of our spanner. **Claim 28**.: _Fix any \(T\) in the cover, then for any \(x\in T\),_ \[\mathbb{E}[|Z_{x}|]\leq O((\ln^{2}n)/\nu)\.\] Proof.: Let \(L(x)=(v_{1},\ldots,v_{s})\), with the order induced by the restriction of \(P\) to \(L(x)\), then \[\mathbb{E}[|Z_{x}|]=\sum_{j=1}^{s}p_{j}\leq\frac{c\ln n}{\nu}\cdot\sum_{j=1}^{ s}\frac{1}{i}=O\left(\frac{\ln^{2}n}{\nu}\right)\.\] Size Analysis.Fix any tree \(T\) in the cover. 
Denote \(z_{x}=|Z_{x}|\), and note that these random variables \(\{z_{x}\}_{x\in T}\) are independent, so \(\mathbb{E}[z_{x}\cdot z_{y}]=\mathbb{E}[z_{x}]\cdot\mathbb{E}[z_{y}]\) whenever \(x\neq y\). Using Claim 28, the expected number of edges added to \(H\) by the random bi-cliques is \[\mathbb{E}\left[\sum_{x\in T}\sum_{i=1}^{\deg(x)}z_{x}\cdot z_{x _{i}}\right] =\sum_{x\in T}\sum_{i=1}^{\deg(x)}\mathbb{E}[z_{x}]\cdot\mathbb{E }[z_{x_{i}}].\] \[=O(\nu^{-2}\cdot\log^{4}n)\cdot\sum_{x\in T}\deg(x)=O(\nu^{-2} \cdot n\log^{4}n)\.\] The final spanner is a union of \(\tau=O(\varepsilon^{-2}\log n\cdot\log k)\) spanners for each \(T\) in the cover, and \(\nu^{\prime}=\nu\cdot\tau\), so the final size is \(\tau\cdot O((\frac{\tau}{\nu^{\prime}})^{2}\cdot n\log^{4}n)=n\cdot\nu^{\prime -2}\cdot\varepsilon^{-6}\cdot\log^{7}n\cdot\log^{3}k=\nu^{\prime-2}\cdot \varepsilon^{-6}\cdot\overset{\cdot}{O}(n)\). Lightness Analysis.Let \(T\) be any \(k\)-HST in the cover, and recall that the MST weight of \(T\) is equal to \[\sum_{x\in T}(\deg(x)-1)\cdot\Gamma_{x}\.\] Each edge that \(x\) adds to the spanner has weight at most \(\Gamma_{x}\) (even though we use the graph distance, as \(T\) is dominating). Using Claim 28, the total weight of edges in the random bi-cliques is expected to be at most \[\mathbb{E}\left[\sum_{x\in T}\sum_{i=1}^{\deg(x)}\Gamma_{x}\cdot z_ {x}\cdot z_{x_{i}}\right] = \sum_{x\in T}\Gamma_{x}\sum_{i=1}^{\deg(x)}\mathbb{E}[z_{x}]\cdot \mathbb{E}[z_{x_{i}}]\] \[\leq O((\log^{4}n)/\nu^{2})\cdot\sum_{x\in T}\Gamma_{x}\cdot\deg(x)\] \[= O((\log^{4}n)/\nu^{2})\cdot w(MST(T))\.\] Since every \(k\)-HST has lightness \(O(k\log n)\), and there are \(\tau=O(\varepsilon^{-2}\log n\cdot\log k)\) trees in the cover, and \(\nu^{\prime}=\nu\cdot\tau\), the lightness of the resulting spanner compared to \(G\) is \[\sum_{T\text{ in the cover}}O\left(\frac{\log^{4}n}{\nu^{2}} \right)\cdot\frac{w(MST(T))}{w(MST(G))} =O\left(\frac{\tau^{3}}{\nu^{\prime 2}}\cdot k\cdot\log^{5}n\right)\] \[=O\left(\frac{k\cdot\log^{8}n\cdot\log^{3}k}{\nu^{\prime 2}\cdot \varepsilon^{6}}\right)\] \[=\nu^{\prime-2}\cdot\tilde{O}(\varepsilon^{-7}\cdot\log^{9}n)\] Reliability Analysis.Fix an attack \(B\) and a tree \(T\), and define the shadow with respect to the path \(P\) and the set \(B\) (recall definition 22). We start by showing that for any internal node \(x\), the preorder of \(L(x)\) almost respects the distances to the center \(x\) in \(G\). **Claim 29**.: _Fix any node \(x\in T\), and let \(L(x)=(v_{1},\ldots,v_{s})\) be the ordering given by the restriction of \(P\) to \(L(x)\). Then for any \(1\leq i<j\leq s\) we have that_ \[d_{G}(x,v_{i})\leq d_{G}(x,v_{j})+2\Gamma_{x}/k\.\] Proof.: Let \(x_{i^{\prime}}\) (resp., \(x_{j^{\prime}}\)) be the child of \(x\) whose subtree contains \(v_{i}\) (resp., \(v_{j}\)). Since \(v_{i}\) appears in \(P\) before \(v_{j}\), it follows that the order on the children of \(x\) is such that \(x_{i^{\prime}}\) appears before \(x_{j^{\prime}}\) (we allow \(i^{\prime}=j^{\prime}\)). By our definition it means that \(d_{G}(x,x_{i^{\prime}})\leq d_{G}(x,x_{j^{\prime}})\). As \(T\) is a \(k\)-HST, all distances in \(T\) between vertices in \(L(x_{i^{\prime}})\) (resp., \(L(x_{j^{\prime}})\)) are at most \(\frac{\Gamma_{x}}{k}\). Since \(T\) is dominating, this also holds for the graph distances, which gives that both \(d_{G}(v_{i},x_{i^{\prime}}),d_{G}(v_{j},x_{j^{\prime}})\leq\frac{\Gamma_{x}}{k}\). 
We conclude that \[d_{G}(x,v_{i}) \leq d_{G}(x,x_{i^{\prime}})+d_{G}(x_{i^{\prime}},v_{i})\] \[\leq d_{G}(x,x_{j^{\prime}})+d_{G}(x_{i^{\prime}},v_{i})\] \[\leq d_{G}(x,v_{j})+d_{G}(x_{j^{\prime}},v_{j})+d_{G}(x_{i^{ \prime}},v_{i})\] \[\leq d_{G}(x,v_{j})+\frac{2\cdot\Gamma_{x}}{k}\.\] **Lemma 30**.: _For every tree \(T\), \(0<\alpha\leq 1-\nu\), and any vertex \(u\notin S_{\alpha}(B)\),_ \[\Pr[\text{$u$ is not safe}]\leq n^{-2}\.\] Proof.: Let \(x\) be any ancestor of \(u\), and let \(L(x)=(v_{1},\ldots,u=v_{j},\ldots,v_{s})\) be the ordering given by the restriction of \(P\) to \(L(x)\). If we want that \(u\) will not fail to be safe due to \(x\), it suffices that \(Z_{x}\setminus B\) contains a vertex in the prefix \((v_{1},\ldots,v_{j})\). This is because Claim 29 suggests that any such vertex will satisfy (13). Since \(u\notin S_{\alpha}(B)\), it follows that at most \(\alpha\) fraction of the vertices \((v_{1},\ldots,v_{j})\) are in \(B\). As the probability of being sampled to \(Z_{x}\) decreases with the index, it can be easily checked that the worst possible case is that \(\{v_{1},\ldots,v_{\lfloor\alpha\cdot j\rfloor}\}\subseteq B\) (in any other case the probability of success will only be higher). We assume that \(p_{\lfloor\alpha j\rfloor+1}<1\), as otherwise \(v_{\lfloor\alpha j\rfloor+1}\) is surely sampled into \(Z_{x}\). This means \(p_{i}=\frac{c\ln n}{i\cdot\nu}\) for all \(i>\alpha\cdot j\). Note that \(Z_{x}\) is sampled independently of \(B\), thus \[\Pr[Z_{x}\cap\{v_{\lfloor\alpha j\rfloor+1},\ldots,v_{j}\}=\emptyset] =\prod_{i=\lfloor\alpha\cdot j\rfloor+1}^{j}(1-p_{i}) \tag{14}\] \[\leq e^{-\sum_{i=\lfloor\alpha\cdot j\rfloor+1}^{j}p_{i}}\] \[=e^{-\sum_{i=\lfloor\alpha\cdot j\rfloor+1}^{j}\frac{c\ln n}{i \cdot\nu}}\] \[\leq n^{-\frac{c}{2\nu}\cdot(\ln j-\ln(\alpha\cdot j))}\] \[=n^{-\frac{c}{2\nu}\cdot(\ln(\frac{1}{\alpha})}\] \[\leq n^{-3}\.\] The last inequality holds as \(\alpha\leq 1-\nu\), so \[\ln(\frac{1}{\alpha})\geq\ln(\frac{1}{1-\nu})>\ln(1+\nu)\geq\frac{\nu}{2}\] and by picking a large enough constant \(c\geq 12\). The lemma follows by a union bound over all possible ancestors \(x\). We are now ready to bound the final set \(B^{+}\). **Lemma 31**.: \(\mathbb{E}[|B^{+}|]\leq(1+\nu^{\prime})\cdot|B|\) _._ Proof.: First consider the case \(B=\emptyset\). In this case \(B^{+}=\emptyset\) as well. This is because all the vertices are safe. Indeed, for every tree \(T\) and node \(x\), the first leaf in \(L(x)\) is sampled to \(Z_{x}\) with probability \(1\), and thus \(Z_{x}\not\subseteq B\). Thus we will assume \(B\neq\emptyset\). We can also assume that \(\nu^{\prime}\geq\frac{1}{n}\), as otherwise theorem 18 holds trivially with \(H=G\). Fix a tree \(T\) in the \(k\)-HST cover. As \(\nu<1/3\), by the first item of Lemma 23, the shadow \(S_{1-\nu}(B)\) of the path \(P\) satisfies \[|S_{1-\nu}(B)|\leq\frac{|B|}{2(1-\nu)-1}\leq(1+4\nu)\cdot|B|\.\] By Lemma 30, every vertex outside \(S_{1-\nu}(B)\) joins \(B_{T}^{+}\) with probability at most \(n^{-2}\). 
It follows that \[\mathbb{E}\left[\big{|}B_{T}^{+}\big{|}\right] \leq|S_{1-\nu}(B)\setminus B|+\sum_{v\notin S_{1-\nu}(B)}\Pr \left[v\in B_{T}^{+}\right]\] \[\leq 4\nu\cdot|B|+n\cdot\frac{1}{n^{2}}\] \[\leq 4\nu\cdot|B|+\nu\cdot|B|=5\nu\cdot|B|\.\] Summing up over all the \(\tau\) trees in the cover, and recalling that \(\nu=\frac{\nu^{\prime}}{5\tau}\), we conclude that \[\mathbb{E}\left[\big{|}B^{+}\setminus B\big{|}\right]\leq\sum_{T\text{ in the cover}}\mathbb{E}\left[\big{|}B_{T}^{+}\big{|}\right]\leq\sum_{T\text{ in the cover}}5\nu\cdot|B|\leq 5\tau\cdot\nu\cdot|B|=\nu^{\prime}\cdot|B|\.\] Stretch Analysis.Let \(u,v\notin B^{+}\) be two safe leaves in the \(k\)-HST \(T\) that has an internal node \(x\) corresponding to a cluster \(C\) in which \(u,v\) are centrally-padded. Let \(\Delta\) be the diameter bound on \(C\), and so \(\Gamma_{x}=\Delta\). By definition of padding we have that \(u,v\in L(x)\) and \[\frac{(1-6\varepsilon)\Delta}{4}\leq d_{G}(u,v)\leq\frac{(1-6\varepsilon) \Delta}{2}\, \tag{15}\] and as \(\varepsilon<1/12\), it follows that \(\Delta\leq 8d_{G}(u,v)\). Let \(x_{i},x_{j}\) be the children of \(x\) in \(T\) which are the ancestors of \(u\) and \(v\) respectively.14 As \(u,v\notin B^{+}\), it holds that there are vertices \(z\in Z_{x}\setminus B\), \(u^{\prime}\in Z_{x_{i}}\setminus B\), and \(v^{\prime}\in Z_{x_{j}}\setminus B\), and by (13) we also have that Footnote 14: Since \(k=c^{\prime}/\varepsilon>8\), and all vertices in \(L(x_{i})\) are at distance at most \(\Gamma_{x_{i}}\leq\Delta/k\leq 8d_{G}(u,v)/k\) from each other, it cannot be that \(i=j\). \[d_{G}(x,z)\leq\min\{d_{G}(x,u),d_{G}(x,v)\}+2\Delta/k. \tag{16}\] It follows that the edges \(\{u^{\prime},z\},\{v^{\prime},z\}\in H\) survive the attack, and furthermore \[d_{G}(u^{\prime},z) \leq d_{G}(u^{\prime},u)+d_{G}(u,z)\] \[\leq \Gamma_{x_{i}}+d_{G}(u,x)+d_{G}(x,z)\] 
\[\leq\Gamma_{x_{i}}+d_{G}(u,x)+\min\{d_{G}(x,u),d_{G}(x,v)\}+\frac{2\Delta}{k}\] \[\leq d_{G}(u,x)+d_{G}(v,x)+\frac{3\Delta}{k}\,\] where the first inequality uses (16), and the last inequality uses that \(\Gamma_{x_{i}}\leq\frac{\Delta}{k}\) and \(\min\{d_{G}(x,u),d_{G}(x,v)\}\leq d_{G}(x,v)\). By the symmetric argument we also get \(d_{G}(v^{\prime},z)\leq d_{G}(u,x)+d_{G}(v,x)+\frac{3\Delta}{k}\).
Since we used the same bi-clique construction as we did in Theorem 1, and a more restrictive definition of _safe_, we have that Lemma 10 still holds (with \(f(x)=x\), since we did not apply the heavy-path decomposition here). In particular, \(H\) contains a \(u-u^{\prime}\) path (resp., \(v-v^{\prime}\) path) which is disjoint from \(B\), of length at most \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{i}}\) (resp., \(\left(1+\frac{1}{k-1}\right)\cdot\Gamma_{x_{j}}\)). As \(T\) is a \(k\)-HST, \(\Gamma_{x_{i}},\Gamma_{x_{j}}\leq\frac{\Delta}{k}\), so we have that \[d_{H\setminus B}(u,v) \leq d_{H\setminus B}(u,u^{\prime})+d_{G}(u^{\prime},z)+d_{G}(z,v^{\prime})+d_{H\setminus B}(v^{\prime},v)\] \[\leq 2(d_{G}(u,x)+d_{G}(v,x)+3\Delta/k)+\left(2+\frac{2}{k-1}\right)\cdot\frac{\Delta}{k}\] \[\leq 2d_{G}(u,v)\cdot\left(1+O(\varepsilon)\right)\;.\] The last inequality uses that \(\Delta\leq 8d_{G}(u,v)\), the definition of centrally-padded (12), and the choice of \(k=\Theta(1/\varepsilon)\). ## 9 Lower Bounds This section is devoted to proving lower bounds. All of our lower bounds hold for any finite stretch, that is, even if one only requires the reliable spanner to preserve connectivity. For an attack \(B\) on a spanner \(H\), a valid super set \(B_{H}^{+}\) should satisfy that all the points in \(X\setminus B^{+}\) belong to the same connected component in \(H\setminus B\). In Section 9.1 we show that every deterministic reliable spanner for the path must have at least \(\Omega(n)\) lightness. This lower bound holds for any constant \(\nu>0\). The main point of this lower bound is that deterministic reliable spanners have huge lightness. In particular, we did not attempt to optimize the dependence on \(\nu\) (or other terms). Note that a deterministic \(\nu\)-reliable \(1\)-spanner with lightness \(O(\nu^{-6}\cdot n\log n)\) follows from [1]. In Section 9.3 we prove that every _oblivious_ \(\nu\)-reliable spanner for the path has lightness \(\Omega(\nu^{-2}\cdot\log(n\nu))\). In Section 9.2 we construct an ultrametric such that every oblivious \(\nu\)-reliable spanner has lightness \(\Omega(\nu^{-2})\). These two lower bounds show that the lightness parameters in our Theorems 1 and 16 are tight up to second order terms (even if we ignore the stretch factor). The proof of Section 9.2 appears before Section 9.3 as the two arguments are somewhat similar, while the proof in Section 9.2 is simpler. ### Lower bound for deterministic light reliable spanners **Theorem 19**.: _[Deterministic Lower bound for Path] For any constant \(\nu>0\), every deterministic \(\nu\)-reliable spanner for the unweighted path graph \(P_{n}\) has lightness \(\Omega(n)\)._ Proof.: Consider a deterministic \(\nu\)-reliable spanner \(H\) for \(P_{n}\). Set \(\varepsilon=\min\{\frac{1}{16},\frac{1}{16\nu}\}=\Omega(1)\). Denote by \(L=[1,(1/2-\varepsilon)n]\) and \(R=[(1/2+\varepsilon)n+1,n]\) the first and last \((1/2-\varepsilon)n\) vertices along \(P_{n}\) respectively.
(For simplicity we assume that these are all integers.) Let \(E_{1}\) be the subset of \(H\) edges going from a vertex in \(L\) to a vertex in \(R\). Seeking contradiction, assume that \(|E_{1}|<\varepsilon n\). Let \(B\subseteq[n]\) be a subset consisting of all the vertices in \([(1/2-\varepsilon)n+1,(1/2+\varepsilon)n]\), and all the vertices in \(L\) that are contained in an edge of \(E_{1}\). Since \(|B|<3\varepsilon n\), it holds that \(|L\setminus B|\geq(1/2-4\varepsilon)n\), and \(|R\setminus B|=(1/2-\varepsilon)n\). However, the graph \(H\setminus B\) does not contain any edge from a vertex in \(L\setminus B\) to a vertex in \(R\). In particular, \(B^{+}\) must contain either all the vertices in \(L\), or all the vertices in \(R\). It follows that \[|B^{+}\setminus B|\geq(1/2-4\varepsilon)n\geq n/4\geq\nu\cdot 4\varepsilon n >\nu\cdot|B|\,\] a contradiction to the fact that \(H\) is a \(\nu\)-reliable spanner. It follows that \(|E_{1}|\geq\varepsilon n\). Note that each edge in \(E_{1}\) has weight at least \(2\varepsilon n\). We conclude that \[w(H)\geq|E_{1}|\cdot 2\varepsilon n\geq 2(\varepsilon n)^{2}=\Omega(n)\cdot w (\text{MST})\,\] where in the last equality we used that \(\nu\) is a constant. ### Lower Bound for HST Similarly to Theorem 21, the lower bound here holds even if one is only interested in preserving connectivity. **Theorem 20** (Oblivious Lower Bound for HST).: _For every \(\nu\in(0,1)\), there is an ultrametric such that every oblivious \(\nu\)-reliable spanner has lightness \(\Omega(\nu^{-2})\)._ _Proof._ Set \(\ell=\frac{1}{4\nu}\). Consider an ultrametric consisting of a root \(r\) with label \(1\), and \(\ell\) children \(\{v_{1},\ldots,v_{\ell}\}\), each with label \(\varepsilon=\frac{1}{\ell}\), and \(\ell\) children each, where \(\{v_{1}^{i},v_{2}^{i},\ldots,v_{\ell}^{i}\}\) are the children of \(v_{i}\). In total we have \(\ell^{2}\) leaves. See illustration on the right. The MST for this ultrametric will consist of \(\ell-1\) edges of weight \(1\), and \(\ell\cdot(\ell-1)\) edges of weight \(\varepsilon=\frac{1}{\ell}\). So the total weight is \(2(\ell-1)\). Consider an oblivious \(\nu\)-reliable spanner \(\mathcal{D}\), and let \(H\sim\operatorname{supp}(\mathcal{D})\). Let \(\mathcal{J}\in\{1,2\ldots,\ell\}^{\ell}\) be a string of \(\ell\) indices between \(1\) and \(\ell\). Let \(H_{\mathcal{J}}\) be the subgraph of \(H\) induced by \(\{v_{\mathcal{J}_{1}}^{1},v_{\mathcal{J}_{2}}^{2},\ldots,v_{\mathcal{J}_{\ell} }^{\ell}\}\). That is, for each \(i\), we keep only the vertex corresponding to the \(i\)'th index in \(\mathcal{J}\). Let \(\Psi_{\mathcal{J}}\) be the event that the graph \(H_{\mathcal{J}}\) contains at least \(\frac{\ell}{2}\) edges. Consider the attack \(B_{\mathcal{J}}\) which consist of all the vertices except \(\{v_{\mathcal{J}_{1}}^{1},v_{\mathcal{J}_{2}}^{2},\ldots,v_{\mathcal{J}_{\ell} }^{\ell}\}\). If the event \(\Psi_{\mathcal{J}}\) did not occur for a spanner \(H\), then \(H\setminus B_{\mathcal{J}}\), is disconnected, and the largest connected component has size at most \(\frac{\ell}{2}\). Observe that in order to preserve connectivity, \(B_{\mathcal{J}}^{+}\) must contain all vertices in all connected components of \(H\setminus B_{\mathcal{J}}\), except for one component. In particular, \(B_{\mathcal{J}}^{+}\setminus B_{\mathcal{J}}\) will contain at least \(\frac{\ell}{2}=2\nu\cdot\ell^{2}\geq 2\nu\cdot|B|\) vertices. 
As \(\mathcal{D}\) is \(\nu\)-reliable, it holds that \[\nu\cdot|B|\geq\mathbb{E}[|B^{+}\setminus B|]\geq\Pr\left[\overline{\Psi_{ \mathcal{J}}}\right]\cdot 2\nu\cdot|B|\,\] It follows that \(\Pr\left[\overline{\Psi_{\mathcal{J}}}\right]\leq\frac{1}{2}\), and in particular \(\Pr\left[\Psi_{\mathcal{J}}\right]\geq\frac{1}{2}\). We conclude that for every \(\mathcal{J}\) it holds that \(\mathbb{E}_{H\sim\mathcal{D}}\left[|E(H_{\mathcal{J}})|\right]\geq\Pr\left[ \Psi_{\mathcal{J}}\right]\cdot\frac{\ell}{2}\geq\frac{\ell}{4}\). On the other hand, denote by \(\widehat{H}\) the subset of \(H\) edges of weight \(1\) (i.e. between children of \(v_{i},v_{j}\) for \(i\neq j\)). Note that for every \(\mathcal{J}\), \(H_{\mathcal{J}}\subseteq\widehat{H}\) (as \(H_{\mathcal{J}}\) does not contain \(\varepsilon\)-weight edges). Every edge \(e\in\widehat{H}\) belongs to \(H_{\mathcal{J}}\) if and only if both its endpoints are chosen by \(\mathcal{J}\). If we choose \(\mathcal{J}\) u.a.r., \(e\) will survive with probability \(\frac{1}{\ell^{2}}\). We conclude \[\mathbb{E}_{\mathcal{J},H}\left[\left|E(H_{\mathcal{J}})\right| \right] =\mathbb{E}_{\mathcal{J}}\left[\mathbb{E}_{H}\left[\left|E(H_{ \mathcal{J}})\right|\right]\right]\geq\mathbb{E}_{\mathcal{J}}\left[\frac{ \ell}{4}\right]=\frac{\ell}{4}\] \[\mathbb{E}_{\mathcal{J},H}\left[\left|E(H_{\mathcal{J}})\right| \right] =\mathbb{E}_{H}\left[\mathbb{E}_{\mathcal{J}}\left[\left|E(H_{ \mathcal{J}})\right|\right]\right]=\mathbb{E}_{H}\left[\frac{1}{\ell^{2}}\cdot \left|\widehat{H}\right|\right]\.\] As all \(\widehat{H}\) edges have weight \(1\), \[\mathbb{E}_{H\sim\mathcal{D}}\left[w(H)\right]\geq\mathbb{E}_{H}\left[\left| \widehat{H}\right|\right]\geq\frac{\ell^{3}}{4}=\Omega(\nu^{-2})\cdot w(\text{ MST})\.\] ### Lower Bound for the Unweighted Path In this section we prove an \(\Omega(\nu^{-2}\cdot\log(n\nu))\) lower bound on the lightness any oblivious reliable spanner for the shortest path metric induced by the unweighted path (for any finite stretch parameter). As this metric has doubling dimension \(1\), it follows that our light reliable spanner for doubling metrics is tight (Corollary 8) up to second order terms (for constant \(\operatorname{ddim}\) and \(\varepsilon\)). **Theorem 21** (Oblivious Lower Bound for the Path).: _For every \(\nu\in(0,1)\), every oblivious \(\nu\)-reliable spanner for the unweighted path graph \(P_{n}\) has lightness \(\Omega(\nu^{-2}\cdot\log(n\nu))\)._ Proof.: This proof follow similar lines to the proof of Theorem 20, however, there are some required adaptations due to the path metric, and an additional \(\log n\) factor which is introduced due to the \(\log n\) different scales. For simplicity we will assume that \(n=2^{m}\) and \(\nu=2^{-s}\) are powers of \(2\). We will also assume that \(m\geq s+5\). For every index \(i\) and \(H\in\operatorname{supp}(\mathcal{D})\), denote by \(\mathcal{E}_{H}^{i}\) the subset of \(H\) edges of weight at least \(2^{i}\). **Claim 32**.: _For every index \(i\in\{s+3,\ldots,m-2\}\), \(\mathbb{E}_{H\sim\mathcal{D}}\left[\left|\mathcal{E}_{H}^{i}\right|\right]\geq \frac{1}{\nu^{2}}\cdot\frac{n}{2^{i+3}}\)._ Proof.: Set \(\ell=\frac{1}{8\nu}\). Divide the path \(P_{n}\) to \(\frac{n}{2^{i}}\) intervals of length \(2^{i}\). Remove every other interval. Every remaining interval, partition further into \(\ell\) intervals of length \(\frac{2^{i}}{\ell}\). 
Denote these intervals by \(\left\{A_{j}^{k}\right\}_{k\in[\frac{n}{2^{i+1}}],j\in[\ell]}\) where \[A_{j}^{k}=\left\{v_{2(k-1)\cdot 2^{i}+(j-1)\cdot\frac{2^{i}}{\ell}+1},\cdots,v_{ 2(k-1)\cdot 2^{i}+j\cdot\frac{2^{i}}{\ell}}\right\}\.\] See illustration below. For every subgraph \(H\in\operatorname{supp}(\mathcal{D})\), we create an unweighted supergraph \(G_{H}\) where its vertex set is \(\left\{A_{j}^{k}\right\}_{k\in[\frac{n}{2^{i+1}}],j\in[\frac{1}{\ell}]}\), and add an edge from \(A_{j}^{k}\) to \(A_{j^{\prime}}^{k^{\prime}}\) if and only if \(k\neq k^{\prime}\) and \(H\) contains an edge between points in \(A_{j}^{k}\) and \(A_{j^{\prime}}^{k^{\prime}}\). Note that \(G_{H}\) is a \(\frac{n}{2^{i+1}}\)-partite simple graph. Denote by \(V(G_{H})\) and \(E(G_{H})\) the vertex and edges sets of \(G_{H}\) respectively. Clearly, every edge in \(G_{H}\) corresponds to (at least one) edge of weight at least \(2^{i}\) in \(H\). Thus, \(|E(G_{H})|\leq|\mathcal{E}_{H}^{i}|\), and hence in order to prove the claim it suffices to lower bound \(\mathbb{E}_{H\sim\mathcal{D}}\left[|G_{H}|\right]\). We will proceed by a double-counting argument. Consider a \(\frac{n}{2^{i+1}}\)-tuple \(\mathcal{J}=\left(j_{1},\ldots,j_{\frac{n}{2^{i+1}}}\right)\in[\ell]^{\frac{n} {2^{i+1}}}\). The graph \(G_{H}^{\mathcal{J}}=G_{H}\left[\left\{A_{j_{k}}^{k}\right\}_{k\in[\frac{n}{2^ {i+1}}]}\right]\) is the induced graph by the \(\frac{n}{2^{i+1}}\) vertices \(\left\{A_{j_{k}}^{k}\right\}_{k\in[\frac{n}{2^{i+1}}]}\) of \(G_{H}\). Let \(B_{\mathcal{J}}=[n]\setminus\cup_{k}A_{j_{k}}^{k}\) be all the vertices not in the sub intervals specified by \(\mathcal{J}\) (in particular \(B_{\mathcal{J}}\) contains the \(\frac{n}{2}\) vertices of the removed intervals). Let \(\Psi_{\mathcal{J}}\) be an indicator for the event that \(G_{H}^{\mathcal{J}}\) contains at least \(\frac{n}{2^{i+2}}\) edges. Note that if the event \(\Psi_{\mathcal{J}}\) did not occur, then in \(H\setminus B_{\mathcal{J}}\) the maximum size of a connected component is \(\frac{n}{2^{i+2}}\cdot\frac{2^{i}}{\ell}=\frac{1}{4}\cdot\frac{n}{\ell}\) (since at most \(\frac{n}{2^{i+2}}\) of the \(\frac{n}{2^{i+1}}\) intervals can be connected, and each has \(\frac{2^{i}}{\ell}\) points). In particular, \(B_{\mathcal{J},H}^{+}\setminus B_{\mathcal{J}}\) must contain at least \(\frac{n}{4\ell}\) points. As \(H\) is an oblivous \(\nu\)-reliable spanner, it follows that \[(1+\nu)\cdot|B_{\mathcal{J}}|\geq\mathbb{E}_{H\sim\mathcal{D}}\left[|B_{ \mathcal{J},H}^{+}|\right]\geq|B_{\mathcal{J}}|+\frac{n}{4\ell}\cdot\Pr\left[ \overline{\Psi_{\mathcal{J}}}\right]\.\] Hence \(\Pr\left[\overline{\Psi_{\mathcal{J}}}\right]\leq\nu\cdot|B_{\mathcal{J}}| \cdot\frac{4\ell}{n}<\frac{1}{2}\), and thus \(\mathbb{E}_{H\sim\mathcal{D}}\left[\left|E(G_{H}^{\mathcal{J}})\right|\right] \geq\frac{n}{2^{i+2}}\cdot\Pr\left[\Psi_{\mathcal{J}}\right]\geq\frac{n}{2^{i +3}}\). We will abuse notation and state \((k,j)\in\mathcal{J}\) if the \(k\)'th index in \(\mathcal{J}\) is \(j\) (i.e. \(j_{k}=j\)). Next, we sample \(\mathcal{J}\) uniformly at random for all the possible \(\frac{n}{2^{i+1}}\)-tuples, and thus \(\Pr\left[(k,j)\in\mathcal{J}\right]=\frac{1}{\ell}\). 
It holds that for every \(H\in\operatorname{supp}(\mathcal{D})\), \[\mathbb{E}_{\mathcal{J}}\left[\left|E(G_{H}^{\mathcal{J}})\right|\right]=\sum_ {\left(A_{j}^{k},A_{j^{\prime}}^{k^{\prime}}\right)\in E(G_{H})}\Pr\left[(k,j ),(k^{\prime},j^{\prime})\in\mathcal{J}\right]=\frac{1}{\ell^{2}}\cdot|E(G_{H} )|\.\] We now sample both a subgraph \(H\sim\mathcal{D}\), and independently a tuple \(\mathcal{J}\). It holds that: \[\frac{1}{\ell^{2}}\cdot\mathbb{E}_{H}\left[|E(G_{H})|\right]=\mathbb{E}_{H} \left[\mathbb{E}_{\mathcal{J}}\left[\left|E(G_{H}^{\mathcal{J}})\right|\right] \right]=\mathbb{E}_{\mathcal{J}}\left[\mathbb{E}_{H}\left[\left|E(G_{H}^{ \mathcal{J}})\right|\right]\right]\geq\mathbb{E}_{\mathcal{J}}\left[\frac{n}{2 ^{i+3}}\right]=\frac{n}{2^{i+3}}\,\] and thus \(\mathbb{E}_{H}\left[|E(G_{H})|\right]\geq n\cdot\frac{\ell^{2}}{2^{i+3}}= \Omega(\frac{n}{2^{i}\cdot\nu^{2}})\) as required. Consider a pair \(p<q\in[n]\) such that \(2^{w}\leq q-p<2^{w+1}\). The event \((p,q)\in H\) occurs if and only if all the events \(\left\{(p,q)\in\mathcal{E}_{H}^{i}\right\}_{i=0}^{w}\) occurred (note that all these \(w+1\) events are actually equivalent). As \(q-p\geq 2^{w}>\sum_{i=0}^{w}2^{i-1}\), it holds that \[\Pr\left[(p,q)\in H\right]\cdot(q-p) \geq\sum_{i=0}^{w}\Pr_{H\sim\mathcal{D}}\left[(p,q)\in\mathcal{E} _{H}^{i}\right]\cdot 2^{i-1}\] \[=\sum_{i=0}^{m-1}\Pr_{H\sim\mathcal{D}}\left[(p,q)\in\mathcal{E}_{ H}^{i}\right]\cdot 2^{i-1}\geq\sum_{i=s+3}^{m-2}\Pr_{H\sim\mathcal{D}}\left[(p,q)\in \mathcal{E}_{H}^{i}\right]\cdot 2^{i-1}\,\] where the equality holds as for every \(i\geq w+1\), \(\Pr_{H\sim\mathcal{D}}\left[(p,q)\in\mathcal{E}_{H}^{i}\right]=0\). By Claim 32 \[\mathbb{E}_{H\sim\mathcal{D}}\left[w(H)\right] =\sum_{p<q}\Pr_{H\sim\mathcal{D}}\left[(p,q)\in H\right]\cdot(q-p)\] \[\geq\sum_{p<q}\sum_{i=s+3}^{m-2}\Pr_{H\sim\mathcal{D}}\left[(p,q) \in\mathcal{E}_{H}^{i}\right]\cdot 2^{i-1}\] \[=\sum_{i=s+3}^{m-2}\mathbb{E}_{H\sim\mathcal{D}}\left[\left| \mathcal{E}_{H}^{i}\right|\right]\cdot 2^{i-1}\] \[\geq\sum_{i=s+3}^{m-2}\Omega(\frac{n}{2^{i}\cdot\nu^{2}})\cdot 2 ^{i-1}\] \[=\frac{n}{\nu^{2}}\cdot\Omega(m-s-4)=\frac{n}{\nu^{2}}\cdot\Omega (\log(n\cdot\nu))\,\] where the last equality holds as \(m=\log n\), \(s=\log\frac{1}{\nu}\), and thus \(m-s-4=\log\frac{n\cdot\nu}{16}\). The theorem now follows.
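As an illustrative aside (not part of the argument), the double-counting step used in both lower-bound proofs can be checked numerically. The minimal Python sketch below uses toy parameters K and L in place of the number of parts and the number of sub-intervals \(\ell\); it verifies that when one sub-interval is chosen per part uniformly at random, each fixed cross edge survives with probability \(1/\ell^{2}\), so the expected number of surviving edges equals \(|E|/\ell^{2}\).

```python
import itertools
import random

# Illustrative numerical check (not part of the proof): fix a graph whose
# vertices are cells (k, j) with k a part index and j a sub-interval index,
# and whose edges only connect cells in different parts.  Choosing one
# sub-interval j_k per part uniformly at random keeps a fixed cross edge with
# probability 1/L**2, hence E_J[#surviving edges] = #edges / L**2.

K, L = 6, 4          # toy stand-ins for the number of parts and for ell
random.seed(0)

cells = [(k, j) for k in range(K) for j in range(L)]
edges = [(u, v) for u, v in itertools.combinations(cells, 2)
         if u[0] != v[0] and random.random() < 0.3]

def surviving(J):
    """Number of edges whose two endpoint cells are both selected by J."""
    return sum(1 for (k, j), (k2, j2) in edges if J[k] == j and J[k2] == j2)

trials = 100_000
avg = sum(surviving([random.randrange(L) for _ in range(K)])
          for _ in range(trials)) / trials
print(avg, len(edges) / L ** 2)   # the two values should nearly coincide
```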
2307.00070
Map-based cosmology inference with weak lensing -- information content and its dependence on the parameter space
Field-level inference is emerging as a promising technique for optimally extracting information from cosmological datasets. Indeed, previous analyses have shown field-based inference produces tighter parameter constraints than power spectrum analyses. However, estimates of the detailed quantitative gain in constraining power differ. Here, we demonstrate the gain in constraining power depends on the parameter space being constrained. As a specific example, we find that field-based analysis of an LSST Y1-like mock data set only marginally improves constraints relative to a 2-point function analysis in $\Lambda$CDM, yet it more than doubles the constraining power of the data in the context of $w$CDM models. This effect reconciles some, but not all, of the discrepant results found in the literature. Our results demonstrate the importance of using a full systematics model when quantifying the information gain for realistic field-level analyses of future data sets.
Supranta S. Boruah, Eduardo Rozo
2023-06-30T18:20:05Z
http://arxiv.org/abs/2307.00070v1
# Map-based cosmology inference with weak lensing - information content and its dependence on the parameter space ###### Abstract Field-level inference is emerging as a promising technique for optimally extracting information from cosmological datasets. Indeed, previous analyses have shown field-based inference produces tighter parameter constraints than power spectrum analyses. However, estimates of the detailed quantitative gain in constraining power differ. Here, we demonstrate the gain in constraining power depends on the parameter space being constrained. As a specific example, we find that field-based analysis of an LSST Y1-like mock data set only marginally improves constraints relative to a 2-point function analysis in \(\Lambda\)CDM, yet it more than doubles the constraining power of the data in the context of \(w\)CDM models. This effect reconciles some, but not all, of the discrepant results found in the literature. Our results demonstrate the importance of using a full systematics model when quantifying the information gain for realistic field-level analyses of future data sets. keywords: large-scale structure of Universe - gravitational lensing: weak - methods: data analysis ## 1 Introduction Current lensing analyses typically rely on 2-point functions (Hikage et al., 2019; Heymans et al., 2021; Abbott et al., 2022). However, 2-point analyses are sub-optimal due to the highly non-Gaussian nature of the late-time density field. Indeed, one can extract additional cosmological information by supplementing 2-point function measurements with non-Gaussian summary statistics (Takada and Jain, 2003; Kilbinger and Schneider, 2005), e.g. peak counts (Liu et al., 2015; Harnois-Deraps et al., 2021; Zurcher et al., 2022), one-point PDFs (Thiele et al., 2020; Boyle et al., 2021), wavelet transforms (Cheng et al., 2020; Cheng and Menard, 2021; Ajani et al., 2021), and Minkowski functionals (Kratochvil et al., 2012; Petri et al., 2013). Field-level inference (Jasche and Wandelt, 2013; Wang et al., 2014; Modi et al., 2018) is a new approach in which one forward-models the cosmology-dependent density field of the Universe as constrained by the data. A field-based inference approach is fully optimal at any given scale: it automatically and self-consistently incorporates _all_ summary statistics up to the recovered scale. For this reason, it has been proposed to model a broad range of observables, including weak lensing (Porqueres et al., 2021, 2022; Fiedorowicz et al., 2022, 2022), CMB lensing (Millea et al., 2019, 2020, 2021), peculiar velocities (Boruah et al., 2022; Prideaux-Ghee et al., 2022; Bayer et al., 2022), and galaxy clustering (Ramanah et al., 2019; Dai and Seljak, 2022). Although numerically challenging, steady progress in numerical techniques (Modi et al., 2021; Li et al., 2022; Modi et al., 2022; Dai and Seljak, 2022) is helping realize the potential of this new technique. While there is consensus in the literature that field-based inference leads to tighter parameter constraints than 2-point analyses, there are also significant differences in the detailed quantitative measure of this improvement. Leclercq and Heavens (2021) found that field-based inference leads to massive improvement in parameter constraints over 2-pt function analysis, even for only mildly non-Gaussian fields. Similarly, Porqueres et al. (2022, 2023) found large gains for a field-level cosmic shear analysis. By contrast, Boruah et al.
(2022) found field-based inference results in only modest improvements for cosmic shear analyses. In light of these differences, we have set out to examine the information gain from field-level inference of weak lensing data in more detail. ## 2 Formalism We model the convergence field as a lognormal random field. Lognormal fields are commonly used to approximate non-Gaussian density and convergence fields in cosmological applications (Coles and Jones, 1991; Jasche and Kitaura, 2010; Clerkin et al., 2017; Xavier et al., 2016). Throughout this paper, we perform our analysis at a pixel scale of 10 arcminutes. This is sufficiently large for the lognormal distribution to provide a reasonable description of the underlying convergence field (Xavier et al., 2016; Clerkin et al., 2017; Friedrich et al., 2020). We do not consider smaller scales to avoid having to model baryonic feedback, which is expected to significantly impact the matter density distribution at higher resolution (e.g., Eifler et al., 2015; Huang et al., 2019; Osato et al., 2021). When modelled as a lognormal variable, \(\kappa\) is related to a Gaussian variable \(y\) via \[\kappa=e^{y}-\lambda, \tag{1}\] where \(\lambda\) is called the shift parameter. The shift parameter denotes the minimum value that \(\kappa\) can take, and directly impacts the non-Gaussian features of the resulting convergence field. The mean of the \(y\)-field is chosen so as to enforce the condition that the \(\kappa\) field has a zero mean. We use the perturbation theory code cosmomentum (Friedrich et al., 2018, 2020) to calculate the cosmology-dependent shift parameters. For further details on lognormal fields, we refer the reader to Boruah et al. (2022). We use the field-level analysis pipeline of Boruah et al. (2022) to analyze synthetic weak lensing data generated from a lognormal convergence map. To create the synthetic data, we assume the redshift distribution forecasted for LSST-Y1 in The LSST Dark Energy Science Collaboration et al. (DESC-SRD, 2018). We then analyze the synthetic data using two different models: _(i)_ a two-parameter toy model presented in Section 3, and _(ii)_ a cosmological model in which the power spectrum and the shift parameters are determined by the underlying cosmological parameters. Following Leclercq & Heavens (2021), the toy-model analysis of section 3 is non-tomographic. The cosmological analysis of section 4 assumes the data is binned into 4 tomographic bins. ## 3 Toy model with scaling parameters Leclercq & Heavens (2021) used a two-parameter log-normal toy model to demonstrate that field-based analyses can dramatically outperform standard 2-point approaches. This result is apparently in tension with that of Boruah et al. (2023), who find only marginal improvements in a \(\Lambda\)CDM cosmology. To resolve this apparent discrepancy, we analyzed a synthetic data set using two different models: a toy model similar to the one used by Leclercq & Heavens (2021), and the standard \(\Lambda\)CDM model. Our toy model is constructed so that its fiducial point exactly matches our fiducial \(\Lambda\)CDM model. Our fiducial model is a flat \(\Lambda\)CDM universe with \(\Omega_{\rm m}=0.279\), \(\sigma_{8}=0.82\), \(\Omega_{\rm b}=0.046\), \(h=0.7\), \(n_{\rm s}=0.97\). This choice defines the power-spectrum \(C_{y}(\ell)\) and the shift parameter \(\lambda\) of the lognormal random field \(\kappa\), where \(y=\ln(\kappa+\lambda)\), and \(y\) is a Gaussian random field.
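To make the shift parametrization concrete, the following minimal sketch (our illustration, not the analysis pipeline used in this paper) draws a zero-mean lognormal \(\kappa\) field. For a Gaussian \(y\) with mean \(\mu_{y}\) and variance \(\sigma_{y}^{2}\), the mean of \(e^{y}\) is \(\exp(\mu_{y}+\sigma_{y}^{2}/2)\), so setting \(\mu_{y}=\ln\lambda-\sigma_{y}^{2}/2\) enforces a zero-mean \(\kappa\). A full implementation would draw \(y\) from the power spectrum \(C_{y}(\ell)\), whereas the toy code below simply uses independent pixels and made-up parameter values.

```python
import numpy as np

# Minimal sketch (ours, not the paper's pipeline): draw a lognormal
# convergence field kappa = exp(y) - lambda with zero mean, equation (1).
# Pixels here are independent; a real field would be drawn from C_y(l).

rng = np.random.default_rng(42)
npix, sigma_y, lam = 256, 0.3, 0.012       # toy grid size, y-variance, shift parameter

mu_y = np.log(lam) - 0.5 * sigma_y**2      # enforces E[kappa] = 0
y = mu_y + sigma_y * rng.standard_normal((npix, npix))
kappa = np.exp(y) - lam

print(kappa.mean())    # close to zero
print(kappa.min())     # bounded below by -lam, the minimum value kappa can take
```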
Our toy model depends on two parameters \(\alpha\) and \(\beta\) that rescale: 1) the power-spectrum \(C_{y}\); or 2) the shift parameter \(\lambda\). These rescalings are defined via \[\log C_{y}(\ell) \rightarrow \alpha\times\log C_{y}(\ell) \tag{2}\] \[\lambda \rightarrow \beta\times\lambda. \tag{3}\] For simplicity, we refer to this toy model as the \(\alpha\)-\(\beta\) model, with \(\alpha=\beta=1\) corresponding to our fiducial model. As in Leclercq & Heavens (2021), we restrict our analysis to a single tomographic redshift bin, for which we adopt the expected redshift distribution of source galaxies for the LSST-Y1 data set. We produce a lognormal realization of the fiducial model, which we then analyze using the field-based inference framework of Boruah et al. (2022). We perform our analysis both in the toy \(\alpha\)-\(\beta\) model and in the \(\sigma_{8}\)-\(\Omega_{\rm m}\) parameter space. Both analyses rely on the same noisy shear map as the data vector. Figure 1 compares the posteriors for the \(\alpha\)-\(\beta\) model (left) and the \(\Lambda\)CDM model (right). Red and blue contours correspond to posteriors from a field-based (red) and a power spectrum based (blue) analysis. Evidently, field-based inference dramatically improves parameter constraints in our toy model, but has only a modest impact on the posteriors in the \(\sigma_{8}\)-\(\Omega_{\rm m}\) space. This demonstrates that: 1) despite being superficially different, the results of Leclercq & Heavens (2021) and Boruah et al. (2022) are fully consistent with each other; and 2) the amount of information gained from field-based inference depends on the parameter space of interest. Figure 1: Comparison of the constraints obtained using field-based inference and power spectrum analysis for the toy model described in section 3 (_left_), and a \(\Lambda\)CDM cosmological model (_right_). We use the same observed data vector, i.e. the noisy realization of the observed shear field, for the two panels. We plot \(\beta/\alpha^{2}\) on the \(y\) axis for the toy model to account for the strong degeneracy between these two parameters. We see that field-based inference dramatically improves parameter constraints in the \(\alpha\)-\(\beta\) toy model, but has only a modest impact on cosmological posteriors (Boruah et al., 2022). That is, the gains due to field-based inference methods relative to 2-point analyses depend on the parameter space under consideration. We can readily understand the difference in gains between the two parameter spaces as follows. In the \(\alpha\)-\(\beta\) toy model, the 1-point and 2-point functions of the field vary in nonphysical and largely independent ways. However, in the real Universe, the power spectrum and the 1-point PDF are determined by the same physics and therefore contain correlated information. To demonstrate this, we select models from the power spectrum posteriors in each of the two parameter spaces we considered. The models selected are exactly \(2\sigma\) away from the fiducial model and along the degeneracy direction of the 2-point posterior in each space. Figure 2 compares the 1-point function for each of these models. We see that the difference between the 1-point function for each of these models and that of the fiducial model is many times larger in the \(\alpha\)-\(\beta\) parameter space than in the \(\sigma_{8}\)-\(\Omega_{\rm m}\) space.
Moreover, the differences in the 1-point functions in the \(\sigma_{8}\)-\(\Omega_{\rm m}\) parameter space are comparable to the cosmic variance errors, explaining why field-based inference results in only marginal gains relative to the 2-point posterior. In short, the reason the toy-model of Leclercq & Heavens (2021) results in large gains is that it allows for an unphysical de-correlation of the information content of the 1- and 2-point functions of the convergence field. ## 4 Implications for cosmological inference We have seen that the choice of parameter space impacts the gain of field-based inference methods relative to traditional 2-point analyses. This raises the question: are there other cosmological parameters for which the gain in cosmological constraints is large? Here, we compare cosmological constraints in \(w\)CDM models from cosmic shear as derived from field-based and power spectrum analyses. In contrast to the previous section, we perform a tomographic analysis with 4 redshift bins, each containing the same number of galaxies. The redshift distribution of the bins is based on the expected LSST-Y1 redshift distributions. The source density is set to 10 galaxies/arcmin\({}^{2}\). Figure 3 summarizes our results. The figure demonstrates that a field-based approach significantly improves parameter constraints relative to the standard 2-point analysis in a \(w\)CDM cosmology. We quantify the improvement using the covariance matrix of the posterior. Specifically, we define the figure-of-merit \[{\rm FoM}_{ij}=\frac{1}{\sqrt{\det({\rm Cov}[\theta_{i},\,\theta_{j}])}}, \tag{4}\] where \({\rm Cov}[\theta_{i},\,\theta_{j}]\) denotes the covariance matrix of the parameters \(\theta_{i}\) and \(\theta_{j}\) as computed from the MCMC posterior samples. We find that field-based inference leads to an improvement in the figure of merit by factors of 2.2, 2.2, and 2.5 in the \(\Omega_{\rm m}\)-\(A_{\rm s}\), \(\Omega_{\rm m}\)-\(w\) and \(A_{\rm s}\)-\(w\) subspaces, respectively. These improvements are particularly noteworthy in that the cosmological information content of the shear power spectrum begins to saturate at \(\approx 10\) arcmin scales (Kayo et al., 2013; Boruah et al., 2023). That is, field-based analyses are a powerful complement to efforts centered on improving small-scale modeling. As in section 3, the additional information in the field-based inference analysis comes from the 1-point function. This is illustrated in Figure 4. There, we compare: 1) the spread in the predicted 1-point functions obtained by sampling the power-spectrum analysis posterior; and 2) the observational uncertainties in the 1-point distribution. This comparison is done both for \(\Lambda\)CDM and \(w\)CDM posteriors, and for each of the four tomographic bins. We see that the spread in the one-point function within the \(\Lambda\)CDM chain is less than or comparable to the statistical noise in the one-point function measurement. On the other hand, the spread in the predicted 1-point distributions from the \(w\)CDM power spectrum posterior is broader than the observational uncertainties. Consequently, the 1-point distribution function adds significant information to the 2-point analysis for \(w\)CDM models. Conversely, a measurement of the 1-point distribution adds little information within the context of a \(\Lambda\)CDM analysis. Our results are in tension with those of Porqueres et al. (2022) and Porqueres et al.
(2023), who report large gains from field-based inference in a \(\Lambda\)CDM cosmology. Barring numerical issues/bugs in one or both of these codes, this discrepancy can only be attributed to differences in the forward models. The convergence field in Porqueres et al. (2023) is calculated using 2LPT simulations plus ray tracing, whereas we rely on an approximate lognormal model. However, the lognormal model provides a good description of the convergence field at the current resolution (e.g., Xavier et al., 2016; Clerkin et al., 2017; Fiedorowicz et al., 2022) and therefore the massive difference between the two results is surprising. Understanding the sources of this discrepancy is outside the scope of this work, but the difference highlights the need for more extensive testing and detailed comparisons between different field-level inference codes. In this context we note that in Boruah et al. (2022) we have verified that our posteriors match the analytic expectation when using a Gaussian random field model. ## 5 Summary and discussion We used the lognormal model to study the relative information content from field-based and 2-point analyses of the convergence field. We confirm the finding that field-based parameter posteriors are significantly tighter than those of the corresponding 2-point analysis in the case of the Leclercq and Heavens (2021) toy model. Figure 2: Comparison of the 1-point distributions of models that are \(2\sigma\) away from the fiducial value in the 2-point posterior analyses for both the \(\alpha\)–\(\beta\) (red dashed and dotted lines) and \(\sigma_{8}\)–\(\Omega_{\rm m}\) (blue dashed and dotted lines) parameter spaces. The bottom panel shows the differences between the 1-point distributions and that of the fiducial model. The error bars in the bottom panel are the noise in the measured 1-point distribution. Note that the differences in the 1-point distribution are highly significant in the case of the \(\alpha\)–\(\beta\) parameter space, but only marginally significant in the \(\sigma_{8}\)–\(\Omega_{\rm m}\) space. However, we have also demonstrated that the relative gains of field-based inference depend on the specific parameter space being investigated. In particular, we have found field-based inference leads to modest gains in \(\Lambda\)CDM, but large gains in \(w\)CDM. These improvements are driven by the information content in the 1-point distribution of the convergence field. It is important to note that in this analysis we have not considered systematic effects. As we saw in section 3, the constraining power depends on the parameter space considered. Therefore, the addition of systematic parameters to the model will affect our conclusions regarding the impact of field-based inference on cosmological posteriors. That said, several studies in the literature have shown that non-Gaussian information can improve constraints on systematics parameters such as photo-\(z\) biases (Jasche and Wandelt, 2012; Tsaprazi et al., 2023) and intrinsic alignment (Pyne and Joachimi, 2021), which would in turn likely produce gains in cosmological constraining power. Detailed quantification of these gains will require further analyses, which we leave for future work. ## Acknowledgement We thank Alan Heavens for suggestions that led to some of the early tests in the paper and Elisabeth Krause for useful discussions.
The computation presented here was performed on the High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department. SSB is supported by the Department of Energy Cosmic Frontier program, grant DE-SC0020215. ER's work is supported by NSF grant 2009401. ER also receives funding from DOE grant DE-SC0009913 and NSF grant 2206688. ## Data Availability Statement The data underlying this article will be shared on request to the corresponding authors. Figure 4: Spread in the 1-point function calculated for the cosmological parameters drawn from the power spectrum posterior for a \(\Lambda\)CDM (red) and a wCDM analysis (blue). The black bars show the expected statistical error in the recovered distributions including shape noise. The differences in the posterior predictions for the 1-point distributions are larger than the observational errors in wCDM, but smaller in \(\Lambda\)CDM. Consequently, field-based inference leads to large improvements in parameter constraints in the context of wCDM, but only modest improvements in \(\Lambda\)CDM. Figure 3: Comparison of the cosmological constraints with power spectrum analysis (blue) and map-based inference (red) for the wCDM parameter space. We find that map-based inference leads to much stronger constraints than a power spectrum based analysis. This is in contrast to our findings within the context of \(\Lambda\)CDM, where field-based inference resulted in only modest improvements.
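As a concrete illustration of the figure of merit defined in equation (4), the minimal sketch below (the array and function names are ours; it assumes posterior samples are stored as a NumPy array of shape (n_samples, n_params)) computes \({\rm FoM}_{ij}\) from MCMC samples. The improvement factors quoted in Section 4 correspond to ratios of this quantity between the field-based and power spectrum analyses.

```python
import numpy as np

# Minimal sketch of the figure of merit of equation (4), computed from MCMC
# posterior samples; i and j index the two parameters of the chosen subspace.

def figure_of_merit(samples, i, j):
    """FoM_ij = 1 / sqrt(det Cov[theta_i, theta_j])."""
    cov = np.cov(samples[:, [i, j]], rowvar=False)   # 2x2 sample covariance
    return 1.0 / np.sqrt(np.linalg.det(cov))

# Toy usage with a Gaussian mock posterior (hypothetical numbers).
rng = np.random.default_rng(1)
mock = rng.multivariate_normal([0.3, -1.0], [[1e-4, 2e-5], [2e-5, 4e-3]], size=20_000)
print(figure_of_merit(mock, 0, 1))
```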
2309.14581
Assessing Utility of Differential Privacy for RCTs
Randomized control trials, RCTs, have become a powerful tool for assessing the impact of interventions and policies in many contexts. They are considered the gold-standard for inference in the biomedical fields and in many social sciences. Researchers have published an increasing number of studies that rely on RCTs for at least part of the inference, and these studies typically include the response data collected, de-identified and sometimes protected through traditional disclosure limitation methods. In this paper, we empirically assess the impact of strong privacy-preservation methodology (with differential privacy (DP) guarantees) on published analyses from RCTs, leveraging the availability of replication packages (research compendia) in economics and policy analysis. We provide simulation studies and demonstrate how we can replicate the analysis in a published economics article on privacy-protected data under various parametrizations. We find that relatively straightforward DP-based methods allow for inference-valid protection of the published data, though computational issues may limit more complex analyses from using these methods. The results have applicability to researchers wishing to share RCT data, especially in the context of low- and middle-income countries, with strong privacy protection.
Soumya Mukherjee, Aratrika Mustafi, Aleksandra Slavković, Lars Vilhuber
2023-09-26T00:10:32Z
http://arxiv.org/abs/2309.14581v1
# Assessing Utility of Differential Privacy for RCTs ###### Abstract Randomized control trials, RCTs, have become a powerful tool for assessing the impact of interventions and policies in many contexts. They are considered the gold-standard for inference in the biomedical fields and in many social sciences. Researchers have published an increasing number of studies that rely on RCTs for at least part of the inference, and these studies typically include the response data collected, de-identified and sometimes protected through traditional disclosure limitation methods. In this paper, we empirically assess the impact of strong privacy-preservation methodology (with differential privacy (DP) guarantees) on published analyses from RCTs, leveraging the availability of replication packages (research compendia) in economics and policy analysis. We provide simulation studies and demonstrate how we can replicate the analysis in a published economics article on privacy-protected data under various parametrizations. We find that relatively straightforward DP-based methods allow for inference-valid protection of the published data, though computational issues may limit more complex analyses from using these methods. The results have applicability to researchers wishing to share randomized control trial (RCT) data, especially in the context of low- and middle-income countries, with strong privacy protection. ## 1 Introduction Randomized control trials, RCTs, have become a powerful tool for assessing the impact of interventions and policies in many contexts (e.g., in economics, see Esther Duflo's Nobel Prize lecture Duflo, 2020). Today, they are considered the gold-standard for inference in the biomedical fields and in many social sciences. In economics, much of the growth has been since the 1990s. Studies can involve small-scale interventions, randomized at the personal, family, or village level, but are sometimes also measured with province- or national-level outcomes. Researchers have published an increasing number of studies that rely on RCTs for at least part of the inference. In a parallel development, the quest for improved transparency in the social sciences has led to more of the supplementary materials for articles to be made public as "replication packages". For instance, the American Economic Association (AEA) journals for applied economics (AEJ:Applied) and economic policy (AEJ:Economic Policy) have required that analysis data and code be made available since the journals' creation in 2009. The increased availability of complete replication packages has allowed other researchers to leverage the materials, and conduct re-analyses and meta-analyses, furthering our understanding of the methods as well as of the conclusions drawn from these studies. Meager (2019) re-analyzed numerous RCTs to assess the robustness of their findings using Bayesian hierarchical analysis (BHA). Roth (2022) selected event studies for which complete replication packages were available, to re-analyze them in light of pre-treatment time trends. These kinds of studies are possible because of the increased availability of complete replication materials.1 Footnote 1: It should be noted that Roth (2022) still had to exclude nearly four times as many papers as they included because data were not readily available. The data included in such replication packages usually allow one to reproduce the results in the papers exactly, suggesting that all the analysis is conducted on these data.
However, the typical guidance followed by researchers who conduct RCTs (Department of Health and Human Services, 2012; Kopper, Sautmann and Turitto, 2020; DIME, 2020) suggests primarily de-identification, the most basic anonymization, as the protection mechanism, and where further anonymization is suggested, more traditional disclosure avoidance methods are recommended (e.g., \(l\)-diversity, Machanavajjhala et al. (2006); Hundepool et al. (2012), and other aggregation-based methods). Differential privacy (DP) (Dwork et al., 2016) is sometimes referenced (Wood et al., 2021), but we are not aware of the application of DP in the context of RCTs. This suggests that much of the current literature on RCTs publishes replication packages that contain inadequately protected data. This is particularly concerning in the economic data setting we are exploring because many of these studies have data from respondents in low- and middle-income countries (LMIC). One of the reasons for the absence of strong privacy protection methods in this literature is that no tools are available to non-specialists that would allow for easy but efficient protection using differentially private methods. Efficiency here is defined as "perturbing inference as little as possible compared to the unprotected inference." We note that inference even in the "unprotected" case is already subject to uncertainty that is often not adequately taken into account, as evidenced by Meager (2019). This is even more important for the uncertainty and data modifications that are generated through statistical disclosure limitation (SDL). Abowd and Schmutte (2015); Slavkovic and Seeman (2022) demonstrate the need to account for the privacy-preserving noise in analyses. Slavkovic and Seeman (2022), and references therein, discuss a way to make an adjustment for privacy-preservation noise in addition to other sources of uncertainty. The present article is part of a project that aims to provide an assessment of the feasibility of using privacy enhancing technologies (PETs), in particular differentially private methods, for data publication and adjusted inference in the context of RCTs. More broadly, we contribute to a literature on privacy-aware analysis, and privacy-aware planning for such analyses. The project is, as far as we know, the first systematic exploratory analysis of RCTs to understand the impact of privacy preservation, with a focus on LMIC data; here we report on some of our early explorations. Broadly, we aim to contribute along two separate dimensions. First, we will assess the feasibility of privacy protections that are stronger than the simple de-identification usually used, in the context of data collected for RCTs, taking into account the ability to make robust inferences. Second, we do so while maintaining feasibility of application, here defined as computationally feasible on commodity hardware used by researchers in these fields (and in particular, in LMIC). Our focus on RCTs is intentionally narrow. We believe that exploring the impact of privacy-preserving technologies in the context of RCTs is useful for several reasons. First, methods are, in general, quite straightforward: standard linear regression, difference-in-difference methods, possibly even simple differences in means across treated and untreated populations.
These are amongst the first analysis methods for which adaptations to DP protection have been studied (e.g., Awan and Slavkovic, 2020; Alabi et al., 2020; Slavkovic and Molinari, 2021; Barrientos et al., 2018; Bowen et al., 2020). If formal privacy-preserving methods cannot be made to work "out-of-the-box" and at scale in this context, then it will be much more difficult to argue for broader application. Second, most RCTs are small-scale, using samples of the overall population, potentially allowing us to leverage privacy-amplifying methods (Balle, Barthe and Gaboardi, 2018), while also not facing insurmountable computational constraints. Third, RCTs are often accompanied by pre-analysis plans, with specific hypotheses in mind and with the intent to avoid false discovery. These areas have also been explored within the DP framework (e.g., Vu and Slavkovic, 2009; Pistner, 2020; Dwork, Su and Zhang, 2021). Furthermore, it is already understood in the privacy community that the inherent noisiness of the sampling may affect inference (e.g., Slavkovic and Seeman, 2022). The analogy between adding noise for the purposes of BHA (Meager, 2019) and adding noise for privacy protection may be convenient for improving acceptance of such methods. A similar Bayesian framework can be used to adjust noisy inference due to privacy (e.g., Seeman, Slavkovic and Reimherr, 2020). Specifically, we explore the impact of privacy-preserving methods through a set of simulations and in the context of a published study, Blattman, Jamison and Sheridan (2017_a_) [henceforth the "Liberia study"]. The Liberia study is one of many published articles based on RCTs, for which typically the complete data underlying the study is available. ## 2 Problem setup We focus on a particular key table in the Liberia study, Table 2. It is the result of several independent regressions, each with a separate dependent (response) variable of interest, measured in the short term (Panel a) and long term (Panel b), controlling for a set of assignment indicators for the various treatments, other covariates, and stratifiers.3 These are "intent-to-treat" (ITT) regressions. They are typical of RCTs, where the experimenter is interested in determining whether a particular treatment has any effect on a response variable when the treatment is applied to an entity, individual, or treatment unit. The experimental treatment is randomly assigned to the treatment units according to some chosen experimental design, and response variables are recorded at various points after the treatment. As is typical for this type of analysis, the Liberia study has both discrete and continuous covariates. The response variables are also a mix of continuous and discrete outcomes.
However, these covariates also pose a privacy concern for the treatment units participating in the study. An attacker, who may access the database provided as part of the replication package, may be able to reidentify individuals, and learn new, sensitive information about them. This would constitute a privacy violation for the treatment units. Thus, we consider a setting where the typical researcher attempts to pursue three goals: publish a sufficiently precise inference of the effect of the treatment on the treated from the model, given the data (Aim 1); release (publish) the database so that others can scrutinize the analysis; and protect the privacy of the respondents whose data is contained in the database (Aim 2). In this paper, we focus on privacy-protection that in part relies on differentially private synthetic data generation, and assess the ability to suitably meet Aims 1 and 2. ## 3 Synthetic data generation approach based on perturbed histogram Let the analysis data be available in the form of a dataframe with \(n\) rows and \(p+t+b+1\) columns, where \(n\) is the total number of treatment units, \(p\) is the number of covariates, \(t\) is the number of mutually exclusive treatment assignments (the control group being one of them), and \(b\) is the number of blocking variables. One column contains the values of the response variable \(y\).4 Footnote 4: In the Liberia study, there are many response variables, but each is treated independently, and can be assumed to be a different dataframe with the same \(p+t+b\) columns as other dataframes. Assuming that a linear regression model is suitable, the regression model of interest in the absence of blocking variables is given by \[y_{i}=\alpha+\sum_{k=1}^{b}\tau_{k}T_{k,i}+\sum_{l=1}^{p}\gamma_{l}X_{l,i}+ \epsilon_{i},\quad i=1,\ldots,n, \tag{1}\] where \(T_{k}\) represent the dummy variables for the treatment level combinations and \(X_{l}\) represent the covariates variables associated with the \(n\) treatment units and \(\epsilon_{i}\overset{i.i.d}{\sim}N(0,\sigma^{2})\). When stratification is used with a total of \(m\) block combinations and \(n_{j}\) treatment units are assigned to \(j\)-th block combination, the corresponding regression model is given by \[\begin{split}& y_{ij}=\alpha+\sum_{k=1}^{b}\tau_{k}T_{k,i}+\sum_{l=1} ^{p}\gamma_{l}X_{l,ij}+\epsilon_{ij}\\ & i=1,\ldots,n_{j},j=1,\ldots,m,\sum_{j}^{m}n_{j}=n\end{split} \tag{2}\] In both the above models, the parameter(s) of interest to the experimenter are the fixed effects \(\tau_{k}\), \(k=1,\ldots,b\). From the point of view of the experimenter, statistical utility is preserved if the inference concerning the fixed effects \(\tau_{k}\) is affected as little as possible by the data release mechanism used to sanitize the data in order to protect privacy. Using Equation (1) and the private dataframe \(D=[Y,T,X]\), we obtain point estimates of regression coefficients \(\hat{\tau}_{k}\) and \(\hat{\gamma}_{l}\), along with a point estimate of the residual variance \(\hat{\sigma}^{2}\). We now adopt a synthetic data generation approach that aims to preserve the inference concerning the (estimated) fixed effects \(\hat{\tau}_{k}\) while ensuring that the data release mechanism produces a (protected) dataframe \(\widetilde{D}\) with \(n\) observations5 and which satisfies \(\epsilon\)-differential privacy (DP), with the following caveats. 
Footnote 5: We note that it is not strictly necessary to output the exact same \(n\) observations as in the private data frame, but this seems to be the convention. * We prioritize inference validity by using the private parameters \(\hat{\tau}\) estimated on the private data \(D\) as part of the algorithm. The released \(\widetilde{\tau}\) is protected because it is based on \(\epsilon\)-DP-protected \(\widetilde{X}\) (as outlined below), but is not itself DP-protected. * We assume that the \(t\) treatment assignments do not need to be protected. As assignment is randomized, the \(t\) columns are independent of (orthogonal to) the \(p\) covariate columns, and contain no information as to the sensitive attributes. In the mechanism, we simply redraw new assignments conforming to the specified design of the RCT. * We ignore the effect of the sanitization mechanism on parameters \(\gamma\), because \(\gamma\) are not released (published). To create \(\epsilon\)-DP covariate synthetic data \(\widetilde{X}\), we follow one of the originally proposed mechanisms by sampling from the perturbed histogram (e.g., see Dwork et al., 2006\(a\); Wasserman and Zhou, 2010). We use the covariate information \(X\) to construct a generative model using a multidimensional histogram. Where necessary, we choose precision \(\zeta\) and discretize continuous variables. The histogram counts are sanitized using the Laplace mechanism, adding independent noise with mean 0 and scale parameter \(2/\epsilon\) to each count. We sample from the protected histogram to generate \(\epsilon\)-DP protected \(\widetilde{X}\). We then reimplement the treatment design by doing random assignment (where appropriate, within strata) for \(n\) synthetic treatment units, thus creating treatment indicators \(\widetilde{T}\). The private parameter estimates \(\hat{\tau}_{k}\), the treatment indicators \(\widetilde{T}\), the protected covariates \(\widetilde{X}\), and a suitable model are then used as a generative model for the protected response variable \(\widetilde{Y}\). Note that one suitable model is (1), but we also experiment with other data generating processes. Finally, we can publish \(\widetilde{\tau}_{k}\) and associated standard errors, estimated using (1) and \(\widetilde{D}=\left[\widetilde{Y},\widetilde{T},\widetilde{X}\right]\), and release \(\widetilde{D}(\epsilon,\zeta)\) as part of the replication package. In principle, it is possible to extend this approach to other regression models (such as logistic regression) which might be more suitable than linear regression in some scenarios. ### Algorithm Here we describe the basic algorithm for the case where there are no blocking variables. The only change for the case where there are blocking variables is in the experimental design used to assign treatment levels and block combinations to the \(n\) synthetic treatment units, which is straightforward. 1. Construct a multivariate histogram for the \(p\)-dimensional covariate data \(X\). The number of bins along each of the dimensions corresponding to the continuous variables is taken to be of the order \(n^{\zeta}\) (with \(\zeta\) defined below), and the number of bins along the dimensions corresponding to the discrete variables is equal to the known number of distinct values of the variable.6 Let \(q\) be the number of bins required to construct the histogram. Let \(C_{i}\) be the count/frequency of the observations in the covariate dataframe corresponding to the \(i\)-th bin, \(i=1,\ldots,q\).
Let \(C\) be the vector of counts given by \(C=(C_{1},\ldots,C_{q})\). Footnote 6: Strictly, we would also collapse discrete variables if the number of distinct values is greater than \(n^{\zeta}\), but in practice, this never occurred. 2. Draw \(q\) i.i.d observations \(Z_{1}\),..., \(Z_{q}\) from a Laplace distribution with location parameter/mean 0 and variance \(8/\epsilon^{2}\) (equivalently scale parameter \(2/\epsilon\)). Compute the sanitized vector of counts \(F=(F_{1},\ldots,F_{q})\) where \(F_{i}=C_{i}+Z_{i}\), \(i=1,\ldots,q\). Since some of the sanitized counts could be negative valued, we transform the negative counts to 0 and renormalize the counts to obtain a vector of sanitized relative frequencies as \(\widetilde{F}=(\widetilde{F}_{1},\ldots,\widetilde{F}_{q})\) where \(\widetilde{F}_{i}=\frac{F_{i}\mathbf{I}_{F_{i}>0}}{\sum_{j=1}^{q}F_{j}\mathbf{I}_{F_{j}>0}}\), \(i=1,\ldots,q\). 3. Draw \(n\) i.i.d \(p\)-dimensional vectors \(\widetilde{X}_{1}\),..., \(\widetilde{X}_{n}\) using simple random sampling with replacement from the \(q\) bins of the constructed histogram in Step 1 using the sanitized relative frequency vector \(\widetilde{F}\) as the corresponding probabilities of each of the \(q\) bins. The sanitized covariate dataframe is denoted by \(\widetilde{X}^{n\times p}=\left[\widetilde{X}_{1}^{T}\,\,...\,\,\widetilde{X}_{n}^{T}\right]^{T}\). 4. Construct the \(t\) dummy variables corresponding to the treatment assignments using the experimental design and denote it by \(\widetilde{T}^{n\times t}=\left[\widetilde{T}_{1}\,\,...\,\,\widetilde{T}_{t}\right]\). The synthetic dataframe corresponding to the treatment level assignment dummy variables and the covariates is denoted as \(\widetilde{M}=[\widetilde{T},\widetilde{X}]\). 5. Compute \(\hat{\tau}_{k}\), \(\hat{\gamma}_{l}\) and \(\hat{\sigma}^{2}\) based on linear regression analysis using the original dataframe (without any privatization). (We can generalize this to any regression model). 6. Construct \(\widetilde{Y}=(\widetilde{Y}_{1},\,...\,,\widetilde{Y}_{n})\) from the privately computed \(\hat{\tau}_{k}\), \(\hat{\gamma}_{l}\) and \(\hat{\sigma}^{2}\) via \[\widetilde{Y}_{i}=\widetilde{M}_{i}\hat{\beta}+E_{i}\] where \(\widetilde{M}_{i}\) denotes the \(i\)-th row of \(\widetilde{M}\) and \(E_{i}\stackrel{{ i.i.d}}{{\sim}}N(0,\hat{\sigma}^{2})\), \(i=1,\,...\,,\,n\). (We can generalize this to any prediction model based on an estimated regression model). 7. Release \(\widetilde{D}(\epsilon,\zeta)=[\widetilde{Y},\,\widetilde{M}]=[\widetilde{Y},\widetilde{T},\widetilde{X}]\), \(\widetilde{\tau}\) and its associated standard errors. The proof of the differential privacy guarantee is based on Proposition 1 in Dwork et al. (2006\(b\)) along with the post-processing property of pure differential privacy, while the statistical optimality is based on Theorem 4.4 of Wasserman and Zhou (2008). We explore several variations of this algorithm. To assess the contribution to variability due to the resampling from the histogram (Step 3), we compute a version of the data where no noise is injected at Step 2 (denoted by \(\epsilon=\infty\)). We also experiment with variations in \(\zeta\), the precision of the discretization of continuous variables. We initially choose \(\zeta=2/3\) and denote these as "high-precision" models. By reducing \(\zeta\) to \(1/3\), thus coarsening the histogram created, we introduce a loss of precision of the synthesis, but reduce the computational cost as well. These results are denoted as "low-precision" models.
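To make the mechanism concrete, the following minimal sketch (our illustration, not the implementation used for the results below) carries out Steps 1-3 and 5-6 for purely continuous covariates, representing each sampled bin by its midpoint and standing in for Step 4 with a simple permutation of the observed treatment dummies. Note that for a formal DP guarantee the bin edges should be fixed from public bounds rather than from the private minima and maxima used here for brevity.

```python
import numpy as np

def dp_synthesize(y, T, X, eps, zeta, rng):
    """Sketch of the perturbed-histogram synthesis (Steps 1-3 and 5-6).

    y: (n,) response, T: (n, t) treatment dummies, X: (n, p) continuous
    covariates, eps: privacy budget, zeta: discretization precision.
    """
    n, p = X.shape
    nbins = max(2, int(round(n ** zeta)))

    # Step 1: multivariate histogram of the covariates.  (For DP, the bin
    # edges should come from public bounds, not the private min/max.)
    edges = [np.linspace(X[:, l].min(), X[:, l].max(), nbins + 1) for l in range(p)]
    counts, _ = np.histogramdd(X, bins=edges)

    # Step 2: add Laplace(0, 2/eps) noise, zero out negatives, renormalize
    noisy = counts + rng.laplace(scale=2.0 / eps, size=counts.shape)
    probs = np.clip(noisy, 0, None).ravel()
    probs /= probs.sum()

    # Step 3: resample n covariate vectors from the sanitized histogram,
    # representing each sampled bin by its midpoint
    flat = rng.choice(probs.size, size=n, p=probs)
    idx = np.unravel_index(flat, counts.shape)
    mids = [0.5 * (e[1:] + e[:-1]) for e in edges]
    X_syn = np.column_stack([mids[l][idx[l]] for l in range(p)])

    # Step 4 (simplified): redraw the treatment assignment; a permutation of
    # the observed dummies stands in for re-running the experimental design
    T_syn = T[rng.permutation(n)]

    # Steps 5-6: fit the regression on the private data, then simulate the
    # synthetic response from the fitted model
    M = np.column_stack([np.ones(n), T, X])
    beta, _, _, _ = np.linalg.lstsq(M, y, rcond=None)
    sigma2 = np.sum((y - M @ beta) ** 2) / (n - M.shape[1])
    M_syn = np.column_stack([np.ones(n), T_syn, X_syn])
    y_syn = M_syn @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

    # Step 7: release the synthetic dataframe (y_syn, T_syn, X_syn)
    return y_syn, T_syn, X_syn
```

In this sketch, a very large eps effectively reproduces the non-DP synthesis (\(\epsilon=\infty\)), since the Laplace noise becomes negligible, and zeta controls the number of bins per continuous dimension, which drives the memory and run-time trade-off behind the high- and low-precision variants just described.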
Such low-precision synthesis may be relevant when the protection mechanism needs to be run on researchers' laptops, rather than on a high-performance compute cluster. In fact, in the Liberia study, we initially scaled \(p\), the number of covariates, to be feasibly computed on a laptop, given the size \(q\) of the histogram and 32-bit limitations in R. This turned out to be fewer covariates than originally used by the authors. Finally, for a given low-precision \(\zeta\), we exploit the ability to add additional variables, allowing us to better approximate the authors' original model. These results are denoted as "expanded specification." ## 4 Evaluating the Mechanism Given a private dataset \(D=[Y,M]\) and a sanitized, that is, protected version of the same dataset (synthetic dataset) \(\widetilde{D}=[\widetilde{Y},\widetilde{M}]\) obtained using our proposed algorithm with a given privacy budget \(\epsilon\) and precision \(\zeta\), we compute the following four metrics of comparison to verify whether Aim 1 is achieved: 1. **Metric 1 - C.I. overlap indicator:** This binary (0 or 1) metric computes whether there is any overlap between the 95% confidence intervals (C.I.) for the regression coefficients (individual C.I.'s for each regression coefficient) computed based on the private dataset and the protected dataset. 2. **Metric 2 - Estimate coverage by sanitized C.I. indicator:** This binary (0 or 1) metric computes whether the point estimates for the regression coefficients computed based on the private dataset fall within the confidence intervals for the regression coefficients computed based on the sanitized dataset. A value of 1 indicates that the deviation of the inference regarding the regression coefficients based on the private dataset from the same inference based on the sanitized dataset is likely to be small. 3. **Metric 3 - C.I. overlap measure:** This metric computes a measure of the overlap between the 95% confidence intervals (C.I.) for the regression coefficients (individual C.I.'s for each regression coefficient) computed based on the private dataset and the protected dataset (Karr et al., 2006). Specifically, having chosen a particular regression coefficient \(\beta\), let \((L,U)\) be the C.I. for \(\beta\) computed based on the unsanitized dataset and \((\widetilde{L},\widetilde{U})\) be the C.I. for \(\widetilde{\beta}\) computed based on the sanitized dataset. Let \(L^{\mathit{over}}=\max(L,\widetilde{L})\) and \(U^{\mathit{over}}=\min(U,\widetilde{U})\). Then the average overlap in confidence intervals \(\widetilde{O}\) is \[\widetilde{O}=\frac{1}{2}\left[\frac{U^{\mathit{over}}-L^{\mathit{over}}}{U-L}+\frac{U^{\mathit{over}}-L^{\mathit{over}}}{\widetilde{U}-\widetilde{L}}\right]\,.\] This metric is a continuous measurement version of Metric 1. The average overlap \(\widetilde{O}\) can vary between 0 and 1, with higher values near 1 indicating that there is a large degree of overlap. Thus, higher values (near 1) indicate that the deviation of the inference regarding the regression coefficients based on the private dataset from the same inference based on the protected dataset is small. 4. **Metric 4 - Empirical Squared Error in Estimate:** This metric computes \((\beta-\widetilde{\beta})^{2}\), the square of the difference between the private and sanitized point estimates of the regression coefficients.
Smaller values (near 0) indicate that the deviation of the inference regarding the regression coefficients based on the private dataset from the same inference based on the sanitized dataset is small. In order to verify whether Aim 2 is satisfied, we choose a statistic (**Metric 5**) that depends only on the private covariate data \(X\), computing it for pairs of datasets, treating the original dataset as the benchmark. In this study, we use the empirical squared error, the squared difference between the values of the statistic computed on the two datasets. All statistics are averaged across multiple runs of the algorithm. Thus, metrics 1 and 2 will be reported as proportions, and metrics 3, 4 and 5 will be reported as means. We emphasize that in practice, researchers would likely only use a single run of the algorithm, and not publish multiple versions of the synthetic dataset. In the next two sections, we use these metrics to evaluate the performance of our proposed algorithm, first on simulation studies, and then on the Liberia study. ## 5 Numerical Experiments There are two separate sources of noise addition to the original private dataset. The first source is the statistical noise introduced due to the uncertainty involved in estimating the distribution of the covariate data and the sampling of the synthetic dataset from the histogram. The second source is due to differential privacy (addition of Laplace noise). To assess the individual effect of noise from the second source separately from the first source, we perform the same synthetic data generation process, but without the addition of DP noise to the histogram counts (Step 2), creating what we refer to as the non-DP synthetic dataset \(D^{*}=\widetilde{D}(\infty,\zeta)\). We then calculate the above four metrics, using \(D^{*}\) instead of \(\widetilde{D}\), and compare the two sets. This way, we empirically observe the effect of the differential privacy constraint on our data generation process. Additionally, if the comparison metric values for the DP and non-DP procedures do not differ very much, we could argue for the DP implementation in practice since non-DP outputs are vulnerable to reconstruction attacks and other forms of loss in privacy. We perform two simulation studies to capture the above comparisons. Simulation Study 1 uses a single covariate, which we further explore by drawing from different distributions. We discuss here Simulation Study 1 with a single sensitive covariate simulated from the Uniform distribution. In Appendix A, we report on simulations where the covariate is based on the Beta distribution. Simulation Study 2 generalizes the covariate structure to include a mix (discrete and continuous variables). It leads to qualitatively similar results, and is reported in Appendix A. Throughout this section and in the next, we consider 3 different choices of the privacy-loss budget \(\epsilon=\{0.1,0.5,1\}\), and we use \(\epsilon=\infty\) to denote non-DP synthetic data. For a given privacy budget, we simulate 100 different private datasets (response variable, treatment variable and covariates combined). For each of these 100 datasets, we independently generate 20 protected synthetic datasets using our proposed algorithm. We evaluate the OLS point estimates and confidence intervals for the regression coefficients when computing Metrics 1, 2, 3 and 4 to measure the degree of preservation of utility of the inference, not only for the treatment effects but also for the other regression coefficients.
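For concreteness, the comparison metrics of Section 4 reduce, for a single coefficient, to a few lines of code. The sketch below (the helper and its argument names are ours) takes the private point estimate and C.I. together with their sanitized counterparts and returns Metrics 1-4.

```python
def ci_metrics(b, lo, hi, b_s, lo_s, hi_s):
    """Metrics 1-4 for one regression coefficient.

    (b, lo, hi): private point estimate and 95% C.I.;
    (b_s, lo_s, hi_s): the same quantities from the sanitized dataset.
    """
    lo_over, hi_over = max(lo, lo_s), min(hi, hi_s)
    m1 = int(hi_over > lo_over)              # Metric 1: C.I. overlap indicator
    m2 = int(lo_s <= b <= hi_s)              # Metric 2: private estimate covered by sanitized C.I.
    # Metric 3: average C.I. overlap (Karr et al., 2006); set to 0 when the intervals are disjoint
    m3 = 0.0
    if hi_over > lo_over:
        m3 = 0.5 * ((hi_over - lo_over) / (hi - lo) + (hi_over - lo_over) / (hi_s - lo_s))
    m4 = (b - b_s) ** 2                      # Metric 4: squared error in the point estimate
    return m1, m2, m3, m4

# Example: a sanitized C.I. slightly shifted relative to the private one
print(ci_metrics(1.00, 0.80, 1.20, 1.05, 0.83, 1.27))
```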
To compute Metric 5, we choose the variance of the covariate \(x_{1}\). ### Results of Simulation Study 1 For our first simulation study we consider a dataframe with \(n=100\) observations, 1 treatment variable, \(t_{1}\), with two treatment levels, "0" and "1" denoting whether or not the treatment was applied to the corresponding treatment unit, and \(p=1\) continuous covariate, \(x_{1}\). We considered two different distributions for the continuous covariate: Uniform(-5,5) and Beta(1,2). The treatment variable is generated from a binomial distribution with equal probabilities for the two treatment levels. All variables are generated independently of each other. We choose the true regression coefficients as \(\alpha=0.05\) (intercept term), \(\tau_{1}=1\), \(\gamma_{1}=0.2\), and the true residual variance to be \(0.5\). In Table 1, we compute the metric values for three different choices of the privacy budget \(\epsilon=0.1,0.5\), \(1\), and non-DP synthetic data, that is \(\epsilon=\infty\). We observe that Metric 1 always has value 1, indicating that in all these cases there is always an overlap between the confidence intervals based on the original and synthetic data. The values under Metric 2 indicate that in all the cases, for all the regression coefficients, around \(94-95\%\) of the time the point estimate from the original/unsanitized dataset lies within the confidence intervals for the regression coefficients computed based on the synthetic dataset. From the values under Metric 3 we can conclude that the measure of overlap between the confidence intervals of the original and synthetic dataset is around \(78-79\%\). From the values under Metric 4, we observe that the squared differences between the point estimates of the regression coefficients based on the unsanitized dataset and the sanitized dataset are quite small. We observe that, irrespective of the privacy budget \(\epsilon\), the effect of the privatization on the utility of the estimates of the regression parameters is quite small. Thus, we can conclude on the basis of these results that the utility of the inference regarding the treatment effects as well as the remaining regression coefficients is preserved to a large extent even under privatization using our proposed algorithm. On the other hand, as the privacy budget \(\epsilon\) decreases, we expect larger degrees of distortion of the covariate data in the synthetic data generation process. Thus, we should expect larger differences (as \(\epsilon\) decreases) between the values of sensitive statistics (which depend on the sensitive covariate data and for which we aim to provide privacy protection) when computed using the unsanitized dataset and the sanitized dataset. From Table 2, we observe that the squared differences between the sensitive statistic (which we chose to be the variance of \(x_{1}\)) based on the private dataset and the sanitized dataset increase as the privacy budget \(\epsilon\) decreases. Other choices of the sensitive statistic also yield similar results. Thus, we conclude that both Aim 1 and Aim 2 are satisfied to a large extent, based on this simulation study using uniform covariates. We also observe that the values for non-DP synthetic data are similar to the values computed based on the protected data generation procedure, thus empirically supporting our conclusion that adding differential privacy guarantees does not come at much extra cost.
Further, in Table 2 we compute Metric 5 (MSE) for the sensitive statistic (the variance of \(x_{1}\)) based on both DP and non-DP synthetic data generation procedures. The larger value of Metric 5 using DP synthetic data generation in comparison to the smaller value using non-DP synthetic data generation is indicative of the additional distortion introduced by the privatization. \begin{table} \begin{tabular}{l l c c c c} \hline \hline **Privacy budget** & **Variable names** & **Metric 1** & **Metric 2** & **Metric 3** & **Metric 4** \\ \hline \(\epsilon=0.1\) & (Intercept) & 1.00000 & 0.95000 & 0.79427 & 0.01127 \\ & \(t_{1}\) & 1.00000 & 0.94650 & 0.79718 & 0.02099 \\ & \(x_{1}\) & 1.00000 & 0.95350 & 0.78564 & 0.00076 \\ \hline \(\epsilon=0.5\) & (Intercept) & 1.00000 & 0.95450 & 0.79703 & 0.01054 \\ & \(t_{1}\) & 1.00000 & 0.94600 & 0.79684 & 0.02094 \\ & \(x_{1}\) & 1.00000 & 0.94750 & 0.79166 & 0.00069 \\ \hline \(\epsilon=1\) & (Intercept) & 1.00000 & 0.95000 & 0.79809 & 0.01046 \\ & \(t_{1}\) & 1.00000 & 0.94900 & 0.79737 & 0.02094 \\ & \(x_{1}\) & 1.00000 & 0.95700 & 0.79582 & 0.00065 \\ \hline \(\epsilon=\infty\) & (Intercept) & 1.00000 & 0.95400 & 0.80176 & 0.00987 \\ & \(t_{1}\) & 1.00000 & 0.94900 & 0.79460 & 0.02119 \\ & \(x_{1}\) & 1.00000 & 0.95500 & 0.79557 & 0.00064 \\ \hline \hline \end{tabular} \end{table} Table 1: Effect on inference regarding regression coefficients measured using Metrics 1-4 for Simulation Study 1 with uniform covariate, averaged over 100 simulations of the sensitive dataframe, using 20 independently generated synthetic dataframes (with varying privacy-loss budgets \(\epsilon=0.1,0.5,1\) and \(\epsilon=\infty\), that is, non-DP synthetic data) for each sensitive dataframe. ## 6 Application to "Reducing Crime and Violence: Experimental Evidence from Cognitive Behavioral Therapy in Liberia" (Blattman, Jamison and Sheridan, 2017_a_) In this section, we apply and evaluate the potential of the proposed methodology on a real-world randomized control trial by focusing on the analyses as reported in Blattman, Jamison and Sheridan (2017_a_). The associated replication files, including the de-identified data, are available in Blattman, Jamison and Sheridan (2017_b_). ### Setup For our evaluation, we focus on the results reported in Table 2 Panel B of Blattman, Jamison and Sheridan (2017_a_). Specifically, the authors consider the long-term (12-13 months after the program)7 effect of therapy and cash grant on a variety of outcome variables both individually and through a summary indicator called _Antisocial behaviours z-score_ (referred to as fam_asb_lt). The sample is composed of 999 high-risk youths in Monrovia, Liberia. A \(2\times 2\) factorial design is used with two stratification variables based on the groups the youths were in when they were randomly assigned the treatments, once at the time of being assigned to therapy (there were 55 such groups), and once at the time of being assigned to receive a cash grant of 200 USD (there were 20 such groups). Blattman, Jamison and Sheridan (2017_a_) find that neither cash nor therapy alone has a lasting (12-13 month) effect, but that the combination of both treatments does reduce "antisocial behavior". Footnote 7: Formally, we use Round 5 data, as the original code (Blattman, Jamison and Sheridan, 2017_b_) specifies.
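To illustrate the assignment structure, the following minimal sketch (our simplification; the exact assignment probabilities and group definitions in the original study may differ) re-draws the \(2\times 2\) factorial treatment within the therapy and cash-grant strata and forms the three mutually exclusive treatment-cell indicators (therapy only, cash only, both) used in the ITT regressions.

```python
import numpy as np

def reassign_factorial(tp_strata, cg_strata, rng):
    """Re-randomize a 2x2 factorial design within strata (illustrative only).

    tp_strata, cg_strata: length-n arrays of stratum labels for the therapy
    and cash-grant randomizations.  Roughly half of each stratum is treated,
    an assumption made here for simplicity.
    """
    n = len(tp_strata)
    therapy = np.zeros(n, dtype=int)
    cash = np.zeros(n, dtype=int)
    for labels, flag in ((np.asarray(tp_strata), therapy), (np.asarray(cg_strata), cash)):
        for s in np.unique(labels):
            members = np.flatnonzero(labels == s)
            chosen = rng.choice(members, size=len(members) // 2, replace=False)
            flag[chosen] = 1
    # mutually exclusive treatment-cell dummies: therapy only, cash only, both
    return (((therapy == 1) & (cash == 0)).astype(int),
            ((therapy == 0) & (cash == 1)).astype(int),
            ((therapy == 1) & (cash == 1)).astype(int))
```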
The analysis data are obtained from the file named STYL_Final.dta as provided in the replication package (Blattman, Jamison and Sheridan, 2017_b_).

\begin{table} \begin{tabular}{l c c c c} \hline **Privacy Budget** & **\(\epsilon=0.1\)** & **\(\epsilon=0.5\)** & **\(\epsilon=1\)** & **Non-DP Synthesis** \\ \hline MSE of Variance of \(x_{1}\) & 6.888821 & 2.388792 & 1.273375 & 0.594822 \\ \hline \end{tabular} \end{table} Table 2: Effect on value of sensitive statistic (based on covariate data) measured using Metric 5 (MSE) for Simulation Study 1 using uniform covariate. Results are reported for DP synthesis with varying privacy budget \(\epsilon\) and non-DP synthesis, each type of synthesis being averaged over 100 simulations of the sensitive dataframe, using 20 independently generated synthetic dataframes for each sensitive dataframe.

The treatment assignments are encoded using three binary treatment variables: tpassonly (indicating that only therapy was received), cashassonly (indicating that a cash-only grant was received), and tpcashass (indicating that both therapy and cash grant were received). The therapy-assignment-based blocking variable is tp_strata_alt, while the cash-grant-assignment-based blocking variable is cg_strata. In addition to the treatment variables and the blocking variables, we include 7 covariates in the core regressions: age_b, asbhostil_b, drugsellever_b, drinkboozeself_b, druggrassself_b, harddrugsever_b, steals_b. The first two covariates are age and an antisocial behaviour index (Barret Antisocial Behavior (ASB) and Hostility z-score) for the individuals participating in the study. These are continuous variables. The remaining covariates record the antisocial behaviour of the youths in terms of ever having sold drugs, whether they drink alcohol, whether they smoke grass/opium, whether they have ever consumed hard drugs, and whether they have exhibited stealing behaviour in the 2 weeks prior to their interview, respectively. The values of these covariates are recorded as 1 if the answer is affirmative, otherwise 0. The variables of interest are shown in Table 3.

### Results

Table 4 displays the core results from the application of the mechanism to the Liberia study, in the form of the statistical inference regarding the treatment effects of the Cash Grant Only treatment, Therapy Only treatment, and Both Cash and Therapy treatment on the various response variables. Panel (i) shows the replication of the authors' analysis on the original (unmodified) data \(D\), using the covariates listed in Table 3. This replication is necessary because these covariates constitute a strict subset of the covariates that Blattman, Jamison and Sheridan (2017_a_) control for. This is done for computational reasons, to which we return later. Panel (ii) shows the same specification, estimated using data protected with a single run of the mechanism described in Section 3, with \(\epsilon=1\) and \(\zeta=\frac{2}{3}\) (i.e., \(\widetilde{D}(1,\frac{2}{3})\)). Panel (iii) shows the same specification, estimated on the non-DP synthetic data \(\widetilde{D}(\infty,\frac{2}{3})\). Figure 1 summarizes the results for the treatment effects for the first rows of each panel, for a single response variable, in this case the focal "ASB z-score".
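Concretely, the Panel (i) specification is an OLS regression of each outcome on the three treatment indicators, the two blocking variables, and the seven baseline covariates. The sketch below uses the variable names from Table 3 and the file name mentioned above; how the blocking variables enter (here as categorical fixed effects), the exact covariate spellings in the data file, and the use of unadjusted standard errors are our assumptions rather than the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Replication data file named in the text (from the public replication package).
df = pd.read_stata("STYL_Final.dta")

formula = (
    "fam_asb_lt ~ tpassonly + cashassonly + tpcashass"
    " + C(tp_strata_alt) + C(cg_strata)"   # assignment-block fixed effects (assumption)
    " + age_b + asbhostil_b + drugsellever_b + drinkboozeself_b"
    " + druggrassself_b + harddrugsever_b + steals_b"
)
fit = smf.ols(formula, data=df).fit()      # naive (unadjusted) standard errors
print(fit.params[["tpassonly", "cashassonly", "tpcashass"]])
print(fit.bse[["tpassonly", "cashassonly", "tpcashass"]])
```

For Panels (ii) and (iii), the same formula would be estimated with the protected dataframe \(\widetilde{D}(\epsilon,\zeta)\) in place of df.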
Specifically, Figure 1 displays the \begin{table} \begin{tabular}{l l} \hline \hline **Name of variable in database** & **Description** \\ \hline Outcomes & \\ \hline fam\_asb\_lt & Antisocial Behaviors, Z-score \\ drugsellever\_e & Usually sells drugs \\ crimes2wx\_e & No. of thefts/robberies in past 2 weeks \\ disputes\_all\_z\_e & Disputes and fights in past 2 weeks, z-score \\ carryweapon\_e & Carries a weapon on body at follow-up \\ arrested\_e & Arrested in past 2 weeks at follow-up \\ hostilitystd\_e & Aggressive behaviors, z-score at follow-up \\ domabuse\_z & Verbal/physical abuse of partner, z-score \\ \hline Treatments & \\ \hline cashassonly & Cash Only \\ tpassonly & Therapy Only \\ tpcashass & Both \\ \hline Covariates & \\ \hline age\_b & Age \\ ashbostil\_b & Barret ASB index \\ drugsellever\_b & Drugs Sell indicator at baseline \\ drinkboozeself\_b & Alcohol self indicator at baseline \\ druggrassself\_b & Grass/Opium self indicator at baseline \\ harddrugsever\_b & Hard Drugs indicator at base line \\ steals\_b & Steal self indicator at base line \\ \hline \hline \end{tabular} \end{table} Table 3: Variables of interest in the Liberia study \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline & \multicolumn{4}{c}{Therapy Only} & \multicolumn{4}{c}{Cash Only} & \multicolumn{4}{c}{Both} \\ \cline{2-13} & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value \\ \hline (i) Original & & & & & & & & & & & & \\ \hline Antisocial behaviors & \(-0.026\) & ( 0.085) & [ & 0.755] & 0.098 & ( 0.086) & [ & 0.257] & \(-0.224\) & ( 0.087) & [ & 0.010] \\ z-score & & & & & & & & & & & \\ Usually sells drugs & \(-0.029\) & ( 0.029) & [ & 0.316] & 0.025 & ( 0.030) & [ & 0.398] & \(-0.086\) & ( 0.030) & [ & 0.004] \\ No. of thefts/roboberies in & 0.075 & ( 0.514) & [ & 0.884] & 0.071 & ( 0.523) & [ & 0.892] & \(-1.087\) & ( 0.526) & [ & 0.039] \\ past 2 weeks & & & & & & & & & & & \\ Carries a weapon on body & \(-0.045\) & ( 0.031) & [ & 0.141] & 0.019 & ( 0.031) & [ & 0.548] & \(-0.057\) & ( 0.031) & [ & 0.067] \\ Arrested in past 2 weeks & & & & & & & & & & & \\ Argressive behaviors & \(-0.000\) & ( 0.031) & [ & 0.994] & 0.004 & ( 0.032) & [ & 0.901] & \(-0.026\) & ( 0.032) & [ & 0.420] \\ Aggressive behaviors & \(-0.041\) & ( 0.089) & [ & 0.649] & \(-0.011\) & ( 0.091) & [ & 0.904] & \(-0.201\) & ( 0.092) & [ & 0.029] \\ Yerbal/physical abuse of partner z-score & & & & & & & & & & & \\ \hline (ii) Synthetic & & & & & & & & & & & \\ \hline Antisocial behaviors & \(-0.110\) & ( 0.082) & [ & 0.181] & \(-0.026\) & ( 0.086) & [ & 0.762] & \(-0.312\) & ( 0.084) & [ & 0.000] \\ z-score & & & & & & & & & & & \\ Usually sells drugs & \(-0.032\) & ( 0.033) & [ & 0.336] & 0.065 & ( 0.034) & [ & 0.060] & \(-0.119\) & ( 0.034) & [ & 0.000] \\ No. 
of thefts/roboberies in & 0.016 & ( 0.190) & [ & 0.932] & 0.226 & ( 0.198) & [ & 0.253] & \(-1.336\) & ( 0.195) & [ & 0.000] \\ past 2 weeks & & & & & & & & & & & \\ Carries a weapon on body & \(-0.060\) & ( 0.032) & [ & 0.060] & 0.038 & ( 0.033) & [ & 0.253] & \(-0.055\) & ( 0.033) & [ & 0.094] \\ Arrested in past 2 weeks & & & & & & & & & & & & \\ Argressive behaviors & \(-0.129\) & ( 0.087) & [ & 0.137] & \(-0.142\) & ( 0.090) & [ & 0.117] & \(-0.293\) & ( 0.089) & [ & 0.001] \\ Yerbal/physical abuse of partner z-score & & & & & & & & & & & & \\ \hline (iii) Non-DP Synthetic & & & & & & & & & & & \\ \hline Antisocial behaviors & \(-0.037\) & ( 0.081) & [ & 0.647] & 0.159 & ( 0.083) & [ & 0.056] & \(-0.207\) & ( 0.083) & [ & 0.013] \\ z-score & & & & & & & & & & & & \\ Usually sells drugs & \(-0.026\) & ( 0.026) & [ & 0.327] & \(-0.007\) & ( 0.027) & [ & 0.793] & \(-0.081\) & ( 0.027) & [ & 0.003] \\ No. of thefts/roboberies in & 0.101 & ( 0.150) & [ & 0.502] & 0.151 & ( 0.153) & [ & 0.325] & \(-0.893\) & ( 0.154) & [ & 0.000] \\ past 2 weeks & & & & & & & & & & & & \\ Carries a weapon on body & \(-0.015\) & ( 0.028) & [ & 0.598] & 0.007 & ( 0.028) & [ & 0.795] & \(-0.044\) & ( 0.028) & [ & 0.126] \\ Arrested in past 2 weeks & & & & & & & & & & & & \\ Argressive behaviors & \(-0.052\) & ( 0.085) & [ & 0.542] & 0.053 & ( 0.087) & [ & 0.546] & \(-0.182\) & ( 0.088) & [ & 0.038] \\ z-score & & & & & & & & & & & & \\ Yerbal/physical abuse of partner z-score & & & & & & & & & & & & \\ \hline \end{tabular} Panel (i) provides estimated coefficients, standard errors, and the associated p-value, using the original data and a reduced specification closely resembling Blattman, Jamison and Sheridan (2017). P-values are not adjusted. Panel (ii) displays the same estimated parameters, for the same model, when using the protection mechanism described in the text with \(\epsilon=1\). Panel (iii) shows results when setting \(\epsilon=\infty\), but otherwise following the same approach. See text for further details. \end{table} Table 4: Table 2(b) with original, protected, and modified data treatment effect estimates (represented by dots), the standard error of the treatment effect estimates (represented by intervals) and the unadjusted p-value for the individual tests of significance of the treatment coefficients. The key inferential results - that the combination of cash and therapy is the only effective treatment - is maintained across all three panels. The replication reported in Panel (i) shows no significant coefficients for therapy (first set of columns), and only one outcome (abuse) shows a significant effect of cash payments, whereas multiple coefficients are significant for the combined treatment, closely replicating the results from Blattman, Jamison and Sheridan (2017) (for unadjusted coefficients). The same pattern is also observed in Panels (ii) and (iii), with some small differences. While most of our estimates of standard errors are reasonably close to the those computed from the original data, we observe a general trend that has been discussed in the privacy literature: the sanitized data often gives smaller standard errors of parameter estimates when the sanitized protected data (synthetic, DP or not) are being naively used in place of the original data. These standard errors are misleading and will not give honest confidence intervals. 
We observe the same in our analysis as shown in Table 4, where most (but not all) of the standard errors in the Synthetic and non-DP Synthetic panels are (marginally) smaller. For instance, for the key treatment of "Both" for the response variable "ASB z-score", standard errors are 0.087 for the unmodified data, but 0.084 and 0.083, respectively, for the two synthetic datasets. Solutions have been proposed that account for the noise due to privacy-preservation that will lead to wider confidence intervals but honest inference \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline & \multicolumn{4}{c}{T therapy Only} & \multicolumn{4}{c}{Cash Only} & \multicolumn{4}{c}{Both} \\ \cline{2-13} & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value \\ \hline Outcome: ASB Zscore & & & & & & & & & & & & & \\ \hline Antisocial behaviors & \(-0.026\) & ( 0.085) & [ & 0.755] & 0.098 & ( 0.086) & [ & 0.257] & \(-0.224\) & ( 0.087) & [ & 0.010] \\ z-score & & & & & & & & & & & & & & \\ Antisocial behaviors & \(-0.110\) & ( 0.082) & [ & 0.181] & \(-0.026\) & ( 0.086) & [ & 0.762] & \(-0.312\) & ( 0.084) & [ & 0.000] \\ z-score & & & & & & & & & & & & & & \\ Antisocial behaviors & \(-0.037\) & ( 0.081) & [ & 0.647] & 0.159 & ( 0.083) & [ & 0.056] & \(-0.207\) & ( 0.083) & [ & 0.013] \\ z-score & & & & & & & & & & & & & & \\ \hline \end{tabular} Estimated coefficients, standard errors, and the associated p-value, using the original data and a reduced specification closely resembling Blattman, Jamison and Sheridan (2017), the synthetic data for \(\epsilon=1\), and the non-DP synthetic data for \(\epsilon=\infty\). See text for further details. \end{table} Table 5: Table 2(b), ASB Z-score only, with original, protected, and modified data (e.g., Seeman, Slavkovic and Reimherr (2020), and others). In future work, we will explore that result in this setting. Tables 6 and 7 show results from experimentation with the privacy budget \(\epsilon\), the precision \(\zeta\) of the transformation of continuous to discrete during the process, and the method of imputing and protecting the response variable, across various levels of the privacy budget, for a single response variable (in this case, fam_asb_z). The top panel in each table reproduces the key estimates from Table 4, for ease of referenced. Panel (i) of Table 6 shows results for the same \(\zeta=\frac{2}{3}\) as used in Table 4, but with increasing levels of protection (decreasing \(\epsilon\)). None of the treatment effects reported change in any substantial fashion. Panel (ii) shows results when decreasing \(\zeta\) by half to \(\frac{1}{3}\). Most point estimates are affected (similarly across all levels of \(\epsilon\)), and the "Therapy only" treatment would now appear to be marginally significant, though all numbers are not statistically significant from those using the higher \(\zeta\). However, the previously favored "Both" treatment is numerically quite close to the higher-\(\zeta\) numbers. Thus, in \begin{table} \begin{tabular}{l r r r r r r r r r r r r} \hline \hline & \multicolumn{3}{c}{Therapy Only} & \multicolumn{3}{c}{Cash Only} & \multicolumn{3}{c}{Both} \\ \cline{2-13} & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value & Estimate & Std. 
Err & p-value \\ \hline Reference value & & & & & & & & & & & & \\ \hline (Original) & \(-0.026\) & ( & 0.085) & [ & 0.755] & 0.098 & ( & 0.086) & [ & 0.257] & \(-0.224\) & ( & 0.087) & [ & 0.010] \\ ( Synthetic) & \(-0.110\) & ( & 0.082) & [ & 0.181] & \(-0.026\) & ( & 0.086) & [ & 0.762] & \(-0.312\) & ( & 0.084) & [ & 0.000] \\ ( Non DP Synthetic) & \(-0.037\) & ( & 0.081) & [ & 0.647] & 0.159 & ( & 0.083) & [ & 0.056] & \(-0.207\) & ( & 0.083) & [ & 0.013] \\ \hline \hline \multicolumn{13}{l}{(i) High precision discretization} & & & & & & & & & & & & \\ \hline \(\epsilon=0.1\) & \(-0.103\) & ( & 0.082) & [ & 0.212] & \(-0.027\) & ( & 0.086) & [ & 0.754] & \(-0.306\) & ( & 0.084) & [ & 0.000] \\ \(\epsilon=0.5\) & \(-0.102\) & ( & 0.082) & [ & 0.216] & \(-0.028\) & ( & 0.086) & [ & 0.748] & \(-0.307\) & ( & 0.084) & [ & 0.000] \\ \(\epsilon=1\) & \(-0.110\) & ( & 0.082) & [ & 0.181] & \(-0.026\) & ( & 0.086) & [ & 0.762] & \(-0.312\) & ( & 0.084) & [ & 0.000] \\ \(\epsilon=\infty\) & \(-0.037\) & ( & 0.081) & [ & 0.647] & 0.159 & ( & 0.083) & [ & 0.056] & \(-0.207\) & ( & 0.083) & [ & 0.013] \\ \hline \hline \multicolumn{13}{l}{(ii) Low precision discretization} & & & & & & & & & & & & & \\ \hline \(\epsilon=0.1\) & \(-0.168\) & ( & 0.081) & [ & 0.038] & \(-0.101\) & ( & 0.083) & [ & 0.223] & \(-0.338\) & ( & 0.083) & [ & 0.000] \\ \(\epsilon=0.5\) & \(-0.170\) & ( & 0.081) & [ & 0.036] & \(-0.103\) & ( & 0.083) & [ & 0.214] & \(-0.339\) & ( & 0.083) & [ & 0.000] \\ \(\epsilon=1\) & \(-0.168\) & ( & 0.080) & [ & 0.037] & \(-0.089\) & ( & 0.083) & [ & 0.283] & \(-0.337\) & ( & 0.083) & [ & 0.000] \\ \(\epsilon=\infty\) & \(-0.029\) & ( & 0.086) & [ & 0.739] & 0.026 & ( & 0.085) & [ & 0.763] & \(-0.291\) & ( & 0.089) & [ & 0.001] \\ \hline \hline \end{tabular} All results for a single response variable, here: “Antisocial behavior z-score” (fam_asb_lt) (Panel (i) provides estimated coefficients, standard errors, and the associated p-value, using protected data where the continous covariates have been discretized and then protected using \(\zeta=n^{2/3}\) (high precision), for various values of \(\epsilon\). P-values are not adjusted. Panel (ii) displays the same estimated parameters, when \(\zeta=n^{1/3}\) (low precision). See text for further details. **WARNING Row 2 should match High Precision, \(\epsilon=1\), and Row 3 should match High Precision \(\epsilon=\infty\)** \end{table} Table 6: Varying precision and privacy budget this case, reducing precision from the preferred value of \(\zeta=\frac{2}{3}\) would potentially lead to "misleading" inferences. Much of this appears to be driven by changes (biases) in the point estimates, as the (naive) standard errors are not changed much. In our experiments, both the low and high precision discretization maintain the direction of the effect but the magnitude and the significant change. This data-dependent result will in part depend on the treatment variable and their interaction with the continous covariates being discretized. In other applications, this may have different effects. For Table 7, we switch to discussing the variable drugsellever_e ("Usually sells drugs") (a dummy variable), as its optimal imputation method was set to be a logistic regression, whereas the analysis in the Liberia study regresses it on the covariates using a linear regression. In Table 4, the results depicted for this variable in row 2 of panels (ii) and (iii) thus reflect imputation of \(y\) using the logistic regression. 
As before, the results are quite stable across multiple levels of \(\epsilon\) (Panel (ii) of Table 7). Switching to the method that, in principle, is more congenial to the analysis in the \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{therapy Only} & \multicolumn{3}{c}{Cash Only} & \multicolumn{3}{c}{Both} \\ \cline{2-13} & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value & Estimate & Std. Err & p-value \\ \hline Reference value & & & & & & & & & & & & & \\ \hline (Original) & \(-0.029\) & ( 0.029) & [ 0.316] & 0.025 & ( 0.030) & [ 0.398] & \(-0.086\) & ( 0.030) & [ 0.004] \\ ( Synthetic) & \(-0.032\) & ( 0.033) & [ 0.336] & 0.065 & ( 0.034) & [ 0.060] & \(-0.119\) & ( 0.034) & [ 0.000] \\ ( Non DP Synthetic) & \(-0.026\) & ( 0.026) & [ 0.327] & \(-0.007\) & ( 0.027) & [ 0.793] & \(-0.081\) & ( 0.027) & [ 0.003] \\ \hline \multicolumn{13}{l}{(i) Logistic Regression} \\ \hline \(\epsilon=0.1\) & \(-0.019\) & ( 0.032) & [ 0.562] & 0.079 & ( 0.033) & [ 0.019] & \(-0.092\) & ( 0.033) & [ 0.005] \\ \(e=0.5\) & \(-0.024\) & ( 0.032) & [ 0.467] & 0.074 & ( 0.034) & [ 0.028] & \(-0.096\) & ( 0.033) & [ 0.004] \\ \(e=1\) & \(-0.032\) & ( 0.033) & [ 0.336] & 0.065 & ( 0.034) & [ 0.060] & \(-0.119\) & ( 0.034) & [ 0.000] \\ \(e=\infty\) & \(-0.026\) & ( 0.026) & [ 0.327] & \(-0.007\) & ( 0.027) & [ 0.793] & \(-0.081\) & ( 0.027) & [ 0.003] \\ \hline \multicolumn{13}{l}{(ii) Linear Regression} \\ \hline \(\epsilon=0.1\) & \(-0.056\) & ( 0.028) & [ 0.050] & \(-0.018\) & ( 0.030) & [ 0.546] & \(-0.114\) & ( 0.029) & [ 0.000] \\ \(e=0.5\) & \(-0.055\) & ( 0.028) & [ 0.051] & \(-0.018\) & ( 0.030) & [ 0.540] & \(-0.115\) & ( 0.029) & [ 0.000] \\ \(e=1\) & \(-0.058\) & ( 0.028) & [ 0.040] & \(-0.018\) & ( 0.029) & [ 0.552] & \(-0.116\) & ( 0.029) & [ 0.000] \\ \(e=\infty\) & \(-0.033\) & ( 0.028) & [ 0.236] & 0.046 & ( 0.028) & [ 0.107] & \(-0.080\) & ( 0.029) & [ 0.005] \\ \hline \hline \end{tabular} All results for a single response variable, here: “Usually sells drugs” (drugsellever_e) (Panel (i) provides estimated coefficients, standard errors, and the associated p-value, using protected data where the response variable is generated using (Gaussian) linear regression, for various values of \(\epsilon\). P-values are not adjusted. Panel (ii) displays the same estimated parameters, when the response variable is generated using logistic regression. See text for further details. **WARNING Row 2 should match Logistic Regression, \(\epsilon=1\), and Row 3 should match Logistic Regression \(\epsilon=\infty\)** \end{table} Table 7: Varying response variable generation method and privacy budget paper does lead, as before, to potentially incorrect inferences, as "Therapy only" again displays coefficients that are marginally significant for finite values of \(\epsilon\). The primary inference for treament with "Both" again remains unaffected. To see the average performance of the mechanism across multiple data generations, we compute Metrics 1-5 across 100 independently generated, across values of \(\epsilon\). Tables 8 and 9 show Metrics 1-4 for \(\epsilon=1\) and \(\epsilon=\infty\), respectively, with qualitatively similar results for other values of \(\epsilon\) in Appendix B. Table 10 contains values of Metric 5 across all values of \(\epsilon\), using the MSE for age as Figure 1: Comparison of inference regarding treatment effect in the Liberia study the base. 
As in the simulation studies, and as already observed in Tables 4 through 7, the differences between the estimates based on the original data and the synthetic data are quite small, implying that the protection mechanism has not significantly affected the inference about the regression parameters. This is true even when \(\epsilon=\infty\), suggesting that adding DP guarantees to the covariates does not come at much extra cost. Table 10 nevertheless shows that reasonable values of \(\epsilon\) add significantly more distortion, and by extension, protection, to the underlying data, as expected.

\begin{table} \begin{tabular}{l l l l l} \hline **Privacy Budget** & **\(\epsilon=0.1\)** & **\(\epsilon=0.5\)** & **\(\epsilon=1\)** & **Non-DP Synthesis** \\ \hline MSE of Variance of Age & 4481.74 & 4508.79 & 4503.16 & 0.9 \\ \hline \end{tabular} \end{table} Table 10: Effect on value of sensitive statistic (based on covariate data) measured using Metric 5 (MSE) for Liberia study, with varying privacy budget \(\epsilon\) and non-DP synthesis, averaged over 100 simulations of the sensitive dataframe, using 20 independently generated synthetic dataframes for each sensitive dataframe.

\begin{table} \begin{tabular}{l c c c c} \hline **Variable names** & **Metric 1** & **Metric 2** & **Metric 3** & **Metric 4** \\ \hline (Intercept) & 1.00000 & 0.95000 & 0.80074 & 0.03692 \\ Cash Only & 1.00000 & 0.98000 & 0.81815 & 0.00567 \\ Therapy Only & 1.00000 & 0.96000 & 0.83278 & 0.00478 \\ Both & 1.00000 & 0.94000 & 0.80123 & 0.00676 \\ Therapy Block & 1.00000 & 0.99000 & 0.82447 & 0.00000 \\ Cash Block & 1.00000 & 0.96000 & 0.77586 & 0.00003 \\ Age & 1.00000 & 0.94000 & 0.79168 & 0.00004 \\ Barret ASB index & 1.00000 & 0.93000 & 0.79746 & 0.00103 \\ Drugs Sell indicator & 1.00000 & 0.97000 & 0.80715 & 0.00618 \\ Alcohol self indicator & 1.00000 & 0.97000 & 0.79696 & 0.00447 \\ Grass/Opium self indicator & 1.00000 & 0.92000 & 0.79709 & 0.00470 \\ Hard Drugs indicator & 1.00000 & 0.97000 & 0.78692 & 0.00639 \\ Steal self indicator & 1.00000 & 0.95000 & 0.78054 & 0.00531 \\ \hline \end{tabular} \end{table} Table 9: Effect on inference regarding regression coefficients measured using Metrics 1-4 for Liberia study, averaged over 100 independently generated non-DP synthetic dataframes.

## Discussion

Much of the literature (in economics) either publishes replication packages with weakly protected (de-identified) data (contrary to our stated Aim 2) or withholds the data out of privacy concerns (impeding the ability of others to investigate inference, related to what we called Aim 1). The trigger for the analysis conducted here was the need for privacy protection in the presence of a requirement to publish data underlying an RCT, while maintaining reasonably broad usability of the data. We start with a specific focus on economists and social scientists interested in replication of RCTs. We explore using one of the simplest DP mechanisms (Laplace with perturbed histograms) for generation of the protected sensitive covariate data. We show that we can produce a protected dataset (Aim 2) and that analyses from such data would sufficiently maintain precise inference (Aim 1). Even for low values of the privacy-loss budget (i.e., stronger privacy), we can obtain comparable estimates in our regression models of interest.
The mechanism described in Section 3 and evaluated in Sections 5 and 6 is a partially \(\epsilon\)-differentially-private mechanism offering strong privacy protection at conventional levels of \(\epsilon\) for covariate data, while providing relaxed (non-DP) protection for a small number of parameters of central interest. Parameters that are not of interest are not released. Outcomes are imputed based on the protected covariates and parameters, and thus inherit the combined protection methods. In the real-world experiment, there are between 14 and 20 parameters (point estimates and associated standard errors), while the square root of the dimension of the protected data matrix is approximately \(\sqrt{d\cdot N}=83\). The mechanism allows for the release of the protected data and the perturbed parameters as part of a privacy-preserving replication package. Publication of a replication package is a requirement in an increasing number of social science journals. The mechanism works quite well to allow for reasonable inferences in both simulations and real-world examples, in general close to the original (unprotected) inferences on population (intent-to-treat) parameters. This is achieved by leveraging the targeted structure, focusing on a small number of parameters of interest.

Some caveats apply. First, in our experiments, we did not protect the stratifiers used for the random assignment, and thus did not fully protect the data matrix necessary for the full replication analysis. Such stratifiers tend to be indicators of large subsets of the population. Further protecting these may substantially affect the quality of random assignment, and therefore inference quality. On the other hand, they usually split the population into mutually exclusive sub-populations, and the privacy guarantees may thus compose. We have not further explored this aspect in the present study. Second, we have intentionally constrained the application of the mechanism to be compatible with recent computing hardware, within a reasonable amount of time, and have therefore reduced the number of covariates with respect to the original study. Yet we have not optimized the code computationally. Our primary constraint is the memory required to construct and protect the fully expanded histogram. We explored the boundaries of this constraint by reducing the precision of the discretization (Table 6). The results suggest that the reduction in resolution may negatively impact the otherwise reasonable inference validity (in line with the literature). Future work should therefore focus in part on improvements in computational efficiency, in order to increase the size of the releasable protected covariate datasets. Third, we have in our analysis assumed naive standard errors. The original study itself applies corrections that account for multiple inferences. Inferences in the particular context here also need to account for the noise added through the mechanism (Abowd and Schmutte, 2015; Slavkovic and Seeman, 2022). We do not expect that such adjustments would substantially alter our conclusions in the two cases explored in this article, but any more general application will need to account for such adjustments. Furthermore, in our initial analysis, we focused on pure differential privacy (\(\epsilon\)-DP) for the protection of sensitive covariates.
The privacy literature is rapidly growing, developing new privacy definitions, methods, and algorithms that aim to improve the privacy-utility-computational trade-offs in many data contexts, and some of the next steps should consider relaxations of privacy definitions (e.g., Desfontaines and Pejo (2022)), as well as new methods for the release of formally private synthetic data and of protected estimates alone, including a focus on pre-analysis. Finally, how useful is the privacy-protected replication package for broader questions posed by researchers? Does the privacy-protected replication package allow for robustness tests, for instance through other specifications? The mechanism is tightly bound to the original authors' proposed specification, and may not perform well when other researchers apply non-congenial specifications. In such cases, access to the original data and additional privacy budget may be necessary in order to create additional protected datasets that allow for such investigation. Transparency, however, is increased, as the privacy-preserving mechanism would be released as part of the replication package, allowing for replication in other contexts, for instance by collecting new data. We note that we have relied on some key features of the typical RCT: randomization is orthogonal to the observed covariates of participants, and thus is non-informative about those covariates; relatively few parameters are of key interest; and estimated coefficients on other control variables are typically not published. Overall, this reduces the amount of information that needs to be released, greatly facilitating the application of strong privacy protection. Relaxing any of these features may lead to less favorable results. We demonstrate that a simple method with strong privacy guarantees could become an "out-of-the-box" method. While we did reduce the number of covariates in this first study, ongoing work will explore improvements in several dimensions, as noted above. The ideas and results reported here are a first step towards a better understanding of feasible privacy preservation for RCT-based data, ensuring that the privacy of data contributors to RCTs, often from LMIC countries, will be more strongly protected, while maintaining the ability to draw meaningful inferences. While policy-oriented stakeholders are primarily interested in the latter, citizens who contribute their data to RCTs, and companies, such as fin-tech providers, that provide key data to researchers, are also heavily invested in protecting privacy. Consumer and citizen protection agencies, ethics review boards, and other regulators should be interested in knowing of the existence of privacy-enhancing methods, possibly facilitating the approval of studies in the presence of strong privacy guarantees.
2309.04792
Individual subject evaluated difficulty of adjustable mazes generated using quantum annealing
In this paper, the maze generation using quantum annealing is proposed. We reformulate a standard algorithm to generate a maze into a specific form of a quadratic unconstrained binary optimization problem suitable for the input of the quantum annealer. To generate more difficult mazes, we introduce an additional cost function $Q_{update}$ to increase the difficulty. The difficulty of the mazes was evaluated by the time to solve the maze of 12 human subjects. To check the efficiency of our scheme to create the maze, we investigated the time-to-solution of a quantum processing unit, classical computer, and hybrid solver.
Yuto Ishikawa, Takuma Yoshihara, Keita Okamura, Masayuki Ohzeki
2023-09-09T13:36:27Z
http://arxiv.org/abs/2309.04792v2
# Individual subject evaluated difficulty of adjustable mazes generated using quantum annealing ###### Abstract In this paper, the maze generation using quantum annealing is proposed. We reformulate a standard algorithm to generate a maze into a specific form of a quadratic unconstrained binary optimization problem suitable for the input of the quantum annealer. To generate more difficult mazes, we introduce an additional cost function \(Q_{update}\) to increase the difficulty. The difficulty of the mazes was evaluated by the time to solve the maze of 12 human subjects. To check the efficiency of our scheme to create the maze, we investigated the time-to-solution of a quantum processing unit, classical computer, and hybrid solver. quantum annealing, combinatorial optimization, maze generation, bar-tipping algorithm, time-to-solution 2019 ## 1 Introduction A combinatorial optimization problem is minimizing or maximizing their cost or objective function among many variables that take discrete values. In general, it takes time to solve the combinatorial optimization problem. To deal with many combinatorial optimization problems, we utilize generic solvers to solve them efficiently. Quantum annealing (QA) is one of the generic solvers for solving combinatorial optimization problems Kadowaki and Nishimori (1998) using the quantum tunneling effect. Quantum annealing is a computational technique to search for good solutions to combinatorial optimization problems by expressing the objective function and constraint time requirements of the combinatorial optimization problem by quantum annealing in terms of the energy function of the Ising model or its equivalent QUBO (Quadratic Unconstrained Binary Optimization), and manipulating the Ising model and QUBO to search for low energy states Shu Tanaka and Seki (2022). Various applications of QA are proposed in traffic flow optimization Neukart et al. (2017); Hussain et al. (2020); Inoue et al. (2021), finance Rosenberg et al. (2016); Orus et al. (2019); Venturelli and Kondratyev (2019), logistics Feld et al. (2019); Ding et al. (2021), manufacturing Venturelli et al. (2016); Yonaga et al. (2022); Haba et al. (2022), preprocessing in material experiments Tanaka et al. (2023), marketing Nishimura et al. (2019), steel manufacturing Yonaga et al. (2022), and decoding problems Ide et al. (2020); Arai et al. (2021a). The model-based Bayesian optimization is also proposed in the literature Koshikawa et al. (2021) A comparative study of quantum annealer was performed for benchmark tests to solve optimization problems Oshiyama and Ohzeki (2022). The quantum effect on the case with multiple optimal solutions has also been discussed Yamamoto et al. (2020); Maruyama et al. (2021). As the environmental effect cannot be avoided, the quantum annealer is sometimes regarded as a simulator for quantum many-body dynamics Bando et al. (2020); Bando and Nishimori (2021); King et al. (2022). Furthermore, applications of quantum annealing as an optimization algorithm in machine learning have also been reported Neven et al. (2012); Khoshaman et al. (2018); O'Malley et al. (2018); Amin et al. (2018); Kumar et al. (2018); Arai et al. (2021b); Sato et al. (2021); Urushibata et al. (2022); Hasegawa et al. (2023); Goto and Ohzeki (2023). In this sense, developing the power of quantum annealing by considering hybrid use with various techniques is important, as in several previous studies Hirama and Ohzeki (2023); Takabayashi and Ohzeki (2023). 
In this study, we propose the generation of mazes by quantum annealing. In the application of quantum annealing to mazes, algorithms for finding the shortest path through a maze have been studied Pakin (2017). Automatic map generation is an indispensable technique for game production, including roguelike games. Maze generation has been used to construct random dungeons in roguelike games by assembling mazes mok Bae et al. (2015). Therefore, considering maze generation as one of the rudiments of this technology, we studied maze generation using a quantum annealing machine. Several algorithms for the generation of mazes have been proposed, and in this study we focus on maze-generating algorithms. Representative examples are the bar-tipping algorithm Alg (2023a), the wall-extending algorithm Alg (2023b), and the hunt-and-kill algorithm Alg (2023c). The bar-tipping algorithm generates a maze by extending evenly spaced bars one by one. For the sake of explanation, we first fix the terminology. A path represents an empty, traversable part of the maze, and a bar represents a filled, non-traversable part. Figure 1 shows where the outer wall, the bars, and the coordinates \((i,j)\) are in a \(3\times 3\) maze. The maze is surrounded by an outer wall, as in Figure 1. The algorithm requires the following three constraints. First, each bar can be extended by one cell in only one direction. Second, bars in the first column can be extended in four directions (up, down, left, and right), while bars in the second and subsequent columns can be extended in only three directions (up, down, and right). Third, adjacent bars cannot overlap each other. We explain the detailed process of the bar-tipping algorithm using a \(3\times 3\) size maze. In this study, a maze generated by extending \(N\times N\) bars is called an \(N\times N\) size maze. First, standing bars are placed every two cells in a field surrounded by an outer wall, as in Figure 1. Figure 2 shows each step of the bar-tipping algorithm. Second, the bars in the first column are randomly extended in only one direction with no overlaps, as in Figure 2 (a); at this stage the bars can be extended in four directions (up, down, right, left). Third, the bars in the second column are randomly extended in one direction without overlap, as in Figure 2 (b); at this stage the bars can be extended in three directions (up, down, right). Fourth, the bars in the subsequent columns are randomly extended in one direction, in the same way as the bars in the second column, as in Figure 2 (c). Figure 2 (d) shows the complete maze in its finished state. Following this process, we can generate a maze as in Figure 2 (d). If multiple solutions are possible, the maze solution is not unique, which reduces the time and difficulty of reaching the goal. The constraints must be followed for the reasons described below. The first constraint prevents generating a maze with multiple solutions and closed circuits. Figure 3 (a) shows a maze state that violates the first constraint, in which one bar in the upper right corner is extended in two directions. The second constraint likewise prevents generating a maze with closed circuits and multiple solutions. Figure 3 (b) shows a state that violates the second constraint.
When the second constraint is violated, the maze has a closed circuit and multiple solutions, as in Figure 3 (b). The third constraint prevents generating a maze with multiple solutions. Figure 3 (c) shows a state that violates the third constraint: the bars overlap in the upper right corner. Next, we describe the wall-extending algorithm, which generates a maze by extending walls. Figure 4 shows the extension starting coordinates of the wall-extending algorithm, and Figure 5 (a) shows the initial state of the wall-extending algorithm. First, as an initial condition, the outer perimeter of the maze is taken to be the outer wall, and the rest of the maze is taken to be path, as in Figure 5 (a). The coordinate system is different from that of the bar-tipping algorithm: all cells are assigned coordinates. As Figure 4 shows, the coordinates where both \(x\) and \(y\) are even and which are not walls are listed as starting coordinates for wall extension. The following process is repeated until all starting coordinates have changed to walls, as shown in Figure 5 (c). A coordinate is chosen randomly from the extension starting coordinates that are not yet walls. The next extension direction is chosen randomly from the directions in which the adjacent cell is a path. Figure 5 (b) shows how the wall is extended. The extension is repeated while the cell two steps ahead in the extension direction is a path, as in Figure 5 (b). Figure 5 (c) shows the state in which all starting coordinates have changed to walls; these steps are repeated until all the starting coordinates change to walls, as in Figure 5 (c). Figure 5 (d) shows a maze created by wall extension. Following this process, we can generate a maze as in Figure 5 (d).

Figure 1: Positions of the outer wall, the bars, and the coordinates \((i,j)\) in a \(3\times 3\) maze.

Third, the hunt-and-kill algorithm is explained below. It is an algorithm that generates a maze by extending paths. Figure 6 shows the extension starting coordinates of the hunt-and-kill algorithm, and Figure 7 (a) shows its initial state. The entire surface is initially walled off, as in Figure 7 (a). Coordinates where both \(x\) and \(y\) are odd are listed as starting coordinates for path extension, as in Figure 6. As with the wall-extending algorithm, all cells are assigned coordinates. Figure 7 (b) shows the state in which a path is extended. A coordinate is chosen randomly from the starting coordinates, and a path is extended from there, as in Figure 7 (b). Figure 7 (c) shows the coordinate selection and re-extension after a path can no longer be extended: if a path can no longer be extended, a coordinate is randomly selected from the starting coordinates that are already paths, and extension starts again from it, as in Figure 7 (c). This process is repeated until all the starting coordinates turn into paths, generating the maze. Figure 7 (d) shows the complete maze produced by the hunt-and-kill algorithm. Following this process, we can generate a maze as in Figure 7 (d).

Figure 2: Steps of the bar-tipping algorithm. **(a)** Step 1: bars in the first column are extended. **(b)** Step 2: bars in the second column are extended. **(c)** Step 3: bars in the subsequent columns are extended. **(d)** Step 4: a complete maze obtained through these steps.

Of the three maze generation algorithms mentioned above, the bar-tipping algorithm is the one relevant to the combinatorial optimization problem (a classical reference implementation is sketched below).
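The sketch below is a minimal classical implementation of the bar-tipping procedure described above, intended only as a reference point for the QUBO reformulation that follows; the grid encoding, the character rendering, and the omission of start/goal selection are our own choices rather than part of the original description.

```python
import random

WALL, PATH = "#", " "

def bar_tipping_maze(n_bars, seed=None):
    """Generate a maze with n_bars x n_bars standing bars (a (2*n_bars+3)^2 grid)."""
    rng = random.Random(seed)
    size = 2 * n_bars + 3
    grid = [[PATH] * size for _ in range(size)]

    # Outer wall around the field.
    for k in range(size):
        grid[0][k] = grid[-1][k] = grid[k][0] = grid[k][-1] = WALL

    # Standing bars every two cells inside the outer wall.
    bar_rows = [2 * i + 2 for i in range(n_bars)]
    bar_cols = [2 * j + 2 for j in range(n_bars)]
    for r in bar_rows:
        for c in bar_cols:
            grid[r][c] = WALL

    up, right, down, left = (-1, 0), (0, 1), (1, 0), (0, -1)
    for j, c in enumerate(bar_cols):              # columns processed left to right
        for r in bar_rows:
            # Second constraint: left is allowed only in the first column.
            dirs = [up, right, down] + ([left] if j == 0 else [])
            while True:                           # first constraint: exactly one direction
                dr, dc = rng.choice(dirs)
                if grid[r + dr][c + dc] == PATH:  # third constraint: no overlapping bars
                    grid[r + dr][c + dc] = WALL
                    break
    return grid

if __name__ == "__main__":
    for row in bar_tipping_maze(3, seed=1):
        print("".join(row))
```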
In addition, unlike other maze generation algorithms, the bar-tipping algorithm is easy to apply because it only requires the consideration of adjacent elements. Thus, we have chosen to deal with this algorithm. Other maze generation algorithms could be generalized by reformulating them as combinatorial optimization problems. The wall-extending and hunt-and-kill algorithms will be implemented in future work, considering the following factors. The former algorithm introduces the rule that adjacent walls are extended and so are their walls. For the latter, the number of connected components will be computed, and the result will be included in the optimization. Using the bar-tipping algorithm, we reformulated maze generation as a combinatorial optimization problem that generates a maze with a longer solving time and optimized it using quantum annealing. Quantum annealing (DW_2000Q_6 from D-Wave), classical computing (simulated annealing, simulated quantum annealing, and an algorithmic solution of the bar-tipping algorithm), and hybrid computing were compared with each other according to the generation time of mazes, and their performance was evaluated. The solvers used in this experiment are as follows: DW_2000Q_6 from D-Wave; the simulated annealer called SASampler and the simulated quantum annealer called SQASampler from OpenJij ope (2023); D-Wave's quantum-classical hybrid solver called hybrid_binary_quadratic_model_version2 (BQM); and a classical computer (MacBook Pro (14-inch, 2021), OS: macOS Monterey Version 12.5, Chip: Apple M1 Pro, Memory: 16GB). This comparison showed that quantum annealing was faster. This may be because the direction of the bars is determined at once using quantum annealing, which is several times faster than the classical algorithm. We do not use an exact solver to solve the combinatorial optimization problem. We expect some diversity in the optimal solutions and do not focus only on a single optimal solution in maze generation. Thus, we compare three solvers, which generate various optimal solutions. In addition, we generate mazes that reflect individual characteristics, whereas existing maze generation algorithms rely on randomness and fail to incorporate other factors. In this case, we incorporated the maze solving time as one of those other factors. The maze solving time was defined as the time (in seconds) from the start of solving the maze to the end of solving the maze. The paper is organized as follows. In the next section, we explain the methods of our experiments. In Sec. 3, we describe the results of our experiments. In Sec. 4, we summarize this paper.

Figure 3: Mazes that violate the constraints. **(a)** A maze violating the first constraint. **(b)** A maze violating the second constraint. **(c)** A maze violating the third constraint.

Figure 4: Red cells represent options of starting coordinates for the wall-extending algorithm.

Figure 5: **(a)** Initial state for wall-extending algorithm. **(b)** Step 1 for wall-extending algorithm. **(c)** Step 2 for wall-extending algorithm. **(d)** Maze generated using wall-extending algorithm.

## 2 Methods

### Cost function

To generate the maze with the quantum annealer, we need to set the cost function in the quantum annealer. One of the important features of maze generation is diversity; in this sense, the optimal solution is not always unique. Since it is sufficient to obtain a structure consistent with a maze, the cost function is mainly derived from the necessary constraints of a maze, as explained below.
Three constraints form the basis of the bar-tipping algorithm. The cost function will be converted to a QUBO matrix to use the quantum annealer. To convert the cost function to a QUBO, the cost function must be written in a quadratic form. Using the penalty method, we can convert various constraints written in a linear form into a quadratic function. The penalty method rewrites an equality constraint as a quadratic function; for example, it can rewrite the equality constraint \(x=1\) as \((x-1)^{2}\). Thus, we construct the cost function for generating the maze using the bar-tipping algorithm below. The constraints of the bar-tipping algorithm correspond to the terms of the cost function described below. The first constraint of the bar-tipping algorithm is that each bar can be extended in only one direction. It prevents making closed circuits. The second constraint of the bar-tipping algorithm is that the bars of the first column are extended randomly in four directions (up, right, down, and left), while the bars of the second and subsequent columns can be extended randomly in three directions (up, right, and down). It also prevents the creation of closed circuits. The third constraint of the bar-tipping algorithm is that adjacent bars must not overlap. Following these constraints of the bar-tipping algorithm, we can generate a maze with only one path from the start to the goal. The cost function consists of three terms to reproduce the bar-tipping algorithm according to the three constraints and to determine the start and goal:

\[\begin{split} E(\{x_{i,j,d},X_{m,n}\})=\sum_{i,i^{\prime}}\sum_{j,j^{\prime}}\sum_{d,d^{\prime}}Q_{(i,j,d),(i^{\prime},j^{\prime},d^{\prime})}x_{i,j,d}x_{i^{\prime},j^{\prime},d^{\prime}}+\lambda_{1}\sum_{i}\sum_{j}\Biggl{(}\sum_{d}x_{i,j,d}-1\Biggr{)}^{2}\\ +\lambda_{2}\Biggl{(}\sum_{m}\sum_{n}X_{m,n}-2\Biggr{)}^{2}, \end{split} \tag{1}\]

where \(x_{i,j,d}\) denotes whether the bar in the \(i\)-th row and \(j\)-th column is extended in direction \(d\) (up: 0, right: 1, down: 2, left: 3). When the bar at coordinate \((i,j)\) is extended in direction \(d\), \(x_{i,j,d}\) takes \(1\); otherwise it takes \(0\). Due to the second constraint of the bar-tipping algorithm, the bars in the second and subsequent columns cannot be extended to the left; only the first column has \(d=3\). Furthermore, \(Q_{(i,j,d),(i^{\prime},j^{\prime},d^{\prime})}\) in Equation 1 depends on \(i,j,d,i^{\prime},j^{\prime}\), and \(d^{\prime}\) and is expressed as follows:

\[Q_{(i,j,d),(i^{\prime},j^{\prime},d^{\prime})}=\left\{\begin{array}{ll}1&(i=i^{\prime}-1,j=j^{\prime},d=2,d^{\prime}=0)\\ 1&(i=i^{\prime}+1,j=j^{\prime},d=0,d^{\prime}=2)\\ 0&\mathrm{otherwise}.\end{array}\right. \tag{2}\]

The coefficients \(\lambda_{1}\) and \(\lambda_{2}\) are constants to adjust the effects of each penalty term. The first term prevents the bars from overlapping by extending toward each other face-to-face. It represents the third constraint of the bar-tipping algorithm. Here, due to the second constraint, bars in the second and subsequent columns cannot be extended to the left.

Figure 6: Red cells represent options of starting coordinates for the hunt-and-kill algorithm.

Figure 7: **(a)** Initial state for hunt-and-kill algorithm. **(b)** Step 1 for hunt-and-kill algorithm. **(c)** Step 2 for hunt-and-kill algorithm. **(d)** Maze generated using hunt-and-kill algorithm.
Therefore, adjacent bars in the same row cannot extend toward each other and overlap. This corresponds to the fact that \(d\) cannot take \(3\) when \(j\geq 1\). Thus, there is no need to consider overlaps to the left and right, and the first term only restricts the extension and overlap between vertically adjacent bars. For example, the situation in which the bar at \((i,j)\) is extended down (\(d=2\)) and the bar below it at \((i+1,j)\) is extended up (\(d=0\)) is represented by \(x_{i,j,2}x_{i+1,j,0}=1\), and \(Q_{(i,j,2),(i+1,j,0)}\) takes \(1\). In the same way, considering the relation between the bar at \((i,j)\) and the bar above it at \((i-1,j)\), \(Q_{(i-1,j,2),(i,j,0)}=1\). Thus, \(Q_{(i,j,2),(i+1,j,0)}x_{i,j,2}x_{i+1,j,0}\) takes \(1\), and the value of the cost function increases. In this way, the third constraint is represented by the first term. The second term is a penalty term that limits the direction of extension to one per bar. It represents the first constraint of the bar-tipping algorithm. This means that, for a given coordinate \((i,j)\), the sum of \(x_{i,j,d}\) over \(d\) must take the value \(1\). Here, the bars in the second and subsequent columns cannot extend to the left by the second constraint. Thus, \(d\) takes values in (0, 1, 2, 3) when \(j=0\), and in (0, 1, 2) when \(j\geq 1\). The third term is the penalty term for selecting the two coordinates of the start and the goal from the coordinates \((m,n)\). This means that, over the coordinates \((m,n)\), the sum of \(X_{m,n}\) takes the value \(2\). The start and the goal are interchangeable in the maze; they are randomly selected from the two coordinates determined by the third term. \(X_{m,n}\) denotes whether or not the start or goal is set at the \(m\)-th row and \(n\)-th column of the options for start and goal coordinates. When the coordinate \((m,n)\) is chosen for the start or goal, \(X_{m,n}\) takes \(1\); otherwise, it takes \(0\). There are no couplings between \(X_{m,n}\) and \(x_{i,j,d}\) in Equation 1, which means that the maze structure and the coordinates determining the start and goal are unrelated. Figure 8 shows the coordinates \((m,n)\) that are the options for the start and the goal. As Figure 8 shows, the coordinates \((m,n)\) are different from the coordinates of the bars; they are located at the four corners of the bars, where the bars do not extend. \(X_{m,n}\) and \(x_{i,j,d}\) are different variables: \(X_{m,n}\) encodes the options for the start and goal, and \(x_{i,j,d}\) encodes the coordinates and directions in which the bars are extended. We have shown the simplest implementation of maze generation following the bar-tipping algorithm on a quantum annealer. Following the above, a maze depending on randomness is generated. To generate a unique maze independent of randomness, we add to the cost function a term that makes the maze more difficult, where the difficulty is defined in terms of time (in seconds).

Figure 8: Black cells represent outer walls and inner bars \((i,j)\). Red cells represent options of start and goal coordinates \((m,n)\).
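As an illustration of how Equation 1 translates into an explicit QUBO, the sketch below assembles the three penalty terms into a dictionary of pairwise weights. The string variable labels, the hand-expansion of the squared penalties (constant offsets are dropped since they do not affect the minimizer), and the assumption that the start/goal candidates form the \((N+1)\times(N+1)\) grid of bar corners (our reading of Figure 8 and Equation 4) are our own choices; the default \(\lambda_{1}=\lambda_{2}=2\) follows the values used in the experiments.

```python
from collections import defaultdict

def maze_qubo(N, lam1=2.0, lam2=2.0):
    """Build the QUBO of Eq. (1) as a dictionary {(label_u, label_v): weight}."""
    Q = defaultdict(float)
    x = lambda i, j, d: f"x_{i}_{j}_{d}"   # bar (i, j) extended in direction d
    X = lambda m, n: f"X_{m}_{n}"          # start/goal candidate corner (m, n)

    def dirs(j):                           # d = 3 (left) exists only in the first column
        return (0, 1, 2, 3) if j == 0 else (0, 1, 2)

    # First term (Eq. 2): penalize facing extensions of vertically adjacent bars.
    for i in range(N - 1):
        for j in range(N):
            Q[(x(i, j, 2), x(i + 1, j, 0))] += 1.0

    # Second term: lam1 * (sum_d x_{i,j,d} - 1)^2, expanded, constant dropped.
    for i in range(N):
        for j in range(N):
            ds = dirs(j)
            for a, da in enumerate(ds):
                Q[(x(i, j, da), x(i, j, da))] += -lam1
                for db in ds[a + 1:]:
                    Q[(x(i, j, da), x(i, j, db))] += 2.0 * lam1

    # Third term: lam2 * (sum_{m,n} X_{m,n} - 2)^2, expanded, constant dropped.
    corners = [X(m, n) for m in range(N + 1) for n in range(N + 1)]
    for a, u in enumerate(corners):
        Q[(u, u)] += -3.0 * lam2
        for v in corners[a + 1:]:
            Q[(u, v)] += 2.0 * lam2

    return dict(Q)
```

A minimizer of this QUBO assigns exactly one direction per bar and marks exactly two corner cells as start and goal, matching the constraints above; the dictionary can be passed to any dimod-style sampler (see Sec. 2.3).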
### Update rule

We propose an additional \(Q_{update}\) term to increase the time needed to solve the maze. We introduce a term with random elements that changes the maze structure, and it is added to Equation 1. First, the \(Q_{update}\) term, the additional term built from the new QUBO matrix \(Q_{update}\), is given by

\[\begin{split}&\lambda_{update1}\sum_{i,i^{\prime}}\sum_{j,j^{\prime}}\sum_{d,d^{\prime}}Q_{update(k,k^{\prime})}x_{i,j,d}x_{i^{\prime},j^{\prime},d^{\prime}}\\ &+\lambda_{update1}\sum_{i}\sum_{j}\sum_{d}\sum_{m}\sum_{n}Q_{update(k,l)}x_{i,j,d}X_{m,n}\\ &+\lambda_{update1}\sum_{i}\sum_{j}\sum_{d}\sum_{m}\sum_{n}Q_{update(l,k)}X_{m,n}x_{i,j,d}\\ &+\lambda_{update2}\sum_{m,m^{\prime}}\sum_{n,n^{\prime}}Q_{update(l,l^{\prime})}X_{m,n}X_{m^{\prime},n^{\prime}}, \end{split} \tag{3}\]

where

\[\left\{\begin{array}{ll}k=d+(3N+1)i&(j=0)\\ k=d+3j+1+(3N+1)i&(j\neq 0)\\ l=(3N+1)N+(N+1)m+n.&\end{array}\right. \tag{4}\]

Figure 9 shows the structure of \(Q_{update}\) and the roles of its blocks. Here, \(k^{\prime}\) and \(l^{\prime}\) are obtained by replacing \(i,j,m,n\) in \(k,l\) with \(i^{\prime},j^{\prime},m^{\prime},n^{\prime}\). \(N\) in Equation 4 is the size of the maze. The coefficients \(\lambda_{update1}\) and \(\lambda_{update2}\) are constants to adjust the effect of the terms. The elements of \(Q_{update}\) related to maze generation (part A in Figure 9) are multiplied by \(\lambda_{update1}\). The elements of \(Q_{update}\) related to the relation between the start and goal determination and the maze generation (parts B and C in Figure 9) are also multiplied by \(\lambda_{update1}\). The elements of \(Q_{update}\) related to the start and goal determination (part D in Figure 9) are multiplied by \(\lambda_{update2}\). These factors control the maze difficulty without breaking the bar-tipping algorithm's constraints. Equation 3 is expressed in terms of the serial number \(k\) of each coordinate \((i,j)\) at which bars can extend, and the index \(l\), which adds to the total number of bar-extension variables the serial number of the coordinates \((m,n)\) that are options for the start and the goal. Furthermore, the second and third terms in Equation 3 allow the maze to take into account the relation between the structure of the maze and the coordinates of the start and the goal.

Figure 9: Structure of \(Q_{update}\). Part A is related to maze generation. Parts B and C are related to the relation between maze generation and the start and goal determination. Part D is related to the start and goal determination.

Second, the new QUBO matrix \(Q_{update}\) is given by

\[Q_{update}:=p(t)Q_{update}+\big{\{}1-p(t)\big{\}}Q_{random}, \tag{5}\]

where \(Q_{random}\) is a matrix of random elements from \(-1\) to \(1\) and \(p(t)\) depends on the time \(t\) (in seconds) taken to solve the previous maze and is expressed as follows:

\[p(t)=\frac{1}{1+e^{-at}}. \tag{6}\]

The matrix \(Q_{update}\) is constructed with the aim of increasing the maze solving time through the iteration of maze solving. The initial \(Q_{update}\) used in the first maze generation is a random matrix, and the \(Q_{update}\) used in the second and subsequent maze generations is updated using Equation 5, the maze solving time \(t\), and the previous \(Q_{update}\). The longer the solving time \(t\) of the previous maze, the higher the proportion of the previous \(Q_{update}\) in the current \(Q_{update}\) and the lower the proportion of \(Q_{random}\); conversely, when \(t\) is small, the proportion of the previous \(Q_{update}\) is small and the proportion of \(Q_{random}\) is significant. In other words, the longer the solving time \(t\) of the previous maze, the more of the characteristics of the previous \(Q_{update}\) remain. Here, \(a\) is a constant to adjust this proportion. The function \(p(t)\) increases monotonically with \(t\) and takes values between \(0\) and \(1\); thus, the contribution of \(Q_{random}\) to \(Q_{update}\) decreases as the previous solving time \(t\) increases. After the maze is solved, the QUBO for the next maze is updated by Equation 5 using the time taken to solve the maze. The update is carried out only once before each maze generation. Repetition of the update gradually makes the maze more difficult for each individual. The sum of Equation 1 and Equation 3 is always used to generate a new maze, annealing from a maximally mixed state.
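The iteration that the update rule induces can be summarized by the short sketch below. The dimension of \(Q_{update}\) follows our reading of Equation 4 (with \((N+1)^{2}\) start/goal candidates), the block-wise \(\lambda_{update1}/\lambda_{update2}\) scaling of Equation 3 is applied only when the term is added to the cost function and is therefore omitted here, and generate_maze and solve_time are hypothetical placeholders for the annealing step and the human solving time.

```python
import numpy as np

def p(t, a=0.05):
    """Eq. (6): logistic weight of the previous Q_update; with a = 0.05, p(30) is about 0.82."""
    return 1.0 / (1.0 + np.exp(-a * t))

def update_Q(Q_prev, t, rng, a=0.05):
    """Eq. (5): the longer the previous maze took, the more of the previous Q_update survives."""
    Q_random = rng.uniform(-1.0, 1.0, size=Q_prev.shape)
    w = p(t, a)
    return w * Q_prev + (1.0 - w) * Q_random

def q_update_dimension(N):
    """Serial indices of Eq. (4): (3N+1)N bar variables plus (N+1)^2 start/goal candidates."""
    return (3 * N + 1) * N + (N + 1) ** 2

# Iteration sketch (generate_maze and solve_time are hypothetical placeholders):
# rng = np.random.default_rng()
# N = 9
# Q_upd = rng.uniform(-1.0, 1.0, size=(q_update_dimension(N),) * 2)  # random initial matrix
# for _ in range(30):                      # one set of 30 mazes, as in Sec. 2.3.3
#     maze = generate_maze(Q_upd)          # minimize Eq. (1) + Eq. (3)
#     t = solve_time(maze)                 # seconds taken by the human subject
#     Q_upd = update_Q(Q_upd, t, rng)
```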
### Experiments

#### 2.3.1 Generation of maze

We generate mazes by optimizing the cost function using DW_2000Q_6. Since the generated mazes will not be solved by human subjects, the update term is excluded from this experiment. We chose \(\lambda_{1}=2\) and \(\lambda_{2}=2\).

#### 2.3.2 Computational cost

We compare the generation times of \(N\times N\) mazes on DW_2000Q_6 from D-Wave; the simulated annealer SASampler and the simulated quantum annealer SQASampler from OpenJij; D-Wave's quantum-classical hybrid solver hybrid_binary_quadratic_model_version2 (hereinafter referred to as "Hybrid Solver"); and a classical computer (MacBook Pro (14-inch, 2021), OS: macOS Monterey Version 12.5, Chip: Apple M1 Pro, Memory: 16GB) running the bar-tipping algorithm coded with Python 3.11.5 (hereinafter referred to as "Classic"). The update term was excluded from this experiment. We set \(\lambda_{1}=2\) and \(\lambda_{2}=2\). DW_2000Q_6 was annealed 1000 times for 20\(\upmu\)s, and its QPU annealing time for maze generation was evaluated using the time-to-solution (TTS). SASampler and SQASampler were annealed with 1000 sweeps. These parameters were kept constant throughout this experiment. Regression curves fitted using the least squares method were drawn from the results to examine the dependence of the computation time on the maze size.

#### 2.3.3 Effect of update term

The solving times of \(9\times 9\) mazes generated without \(Q_{update}\) and with \(Q_{update}\) were measured. In this experiment, 12 human subjects were asked to solve one set of mazes (30 mazes each). To prevent the players from memorizing the maze structure, they could only see a limited \(5\times 5\) window of cells; in other words, only the two surrounding cells in each direction were visible. The increase rate, relative to the first step, of the simple moving average of ten solving times was plotted on the graph. For this experiment, \(\lambda_{1}=2\), \(\lambda_{2}=2\), \(\lambda_{update1}=0.15\), \(\lambda_{update2}=0.30\), and \(a=0.05\) were chosen. For the two \(\lambda_{update}\) coefficients, we chose larger values that do not violate the constraints of the bar-tipping algorithm. As the constant \(a\), we chose a value for which Equation 6 is about 0.8 (80%) when \(t=30\) seconds.

### 2.4 Applications

The cost function in this paper has many potential applications when generalized. For example, it can be applied to graph coloring and traffic light optimization. Graph coloring can be handled by requiring adjacent nodes to have different colors. Traffic light optimization can be addressed by viewing maze generation as traffic flow. Roughly speaking, our cost function can be applied to any problem of determining the next state by looking at adjacent states. \(Q_{update}\) can be applied to the problem of determining the difficulty of the next state from the previous result. The selection of personalized educational materials is one example: based on the solving times of previously solved problems, educational materials can be selected at a difficulty suitable for the individual. This is the most fascinating direction for future studies. As described above, we emphasize that the \(Q_{update}\) term proposed in this paper also has potential uses in various fields related to training and education.
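For reference, the sketch below shows how the experiments described above could be driven in code, reusing the maze_qubo sketch from the cost-function subsection. The package and solver names (OpenJij's SASampler and SQASampler, D-Wave's DW_2000Q_6) and the parameters (1000 reads, 20 \(\upmu\)s anneals, 1000 sweeps) come from the text; access to the D-Wave hardware, the exact keyword arguments, and the TTS formula with a 99% target success probability (a standard definition not spelled out in the text) are assumptions on our part.

```python
import math
import openjij as oj
from dwave.system import DWaveSampler, EmbeddingComposite

Q = maze_qubo(9)   # QUBO of Eq. (1) for a 9x9 maze

# Simulated annealing and simulated quantum annealing (OpenJij), 1000 sweeps each.
sa_result = oj.SASampler().sample_qubo(Q, num_sweeps=1000, num_reads=100)
sqa_result = oj.SQASampler().sample_qubo(Q, num_sweeps=1000, num_reads=100)

# Quantum annealing on DW_2000Q_6: 1000 reads of 20 us each (requires Leap access).
qpu = EmbeddingComposite(DWaveSampler(solver="DW_2000Q_6"))
qa_result = qpu.sample_qubo(Q, num_reads=1000, annealing_time=20)

def tts(tau_us, p_success, target=0.99):
    """Time-to-solution: anneal time rescaled to reach the target success probability."""
    if p_success <= 0.0:
        return float("inf")
    if p_success >= target:
        return tau_us
    return tau_us * math.log(1.0 - target) / math.log(1.0 - p_success)

# p_success would be estimated as the fraction of reads reaching the ground state of
# Eq. (1); tts(20, p_success) then corresponds to the quantity plotted in Figure 11.
```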
Based on the solving time of the previously solved problems, the educational materials can be selected at a difficulty suitable for the individual. This is the most fascinating direction for future studies. As described above, we should emphasize that the \(Q_{update}\) proposed in this paper also has potential use in various fields related to training and education.

## 3 Results

### 3.1 Generation of maze

Figure 10 shows execution examples of \(9\times 9\) and \(15\times 15\) mazes generated by optimizing the cost function using DW_2000Q_6.

Figure 10: Left: \(9\times 9\) maze generated by DW_2000Q_6. Right: \(15\times 15\) maze generated by DW_2000Q_6. Red cells represent a start and a goal for the maze.

### 3.2 Computational cost

Fits of the form \(aN^{2}+bN+c\) are applied to each of the datasets using the least squares method. The results are as follows. Figure 11 shows the relation between TTS for maze generation and maze size on DW_2000Q_6. The TTS of DW_2000Q_6 scales as \(\mathcal{O}(N)\) or \(\mathcal{O}(N^{2})\). Even if it depends quadratically on the maze size, its quadratic coefficient is smaller than those of the other solvers.

Figure 11: Time to reach the ground state with 99% success probability as a function of the maze size in DW_2000Q_6. The error bars represent a 95% confidence interval. The regression curve is given by \(\big{(}(3.231\pm 0.076)N+(11.40\pm 0.69)\big{)}\) for linear regression and \(\big{(}(7.4\pm 1.8)\cdot 10^{-2}N^{2}+(2.05\pm 0.30)N+(14.8\pm 1.0)\big{)}\) for quadratic regression.

Figure 12 shows the relation between maze generation time and maze size on Classic, SASampler, and SQASampler. Classic \(\big{(}(0.855\pm 0.090)N^{2}+(0.6\pm 1.5)N+(2.2\pm 5.1)\big{)}\), SASampler \(\big{(}(28.8\pm 1.2)N^{2}+(36\pm 20)N+(129\pm 71)\big{)}\), and SQASampler \(\big{(}(172.8\pm 4.4)N^{2}+(287\pm 73)N-(1.5\pm 2.5)\cdot 10^{2}\big{)}\) exhibit quadratic dependence on the maze size, \(\mathcal{O}(N^{2})\). Most of the solvers introduced here are \(\mathcal{O}(N^{2})\) since they extend \(N\times N\) bars to generate a maze.

Figure 12: **(a)** Time to reach the ground state as a function of the maze size in Classic. The error bars represent a 95% confidence interval. The regression curve is \(\big{(}(0.855\pm 0.090)N^{2}+(0.6\pm 1.5)N+(2.2\pm 5.1)\big{)}\). **(b)** Time to reach the ground state as a function of the maze size in SASampler. The error bars represent a 95% confidence interval. The regression curve is \(\big{(}(28.8\pm 1.2)N^{2}+(36\pm 20)N+(129\pm 71)\big{)}\). **(c)** Time to reach the ground state as a function of the maze size in SQASampler. The error bars represent a 95% confidence interval. The regression curve is \(\big{(}(172.8\pm 4.4)N^{2}+(287\pm 73)N-(1.5\pm 2.5)\cdot 10^{2}\big{)}\).

Figure 13 shows the comparison of maze generation time between DW_2000Q_6 and Classic. DW_2000Q_6 has a smaller \(N^{2}\) coefficient than the classical algorithm, and beyond \(N=5\), DW_2000Q_6 shows an advantage over Classic in the maze generation problem. The improvement using quantum annealing arises because it determines the directions of all \(N\times N\) bars at once.

Figure 13: Comparison of maze generation time between DW_2000Q_6 and Classic.

Figure 14 shows the relation between maze generation time and maze size on the Hybrid Solver.

Figure 14: Time to reach the ground state as a function of maze size in the Hybrid Solver. The error bars represent a 95% confidence interval.
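The TTS values and regression curves reported in these figures can be reproduced with a few lines of NumPy. The sketch below uses the standard time-to-solution formula with a 99% target probability and `np.polyfit` for the least-squares fits of the form \(aN^{2}+bN+c\); the function names and array shapes are our own illustration.

```python
import numpy as np

def tts(anneal_time_us, p_success, p_target=0.99):
    # Time-to-solution: expected total anneal time to hit the ground state at least
    # once with probability p_target, given the per-read success probability p_success.
    return anneal_time_us * np.log(1.0 - p_target) / np.log(1.0 - p_success)

def fit_scaling(sizes, times):
    # Least-squares fits t(N) = a*N^2 + b*N + c and t(N) = b*N + c, as in Figures 11-14.
    quad = np.polyfit(sizes, times, deg=2)   # returns [a, b, c]
    lin = np.polyfit(sizes, times, deg=1)    # returns [b, c]
    return quad, lin

# Example: a single 20 us read that finds the ground state 40% of the time
# needs roughly 180 us of total anneal time to reach 99% success probability.
print(tts(20.0, 0.40))
```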
Linear and quadratic fits applied to the dataset indicate that the Hybrid Solver is \(\mathcal{O}(1)\) or \(\mathcal{O}(N)\) \(\big{(}(3.29\pm 0.83)\cdot 10^{2}N+(2.99325\pm 0.00090)\cdot 10^{6}\big{)}\) between \(N=1\) and \(N=18\), and then shifts to \(\mathcal{O}(N^{2})\) \(\big{(}(6.899\pm 0.065)\cdot 10^{3}N^{2}-(0.4\pm 3.2)\cdot 10^{3}N+(6.90\pm 0.39)\cdot 10^{5}\big{)}\). The shift in the computational cost of the Hybrid Solver may have resulted from a change in its algorithm.

### 3.3 Effect of update term

Here, 12 human subjects were asked to solve one set of mazes (30 mazes each), and the maze is shown to increase in difficulty as it adapts to each subject. Figure 15 (a) shows the increase rate, from the first step, of the simple moving average of 10 solving times of mazes generated without \(Q_{update}\), together with the individual increase rates. The solving time of the maze without \(Q_{update}\) became slightly shorter overall. Figure 15 (b) shows the increase rate, from the first step, of the simple moving average of 10 solving times of mazes generated using \(Q_{update}\), together with the individual increase rates. The solving time of the maze using \(Q_{update}\) became longer overall. Most of the players increased their solving time, but some players decreased or did not change their solving time. In addition, for nine players the average solving time of the mazes generated using \(Q_{update}\) was longer than that of the mazes generated without \(Q_{update}\). These results show that \(Q_{update}\) has the potential to increase the difficulty of the mazes.

Figure 15: **(a)** Left: Increase rate from the first step of the simple moving average of 10 solving times of \(9\times 9\) mazes generated without \(Q_{update}\). The error bars represent standard errors. Right: All players' increase rates from the first step of the simple moving average of 10 solving times of \(9\times 9\) mazes generated without \(Q_{update}\). **(b)** Left: Increase rate from the first step of the simple moving average of 10 solving times of \(9\times 9\) mazes generated using \(Q_{update}\). The error bars represent standard errors. Right: All players' increase rates from the first step of the simple moving average of 10 solving times of \(9\times 9\) mazes generated using \(Q_{update}\).

## 4 Discussion

In this paper, we show that generating difficult mazes (i.e., mazes with longer solving times) using the bar-tipping algorithm is also possible with quantum annealing. By reformulating the bar-tipping algorithm as a combinatorial optimization problem, we generalize it so that mazes can be generated more flexibly. In particular, our approach is simple but can adjust the difficulty of solving mazes by quantum annealing. In Sec. 3.2, regarding the comparison of the computational costs of our maze generation approach using TTS, DW_2000Q_6 has a smaller coefficient of \(N^{2}\) than the classical counterpart. Therefore, as \(N\) increases, the computational cost of DW_2000Q_6 can be expected to become lower than that of classical simulated annealing. Unfortunately, since the number of qubits in the D-Wave quantum annealer is finite, the potential power of generating mazes by quantum annealing is limited. However, our insight demonstrates some advantages of quantum annealing against its classical counterpart. In addition, we observed that the hybrid solver's computational cost was constant up to \(N=18\).
This indicates that hybrid solvers can potentially be effective if they are developed to deal with many variables in the future.

In Sec. 3.3, we proposed \(Q_{update}\) to increase the solving time using quantum annealing. We demonstrated that introducing \(Q_{update}\) increased the time to solve the maze and changed the difficulty compared to the case where \(Q_{update}\) was not introduced. In these experiments, the parameters (\(\lambda_{update1}\), \(\lambda_{update2}\), and \(a\)) were fixed. Generating mazes that are difficult for every player may be possible by adjusting the parameters individually.

One direction for future study is the application of our cost function in various realms. We should emphasize that the \(Q_{update}\) proposed in this paper also has potential use in various fields related to training and education. The powerful computation of quantum annealing and its variants opens the way to such realms with high-speed computation and a variety of solutions.

## Conflict of Interest Statement

Sigma-i employs author Masayuki Ohzeki. The remaining authors declare that the research was conducted without any commercial or financial relationships that could be construed as a potential conflict of interest.

## Author Contributions

Y. I., T. Y., and K. O. conceived the idea of the study. M. O. developed the statistical analysis plan, guided how to use quantum annealing to find the optimal solution, and contributed to interpreting the results. Y. I., T. Y., and K. O. drafted the original manuscript. M. O. supervised the conduct of this study. All authors reviewed the manuscript draft and revised it critically for intellectual content. All authors approved the final version of the manuscript to be published.

## Funding

The authors gratefully acknowledge financial support from the MEXT-Quantum Leap Flagship Program Grant No. JPMXS0120352009, as well as the Public/Private R&D Investment Strategic Expansion PrograM (PRISM) and the programs for Bridging the gap between R&D and the IDeal society (society 5.0) and Generating Economic and social value (BRIDGE) from the Cabinet Office.

## Acknowledgments

The authors thank Reo Shikanai and Yoshihiko Nishikawa for fruitful discussions on applications of our approach to other problems. This paper is the result of research developed from an exercise class held at Tohoku University in Japan called "Quantum Annealing for You, 2nd party!". We want to thank one of the supporters, Rumiko Honda, for supporting the operations. The participants were a diverse group, ranging from high school students to university students, graduate students, technical college students, and working adults. As you can see from the authors' affiliations, this is a good example of a leap from the diversity of the participants to the creation of academic and advanced content.
2309.08816
EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding
Object understanding in egocentric visual data is arguably a fundamental research topic in egocentric vision. However, existing object datasets are either non-egocentric or have limitations in object categories, visual content, and annotation granularities. In this work, we introduce EgoObjects, a large-scale egocentric dataset for fine-grained object understanding. Its Pilot version contains over 9K videos collected by 250 participants from 50+ countries using 4 wearable devices, and over 650K object annotations from 368 object categories. Unlike prior datasets containing only object category labels, EgoObjects also annotates each object with an instance-level identifier, and includes over 14K unique object instances. EgoObjects was designed to capture the same object under diverse background complexities, surrounding objects, distance, lighting and camera motion. In parallel to the data collection, we conducted data annotation by developing a multi-stage federated annotation process to accommodate the growing nature of the dataset. To bootstrap the research on EgoObjects, we present a suite of 4 benchmark tasks around the egocentric object understanding, including a novel instance level- and the classical category level object detection. Moreover, we also introduce 2 novel continual learning object detection tasks. The dataset and API are available at https://github.com/facebookresearch/EgoObjects.
Chenchen Zhu, Fanyi Xiao, Andres Alvarado, Yasmine Babaei, Jiabo Hu, Hichem El-Mohri, Sean Chang Culatana, Roshan Sumbaly, Zhicheng Yan
2023-09-15T23:55:43Z
http://arxiv.org/abs/2309.08816v1
# EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding

###### Abstract

Object understanding in egocentric visual data is arguably a fundamental research topic in egocentric vision. However, existing object datasets are either non-egocentric or have limitations in object categories, visual content, and annotation granularities. In this work, we introduce EgoObjects, a large-scale egocentric dataset for fine-grained object understanding. Its Pilot version contains over 9K videos collected by 250 participants from 50+ countries using 4 wearable devices, and over 650K object annotations from 368 object categories. Unlike prior datasets containing only object category labels, EgoObjects also annotates each object with an instance-level identifier, and includes over 14K unique object instances. EgoObjects was designed to capture the same object under diverse background complexities, surrounding objects, distance, lighting and camera motion. In parallel to the data collection, we conducted data annotation by developing a multi-stage federated annotation process to accommodate the growing nature of the dataset. To bootstrap the research on EgoObjects, we present a suite of 4 benchmark tasks around egocentric object understanding, including a novel instance-level and the classical category-level object detection. Moreover, we also introduce 2 novel continual learning object detection tasks. The dataset and API are available at [https://github.com/facebookresearch/EgoObjects](https://github.com/facebookresearch/EgoObjects).

## 1 Introduction

Object understanding tasks, such as classification and detection, are arguably fundamental research topics in computer vision. Enormous advances achieved so far have been accelerated by the availability of large-scale datasets, such as ImageNet [17], COCO [35], LVIS [23], Open Images [31] and Objectron [2].

Figure 1: **EgoObjects dataset.** **Left**: It contains videos of objects captured from the first-person viewpoint under 10 diverse conditions (only 5 are shown for clarity). Multiple objects in each video are annotated with instance ID and category label. In each row, we visualize the annotations of one instance track ("can opener") in one video captured under one set of condition variable choices. For clarity, we use shorthand notations: \(D-\) Distance, \(B-\) Background, \(L-\) Lighting, \(M-\) Camera Motion. Annotations on other objects are not shown. **Right:** A visualization of a subset of non-leaf nodes in our hierarchical object taxonomy, covering diverse object categories. Leaf nodes and other non-leaf nodes are omitted for clarity.

Those datasets often contain images captured from a third-person or exocentric viewpoint and curated from existing sources (e.g. Flickr). Despite their large volume, they often only capture individual object instances in a single image or video, and do not capture the same object under diverse settings, which is important for fine-grained object understanding tasks such as instance-level object detection. In contrast, object understanding in egocentric vision processes visual data containing objects captured from a first-person or egocentric viewpoint. The approaches to those tasks have wide applications in augmented reality and robotics, such as robustly anchoring virtual content at a real world object under various conditions (e.g. background, lighting, distance), and are often required to perform well from the egocentric viewpoint and distinguish objects at both category- (e.g.
mug vs kettle) and instance level (e.g. my mug vs your mug) under various conditions. Therefore, there are clear gaps in adopting existing exocentric datasets for egocentric object understanding. On the other hand, several egocentric datasets containing object annotations have been built. A family of such datasets are focused on capturing human activities and hand-object interactions. Ego4D [22] contains a large number of egocentric videos of human activities. However, according to the PACO-Ego4D [47] which mines the objects from Ego4D, there are only 75 object categories with at least 20 samples, and each object instance often only appears in one video. Epic-Kitchens-100 [14] contains over 700 videos depicting human activities in the kitchen, but only annotates objects within the kitchen. HOI4D [37] is collected for category-level human-object interaction, and only contains 800 different object instances from 16 categories. There are several other datasets that are more object-centric, including TREK-150 [18], FPHA [20] and CO3D [49], but only contain objects from a limited set of categories (\(<\)50). Objects there are often captured in a single setup or few setups with limited variations in surrounding objects, background, distances and camera motions. Moreover, semantic granularity of the object annotations are often limited at category-level, and object instances from the same category are not distinguished, which impedes the development of instance-level object understanding approaches. Therefore, there are still significant gaps with existing egocentric datasets in the dataset scale, visual content variations around individual objects, object semantic diversity, and instance-level object annotation. To address these gaps, we introduce _EgoObjects_, a new large-scale egocentric video dataset for fine-grained object understanding (Figure 1). Unlike prior egocentric datasets which are limited to a small dataset scale, a specific domain or a small number of object categories, EgoObjects includes a large number of videos containing objects from hundreds of object categories commonly seen in the households and offices worldwide. For video capture, 4 wearable devices with various field-of-view are used, including Vuzix Blade smart glasses1, Aria glasses2, Ray-Ban Stories smart glasses3 and mobile phones with ultra-wide lens4, which provide representative media formats of egocentric visual data. Each main object is captured in multiple videos with different choices of nearby secondary objects, background complexity, lighting, viewing distance and camera motion. We annotate both the main and secondary objects in the sampled frames with bounding boxes, category level semantic labels and instance-level object identifiers (ID). In current Pilot version release, it contains over \(9,200\) videos of over 30 hours collected by 250 participants from 50+ countries and regions, and 654K object annotations with 368 object categories and 14K unique object instance IDs from 3.8K hours of annotator efforts. To our best knowledge, EgoObjects is the largest egocentric video dataset of objects in terms of object categories, videos with object annotations, and object instances captured in multiple conditions. Comparisons between EgoObjects and other datasets can be seen in Table 1. 
Footnote 1: [https://www.vuzix.com](https://www.vuzix.com) Footnote 2: [https://about.meta.com/realitylabs/projectaria](https://about.meta.com/realitylabs/projectaria) Footnote 3: [https://www.meta.com/glasses](https://www.meta.com/glasses) Footnote 4: Participants are asked to hold mobile phone close to their eyes to simulate egocentric viewpoints To bootstrap the research on EgoObjects, we introduce 4 benchmark tasks spanning over both non-continual learning and continual learning settings. For non-continual learning setting, we include a novel instance-level object detection task, largely under-explored previously due to the lack of a dataset with object ID annotations, as well as conventional category-level object detection task. For continual learning setting, we present novel object detection tasks at instance- and category level. Evaluations of different approaches to all tasks are also presented to establish the baseline benchmarks. In particular, for instance-level object detection task, a novel target-aware instance detection approach is proposed and validated to outperform a baseline target-agnostic object detection method. To summarize, we make the following contributions. * We created a large-scale egocentric dataset for object understanding, which features videos captured by various wearable devices at worldwide locations, objects from a diverse set of categories commonly seen in indoor environments, and videos of the same object instance captured under diverse conditions. * We proposed a multi-stage federated annotation process for the continuously growing dataset to accompany the parallel data collection at scale. Rich annotations at video level (e.g. location, background description) and object-level (e.g. bounding box, object instance ID, category level semantic label) are collected from 3.8K hours of human annotator efforts. * We introduced 4 benchmark tasks on EgoObjects, including the novel instance-level and the conventional category-level object detection tasks as well as their continual learning variants. We evaluated multiple approaches on all tasks, and also proposed a novel target-aware approach for instance-level object detection task. ## 2 Related Work **Egocentric object understanding datasets.** Given the growing needs of egocentric object understanding in augmented reality and robotics, several egocentric datasets focused on objects have been built. TEgO [32] contains egocentric images of only 19 distinct objects for training object recognizers. TREK-150 [18] consists of 150 annotated videos for tracking objects from 34 categories merely. Despite the availability of object annotations, other larger egocentric datasets are more focused on human activities and hand-object interactions. For example, Epic-KitchenS-100 [14] captures 700 videos of nearly 100 human activities involving 300 object categories in the kitchen, but is limited to the kitchen scenario. The ADL [45] dataset features people performing everyday activities in kitchens, which has object boxes, object track ID, action labels. However, it only has 42 object categories and the track ID is not used for analysis. The MECCANO [46] is a multimodal dataset of egocentric videos to study humans behavior understanding in industrial-like settings with object, depth, and gaze annotations, supporting a suite of 5 tasks. However, the diversity of participants and locations is limited. FPHA [20] captures 45 different daily hand-object action categories involving only 26 different objects. 
HOI4D [37] contains videos of human-object interaction with only 800 different object instances from 16 categories. Albeit the large number of human activity videos, the recent Ego4D [22] only contains object annotations from around 75 object categories with at least 20 samples. Object instances often only appear in a single video, and only 870 instances have more than 5 occurrences. Meanwhile, synthetic egocentric datasets are built to scale up the data collection. xR-EgoPose [55] is a large-scale synthetic dataset containing realistic renderings of people in various poses and serves as a benchmark of 3D human pose estimation. It is focused on ego-body and simulates fisheye lens where the surrounding environment, including objects, are largely distorted. EHOI [33] is also a synthetic dataset, consisting of 20K images and 124K object instances from 19 categories with interactions with human hands. Its fidelity is low compared with real data, and it has limited complexities in lighting, background and viewpoints. To summarize, existing egocentric datasets have limitations in the number of object categories, the variations in the setting of capturing the same object, the granularity of object semantic labeling where instance-level object ID is not available and photorealism in synthetic datasets. **Instance-level object detection and datasets.** Being able to localize and recognize different object instances is critical to applications in augmented reality and robotics, such as detecting a specific toy or a custom industrial part. However, such task has been severely less explored due to the lack of object ID annotations at scale in existing datasets. In the cases of a growing number of object instances to detect, which is arguably a realistic setup, instance-level detection approaches are often required to adapt with little-to-no fine-tuning time. Mercier et al [41] proposed a template-based detector that uses example viewpoints of the target object to detect it in query images without extra training, and evaluated it on a small exocentric dataset of 20 object instances only. Hu et al [28] proposed a template-based detection approach, which incorporated a multi-level correlation model and a similarity-refine module, for handling the category-agnostic instance. On the dataset side, T-less [26] is an object dataset with 6D pose annotation for only 30 industry-relevant object instances. In [51], a small dataset of 10K RGBD images of 24 object instances were created for object detection and pose estimation. BOP dataset [27] combines 8 public datasets, and consists of 89 object instances with 3D groundtruth and over 330K RGBD images from different viewpoints. All those datasets aforementioned are not egocentric, and only contains a small number of object instances. In contrast, EgoObjects contains over 14K unique object instances captured under diverse settings. We also propose a target-agnostic baseline approach and a novel target-aware approach, and evaluate them on EgoObjects. \begin{table} \begin{tabular}{l|c c c|c c c c} & \multicolumn{3}{c|}{Exocentric} & \multicolumn{3}{c}{Egocentric} \\ & \multicolumn{2}{c|}{Objectron} & \multicolumn{2}{c|}{CO3D} & \multicolumn{1}{c|}{BOP} & Epic-K\({}^{*}\), HOI4D & Ego4D\({}^{*}\) & EgoObjects \\ \hline \#category & 9 & 50 & 89 & 300\({}^{*}\) & 16 & 75 & 368+ \\ \#participant & int’l. & - & - & 45 & 9 & 859 int’l. & 250 int’l. 
\\ \#image & 4M & 1.5M & 330K & 20M & 2.4M & 23.9K & 114K+ \\ \#instance & 17K & 19K & 89 & - & 800 & 17K & 14K+ \\ \#bbox & - & - & - & 38M & - & 50K & 654K+ \\ inst ID & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & ✓ \\ device & M & M & PC,K & G & K,I & G,V,Z,W,PP & R,A,V,M \\ \end{tabular} \end{table} Table 1: _Comparing EgoObjects with other datasets. For EgoObjects, we report statistics of the current Pilot version, which is estimated to account for 10% of the full dataset (thus the “+” notation). \({}^{*}\)Epic-Kitchen-100 [14] only contain object categories in the kitchen. \({}^{**}\)Ego4D statistics are reported by the PACO-Ego4D [47], which annotates objects in the Ego4D [22]. Abbreviation for devices: M=Mobile, K=Kinect, A=Aria, G=GoPro, PC=Primesense Carmine, I=Intel RealSense, V=Vazix Blade, R=Ray-Ban Stories, PP=Pupil, Z=Zetronix zShades, W=Weview._ **Continual learning.** Conventional object understanding approaches build static models incapable of adapting their predictive behaviors over time. In contrast, continual learning models can learn from an infinite stream of data and grow their predicative capabilities while reducing catastrophic forgetting of previous knowledge [40, 8, 4, 12, 43, 3, 48, 11, 56]. Broadly speaking, they can be categorized into 3 classes [15] with increasing complexities and practicalities. In _Task Incremental Learning_, individual tasks with respective training data arrive sequentially, and the model is often built with separate heads for individual tasks. At inference time, a task ID is required for each sample. In the _Class Incremental Learning_, no task ID is provided at any time, and the model often has only one head. In the most general _Data Incremental Learning_[16], more assumptions on the stationary data distribution and the paradigm of sequentially growing tasks and classes are removed. In this work, we use EgoObjects to set up 2 new continual learning tasks, which covers both Class- and Data Incremental Learning paradigms. Moreover, existing approaches are often assessed on small object classification datasets, such as Core50 [39] and OpenLORIS-Object [53]. To our best knowledge, EgoObjects is the first dataset to support the benchmarking of _continual learning of object detection_ at both instance and category level. **Category-level object detection.** Early CNN-based approaches include both two-stage methods, which tackles object proposal generation and object recognition separately [50, 9], and single-stage methods which remove the explicit object proposal generation for simplicity and efficiency [36, 6, 54]. Recent transformer-based methods introduce attention building blocks into both the backbone and the detection head to significantly improve the detection performance [10, 59, 13]. However, those approaches are often only evaluated on exocentric datasets, such as COCO [35] and LVIS [23], while their performance on egocentric datasets are largely unknown. EgoObjects contains nearly 400 object categories, and we assess both CNN and transformer models on it. ## 3 EgoObjects Dataset ### An Overview In current Pilot version, EgoObjects contains over 9K videos collected by 250 participants. A total of 114K frames are sampled and annotated. **Object instances captured under diverse conditions.** A total of 14.4K unique object instances from 368 categories are annotated. Among them, there are 1.3K main object instances from 206 categories and 13.1K secondary object instances (_i.e_., objects accompanying the main object) from 353 categories. 
On average, each image is annotated with 5.6 instances from 4.8 categories, and each object instance appears in 44.8 images, which ensures diverse viewing directions for the object. To further break it down, each main object instance appears in 95.9 images, whereas each secondary instance appears in 39.8 images on average. See the distributions of unique object instances and object annotations in Figure 2(a) and 2(b), respectively. Both figures indicate the long-tailed nature of the dataset, making the benchmark more challenging and closer to real-world distributions.

Figure 2: **Dataset statistics.** **(a)** **Left**: the number of instances per category in the log scale. **Right**: the word cloud highlights the head categories, including box, bottle. **(b)** **Left**: the number of annotations per category in the log scale. **Right**: the word cloud is similar to (a), but a few new head categories emerge, including mug, boot. **(c)** Spatial distribution of the main objects' center coordinates, confirming the diverse locations of main objects. **(d)** Relative bounding box sizes compared between EgoObjects, LVIS, and COCO. EgoObjects has more objects of medium and large sizes in the egocentric view. **(e)** Diverse distribution of participants' geographic locations in 50+ countries from 5 continents. **(f)** Distribution of video metadata including lighting (left) and background (right). Most objects are collected indoors, where lighting is more likely either artificial or low light. The background is uniformly distributed across rooms within the household.

**Diverse object spatial distribution**. We encourage participants to avoid center bias during the capture of main objects with the moving camera. In Figure 2(c), we confirm that both center coordinates are widely spread across the image.

**Object scale distribution in egocentric view**. Figure 2(d) compares EgoObjects with other datasets on the relative size distribution of object bounding boxes. The relative size is defined as the square root of the box-area-over-image-area ratio. Compared to COCO and LVIS, EgoObjects has more medium and large-sized objects, and suits the applications of egocentric object understanding where the users are more likely to interact with closer objects.

**Metadata statistics**. We further accumulate the per-video statistics across several metadata tags. As shown in Figure 2(e), our data is geographically diverse, covering 50+ countries from five continents. Finally, as presented in Figure 2(f), our data also has a diverse distribution covering various video capture conditions for lighting and backgrounds.

### Data Collection

We work with third-party vendors to recruit participants for capturing videos of common indoor objects at worldwide locations. The participants use various glasses such as Vuzix Blade, Aria Glasses, and Ray-Ban Stories. They also use the ultra-wide lens on mobile phones and hold the phone close to the eyes to simulate the egocentric viewpoint. Participants are asked to capture videos of objects from a predefined list of 400 object categories, referred to as main object categories. Each main object should be unique within its location and captured under various conditions. We define 4 variables of capture conditions, including background complexity, camera motion, object distance and lighting. The background complexity can be either "simple" or "busy". The simple background has at least 3 surrounding objects besides the main object, whereas the busy background has at least 5 other objects.
In either background, we ask participants to capture the main object in natural settings (vs. intentional settings). We also instruct the participants to move the camera around and capture different views of the main object, while avoiding the bias that the main object always stays in the center of the view. We define 3 levels of camera motion: 1) "horizontal": move the camera from left to right or right to left; 2) "vertical": move the camera upwards or downwards; and 3) "combined": rotate the camera both horizontally and vertically. The object distance also has three levels, i.e. "near", "medium", and "far". We define the object scale and the frame scale as the longer object dimension and the shorter frame edge, respectively. Near distance refers to images where the object-scale/frame-scale ratio is larger than \(30\%\), whereas medium distance has the ratio fall between \(20\%\) and \(30\%\). All remaining images are considered as having far object distances to the camera. For lighting conditions, there are two levels: bright and dim. Lighting is considered bright when a light meter reads above 250 lux and dim otherwise. Given these 4 variables, participants are instructed to collect 10 videos of each main object according to 10 predefined configurations (see details in the supplement), and each video lasts at least 10 seconds. Finally, videos are further tagged with rich metadata including the associated participant ID, main object category, location, background description and capture time.

### Federated Annotation of the Growing Dataset

EgoObjects data collection was planned to operate at large scale and lasted for 14 months. To reduce the overall dataset creation time, we conducted data annotation in parallel to the data collection, which continuously grew the dataset and introduced more complexities to the data annotation. Inspired by LVIS [23], we adopt the idea of federated annotation to achieve a balance between annotation cost and annotation exhaustiveness, and further propose a 3-stage annotation pipeline tailored to the continuously growing nature of our dataset. Figure 3 illustrates our annotation pipeline, which is used to annotate video frames evenly sampled at 1 FPS.

Figure 3: _EgoObjects multi-stage annotation. See text for details at each stage._

**Stage 1: category discovery.** The annotators are instructed to identify object categories from a predefined vocabulary \(\mathcal{V}\) of 600+ categories commonly seen in the indoor egocentric view. Annotators are asked to find at least 5 categories per image if possible, including the main object category and other salient secondary objects.

**Stage 2: exhaustive instance labeling.** For each image, 3 annotators exhaustively annotate _all_ object instances of the discovered categories with a bounding box and category label \(c\). To enable instance-level object understanding, we further enhance the bounding box annotation with a unique object instance ID5 that is consistent across the dataset. To reconcile the work from the 3 annotators, we compare the annotations from one annotator to all other annotators to get an averaged IoU-based consensus score for each annotator. Then, we select the annotator with the highest consensus score as the source of truth for the final annotations.

Footnote 5: We exclude objects from categories that have indistinguishable appearances between instances, such as those related to animals and food.
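As a concrete illustration of the Stage 2 reconciliation, the sketch below scores each annotator by its average IoU agreement with the other annotators and keeps the most consistent one. The greedy best-match rule and the function names are our own simplification, not the exact production logic.

```python
import numpy as np

def iou(a, b):
    # IoU of two boxes in (x1, y1, x2, y2) format.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def pick_source_of_truth(annotations):
    # annotations: one list of boxes per annotator for the same image.
    # Returns the index of the annotator with the highest average cross-annotator IoU.
    scores = []
    for i, boxes_i in enumerate(annotations):
        agreements = []
        for j, boxes_j in enumerate(annotations):
            if i == j:
                continue
            # greedily match each of annotator i's boxes to its best box from annotator j
            for b in boxes_i:
                agreements.append(max((iou(b, c) for c in boxes_j), default=0.0))
        scores.append(np.mean(agreements) if agreements else 0.0)
    return int(np.argmax(scores))
```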
**Stage 3: negative category verification.** By the design of federated annotation [23], not all object categories are handled in each image. However, for evaluation purpose for each image we would need to collect a set of negative categories, defined as categories that do not appear in the image. To operationalize this, we randomly sample several categories from the vocabulary \(\mathcal{V}\) as the candidates of the negative categories, and ask annotators to verify. We remove a candidate category from the negative set if any annotator flags any corresponding object instance in the image. Finally, we get a set of negative categories per image. ## 4 Benchmark Tasks on EgoObjects We introduce 4 benchmark tasks on EgoObjects, starting with a novel instance-level object detection task, which has been under-explored due to the lack of object ID annotations on individual objects captured in various conditions in existing datasets. We further present 2 novel continual learning object detection tasks, which are newly enabled by EgoObjects. Finally, we assess the performance of classical category-level object detection models on EgoObjects. ### Instance-Level Detection In the applications of egocentric object understanding in AR and robotics, we are often confronted with the situation where the model is presented with few examples of object instances unseen during training and needs to detect those novel objects on-the-fly. Inspired by this, we introduce the instance-level detection below, and present two models, including a novel target aware- and a baseline target agnostic instance detector. #### 4.1.1 Task Specification At training time, the model has access to instance-level annotations of objects captured under diverse conditions. At inference time, the user can use the model to register a novel target instance \(T\), regardless of whether its category is seen during training, by providing one or more 2D bounding boxes on reference images containing the target instance. After that, on a query image \(I\), the detector needs to predict the bounding box of \(T\) with \(T\)'s ID, or no box if \(T\) is absent. To simulate the realistic setup where model back-propagation is difficult for deployed models, we _disallow_ model fine-tuning on the target object instance annotations. The model is required to allow the user to continuously register more model targets, and all registered targets should be considered during detection. Figure 4 contains an example where the user registers 3 targets sequentially and the model gradually detects more target objects in the same image. **Dataset split.** We divide the dataset into 4 splits: train/target/val/test. The train split contains 9.6k instances with a total of 450k annotations from 79k images. The target, val, test splits share the remaining 4.1k instances which do not appear in the train images, and their categories can also be unseen during training. In the target split, there is a single reference image and one annotation for each instance. The val and test splits have 5.7K and 29.5K images with 3.8K and 4.1K instances, respectively. **Evaluation protocols.** Under various IoU thresholds, we report Average Precision (AP) metrics, which are averaged across instances. Furthermore, we break down the metrics into two buckets for object instances from categories seen and unseen during training to assess the model capability of generalizing to instances from novel categories. 
#### 4.1.2 Target-aware Instance Detector

We propose an instance-level object detector that is aware of target objects during object localization, and refer to it as _Target-Aware Instance Detector_ (TA-IDet). It supports 2 modes, namely target registration and target detection (Figure 5).

Figure 4: **Instance detection at inference time. Continuously registering more targets leads to more detected objects, while previously registered targets are not forgotten. The targets can be from either seen (target 1 and 2) or unseen (target 3) categories.**

**Target registration**. To register a new target object, we feed the reference image into a ResNet based FPN backbone [34], generate target features at different pyramid levels by using the ROIAlign operator [24] according to the target bounding box annotation, and average them over pyramid levels. Target features of two different resolutions are obtained from ROIAlign. The \(T^{loc}\) feature of resolution \(1\times 1\) is used to efficiently localize the bounding box. The \(T^{cls}\) feature has a higher resolution \(S\times S\) (\(S=5\) by default), and will be used to compute a confidence score for classifying the localized target object. If several reference images per target are provided, the target features are averaged.

**Target detection**. At detection time, TA-IDet uses the same FPN backbone to extract a query image feature map \(F\) of size \(C\times H\times W\), where \(C\) denotes the number of feature channels and \(\{H,W\}\) the feature map size. A feature modulation block transforms \(F\) according to the target localization feature \(T^{loc}\) of size \(C\times 1\times 1\), which attenuates the features in regions where the target object is less likely to appear. The detection head takes as input the modulated query feature map, and processes it using a _Score_ module, which consists of 4 convolutional layers with ReLU activations, to gradually reduce the channels from 256 to 1. The resulting score map is normalized by a _Softmax_ operation, and the target object center \((C_{y},C_{x})\) is predicted as the weighted sum of spatial coordinates according to the normalized score map. \[\begin{split} F^{mod}=(T^{loc}\circledast F)\odot F\\ P=\text{Softmax}(\text{Score}(F^{mod}).\text{reshape}(-1))\\ Y^{g}=\text{ls}(0,H-1,\text{steps}=H).\text{view}(H,1).\text{repeat}(1,W)\\ X^{g}=\text{ls}(0,W-1,\text{steps}=W).\text{view}(1,W).\text{repeat}(H,1)\\ C_{y}=\text{sum}(P\odot Y^{g}.\text{reshape}(-1))\\ C_{x}=\text{sum}(P\odot X^{g}.\text{reshape}(-1))\end{split} \tag{1}\] where \(\circledast\) denotes convolution, \(\odot\) element-wise multiplication, and \(\text{ls}\) stands for torch.linspace. To refine the object center and predict the target object size, we sample a target feature at \((C_{y},C_{x})\) in \(F\) via bilinear interpolation, and employ a 3-layer MLP with hidden dimension \(256\) to predict the spatial offset \((\delta C_{y},\delta C_{x})\) and the target object size \((S_{y},S_{x})\) with a ReLU activation. After predicting the target object box, we use ROIAlign to sample a spatial feature of resolution \(S\times S\) in \(F\), and compute its dot product with \(T^{cls}\), passed through a sigmoid activation function, as the box confidence score.

**Model training**. During training, we sample three images for each instance: one reference image containing the instance, one positive image containing the instance captured in a different setting, and one negative image that does not contain the instance. In the positive image, we consider both the bounding box localization loss and the classification loss.
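Before detailing the two losses, the localization step in Equation 1 can be made concrete with a short PyTorch-style sketch. This is a minimal illustration in our own notation: random tensors stand in for the FPN feature map and the learned Score module.

```python
import torch
import torch.nn.functional as F

def modulate(feat, t_loc):
    # Eq. (1): a 1x1 "convolution" of the query feature map with the target feature,
    # whose response then gates the feature map channel-wise.
    # feat: (C, H, W), t_loc: (C,)
    attn = torch.einsum("chw,c->hw", feat, t_loc)
    return attn.unsqueeze(0) * feat

def soft_argmax_center(score_map):
    # Soft-argmax of Eq. (1): expected (row, col) coordinate under softmax-normalized scores.
    H, W = score_map.shape
    P = F.softmax(score_map.reshape(-1), dim=0)
    Yg = torch.linspace(0, H - 1, steps=H).view(H, 1).repeat(1, W)
    Xg = torch.linspace(0, W - 1, steps=W).view(1, W).repeat(H, 1)
    cy = torch.sum(P * Yg.reshape(-1))
    cx = torch.sum(P * Xg.reshape(-1))
    return cy, cx

# Toy usage with random tensors standing in for the modulated features and Score module.
feat = torch.randn(256, 32, 32)
t_loc = torch.randn(256)
score_map = modulate(feat, t_loc).mean(dim=0)   # stand-in for the learned Score module
print(soft_argmax_center(score_map))
```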
For localization loss, we use a linear combination of \(L_{1}\) loss and generalized IoU loss [52]. For classification loss, we use the binary cross entropy loss between the predicted box confidence score and the groundtruth box label, which is positive when the IoU is above \(IoU^{pos}\), negative when the IoU is below \(IoU^{neg}\), and ignored otherwise. By default, (\(IoU^{pos}\), \(IoU^{neg}\)) = \((0.7,0.3)\). In the negative image, only the classification loss is used and the groundtruth label is negative. See more studies in the supplement.

#### 4.1.3 Baseline Target-agnostic Instance Detector

We also consider a simple baseline approach, _RPN+SFNet_, which consists of a Region Proposal Network (RPN) [50] for object localization and an SFNet model [57], commonly used in metric learning, for object classification. We briefly review its target registration, detection and model training below, and include more details in the supplement.

**Target registration**. We crop the target object from reference images and feed it through the SFNet model to obtain the target object feature, which is then added to an index of target object features.

**Target detection**. For a given query image, the RPN generates a large number of object proposals _agnostic_ to the target objects in the index. Each object proposal is cropped from the query image and fed into the SFNet model to extract its feature. These object features are then matched against all the added target features in the index. We pick the target object in the index with the highest matching score. The final confidence score of an object proposal against the top target object is the product of the RPN object proposal confidence score and its matching score with the target object.

**Model training**. The RPN is trained on the train split using all bounding box annotations. The SFNet model is trained with the SphereFace2 [57] loss function using all instance-level annotations, which encourages small distances between features of multiple views of the same instance, and large distances between features of different instances.

Figure 5: _**Architecture of target-aware instance detector TA-IDet. Top:** in target registration, localization and classification features for each target are generated. Bottom: during target detection, the model predicts 1 bounding box per target and computes a confidence score to decide whether the prediction should be rejected via thresholding._

#### 4.1.4 Results

The benchmark results of both models are presented in Table 2. For both approaches, _TA-IDet_ and _RPN+SFNet_, we build models with ResNet-50 and ResNet-101 backbones. There are several intriguing observations. First, _TA-IDet_ substantially outperforms _RPN+SFNet_ on all metrics. For example, the gains in AP50 are large (\(+6\%\) on the val and \(+10\%\) on the test split). We attribute this to the design that _TA-IDet_ localizes the target object by using query image feature maps modulated by the target feature, and does not rely on the target-agnostic RPN to generate object proposals. Second, the best _TA-IDet_ model with the ResNet-101 backbone only achieves less than \(23\%\) AP on both the val and test splits, which have around 4K novel instances each, indicating the unique challenges in large-scale instance-level object detection, such as large changes in viewing direction, lighting, background and distance, as well as less distinguishable appearance between instances from the same category. See more examples in the supplement.
Third, there are significant gaps between AP\({}_{sc}\) and AP\({}_{un}\), reflecting the challenges in generalizing the models to detect instances from categories unseen during training. ### Continual Learning Existing continual learning (CL) approaches often tackle object classification problem while continual learning object detection task is not well explored due to the lack of a large-scale dataset that is annotated with instance- and category-level labels, and contains individual objects in multiple images captured under diverse conditions. We introduce 2 novel CL tasks, namely _CL instance detection_ and _CL category detection_, on a subset of EgoObjects which contains 100K images with 250K box annotations for 1.1K main object instances from 277 categories. There are 3.4K and 3.1K instances in the train- and test set, respectively. **CL Instance Detection.** In this task, we simulate the setting when a system continuously encounters new batches of instance detection training data, where each batch indexed at \(i\) is called an experience \(E_{i}\). In \(E_{i}\), each image only carries the annotation of its main object with instance ID being the class. The system can only access data in the latest experience, which means no access to previous experiences apart from the use of a limited replay memory. Previous experiences share no common main object instances with later experiences, which makes it a _Class-Incremental Learning_ setup. We evenly split \(1110\) main instances into 5 experiences, hence 222 instances per experience. For evaluation, the system is benchmarked after each experience on a fixed testing set with all the main instances, making it a \(1110\)-way detection problem. The evaluation metric for each experience is mean average precision (mAP). The final performance is the averaged metric across all the experiences. **CL Category Detection.** In this task, the goal is to predict the object category labels instead of instance IDs. We create 5 experiences by applying the class-incremental ordering on the 277 categories of the main object instances. Additionally, we also include the annotations of secondary objects, which makes it a _Data-Incremental Learning_ setup, i.e. previous experiences share no common images or annotations with later ones. This differentiates our task from other CL object detection tasks focusing on annotation incrementality, where the same images are repeatedly encountered in successive experiences but with a different set of annotations (usually class-incremental). We believe our task provides a more realistic setting. The evaluation metric for each experience is also mAP. **Results.** We benchmark the methods from the top submissions of the 3rd CLVision workshop challenge [44] and report the results on above CL tasks in Table 3. In general, these submissions build upon classic 1-stage/2-stage detectors and adopt techniques to mitigate catastrophic forgetting of early experiences when trained on later experiences, such as sampling data from previous experiences cached in a replay buffer, and distilling models from early experiences to the model for the current experience. However, these winning methods still have limitations. They treat the instance detection as a close-set problem same as the category detection, which cannot scale up to flexibly accommodate more instances. Additionally, there is no change to the detector architecture to better tailor to the CL tasks. See more discussions in the supplement. 
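The continual learning protocol above (sequential experiences, a bounded replay memory, and mAP evaluated after every experience) can be summarized in a few lines of Python. This is a schematic sketch only: `train_on` and `evaluate_map` are hypothetical stand-ins for a detector's training and evaluation routines, not part of the released API.

```python
import random

def run_cl_benchmark(experiences, model, replay_capacity=2000, replay_ratio=0.5):
    # Train over experiences E_0..E_4 with a bounded replay buffer and report
    # mAP after each experience plus the final Experience Average Precision (EAP).
    replay_buffer, per_exp_map = [], []
    for exp in experiences:                          # exp: list of (image, annotations)
        n_replay = min(int(len(exp) * replay_ratio), len(replay_buffer))
        batch = exp + random.sample(replay_buffer, n_replay)
        model.train_on(batch)                        # hypothetical training routine
        for sample in exp:                           # keep the replay memory bounded
            if len(replay_buffer) < replay_capacity:
                replay_buffer.append(sample)
            else:
                replay_buffer[random.randrange(replay_capacity)] = sample
        per_exp_map.append(model.evaluate_map())     # mAP on the fixed test set
    eap = sum(per_exp_map) / len(per_exp_map)
    return per_exp_map, eap
```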
### Category-Level Detection

EgoObjects also supports the classical category-level object detection task given the nearly 400 object categories in the current Pilot version.

**Evaluation protocols.** We use the same dataset splits as the instance-level detection task. In total, there are 447K/31K/164K object annotations from 368 categories in the train/val/test split, respectively. Due to its federated annotation process, we only penalize false positive predictions on an image if the predicted class is in the list of negative categories for that image.

\begin{table} \begin{tabular}{c|c|c c c c|c c c c} \hline \multirow{2}{*}{backbone} & \multirow{2}{*}{method} & \multicolumn{4}{c|}{val} & \multicolumn{4}{c}{test} \\ & & AP & AP50 & AP50\({}_{sc}\) & AP50\({}_{un}\) & AP & AP50 & AP50\({}_{sc}\) & AP50\({}_{un}\) \\ \hline \multirow{2}{*}{R50} & RPN+SFNet & 17.8 & 29.0 & 29.1 & 19.8 & 15.7 & 25.4 & 25.5 & 16.8 \\ & TA-IDet & 18.7 & 35.0 & 35.0 & 21.7 & 18.5 & 35.2 & 35.2 & 24.8 \\ \hline \multirow{2}{*}{R101} & RPN+SFNet & 19.3 & 32.0 & 32.0 & 22.3 & 17.0 & 27.7 & 27.8 & 20.0 \\ & TA-IDet & 22.6 & 37.9 & 38.0 & 28.5 & 21.9 & 37.9 & 38.0 & 26.4 \\ \hline \end{tabular} \end{table} Table 2: _Instance-level detection benchmarking results on EgoObjects. The proposed_ TA-IDet _model significantly outperforms the baseline_ RPN+SFNet _approach. AP50\({}_{sc}\) and AP50\({}_{un}\) are computed for instances with categories seen and unseen during training. On the more challenging test split with more target object instances, TA-IDet maintains its performance whereas the_ RPN+SFNet _baseline has a significant performance drop. R50 and R101 denote ResNet-50/101 backbones._

\begin{table} \begin{tabular}{c|c c c c c c|c c c c c c} \hline \multicolumn{1}{c|}{} & \multicolumn{6}{c|}{CL Instance Detection} & \multicolumn{6}{c}{CL Category Detection} \\ rank & \(E_{0}\) & \(E_{1}\) & \(E_{2}\) & \(E_{3}\) & \(E_{4}\) & \(EAP\) & \(E_{0}\) & \(E_{1}\) & \(E_{2}\) & \(E_{3}\) & \(E_{4}\) & \(EAP\) \\ \hline 1st & 23.3 & 39.5 & 54.6 & 70.2 & 85.6 & 54.7 & 30.6 & 47.2 & 58.1 & 67.5 & 76.2 & 55.9 \\ 2nd & 15.1 & 30.4 & 45.5 & 60.8 & 75.4 & 45.4 & 28.4 & 44.7 & 57.6 & 67.9 & 78.2 & 55.4 \\ 3rd & 14.7 & 29.1 & 42.3 & 55.4 & 66.9 & 41.7 & 19.5 & 34.5 & 43.9 & 52.7 & 61.5 & 42.4 \\ \hline \end{tabular} \end{table} Table 3: _CL detection benchmarks on EgoObjects. We report detection accuracy (mAP) after each experience and the final Experience Average Precision (EAP), which is the averaged mAP over experiences._

**Benchmarking models**. We consider 3 types of performant object detectors. The first one is FasterRCNN [50], which is a two-stage detector. Next, we include the representative single-stage detector FCOS [54], which skips the explicit proposal generation to accelerate model inference. Finally, we also consider the recent transformer-based detectors (_i.e_., DETR [10]). Specifically, we adopt Deformable-DETR [59] due to its stable and fast training. For both FasterRCNN and FCOS, we use the ResNet50/101 backbone pretrained on ImageNet-1K, whereas for Deformable-DETR, we use the Swin-Transformers [38] backbone pretrained on ImageNet-22K.

**Results**. The results are presented in Table 4. Notably, single-stage FCOS models outperform two-stage FasterRCNN detectors, particularly for the high IOU threshold (_e.g_. AP75), while DeformDETR-Swin models significantly outperform both types of CNN detectors at the cost of large model size and significantly more compute.
However, even for the largest DeformDETR-SwinL model, its AP metrics on EgoObjects are still \(10\%\) lower than its \(43.7\%\) AP on LVIS reported in Table 3 of prior work [29]. We hypothesize that, due to the egocentric view and its data capture setting, EgoObjects contains larger variations in background, viewpoints, lighting and object distances, which together render it more difficult even for category-level detection. We also implemented metrics for different buckets of experimental conditions (e.g. object scale, lighting, background complexity) in our evaluation API. We observe that the model's performance is lower under the more challenging conditions (small scale, dim lighting, busy background).

## 5 Conclusions

We present EgoObjects, a large-scale egocentric dataset containing tens of thousands of videos, and more than half a million object annotations. By design, it captures the same object under diverse conditions while annotating it with both a category label and consistent object IDs across multiple images. To stimulate egocentric object understanding research on it, we introduce 4 tasks and also provide the benchmarking results of various models, including a novel target-aware instance-level detector which largely outperforms an off-the-shelf baseline based on RPN and SFNet.
2309.03957
Orbital magnetization of a metal is not a bulk property in the mesoscopic regime
We find that, in the mesoscopic regime, modification of the material's surface can induce an extensive change of the material's magnetic moment. In other words, perturbation of order $N^2$ atoms on the surface of a 3-dimensional solid can change the magnetic moment proportionally to $N^3$. When the solid's surface is perturbed, it triggers two changes in the magnetization. One arises from variations of the electron wavefunction and energy, while the other arises from a modification in the kinetic angular momentum operator. In the macroscopic regime of our model, these two bulk effects cancel each other, resulting in no impact of the surface perturbation on the magnetization - consistent with prior work. In the mesoscopic regime, we find a departure from this behavior, as the cancelation of two terms is not complete.
Kevin Moseni, Sinisa Coh
2023-09-07T18:06:31Z
http://arxiv.org/abs/2309.03957v3
# Surface sensitivity of magnetization in the mesoscopic regime ###### Abstract Some of the magnetization of a solid originates from orbital currents flowing on its surface. When the solid's surface is perturbed, it triggers two changes in the magnetization. One arises from variations of the electron wavefunction and energy, while the other emerges from a modification in the kinetic angular momentum operator. In the macroscopic regime of our model, these two bulk effects cancel each other, resulting in no impact of the surface perturbation on the magnetization -- consistent with prior work. We find a departure from this behavior in the mesoscopic regime, where the cancelation of two terms is not complete. In this regime, surprisingly, perturbation of the surface of the solid can change the magnetic dipole of the solid in proportion to the size of the entire solid. In a ferromagnet, the magnetic moment primarily arises from the unequal population of electrons with different spin states. A smaller, yet significant contribution, known as orbital magnetization, originates from the spatial motion of electrons. Some of these orbital electron currents flow around individual atoms in the bulk, while other currents traverse the surface of the sample, as demonstrated in Ref. [1] using a localized picture of electronic structure. Although only a fraction of electrons participate in surface currents, their collective effect contributes to the magnetic dipole moment, scaling with the size of the entire sample (area in two dimensions, volume in three dimensions). The question then arises whether the magnetic moment of the ferromagnet could be modified by perturbing these surface currents? For instance, one may wonder if adsorbing different atoms to the surface of a ferromagnet could change the magnitude of surface currents, and consequently the magnetic dipole of the solid, in proportion to the size of the entire solid. Or could one take a non-magnetic system and induce in it a bulk orbital magnetization by modifying its surface? The seminal work from Ref. [1] rigorously demonstrated that these scenarios are not possible for insulating systems. In an insulating system, the surface currents are quite remarkably determined by the material properties deep in the bulk of the material! Intuitively one would expect that such a statement should also extend to metallic cases, but this has not been rigorously demonstrated. Reference [2] gives heuristic reasons for why magnetization in a metal is equally well determined by the properties in the bulk of the material, as in the case of an insulator. (The same was also suggested for topological insulators in Refs. [2; 3; 4].) Additional support is given by the semi-classical formulation of orbital magnetization from Ref. [5] as well as the long-wave perturbation from Ref. [6]. A more recent indication that orbital magnetization in a metal is a bulk property relies on a local measure of the orbital moment from Refs. [7; 8]. In this paper, our focus lies on a distinct range of length and temperature scales, one that complements the scope of previous investigations. When the electron's time of flight across our sample exceeds \(\hbar\)/thermal energy, our findings corroborate the conclusions drawn in Refs. [1; 2; 3; 4; 5; 6; 7; 8]. Specifically, the surface modifications leave the magnetization unaffected. 
Therefore, within the framework of our model, the prospect of altering the magnetization of a sizable solid, at a non-zero temperature, through surface modifications is unlikely. Nevertheless, an intriguing situation emerges when we shift to the opposite (mesoscopic) regime, marked by either small sample sizes or lower temperatures. Our work shows that in this regime the surface can indeed change the overall magnetic moment of the sample, in proportion to the size of the entire sample. Before introducing our numerical model, we first motivate it by considering a continuous one-particle effective Hamiltonian, denoted \(H_{\rm c}^{0}\), for a periodic infinite solid in two dimensions. To simplify our analysis, throughout this work we neglect self-consistency, many-electron effects, and disorder. Our system is assumed to be in thermal equilibrium. We ignore any temperature effects beyond electron occupation smearing. The complete basis of the eigenstates of \(H_{\rm c}^{0}\) can be expressed in the Bloch form, \(\psi_{\mathbf{k}}(\mathbf{r})=e^{i\mathbf{k}\cdot\mathbf{r}}u_{\mathbf{k}}(\mathbf{r})\). However, not every eigenstate of \(H_{\rm c}^{0}\) has the Bloch form. Generally, we can construct arbitrary linear combinations of states that share the same eigenvalue \(E_{\mathbf{k}}=E\), and the resulting function \[\phi_{E}(\mathbf{r})=\int_{0}^{1}e^{if(s)}\psi_{\mathbf{k}(s)}(\mathbf{r})ds \tag{1}\] is a valid eigenstate of \(H_{\rm c}^{0}\). Here \(s\to\mathbf{k}(s)\) is a continuous parameterization of a curve in the Brillouin zone along which \(E_{\mathbf{k}(s)}=E\). For now we limit \(f(s)\) so that \(f(0)=f(1)\). We choose \(f(s)\) so that \(\phi_{E}(\mathbf{r})\) is as localized as possible in real space. \(\phi_{E}\) is only algebraically localized, due to integration over part of the Brillouin zone, unlike the exponential localization of a Wannier function [9]. By selecting a fixed \(f(s)\), we can create a family of functions, \(\phi_{mE}\), for any integer \(m\), defined as follows, \[\phi_{mE}(\mathbf{r})=\int_{0}^{1}e^{i2\pi ms}e^{if(s)}\psi_{\mathbf{k}(s)}(\mathbf{r})ds. \tag{2}\] Note, trivially, that \(\langle\phi_{mE}|\phi_{m^{\prime}E^{\prime}}\rangle=\delta_{mm^{\prime}}\delta_{EE^{\prime}}\). Therefore, \(\phi_{mE}\) for all \(m\) and \(E\) span the same vector space as the Bloch states. Let us now take \(H_{\rm c}^{0}\) to correspond to the free-electron system. One can easily show that, in this case, \[\langle\phi_{mE}|\,L_{z}\,|\phi_{mE}\rangle=\hbar m. \tag{3}\] Each \(\phi_{mE}\) state therefore carries angular momentum \(\hbar m\) and orbital magnetic moment \(\mu_{\rm B}m\). Let us now confine our system to a circular region with radius \(R\). States with large enough \(m\) (\(\approx R\frac{\sqrt{2m_{e}E}}{\hbar}\)) are localized near the edge of the sample and carry an angular momentum that scales with \(\sim R\). Since there are order \(\sim R\) states in the region near the edge of the sample, one might now ask whether including a potential \(V^{\rm edge}\) on the edge of the sample could rearrange these states near the surface and induce a net orbital moment that scales as \(\sim R^{2}\). If one could construct a surface potential satisfying \[\langle\phi_{mE}|\,V^{\rm edge}\,|\phi_{m^{\prime}E}\rangle\sim m\delta_{mm^{\prime}} \tag{4}\] then this would be a good candidate surface perturbation, as it breaks time-reversal symmetry by acting differently on states with different \(m\). We now attempt to create a surface potential satisfying Eq. 
4 in a concrete finite-size model using a numerically convenient tight-binding approach. To construct the tight-binding model, we project our continuous free-electron Hamiltonian \(H_{\rm c}^{0}\) on the basis of an \(N\times N\) square mesh of s-like orbitals, each separated from the others by a distance \(a\). We label the orbital at site \(i\) as \(\ket{i}\). For the position operators \(x\) and \(y\), we assume \(\bra{i}x\ket{j}=x_{i}\delta_{ij}\) and \(\bra{i}y\ket{j}=y_{i}\delta_{ij}\). For convenience, we work with the centered operators \(\tilde{x}=x-\sum_{i}x_{i}/N^{2}\) and \(\tilde{y}=y-\sum_{i}y_{i}/N^{2}\). We also define the following quantity \(\tilde{L}(A)\) for any operator \(A\), \[\tilde{L}(A)=\frac{m_{\rm e}}{\hbar}\left(i\tilde{x}A\tilde{y}-i\tilde{y}A\tilde{x}\right). \tag{5}\] Clearly \(\tilde{L}(H)\) corresponds to the angular momentum operator for a system described by the Hamiltonian \(H\). (Since the angular momentum operator is a cross-product of the position operator and the kinetic linear momentum operator, \(\mathbf{p}=\frac{im_{\rm e}}{\hbar}[H,\mathbf{r}]\).) We start with the simplest case for \(H^{0}\), where \(H_{ij}^{0}=\bra{i}H^{0}\ket{j}\) is a real number \(t<0\) for the nearest-neighbor orbitals \(i\) and \(j\), and \(0\) for any other pair of orbitals. Our goal is now to construct an edge potential with the property given in Eq. 4. At first it is not clear how to satisfy Eq. 4 in our model, as the eigenvectors of \(H^{0}\) don't have a well-defined angular momentum (our tight-binding model is projected onto a finite square mesh of orbitals). Therefore, before discussing the edge perturbation, we first add a commutator correction term \(H^{\rm comm}\) which ensures that the total bulk Hamiltonian, \[H^{\rm bulk}=H^{0}+H^{\rm comm} \tag{6}\] at least approximately commutes with the angular momentum operator, \(\tilde{L}(H^{\rm bulk})\). Ignoring the second-order terms in \(H^{\rm comm}\) and imposing this requirement on our tight-binding model, we arrive at the following system of \(N^{4}\) equations, \[\left[H^{0}+H^{\rm comm},\tilde{L}\left(H^{0}\right)\right]+\left[H^{0},\tilde{L}\left(H^{\rm comm}\right)\right]\approx 0. \tag{7}\] The unknown matrix elements \(H^{\rm comm}_{ij}\) are further restricted to be zero for distant orbitals \(i\) and \(j\), making \(H^{\rm comm}\) a local operator. To approximately solve the resulting system of \(\sim N^{2}\) equations, we minimize the quadratic norm of the left-hand side of Eq. 7 using the least squares method. This approach produces a purely real \(H^{\rm comm}\) that only includes the first-nearest neighbors. The maximum value of \(|H^{\rm comm}_{ij}|\) is \(0.5|t|\), independent of \(N\). The operator \(H^{\rm comm}_{ij}\) breaks periodicity in the bulk of the sample and resembles the functional form of a parabolic well. The approximate form of \(H^{\rm comm}\) is provided in the supplementary material, obtained by fitting the results of our procedure for low \(N\). The energy spectrum of \(H^{0}\) as a function of \(N\) exhibits some regularity by having spikes in the density of states separated by \(\Delta\sim 1/N\). However, the number of states in between the spikes is not strictly zero, and these states don't follow an obvious pattern as a function of increasing \(N\). If we add the \(H^{\rm comm}\) term to \(H^{0}\), we find that it redistributes the spectrum of the system, creating small gaps in the spectrum (scaling as \(\Delta\sim 1/N\)). 
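As a concrete illustration of this construction, the short sketch below (ours, not the authors' code) builds the nearest-neighbor Hamiltonian \(H^{0}\), the centered position operators, and the operator \(\tilde{L}(A)\) of Eq. 5 for an \(N\times N\) mesh. The function name `square_lattice_operators` and the unit choice \(\hbar=m_{\rm e}=a=1\), \(t=-1\) are our assumptions; the correction term \(H^{\rm comm}\) is not included here.

```python
import numpy as np

def square_lattice_operators(N, t=-1.0, a=1.0, m_e=1.0, hbar=1.0):
    """Nearest-neighbor H^0 on an N x N mesh, centered positions, and Ltilde of Eq. 5.
    Units hbar = m_e = a = 1 and t = -1 are illustrative choices."""
    idx = lambda i, j: i * N + j          # map (row, column) -> orbital index

    # H^0: real hopping t < 0 between nearest-neighbor orbitals, zero otherwise
    H0 = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            if i + 1 < N:
                H0[idx(i, j), idx(i + 1, j)] = H0[idx(i + 1, j), idx(i, j)] = t
            if j + 1 < N:
                H0[idx(i, j), idx(i, j + 1)] = H0[idx(i, j + 1), idx(i, j)] = t

    # Diagonal position operators, centered so that sum_i x_i = sum_i y_i = 0
    xs = a * np.repeat(np.arange(N), N).astype(float)
    ys = a * np.tile(np.arange(N), N).astype(float)
    X = np.diag(xs - xs.mean())
    Y = np.diag(ys - ys.mean())

    def Ltilde(A):
        """Eq. 5: Ltilde(A) = (m_e/hbar) * (i x A y - i y A x)."""
        return (m_e / hbar) * (1j * X @ A @ Y - 1j * Y @ A @ X)

    return H0, X, Y, Ltilde

# Quick check: Ltilde(H^0) is Hermitian, as an angular momentum operator should be.
H0, X, Y, Ltilde = square_lattice_operators(N=10)
L0 = Ltilde(H0)
print(np.allclose(L0, L0.conj().T))       # True
```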
We find that placing the Fermi level \(E_{\rm F}\) within one of these gaps has the additional benefit of stabilizing the finite-size effects in our calculations. Related finite-size effects for Landau diamagnetism have also been reported in Refs. [10, 11, 12, 13, 14]. The orbital magnetic moment is zero for any thermal occupation of the system described by \(H^{\rm bulk}\), since all \(H^{\rm bulk}_{ij}\) are real. We now induce the desired symmetry-breaking behavior, as in Eq. 4, by introducing a perturbation \(V^{\rm edge}\) at the edge of the sample, \[V^{\rm edge}_{ij}=-\frac{eb}{2m_{\rm e}}S_{ij}\tilde{L}_{ij}(H^{0}). \tag{8}\] If we set \(S_{ij}=1\) then \(V^{\rm edge}_{ij}\) would represent an approximate interaction term of the orbital magnetic moment with a spatially uniform external magnetic field \(b\), as in the study of Landau diamagnetism. Trivially, the matrix element of such a perturbation is proportional to \(m\), as in Eq. 4. However, our objective was to keep this potential non-zero only on the edge of the sample. We achieve this by setting \(S_{ij}\) to zero in the interior of the sample and to a constant function proportional to \(1/N\) at the edge. This choice ensures that the complex phase acquired by an electron traversing a closed loop around an edge plaquette (flux) is nearly independent of \(N\) and of its location along the edge. Our choice of \(S_{ij}\) also ensures that the total flux through the entire sample is zero. Instead, as detailed in the supplementary material, \(V^{\rm edge}\) applies an effective flux of alternating sign to the first and second cells closest to the edge of the sample. After diagonalizing our full Hamiltonian, which includes both the bulk and edge contributions, \[\left(H^{\rm bulk}+V^{\rm edge}\right)\left|\psi_{n}\right>=E_{n}\left|\psi_{n}\right> \tag{9}\] we obtain a set of eigenstates \(\left|\psi_{n}\right>\). We use even \(N\), although odd \(N\) yields qualitatively similar results with a slightly different chemical potential. The largest \(N\) used is 100, corresponding to a system with 10,000 orbitals. We set the Fermi level \(E_{\rm F}\) to \(-2.55\left|t\right|\), placing it within a small energy gap \(\Delta\) in the spectrum. As discussed above, the gap \(\Delta\) scales as \(1/N\). The magnetic dipole moment is computed as follows, \[m_{\rm dip}=\frac{e}{2m_{\rm e}}\sum_{n}\left<\psi_{n}\right|\tilde{L}(H)\left|\psi_{n}\right>f_{n} \tag{10}\] where \(f_{n}\) is the Fermi-Dirac distribution with effective smearing of the electron occupation by \(k_{\rm B}T\). Figure 1 shows the calculated \(m_{\rm dip}\) as a function of \(N\). The computed \(m_{\rm dip}\) scales nearly perfectly as \(N^{2}\) with the size of the system. Since \(m_{\rm dip}=0\) when \(V^{\rm edge}=0\), we conclude that our \(m_{\rm dip}\) results solely from the surface modification (\(V^{\rm edge}\)). However, the \(N^{2}\) scaling persists only in the mesoscopic regime, when \(k_{\rm B}T\) is small compared to \(\Delta\). Since \(\Delta\) is proportional to the electron bandwidth and inversely proportional to \(N\), one can interpret \(\Delta\) as a characteristic energy scale that corresponds to the inverse of the time it takes for an electron to travel from one edge of the sample to the other. With the approximate scaling of the gap at \(E_{\rm F}\) shown in the supplement, we determine that our system is in the mesoscopic regime as long as \(k_{\rm B}T/|t|\lesssim 0.6/N\). 
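Continuing the sketch above, Eq. 10 can be evaluated as follows once the full Hamiltonian of Eq. 9 is diagonalized. This reuses `square_lattice_operators` from the previous sketch; \(H^{\rm comm}\) is omitted and the edge profile \(S_{ij}\) is replaced by a crude placeholder (the actual \(S_{ij}\), with its alternating-sign, zero-total-flux pattern, is given in the supplementary material), so the numerical output is illustrative only.

```python
import numpy as np

def magnetic_dipole(H_bulk, V_edge, Ltilde, E_F, kT, e=1.0, m_e=1.0):
    """Eq. 10: m_dip = (e / 2 m_e) * sum_n <psi_n| Ltilde(H) |psi_n> f_n."""
    H = H_bulk + V_edge
    E, psi = np.linalg.eigh(H)                                  # eigenpairs of Eq. 9
    # Fermi-Dirac occupations, written with tanh to avoid overflow at small kT
    f = 0.5 * (1.0 - np.tanh((E - E_F) / (2.0 * kT))) if kT > 0 else (E < E_F).astype(float)
    L = Ltilde(H)                                               # includes Ltilde(V_edge) itself
    expval = np.real(np.einsum('in,ij,jn->n', psi.conj(), L, psi))
    return (e / (2.0 * m_e)) * float(np.sum(expval * f))

# Illustrative use, reusing square_lattice_operators() from the previous sketch.
# H^comm is omitted and S is a crude stand-in: constant ~1/N on boundary-site pairs,
# zero in the interior; the paper's alternating-sign S_ij is not reproduced here.
N = 10
H0, X, Y, Ltilde = square_lattice_operators(N)
rows, cols = np.repeat(np.arange(N), N), np.tile(np.arange(N), N)
on_edge = (rows == 0) | (rows == N - 1) | (cols == 0) | (cols == N - 1)
S = np.outer(on_edge, on_edge).astype(float) / N                # 1/N on edge pairs, 0 inside
b = 0.2                                                         # arbitrary field scale
V_edge = -0.5 * b * S * Ltilde(H0)                              # Eq. 8 with e = m_e = 1 (elementwise)
print(magnetic_dipole(H0, V_edge, Ltilde, E_F=-2.55, kT=1e-3))
```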
This observation motivated us to fit the results for \(m_{\rm dip}\) as a function of \(N\) and \(k_{\rm B}T\) to the following function, \[m_{\rm dip}\sim\frac{N^{2}}{1+\exp\left[3.8\frac{k_{\rm B}T}{|t|}\left(N-0.6\frac{|t|}{k_{\rm B}T}\right)\right]}. \tag{11}\] Clearly, \(m_{\rm dip}\sim N^{2}\) as long as \(N\) and \(k_{\rm B}T\) are in a mesoscopic regime. Since \(\lim_{N\to\infty}\lim_{T\to 0^{+}}\frac{m_{\rm dip}}{N^{2}}\neq 0\), we find that the \(N^{2}\) scaling of the magnetic moment continues for all \(N\), as long as the temperature is small enough. On the other hand, for any finite, positive temperature \(T\), the limit \(\lim_{N\to\infty}\frac{m_{\rm dip}}{N^{2}}\) is zero. Therefore, for any small positive \(T\) there is an \(N\) beyond which the magnetic dipole no longer scales as \(N^{2}\). In the supplementary material, we provide explicit numerical values of the Hamiltonian matrix elements \(H_{ij}\) for different values of \(N\), as well as a computer code that diagonalizes Eq. 9, computes Eq. 10, and performs a range of consistency checks on \(H_{ij}\).

Figure 1: Magnetic dipole \(m_{\rm dip}\) induced by \(V^{\rm edge}\) is proportional to \(N^{2}\). \(k_{\rm B}T\) in the Fermi-Dirac distribution is set to \(0\). \(b\) is chosen so that \(a^{2}b=0.2h/e\). The Fermi level \(E_{\rm F}\) is set to \(-2.55\left|t\right|\) so that the electron density is \(\approx 0.12/a^{2}\). Parameters \(t\) and \(a\) are set so that the effective mass at low doping is the same as the free-electron mass. The inset shows that the second derivative of \(m_{\rm dip}\) with respect to \(N\) (scaled by \(10^{2}\)) is constant.

Our finding that \(m_{\rm dip}\) in a metal is surface sensitive is perhaps not that surprising, considering that a similar surface dependence can be found for the electric dipole \(d_{\rm dip}\) of a metal [15]. However, importantly, the electric dipole \(d_{\rm dip}\) is surface sensitive in a metal even in the macroscopic regime. Therefore, we can naturally ask why, in the macroscopic regime, \(m_{\rm dip}\) from our model behaves differently from \(d_{\rm dip}\). To establish a parallel between the electric and magnetic dipole it is instructive to construct a surface potential \(V^{\rm edge}\) that changes the bulk _electric_ dipole, in analogy to how \(V^{\rm edge}\) changed the bulk magnetic dipole. By analogy to Eq. 8 we can now use the position operator, instead of the angular momentum operator, to arrive at \(V^{\rm edge}_{ij}\sim S_{i}\tilde{x}_{i}\delta_{ij}\). Here \(S_{i}\) is a constant term \(\sim 1/N\) that is non-zero only on the surface. If we now compute the expectation value of the _electric_ dipole moment, \(d_{\rm dip}\sim\sum_{n}\left<\psi_{n}\right|\tilde{x}\left|\psi_{n}\right>f_{n}\), we find that \(d_{\rm dip}\) induced by \(V^{\rm edge}\) scales as \(\sim N^{2}\) even in the macroscopic regime. We attribute the different behavior of the electric dipole, compared to that of the magnetic dipole, to the fact that for the electric dipole the same operator (\(\tilde{x}\)) appears in the perturbation (\(V^{\rm edge}\)) as in the induced response (\(d_{\rm dip}\)). This is not the case for the orbital magnetization, as our \(V^{\rm edge}\) is constructed from \(\tilde{L}(H^{0})\), while \(m_{\rm dip}\) is computed from \(\tilde{L}(H)=\tilde{L}(H^{\rm bulk})+\tilde{L}(V^{\rm edge})\), which clearly includes the perturbation \(V^{\rm edge}\) itself. (This is analogous to how
an external magnetic field described by the vector potential \(\mathbf{A}\) changes the kinetic linear momentum operator from \(\mathbf{p}\) to \(\mathbf{p}-\frac{e}{c}\mathbf{A}\), but there is no change in the position operator due to an external _electric_ field.) Therefore, including the surface perturbation has two effects on the induced magnetic dipole, and we can trivially write \(m_{\rm dip}\) as a sum of \[m_{\rm dip}^{\rm st}=\frac{e}{2m_{\rm e}}\sum_{n}\left\langle\psi_{n}\right|\tilde{L}(H^{\rm bulk})\left|\psi_{n}\right\rangle f_{n}\quad\text{and} \tag{12}\] \[m_{\rm dip}^{\rm op}=\frac{e}{2m_{\rm e}}\sum_{n}\left\langle\psi_{n}\right|\tilde{L}(V^{\rm edge})\left|\psi_{n}\right\rangle f_{n}. \tag{13}\] The first term (\(m_{\rm dip}^{\rm st}\)) arises from changes to the electron state (wavefunction and energy) due to the surface perturbation. The second term (\(m_{\rm dip}^{\rm op}\)) originates from the change in the angular momentum operator itself, and, in the lowest order of perturbation theory, it can be computed from the unperturbed electron wavefunction and energy. While each of these terms is finite in the macroscopic regime, as \[\lim_{N\to\infty}\frac{m_{\rm dip}^{\rm st}}{N^{2}}\neq 0\quad\text{and}\quad\lim_{N\to\infty}\frac{m_{\rm dip}^{\rm op}}{N^{2}}\neq 0 \tag{14}\] for any fixed non-zero temperature \(T\), they exactly cancel each other in the same limit, so that \[\lim_{N\to\infty}\frac{m_{\rm dip}^{\rm st}+m_{\rm dip}^{\rm op}}{N^{2}}=0. \tag{15}\] In contrast, in the case of the electric dipole, there is only one contribution (the one coming from changes in the electron's state), so there is no cancellation, and the electric dipole is surface sensitive in the macroscopic regime. We now briefly comment on the spatial distribution of the orbital currents that cause the \(m_{\rm dip}\sim N^{2}\) scaling in our model. In the supplement, we compute \(m_{\rm dip}\) by projecting onto the edge region of the sample, varying its thickness. We find that approximately half of \(m_{\rm dip}\) is recovered from the edge of the sample with thickness \(0.3(Na/2)\). Consequently, the active area of the sample that contributes to \(m_{\rm dip}\) scales as \(N^{2}\). In our work, we focus on the simplest choice of \(H^{0}\), which corresponds to a square lattice with first-neighbor hoppings. However, the procedure presented in this paper can be carried out for any \(H^{0}\). An interesting case is the Haldane model in a topologically non-trivial insulator phase with a non-zero Chern number [16]. Here, even when the Fermi level is within the bulk gap and crosses the topologically protected surface states, we find \(m_{\rm dip}\sim N^{2}\). This is numerically robust even without including the commutator correction term \(H^{\rm comm}\). Furthermore, for any given \(H^{0}\), we note that \(V^{\rm edge}\) is not the only surface perturbation that can change the magnetization. Generally, we find that surface modification can induce \(m_{\rm dip}\sim N^{2}\) whenever the perturbation allows, by symmetry, circulation of surface currents with a consistent handedness on each edge of the sample. Specifically, for a two-dimensional model, this implies that the surface perturbation must break mirror symmetry along the edge and time-reversal symmetry. However, the product of these two operations can still remain a symmetry. This work was supported by the NSF DMR-1848074 grant. We acknowledge discussions with R. Wilson and L. Vuong on the inverse Faraday effect, as these discussions motivated our work.
2303.17979
Robust Detection for Mills Cross Sonar
Multi-array systems are widely used in sonar and radar applications. They can improve communication speeds, target discrimination, and imaging. In the case of a multibeam sonar system that can operate two receiving arrays, we derive new adaptive detectors to improve detection capabilities compared to traditional sonar detection approaches. To do so, we more specifically consider correlated arrays, whose covariance matrices are estimated up to scale factors, and an impulsive clutter. In a partially homogeneous environment, the 2-step Generalized Likelihood Ratio Test (GLRT) and Rao approach lead to a generalization of the Adaptive Normalized Matched Filter (ANMF) test and an equivalent, numerically simpler detector with a well-established texture Constant False Alarm Rate (CFAR) behavior. Performances are discussed and illustrated with theoretical examples, numerous simulations, and insights into experimental data. Results show that these detectors outperform their competitors and have stronger robustness to environmental unknowns.
Olivier Lerda, Ammar Mian, Guillaume Ginolhac, Jean-Philippe Ovarlez, Didier Charlot
2023-03-31T11:31:18Z
http://arxiv.org/abs/2303.17979v2
# Robust Detection for Mills Cross Sonar ###### Abstract Multi-array systems are widely used in sonar and radar applications. They can improve communication speeds, target discrimination, and imaging. In the case of a multibeam sonar system that can operate two receiving arrays, we derive new adaptive detectors to improve detection capabilities compared to traditional sonar detection approaches. To do so, we more specifically consider correlated arrays, whose covariance matrices are estimated up to scale factors, and an impulsive clutter. In a partially homogeneous environment, the 2-step Generalized Likelihood Ratio Test (GLRT) and Rao approach lead to a generalization of the Adaptive Normalized Matched Filter (ANMF) test and an equivalent, numerically simpler detector with a well-established texture Constant False Alarm Rate (CFAR) behavior. Performances are discussed and illustrated with theoretical examples, numerous simulations, and insights into experimental data. Results show that these detectors outperform their competitors and have stronger robustness to environmental unknowns. Sonar target detection, Adaptive Normalized Matched Filter, Multiple-Input Multiple-Output, Complex Elliptically Symmetric distributions, Tyler's M-estimator, robustness. ## I Introduction ### _Background and motivations_ Forward-Looking sonars are solutions for perceiving the underwater environment. In the context of a growing need for decision-making autonomy and navigation safety, they have become fundamental tools for understanding and anticipating obstacles and potential dangers, and for analyzing and identifying threats. They offer efficient results, allowing detection, tracking, and classification of surface [1], water column [2], or bottom targets [3], in civil [4] or military applications such as mine warfare [5]. At the detection level, monovariate statistical tests under a Gaussian or non-Gaussian interference assumption, defined a priori, remain the prevalent approaches [6]-[8]. Nevertheless, many works have shown the great interest of multivariate statistics compared to algorithms developed from monovariate statistics in a large number of application fields. Indeed, multivariate statistics allow advanced modeling of propagation environments. Following these precepts, [9] obtains a central detector with the noise parameters totally unknown, [10] first derives a detector under the assumption of a known covariance and then substitutes it, through a two-step procedure, with an appropriate estimator, and finally, [11] and [12] have shown the relevance of subspace data models in consideration of mismatched signals for which, for example, the target echo would not come precisely from the center of the main beam. These seminal works are now references in remote sensing and ground or air surveillance, but mainly for radar systems. Moreover, in the radar field, phenomenal progress has also been made in recent decades, guided by increasingly complex systems which required profound changes in concepts and processing. This is especially the case for Space-Time Adaptive Processing (STAP) for airborne radars [13], which brings considerable improvements in the ability to discriminate moving targets at very low speeds, or Multiple-Input Multiple-Output (MIMO) systems that advance detection performance [14], [15] and resolution [16], [17] by exploiting spatial, frequency, or waveform diversities. Although some preliminary work has emerged in recent years [18]-[24], these methods still seem to be underused in sonar systems. 
This paper focuses on the adaptive detection of a point target by a sonar system with correlated orthogonal arrays. Inspired by these previous works, we will first show that multibeam systems are perfectly adapted to the multivariate formalism. We will then propose two new detectors following the GLRT and Rao two-step approaches [25]-[27], assuming heterogeneous or partially homogeneous clutter [28], [29]. The performance in a Gaussian environment will first be evaluated. We will show that considering a sonar system with Mills cross arrays [30] leads to better target detectability by reducing the clutter ridge. Nevertheless, complex multi-normality can sometimes be a poor approximation of the physics. This is the case when fitting high-resolution clutter, impulsive noise, outliers, and interference. The Complex Elliptically Symmetric (CES) distributions [31], including the well-known compound Gaussian subclass, are then natural extensions allowing the modeling of distributions with heavy or light tails in radar [32]-[35] as in sonar [36]. Mixtures of Scaled Gaussian (MSG) distributions [37] are easily derived and tractable approaches. In this context, particular covariance matrix estimators are recommended for adaptive processing, such as the Tyler estimator [38] or Huber's M-estimator [39]. Their use leads to very substantial performance gains in [40], [41], and [35]. In our application, these considerations will allow us to design a new covariance matrix estimator. The performance of the detectors in a non-Gaussian impulsive environment can then be studied. On this occasion, we will show on experimental data the interest of this estimator for robustness to the corruption of training data. ### _Paper organization_ This paper is organized as follows: Section II presents a dual-array sonar system and the experimental acquisition conditions on which this work is based. In Section III, the signal model and detection problem are formalized. According to the two-step GLRT and Rao test design, coherent adaptive detectors are derived in Section IV. The performances are evaluated, compared, and analyzed in Sections V and VI. Conclusions are given in Section VII. Proofs and complementary results are provided in the Appendices. _Notations_: Matrices are in bold capitals, vectors in bold. Re(.) and Im(.) stand respectively for the real and imaginary part operators. For any matrix \(\mathbf{A}\) or vector, \(\mathbf{A}^{T}\) is the transpose of \(\mathbf{A}\) and \(\mathbf{A}^{H}\) is the Hermitian transpose of \(\mathbf{A}\). \(\mathbf{I}\) is the identity matrix and \(\mathcal{CN}(\boldsymbol{\mu},\boldsymbol{\Gamma})\) is the circular complex Normal distribution of mean \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Gamma}\). \(\otimes\) denotes the Kronecker product. ## II SEAPIX system ### _Generalities_ The SEAPIX system is a three-dimensional multibeam echosounder developed by the sonar systems division of Exail (formerly iXblue) [42]. It is traditionally used by fishing professionals as a tool to assist in the selection of catches and compliance with quotas [43], by hydro-acousticians for the monitoring of stocks and morphological studies of fish shoals [44], and by hydrographers for the establishment of bathymetric and sedimentary marine charts [45, 46]. Two uniform linear arrays of 64 elements, arranged in a Mills cross, are entirely symmetric, reversible in transmission/reception, and electronically steerable. They generate transverse (i.e. 
across-track) or longitudinal (along-track) acoustic swaths of 120\({}^{\circ}\) by 1.8\({}^{\circ}\), tiltable by +/-60\({}^{\circ}\), providing volumetric coverage of the water column. ### _FLS experiment_ The SEAPIX system is being tested in a Forward-Looking Sonar (FLS) configuration for predictive target detection and identification. In this context of use, the active face is oriented in the "forward" direction rather than toward the seabed. In our study, the sensor is installed on the DriX Uncrewed Surface Vehicle (USV) and inclined by 20\({}^{\circ}\) in pitch relative to the sea surface (Figure 2, left). In transmission, the vertical antenna (formerly longitudinal) generates an enlarged beam of 9\({}^{\circ}\) in elevation by 120\({}^{\circ}\) in azimuth. A motion-stabilized and electronically tilted firing angle allows the upper limit of the -3 dB transmit beamwidth to graze the sea surface and the lower limit to reach a 50 m depth bottom at about 300 m range. In reception, the horizontal antenna (formerly transverse) generates beams of 2\({}^{\circ}\) in azimuth by 120\({}^{\circ}\) in elevation, and the vertical antenna (which is used again) beams of 2\({}^{\circ}\) in elevation by 120\({}^{\circ}\) in azimuth. A rigid sphere of 71 cm diameter (Figure 2, right) is also immersed at 25 m depth in the middle of the water column. After each transmission of 20 ms Linear Frequency Modulation pulses centered at 150 kHz with a 10 kHz bandwidth and a Pulse Repetition Interval of 0.5 s, the sensor signals from the two antennas are simultaneously recorded, allowing an azimuth and elevation representation of the acoustic environment (Figure 3). ### _Pre-processing and data format_ The signals from the 128 sensors provided by the sonar's embedded software are demodulated in baseband (In-Phase and Quadrature components) and decimated to 43 kHz. During reception pre-processing, the digitized samples are compensated for the time-varying gain, filtered to the bandwidth of the waveform, pulse compressed, then decimated again to the bandwidth. Finally, a ping dataset is a matrix of about 6000 range bins from 15 m to 400 m, by 64 sensors, by two arrays.

Fig. 1: Multiswath capabilities seen from a schematic representation (left): the transverse swath footprint is blue, a 60\({}^{\circ}\) steered transverse swath is displayed in red, the longitudinal swath is green, and a 45\({}^{\circ}\) steered longitudinal swath is orange. Illustration from the operator software (right): an across-track, an along-track, and a tilted along-track swath are observed, as well as an aggregation of fishes and an already constructed bathymetric map.

Fig. 2: Exail's DriX USV (left): the cross-shaped linear arrays are visible in the gondola. The real target in open water (right): a metallic sphere of target strength \(TS=-15\) dB.

## III Detection Schemes ### _Data model for a single array_ At each time, we have acquired two digitized, synchronized complex data vectors of \(m=64\) elements, called "snapshots", which can be written as (omitting the temporal parameterization): \[\mathbf{x}_{i}=\begin{bmatrix}x_{i,1}&x_{i,2}&\cdots&x_{i,m}\end{bmatrix}^{T} \tag{1}\] where \(i=1,2\) is the array identifier (respectively horizontal and vertical). 
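As a small illustration of this data layout (a sketch with placeholder random samples, not the actual recording format), a ping can be held as a complex array of shape (range bins, sensors, arrays), from which the two \(m=64\) snapshots of (1) are read at a given range bin.

```python
import numpy as np

# One ping: about 6000 range bins x 64 sensors x 2 arrays of baseband I/Q samples.
n_range, m, n_arrays = 6000, 64, 2
rng = np.random.default_rng(0)
ping = (rng.standard_normal((n_range, m, n_arrays))
        + 1j * rng.standard_normal((n_range, m, n_arrays)))   # placeholder samples

def snapshots(ping, range_bin):
    """Return the two m-element snapshots x_1, x_2 of (1) at one range bin."""
    x1 = ping[range_bin, :, 0]   # horizontal array
    x2 = ping[range_bin, :, 1]   # vertical array
    return x1, x2

x1, x2 = snapshots(ping, range_bin=2040)
print(x1.shape, x2.shape)        # (64,) (64,)
```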
After the pre-processing steps, a point-like target observed on antenna \(i\) is simply: \[\mathbf{x}_{i}=\alpha_{i}\,\mathbf{p}_{i}+\mathbf{z}_{i} \tag{2}\] where \(\mathbf{x}_{i}\in\mathbb{C}^{m}\) is the received signal, \(\alpha_{i}\) is an unknown complex target amplitude, \(\mathbf{p}_{i}\in\mathbb{C}^{m}\) stands for the known deterministic angular steering vector, and \(\mathbf{z}_{i}\in\mathbb{C}^{m}\) is a Mixture of Scaled Gaussian (MSG) random vector admitting the stochastic representation: \[\mathbf{z}_{i}\stackrel{{ d}}{{=}}\sqrt{\tau_{i}}\,\mathbf{c}_{i}. \tag{3}\] The _texture_ \(\tau_{i}\) is an unknown positive deterministic scalar parameter, presumably different for each range bin (i.e. for all time samples). The _speckle_ \(\mathbf{c}_{i}\sim\mathcal{CN}(\mathbf{0},\sigma_{i}^{2}\mathbf{M}_{ii})\in\mathbb{C}^{m}\) is a complex circular Gaussian random vector whose covariance matrix \(\mathbf{M}_{ii}\) is known up to a scaling factor \(\sigma_{i}^{2}\). The term _speckle_ should be understood in the sense of CES statistics rather than as the result of a sum of contributions from reflections in a resolution cell. This model is strongly related to the class of compound Gaussian distributions [31], which assumes a speckle-independent random texture with a given density \(p_{\tau}\). The MSG distribution is more robust than the Gaussian one because the relative scaling between samples allows flexibility in the presence of heterogeneities, such as impulsive noise, outliers, and inconsistent data. This model explicitly allows considering the power fluctuation across range bins, especially for heavy-tailed clutter distributions. The detection problem is written as a binary hypothesis test: \[\left\{\begin{array}{lcl}H_{0}:\mathbf{x}_{i}=\mathbf{z}_{i}&;&\mathbf{x}_{i,k}=\mathbf{z}_{i,k}\quad k=1\ldots K\\ H_{1}:\mathbf{x}_{i}=\alpha_{i}\,\mathbf{p}_{i}+\mathbf{z}_{i}&;&\mathbf{x}_{i,k}=\mathbf{z}_{i,k}\quad k=1\ldots K.\end{array}\right. \tag{4}\] In (4) it is assumed that \(K\geq m\) independent and identically distributed (i.i.d.) signal-free secondary data \(\mathbf{x}_{i,k}\in\mathbb{C}^{m}\) are available under both hypotheses for background parameter estimation. We recall that \(\mathbf{z}_{i,k}\stackrel{{ d}}{{=}}\sqrt{\tau_{i,k}}\,\mathbf{c}_{i,k}\), \(\mathbf{c}_{i,k}\sim\mathcal{CN}(\mathbf{0},\mathbf{M}_{ii})\). Conditionally on the unknown deterministic texture, the densities of \(\mathbf{x}_{i}\) under \(H_{0}\) and \(H_{1}\) are Gaussian: \[p_{\mathbf{x}_{i}}(\mathbf{x}_{i};H_{0})=\frac{1}{\pi^{m}\hat{\sigma}_{i}^{2m}|\mathbf{M}_{ii}|}\exp\left(-\frac{\mathbf{x}_{i}^{H}\mathbf{M}_{ii}^{-1}\,\mathbf{x}_{i}}{\hat{\sigma}_{i}^{2}}\right)\,, \tag{5}\] \[p_{\mathbf{x}_{i}}(\mathbf{x}_{i};H_{1})=\frac{1}{\pi^{m}\hat{\sigma}_{i}^{2m}|\mathbf{M}_{ii}|}\exp\left(-\frac{(\mathbf{x}_{i}-\alpha_{i}\mathbf{p}_{i})^{H}\mathbf{M}_{ii}^{-1}\,(\mathbf{x}_{i}-\alpha_{i}\mathbf{p}_{i})}{\hat{\sigma}_{i}^{2}}\right)\,,\] where \(\hat{\sigma}_{i}=\sigma_{i}\,\sqrt{\tau_{i}}\). ### _Data model for the two arrays_ If we consider the two antennas, this model can be written more appropriately: \[\left\{\begin{array}{lcl}H_{0}:\mathbf{x}=\mathbf{z}&;&\mathbf{x}_{k}=\mathbf{z}_{k}\quad k=1\ldots K\\ H_{1}:\mathbf{x}=\mathbf{P}\,\boldsymbol{\alpha}+\mathbf{z}&;&\mathbf{x}_{k}=\mathbf{z}_{k}\quad k=1\ldots K\end{array}\right. 
\tag{6}\] where \(\mathbf{x}=\begin{bmatrix}\mathbf{x}_{1}\\ \mathbf{x}_{2}\end{bmatrix}\in\mathbb{C}^{2m}\) is the concatenation of the two received signals, \(\boldsymbol{\alpha}=\begin{bmatrix}\alpha_{1}\\ \alpha_{2}\end{bmatrix}\in\mathbb{C}^{2}\) is the vector of the target amplitudes and \(\mathbf{z}=\begin{bmatrix}\mathbf{z}_{1}\\ \mathbf{z}_{2}\end{bmatrix}\in\mathbb{C}^{2m}\) is the additive clutter. The matrix \(\mathbf{P}=\begin{bmatrix}\mathbf{p}_{1}&\mathbf{0}\\ \mathbf{0}&\mathbf{p}_{2}\end{bmatrix}\in\mathbb{C}^{2m\times 2}\) contains the steering vectors. \(\left\{\mathbf{x}_{k}=\begin{bmatrix}\mathbf{x}_{1,k}\\ \mathbf{x}_{2,k}\end{bmatrix}\right\}_{k\in[1,K]}\in\mathbb{C}^{2m}\) for \(K\geq 2\,m\) are i.i.d. signal-free secondary data. This formulation allows considering the correlation between the sensors of the two arrays. The covariance is \(\mathbf{M}=\begin{bmatrix}\mathbf{M}_{11}&\mathbf{M}_{12}\\ \mathbf{M}_{21}&\mathbf{M}_{22}\end{bmatrix}\), with \(\mathbf{M}_{ii}\) the block-covariance matrix of array \(i\), and \(\mathbf{M}_{ij}=\mathbf{M}_{ji}^{H}\) the cross-correlation block of arrays \(i\) and \(j\). We further assume that the covariance is known, or estimated, up to two scalars, possibly different on each array. These scalars condition the \(\mathbf{M}_{ii}\) block-covariance, but also all the cross-correlation blocks associated with array \(i\). We can therefore write: \[\widetilde{\mathbf{C}}=\widetilde{\mathbf{\Sigma}}\,\mathbf{M}\,\widetilde{\mathbf{\Sigma}}\,, \tag{7}\] with \(\widetilde{\mathbf{\Sigma}}=\begin{bmatrix}\tilde{\sigma}_{1}\,\mathbf{I}_{m}&\mathbf{0}\\ \mathbf{0}&\tilde{\sigma}_{2}\,\mathbf{I}_{m}\end{bmatrix}=\begin{bmatrix}\tilde{\sigma}_{1}&0\\ 0&\tilde{\sigma}_{2}\end{bmatrix}\otimes\mathbf{I}_{m}\) the unknown diagonal matrix built from the scalars \(\sigma_{i}\) and \(\tau_{i}\).

Fig. 3: Azimuth (top) and elevation (bottom) view from a single ping, in 50 m depth in La Ciotat bay.

We recall that the \(\sigma_{i}\) parameter drives the partial homogeneity of the data (i.e. the difference in scale factor between the covariance matrices of the primary and secondary data) and \(\tau_{i}\) drives the non-Gaussianity of the data (i.e. the power variation of the observations over time). In this model, each array, although correlated with the other, has a possibly distinct texture and an unknown scaling factor on its covariance matrix, which may also be dissimilar. It is therefore entirely possible to model, for example, a first array whose observations are Gaussian (\(\tau_{1}=1\)) and homogeneous (\(\sigma_{1}=1\)), and a second array whose observations are \(K\)-distributed (for which \(\tau_{2}\) is a realization of a Gamma distribution) and non-homogeneous (\(\sigma_{2}\neq 1\)). The PDFs under each hypothesis can be rewritten as: \[p_{\mathbf{x}}(\mathbf{x};H_{0})=\frac{1}{\pi^{2m}|\widetilde{\mathbf{C}}|}\exp\left(-\mathbf{x}^{H}\widetilde{\mathbf{C}}^{-1}\mathbf{x}\right)\,,\] \[p_{\mathbf{x}}(\mathbf{x};H_{1})=\frac{1}{\pi^{2m}|\widetilde{\mathbf{C}}|}\exp\left(-(\mathbf{x}-\mathbf{P}\boldsymbol{\alpha})^{H}\widetilde{\mathbf{C}}^{-1}\left(\mathbf{x}-\mathbf{P}\boldsymbol{\alpha}\right)\right). \tag{8}\] ## IV Robust Detectors We discuss the derivation of the detectors using the GLRT and Rao procedures. Following a two-step approach, the covariance matrix \(\mathbf{M}\) will first be assumed known, and then replaced by an appropriate estimator. 
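Before deriving the detectors, the following sketch (our illustration, with arbitrary steering directions, target amplitudes, and a simple identity covariance as placeholders) shows how data could be drawn from this two-array model: speckle with block covariance \(\mathbf{M}\), per-array textures \(\tau_{i}\), and an optional target term \(\mathbf{P}\boldsymbol{\alpha}\).

```python
import numpy as np

rng = np.random.default_rng(1)

def steering(m, theta):
    """Placeholder ULA steering vector (half-wavelength spacing assumed)."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta)) / np.sqrt(m)

def draw_msg(M, tau1, tau2, m):
    """One draw z = diag(sqrt(tau1) I_m, sqrt(tau2) I_m) c with speckle c ~ CN(0, M)."""
    Lc = np.linalg.cholesky(M)
    c = Lc @ ((rng.standard_normal(2 * m) + 1j * rng.standard_normal(2 * m)) / np.sqrt(2))
    scale = np.concatenate([np.full(m, np.sqrt(tau1)), np.full(m, np.sqrt(tau2))])
    return scale * c

m, nu = 64, 0.5
M = np.eye(2 * m)                               # placeholder for the true block covariance
p1, p2 = steering(m, 0.2), steering(m, -0.1)    # arbitrary look directions
P = np.zeros((2 * m, 2), dtype=complex)
P[:m, 0], P[m:, 1] = p1, p2                     # block-diagonal steering matrix of (6)

# K-distributed clutter: per-array textures drawn from Gamma(nu, 1/nu) for each snapshot
tau = rng.gamma(nu, 1.0 / nu, 2)
x_h0 = draw_msg(M, tau[0], tau[1], m)           # cell under test, H0
alpha = np.array([1.0 + 0.5j, 0.8 - 0.2j])      # arbitrary target amplitudes
x_h1 = P @ alpha + x_h0                         # cell under test, H1
K = 2 * 2 * m                                   # number of secondary snapshots
secondary = np.stack([draw_msg(M, *rng.gamma(nu, 1.0 / nu, 2), m) for _ in range(K)])
print(x_h1.shape, secondary.shape)              # (128,) (256, 128)
```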
### _Detectors' derivation with \(\mathbf{M}\) known (step-1)_ #### IV-A1 Generalized Likelihood Ratio Test The Generalized Likelihood Ratio Test (GLRT) design methodology proposes to solve the detection problem from the ratio of the probability density functions under \(H_{1}\) and \(H_{0}\), substituting the unknown parameters with their maximum likelihood estimates: \[L_{G}(\mathbf{x})=\frac{\max\limits_{\boldsymbol{\alpha}}\max\limits_{\widetilde{\mathbf{\Sigma}}}p_{\mathbf{x}}\left(\mathbf{x};\boldsymbol{\alpha},\widetilde{\mathbf{\Sigma}},H_{1}\right)}{\max\limits_{\widetilde{\mathbf{\Sigma}}}p_{\mathbf{x}}\left(\mathbf{x};\widetilde{\mathbf{\Sigma}},H_{0}\right)}. \tag{9}\] **Proposition IV.1**: _The GLRT for the hypothesis test (6) is given by:_ \[L_{G}(\mathbf{x})=\frac{\hat{\sigma}_{1_{0}}\,\hat{\sigma}_{2_{0}}}{\hat{\sigma}_{1_{1}}\,\hat{\sigma}_{2_{1}}} \tag{10}\] _where_ \[\left(\hat{\sigma}_{1_{0}}\,\hat{\sigma}_{2_{0}}\right)^{2}=\left(\frac{\operatorname{Re}\left(\mathbf{x}_{1}^{H}\mathbf{M}_{12}^{-1}\mathbf{x}_{2}\right)}{m}+\sqrt{\frac{\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{m}\,\frac{\mathbf{x}_{2}^{H}\mathbf{M}_{22}^{-1}\mathbf{x}_{2}}{m}}\right)^{2},\] _and_ \[\left(\hat{\sigma}_{1_{1}}\,\hat{\sigma}_{2_{1}}\right)^{2}=\left(\frac{\operatorname{Re}\left[\mathbf{x}_{1}^{H}\left(\mathbf{M}_{12}^{-1}-\mathbf{D}_{12}^{-1}\right)\mathbf{x}_{2}\right]}{m}+\sqrt{\frac{\mathbf{x}_{1}^{H}\left(\mathbf{M}_{11}^{-1}-\mathbf{D}_{11}^{-1}\right)\mathbf{x}_{1}}{m}\,\frac{\mathbf{x}_{2}^{H}\left(\mathbf{M}_{22}^{-1}-\mathbf{D}_{22}^{-1}\right)\mathbf{x}_{2}}{m}}\right)^{2}\] _with \(\mathbf{D}^{-1}=\mathbf{M}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1}\)._ Proof: See Appendix A for a step-by-step derivation and Appendix B for some interesting equivalences. As \(\mathbf{x}_{i}=\sqrt{\tau_{i}}\,\mathbf{c}_{i}\) under \(H_{0}\), it is easily shown that this detector is texture independent (i.e. it has the _texture-CFAR_ property). This detection test will be called _M-NMF-G_ in the following. #### IV-A2 Rao test The Rao test is obtained by exploiting the asymptotic efficiency of the ML estimate and expanding the likelihood ratio in the neighborhood of the estimated parameters [47]. A traditional approach for complex unknown parameters is to form a corresponding real-valued parameter vector and then use the real Rao test [48], [49]. Specifically, rewriting the detection problem (6) as: \[\left\{\begin{array}{l}H_{0}:\boldsymbol{\xi}_{R}=\mathbf{0},\boldsymbol{\xi}_{S}\\ H_{1}:\boldsymbol{\xi}_{R}\neq\mathbf{0},\boldsymbol{\xi}_{S}\,,\end{array}\right. 
\tag{11}\] where \(\boldsymbol{\xi}_{R}=\left[\operatorname{Re}\left(\boldsymbol{\alpha}\right)^{T}\,\operatorname{Im}\left(\boldsymbol{\alpha}\right)^{T}\right]^{T}\) is a \(4\times 1\) parameter vector and \(\boldsymbol{\xi}_{S}=[\tilde{\sigma}_{1}\,\tilde{\sigma}_{2}]^{T}\) is a \(2\times 1\) vector of nuisance parameters, the Rao test for the problem (11) is: \[L_{R}(\mathbf{x})=\left.\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})}{\partial\boldsymbol{\xi}_{R}}\right|_{\substack{\boldsymbol{\xi}_{R}=\hat{\boldsymbol{\xi}}_{R_{0}}\\ \boldsymbol{\xi}_{S}=\hat{\boldsymbol{\xi}}_{S_{0}}}}^{T}\left[\mathbf{I}^{-1}(\hat{\boldsymbol{\xi}}_{R_{0}},\hat{\boldsymbol{\xi}}_{S_{0}})\right]_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{R}}\left.\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})}{\partial\boldsymbol{\xi}_{R}}\right|_{\substack{\boldsymbol{\xi}_{R}=\hat{\boldsymbol{\xi}}_{R_{0}}\\ \boldsymbol{\xi}_{S}=\hat{\boldsymbol{\xi}}_{S_{0}}}}\,. \tag{12}\] The PDF \(p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\) is given in (8) and parametrized by \(\boldsymbol{\xi}_{R}\) and \(\boldsymbol{\xi}_{S}\); \(\hat{\boldsymbol{\xi}}_{R_{0}}\) and \(\hat{\boldsymbol{\xi}}_{S_{0}}\) are the ML estimates of \(\boldsymbol{\xi}_{R}\) and \(\boldsymbol{\xi}_{S}\) under \(H_{0}\). \(\mathbf{I}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\) is the Fisher Information Matrix, which can be partitioned as: \[\mathbf{I}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})=\begin{bmatrix}\mathbf{I}_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{R}}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})&\mathbf{I}_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{S}}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\\ \mathbf{I}_{\boldsymbol{\xi}_{S}\boldsymbol{\xi}_{R}}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})&\mathbf{I}_{\boldsymbol{\xi}_{S}\boldsymbol{\xi}_{S}}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\end{bmatrix}\,,\] and we have: \[\left[\mathbf{I}^{-1}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\right]_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{R}}=\left(\mathbf{I}_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{R}}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})-\mathbf{I}_{\boldsymbol{\xi}_{R}\boldsymbol{\xi}_{S}}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\,\mathbf{I}_{\boldsymbol{\xi}_{S}\boldsymbol{\xi}_{S}}^{-1}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\,\mathbf{I}_{\boldsymbol{\xi}_{S}\boldsymbol{\xi}_{R}}(\boldsymbol{\xi}_{R},\boldsymbol{\xi}_{S})\right)^{-1}\,. \tag{13}\] The following proposition can finally be stated: **Proposition IV.2**: _The Rao test is given by:_ \[L_{R}(\mathbf{x})=2\,\mathbf{x}^{H}\widehat{\mathbf{C}}_{0}^{-1}\,\mathbf{P}\left(\mathbf{P}^{H}\widehat{\mathbf{C}}_{0}^{-1}\,\mathbf{P}\right)^{-1}\mathbf{P}^{H}\widehat{\mathbf{C}}_{0}^{-1}\,\mathbf{x} \tag{14}\] _where \(\widehat{\mathbf{C}}_{0}=\widehat{\mathbf{\Sigma}}_{0}\,\mathbf{M}\,\widehat{\mathbf{\Sigma}}_{0}\) and \(\widehat{\mathbf{\Sigma}}_{0}\) is defined in Appendix A._ Proof: See Appendix C. As for the GLRT, (14) is _texture-CFAR_. This detector will be referred to as _M-NMF-R_ in the sequel. ### _Covariance estimation and adaptive detectors (step-2)_ In practice, the noise covariance matrix is unknown and estimated using the \(K\) available i.i.d. signal-free secondary data. 
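Before moving to the estimation step, a possible numerical transcription of the two step-1 statistics is sketched below. It uses the \(a_{1}\), \(a_{2}\), \(a_{12}\) shorthand of Appendix A for the scale estimates appearing in (10), and \(\widehat{\mathbf{C}}_{0}=\widehat{\mathbf{\Sigma}}_{0}\,\mathbf{M}\,\widehat{\mathbf{\Sigma}}_{0}\) for (14); this is our reading of the formulas, not reference code.

```python
import numpy as np

def _sigma_product(x1, x2, A11, A22, A12):
    """sigma_1 * sigma_2 = a12 + sqrt(a1 a2), with the quadratic forms of Appendix A."""
    m = x1.size
    a1 = np.real(x1.conj() @ A11 @ x1) / m
    a2 = np.real(x2.conj() @ A22 @ x2) / m
    a12 = np.real(x1.conj() @ A12 @ x2) / m
    return a12 + np.sqrt(a1 * a2)

def glrt_mnmf_g(x, P, M):
    """L_G of Eq. (10): ratio of the H0 and H1 scale estimates (same form assumed under H1)."""
    m = x.size // 2
    Mi = np.linalg.inv(M)
    D_inv = Mi @ P @ np.linalg.inv(P.conj().T @ Mi @ P) @ P.conj().T @ Mi
    x1, x2 = x[:m], x[m:]
    blocks = lambda A: (A[:m, :m], A[m:, m:], A[:m, m:])
    return _sigma_product(x1, x2, *blocks(Mi)) / _sigma_product(x1, x2, *blocks(Mi - D_inv))

def rao_mnmf_r(x, P, M):
    """L_R of Eq. (14), with C0_hat = Sigma0_hat M Sigma0_hat and Eq. (25) for Sigma0_hat."""
    m = x.size // 2
    Mi = np.linalg.inv(M)
    x1, x2 = x[:m], x[m:]
    a1 = np.real(x1.conj() @ Mi[:m, :m] @ x1) / m
    a2 = np.real(x2.conj() @ Mi[m:, m:] @ x2) / m
    a12 = np.real(x1.conj() @ Mi[:m, m:] @ x2) / m
    s1 = np.sqrt(a1 + np.sqrt(a1 / a2) * a12)            # positive roots of Eq. (25)
    s2 = np.sqrt(a2 + np.sqrt(a2 / a1) * a12)
    Sigma0 = np.diag(np.concatenate([np.full(m, s1), np.full(m, s2)]))
    C0i = np.linalg.inv(Sigma0 @ M @ Sigma0)
    v = P.conj().T @ C0i @ x
    return float(np.real(2.0 * v.conj() @ np.linalg.inv(P.conj().T @ C0i @ P) @ v))

# Example with x_h1, P, M from the previous data-model sketch: larger values favor H1.
print(glrt_mnmf_g(x_h1, P, M), rao_mnmf_r(x_h1, P, M))
```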
In a Gaussian environment, in which the PDFs are given by (8) with \(\sigma_{i}=1\) and \(\tau_{i}=1\), the MLE of \(\mathbf{M}\) is the well-known Sample Covariance Matrix (SCM): \[\widehat{\mathbf{M}}_{SCM}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{x}_{k}\mathbf{x}_{k}^{H} \tag{15}\] which is an unbiased and minimum-variance estimator. In the presence of outliers or a heavy-tailed distribution (as modeled by a mixture of scaled Gaussian), this estimator is no longer optimal or robust. This leads to a strong performance degradation. **Proposition IV.3**: _In an MSG environment, the MLE of \(\mathbf{M}\) is given by:_ \[\widehat{\mathbf{M}}_{\text{2TYL}}=\frac{1}{K}\sum_{k=1}^{K}\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x}_{k}\mathbf{x}_{k}^{H}\widehat{\mathbf{T}}_{k}^{-1}\,, \tag{16}\] _where \(\widehat{\mathbf{T}}_{k}=\begin{bmatrix}\sqrt{\hat{\tau}_{1_{k}}}&0\\ 0&\sqrt{\hat{\tau}_{2_{k}}}\end{bmatrix}\otimes\mathbf{I}_{m}\), \(\hat{\tau}_{1_{k}}=t_{1}+\sqrt{\frac{t_{1}}{t_{2}}}t_{12}\), \(\hat{\tau}_{2_{k}}=t_{2}+\sqrt{\frac{t_{2}}{t_{1}}}t_{12}\) and \(t_{1}=\frac{\mathbf{x}_{1,k}^{H}\widehat{\mathbf{M}}_{11}^{-1}\mathbf{x}_{1,k}}{m}\), \(t_{2}=\frac{\mathbf{x}_{2,k}^{H}\widehat{\mathbf{M}}_{22}^{-1}\mathbf{x}_{2,k}}{m}\), \(t_{12}=\frac{\text{Re}\left(\mathbf{x}_{1,k}^{H}\widehat{\mathbf{M}}_{12}^{-1}\mathbf{x}_{2,k}\right)}{m}\)._ Proof: The demonstration is provided in Appendix D. A key point is that this estimator is independent of the textures, i.e. the power variations, on each array. It can be thought of as a _multi-texture_ generalization of [38]. From a practical point of view, \(\widehat{\mathbf{M}}_{\text{2TYL}}\) is the solution of the recursive algorithm: \[\widehat{\mathbf{T}}_{k}^{(n)}=\begin{bmatrix}\sqrt{\hat{\tau}_{1_{k}}^{(n)}}&0\\ 0&\sqrt{\hat{\tau}_{2_{k}}^{(n)}}\end{bmatrix}\otimes\mathbf{I}_{m}\,, \tag{17}\] \[\widehat{\mathbf{M}}_{\text{2TYL}}^{(n)}=\frac{1}{K}\sum_{k=1}^{K}\left(\widehat{\mathbf{T}}_{k}^{(n-1)}\right)^{-1}\mathbf{x}_{k}\mathbf{x}_{k}^{H}\left(\widehat{\mathbf{T}}_{k}^{(n-1)}\right)^{-1}\,, \tag{18}\] where \(n\in\mathbb{N}\) is the iteration number, whatever the initialization \(\widehat{\mathbf{M}}_{\text{2TYL}}^{(0)}\). The convergence of the recursive equations (17) and (18) in the estimation of \(\widehat{\mathbf{M}}_{\text{2TYL}}\) is illustrated in Figure 4 for 500 iterations and \(\widehat{\mathbf{M}}_{\text{2TYL}}^{(0)}=\mathbf{I}_{2m}\). The relative difference between estimates decreases with the number of iterations. From iteration 60, the accuracy becomes limited by the simulation environment. In practice, it is not necessary to go to this limit, and we notice that from iteration 20 the relative deviation becomes lower than \(10^{-6}\). Finally, the adaptive versions of the tests (10) and (14) are simply obtained by replacing the covariance matrix \(\mathbf{M}\) with an appropriate estimate: (15) or (16) according to the environment. These detectors will be referred to as _M-ANMF-G\({}_{SCM}\)_, _M-ANMF-R\({}_{SCM}\)_, _M-ANMF-G\({}_{TYL}\)_, and _M-ANMF-R\({}_{TYL}\)_. ## V Numerical results on simulated data ### _Performance assessment_ Two correlation coefficients \(\rho_{1}\) and \(\rho_{2}\) (\(0<\rho_{1},\rho_{2}<1\)) are used in the construction of a _speckle_ covariance matrix model defined as: \[\left[\mathbf{M}\right]_{jl}=\beta\,\rho_{1}^{|j_{1}-l_{1}|}\,\rho_{2}^{|j_{2}-l_{2}|}\,,\] where \(j,l\in[1,\,2m]\) are sensor numbers of coordinates \((j_{1},j_{2})\) and \((l_{1},l_{2})\) respectively, and \(\beta\) is a scale factor. 
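The sketch below ties the two previous ingredients together: it builds the speckle covariance model \([\mathbf{M}]_{jl}\) just defined (the sensor-coordinate convention for the crossed arrays is our assumption) and runs the fixed-point recursion (17)-(18) for \(\widehat{\mathbf{M}}_{\text{2TYL}}\) on simulated secondary data, stopping after a fixed number of iterations in line with the convergence behavior reported above.

```python
import numpy as np

def speckle_covariance(m, beta, rho1, rho2):
    """[M]_jl = beta * rho1**|j1-l1| * rho2**|j2-l2|; the two arrays are assumed to
    intersect at their centers (this coordinate convention is ours)."""
    c = (m - 1) / 2.0
    coords = np.array([(k - c, 0.0) for k in range(m)]        # horizontal array
                      + [(0.0, k - c) for k in range(m)])      # vertical array
    d1 = np.abs(coords[:, 0][:, None] - coords[:, 0][None, :])
    d2 = np.abs(coords[:, 1][:, None] - coords[:, 1][None, :])
    return beta * rho1 ** d1 * rho2 ** d2

def tyler_2arrays(X, n_iter=30):
    """Fixed-point recursion (17)-(18); X holds K secondary snapshots of size 2m (rows)."""
    K, two_m = X.shape
    m = two_m // 2
    M = np.eye(two_m, dtype=complex)                           # initialization M^(0)
    for _ in range(n_iter):                                    # ~20-30 iterations suffice
        Mi = np.linalg.inv(M)
        M_new = np.zeros_like(M)
        for xk in X:
            x1, x2 = xk[:m], xk[m:]
            t1 = np.real(x1.conj() @ Mi[:m, :m] @ x1) / m
            t2 = np.real(x2.conj() @ Mi[m:, m:] @ x2) / m
            t12 = np.real(x1.conj() @ Mi[:m, m:] @ x2) / m
            tau1 = t1 + np.sqrt(t1 / t2) * t12                 # per-array texture estimates
            tau2 = t2 + np.sqrt(t2 / t1) * t12
            s = np.concatenate([np.full(m, np.sqrt(tau1)), np.full(m, np.sqrt(tau2))])
            M_new += np.outer(xk / s, (xk / s).conj())         # T_k^-1 x_k x_k^H T_k^-1
        M = M_new / K
    return M

# Illustrative run on secondary data drawn from the model covariance above.
m, beta, rho1, rho2 = 64, 3e-4, 0.4, 0.9
M_true = speckle_covariance(m, beta, rho1, rho2)
rng = np.random.default_rng(2)
Lc = np.linalg.cholesky(M_true)
X = ((rng.standard_normal((4 * m, 2 * m)) + 1j * rng.standard_normal((4 * m, 2 * m)))
     / np.sqrt(2)) @ Lc.T
M_hat = tyler_2arrays(X)
M_hat *= np.real(np.trace(M_true)) / np.real(np.trace(M_hat))  # (16) is defined up to a scalar
```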
Thus, denoting \(\mathbf{M}=\begin{bmatrix}\mathbf{M}_{11}&\mathbf{M}_{12}\\ \mathbf{M}_{21}&\mathbf{M}_{22}\end{bmatrix}\), \(\mathbf{M}_{11}\) and \(\mathbf{M}_{22}\) are the covariance matrices of arrays 1 and 2, and \(\mathbf{M}_{12}=\mathbf{M}_{21}^{H}\) is a cross-correlation block. In the FLS context, the block \(\mathbf{M}_{11}\) is weakly correlated and Toeplitz structured (close to the identity matrix). \(\mathbf{M}_{22}\) is also Toeplitz but more strongly correlated (due to the wider transmission beam). The cross-correlation blocks could be considered null under the uncorrelated-arrays assumption. In our case, they will be different from zero because the arrays cross each other in their central part. This results in the general structure displayed in Figure 5, where we visually show the adequacy of this model with the SCM covariance estimator established on real data. We choose \(\sigma_{i}=1\); \(\tau_{i}=1\) for Gaussian clutter, and \(\tau_{i}\sim\text{Gam}(\nu,1/\nu)\) with \(\nu=0.5\) for impulsive non-Gaussian _K_-distributed data (that is, the texture variables follow a gamma distribution with shape parameter 0.5 and scale parameter 2). The PFA-threshold curves are established on the basis of 1000 realizations of random vectors. The detection probabilities are statistically estimated by adding a synthetic narrow-band far-field point target with identical amplitudes on the arrays (\(\alpha_{1}=\alpha_{2}\)). 10000 Monte Carlo iterations are performed and 150 target amplitudes are evaluated. ### _Benchmark tests_ The GLRT for a single array in a partially homogeneous Gaussian environment when \(\mathbf{M}_{ii}\) is known is the _Normalized Matched Filter_ [11]: \[\mathrm{NMF}_{i}(\mathbf{x}_{i})=\frac{|\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{x}_{i}|^{2}}{\left(\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{p}_{i}\right)\left(\mathbf{x}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{x}_{i}\right)}. \tag{19}\] Adaptive versions are obtained by substituting the covariance matrix with a suitable estimate [35] and will be referred to as \(\mathrm{ANMF}_{SCM}\,i\) in the Gaussian case, or \(\mathrm{ANMF}_{TYL}\,i\) in the non-Gaussian case. When the two arrays are considered, in the very favorable case of a Gaussian homogeneous environment where the covariance matrix \(\mathbf{C}\) is perfectly known, the GLRT is: \[\mathrm{MIMO\text{-}MF}(\mathbf{x})=\mathbf{x}^{H}\mathbf{C}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\mathbf{C}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{C}^{-1}\mathbf{x}\,. \tag{20}\] This is the _MIMO Optimum Gaussian Detector_ (R-MIMO OGD) in [50], which is a multi-array generalization of the _Matched Filter_ test. One can note the very strong similarity with (14). Its adaptive version is MIMO-\(\mathrm{AMF}_{SCM}\). It seems useful to specify that this detector is relevant only in a Gaussian and perfectly homogeneous environment. In particular, exactly as in the single-array case [31], the covariance estimator (16) is defined up to a constant scalar. Detectors (10) and (14) are invariant when the covariance is changed by a scale factor. This is a direct result of the partial homogeneity assumption. This is not the case for (20); thus the adaptive MIMO-\(\mathrm{AMF}_{TYL}\) version is not relevant and will not be used in the following. 
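For completeness, a direct transcription of the two benchmarks (19) and (20) is sketched below; matrix inverses are taken naively here, which is an implementation shortcut rather than a recommendation.

```python
import numpy as np

def nmf(x_i, p_i, M_ii):
    """Single-array Normalized Matched Filter, Eq. (19)."""
    Mi = np.linalg.inv(M_ii)
    num = np.abs(p_i.conj() @ Mi @ x_i) ** 2
    den = np.real(p_i.conj() @ Mi @ p_i) * np.real(x_i.conj() @ Mi @ x_i)
    return float(num / den)

def mimo_mf(x, P, C):
    """Dual-array MIMO Matched Filter (OGD), Eq. (20)."""
    Ci = np.linalg.inv(C)
    v = P.conj().T @ Ci @ x
    return float(np.real(v.conj() @ np.linalg.inv(P.conj().T @ Ci @ P) @ v))

# e.g. nmf(x_h1[:64], p1, M[:64, :64]) and mimo_mf(x_h1, P, M) with the earlier sketch data.
```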
These tests will be considered as performance bounds for the proposed detectors: M-NMF-G and M-NMF-R when \(\mathbf{M}\) is known, and M-\(\mathrm{ANMF-G}_{SCM}\), M-\(\mathrm{ANMF-R}_{SCM}\), M-\(\mathrm{ANMF-G}_{TYL}\) and M-\(\mathrm{ANMF-R}_{TYL}\) otherwise. ### _Performance in Gaussian clutter_ We have shown that all the developed detectors are texture-CFAR. Unfortunately, the matrix-CFAR property (the distribution of the test remains identical whatever the true covariance matrix) is much more difficult to establish. Therefore, we propose to perform a study on simulated data to check this matrix-CFAR property. Figure 6 experimentally demonstrates the CFAR behavior of the detectors with respect to the covariance matrix in a Gaussian environment. On the left side we represent the false alarm probability of the Rao detector (for known and estimated \(\mathbf{M}\)) as a function of the detection threshold. On the right, these curves are plotted for the GLRT detector. The deviation of the adaptive detectors comes from the covariance estimation process: the red curve converges to the blue one when the number of secondary data used for the estimation of \(\mathbf{M}\) increases. The markers represent curves for different covariances. Solid lines correspond to \(\beta=3\times 10^{-4}\), \(\rho_{1}=0.4\), \(\rho_{2}=0.9\). The cross markers are established from \(\beta=1\), \(\rho_{1}=0.95\), \(\rho_{2}=0.95\). Circular markers are for \(\beta=100\), \(\rho_{1}=0.1\), \(\rho_{2}=0.1\) and null anti-diagonal blocks. For very distinct covariances, the superposition of curves and markers underlines the _matrix-CFAR_ property of the detectors. Figure 7 illustrates the value of merging arrays with a cross-shaped geometry. The detection probabilities of NMF 1, NMF 2, and M-NMF-R are plotted as a function of the azimuth (\(\theta_{1}\)) and elevation (\(\theta_{2}\)) angles at fixed SNR. In the first angular dimension, which depends on the array considered, each ULA has a zone of least detectability close to \(0^{\circ}\). This is due to the correlated nature of the clutter. The decrease in detection probabilities from +/-\(60^{\circ}\) to \(0^{\circ}\) is a function of the correlation level (\(\rho_{1}\) or \(\rho_{2}\)). In the second dimension, the detection performances are invariant. Thus, the poor detection performance near \(0^{\circ}\) propagates along a whole direction of space: "vertically" for NMF 1 and "horizontally" for NMF 2. As an illustration, the probability of detection of NMF 1 at \(\theta_{1}=0^{\circ}\), whatever \(\theta_{2}\), is \(PD=0.17\), and the probability of detection of NMF 2 at \(\theta_{2}=0^{\circ}\), for all \(\theta_{1}\), is 0.03. The M-NMF-R detector spatially minimizes this area of least detectability. In this case, the probability of detection at \(\theta_{1}=0^{\circ}\) becomes greater than 0.8 for \(|\theta_{2}|>10.5^{\circ}\) and
greater than 0.5 for \(\theta_{2}=0^{\circ}\) and \(|\theta_{1}|>24^{\circ}\). The _clutter ridge_ is minimized.

Fig. 5: Sonar arrays (left): two uniform linear arrays intersect in their centers. Empirical covariance matrix on real sonar data shown in dB (center): the SCM is estimated on a homogeneous area between 265 m and 328 m. Cross-correlation blocks are non-zero. The covariance matrix model used for the data simulation is shown in dB (right): \(\beta=3\times 10^{-4}\), \(\rho_{1}=0.4\), \(\rho_{2}=0.9\).

Fig. 6: Pfa-threshold curves of the M-(A)NMF-R\({}_{SCM}\) and M-(A)NMF-G\({}_{SCM}\) detectors in Gaussian environment. Left: the Rao detector for known \(\mathbf{M}\) (blue), for \(\mathbf{M}\) estimated from \(2\times 2m\) secondary data (red), and the OGD detector or _Matched Filter_ (black). Right: the GLRT detector for known \(\mathbf{M}\) (blue), and estimated based on \(2\times 2m\) secondary data (red).

The probability of detection (PD) is plotted as a function of the signal-to-noise ratio (SNR) in Figure 8 when \(\mathbf{M}\) is known. The MIMO-MF detector (shown in black), which requires perfect knowledge of the covariance, is logically the most efficient. The single-array NMF detectors (NMF 1 and 2, in purple and green) have comparable performance, the lowest of all. The M-NMF-I detector (in blue), which assumes antenna independence, lies between these curves. The proposed M-NMF-G (red) and M-NMF-R (yellow) detectors are both equivalent and superior to the M-NMF-I (0.2 dB at \(PD=0.8\)) and much more efficient than the NMF 1 (2.5 dB) and NMF 2 tests (2 dB). The difference with MIMO-MF is slight, around 0.2 dB at \(PD=0.8\). Detection performances with the Rao approach are slightly superior to those obtained with the GLRT. Figure 9 compares the performance of the tests in their adaptive versions. The MIMO-MF curve is shown in black dotted lines for illustrative purposes only (as it assumes the covariance known). It can be seen that when the SCM is evaluated on \(2\times 2m=256\) secondary data, the covariance estimation leads to a loss of 3 dB between the performance of the MIMO-MF test and that of its adaptive version MIMO-AMF, as expected from the Reed-Mallett-Brennan rule [51]. In their adaptive versions, the proposed detectors offer performance equivalent to the MIMO-AMF detector while offering additional robustness and flexibility with respect to a possible lack of knowledge of the covariance (estimated up to two scale factors). The gain compared to single-array detectors, although reduced compared to the case where \(\mathbf{M}\) is known, remains favorable and of the order of 0.5 to 1 dB at \(PD=0.8\). In other words, for an SNR of -11 dB, \(PD=0.75\) for ANMF 1 and 0.85 for M-ANMF-G\({}_{SCM}\) (or M-ANMF-R\({}_{SCM}\)). ### _Performance in impulsive non-Gaussian clutter_ PFA-threshold curves in a _K_-distributed environment are displayed in Figure 10 in order to characterize the matrix-CFAR behavior. The detectors based on the new covariance matrix estimator systematically have lower detection thresholds than those obtained with the SCM. While optimal for Gaussian clutter, the MIMO-MF detector is no longer suitable. The marker overlays highlight the matrix-CFAR behavior of the Rao detector in a non-Gaussian environment. The case of the GLRT detector is much less obvious: at this stage, it seems complicated to consider that the false alarm curves are strictly independent of the covariance matrix.

Fig. 7: Probability of detection in Gaussian environment for known covariance matrix \(\mathbf{M}\) as a function of \(\theta_{1}\) and \(\theta_{2}\) (\(\beta=3\times 10^{-4}\), \(\rho_{1}=0.4\), \(\rho_{2}=0.9\), \(SNR=-12\) dB and \(PFA=10^{-2}\)). Top left: NMF 1. Top right: NMF 2. Bottom left: M-NMF-R.

Fig. 8: Probability of detection in Gaussian environment for known \(\mathbf{M}\) (\(\beta=3\times 10^{-4}\), \(\rho_{1}=0.4\), \(\rho_{2}=0.9\) and \(PFA=10^{-2}\)).

Fig. 9: Probability of detection in Gaussian environment for unknown \(\mathbf{M}\) (\(PFA=10^{-2}\), SCMs are estimated based on \(2\times 2m\) secondary data).
The detection performances are compared in Figure 11. For the Rao and GLRT detectors, using the estimator (16) in an impulsive non-Gaussian environment induces a gain in detection performance of the order of 1.2 dB at \(PD=0.8\) compared with the SCM. Compared to the best competitors, the improvement is 3 dB at \(PD=0.8\). ## VI Experimental data with real target The actual sonar measurements were collected during an acquisition campaign in May 2022 in La Ciotat bay, Mediterranean Sea. The experimental conditions are described in II-B, and the dataset of interest is shown in Figure 3. Echoes from the true target (right in Figure 2) are observed at the 2040th range cell. ### _Illustrations of detection test outputs_ Figure 12 provides examples of real-data outputs from conventional ANMF detectors based on a single array. The real target is observed in range bin 2040 (143 m away from the sonar), at angular bin \(\theta_{1}=26\) (\(-12.4^{\circ}\) azimuth) and angular bin \(\theta_{2}=37\) (\(8.6^{\circ}\) elevation). Figure 13 shows the outputs of the Rao and GLRT detectors applied simultaneously to both arrays at the specific range bin 2040. These subfigures are not directly comparable with each other or with Figure 12. The target is perfectly located in azimuth and elevation. ### _Robustness to training data corruption_ Previously, we assumed the availability of a set of \(K\geq 2m\) i.i.d. secondary data, assumed free of signal components and sharing statistical properties with the noise in the cell under test. In practice, such data can be obtained by processing samples in spatial proximity to the range bin being evaluated, and the absence of signals is not always verified or checkable. In particular, this assumption is no longer valid if another target is present in these secondary data.

Fig. 10: Pfa-threshold curves in non-Gaussian \(K\)-distributed environment (\(\nu=0.5\)). Left: the Rao detector for known \(\mathbf{M}\) (blue), for \(\mathbf{M}\) estimated with the SCM (red), for \(\mathbf{M}\) estimated with the new estimator (yellow), and the OGD detector or _Matched Filter_ (black). Right: the GLRT detector for known \(\mathbf{M}\) (blue), for \(\mathbf{M}\) estimated based on the SCM (red), and based on Tyler's extension (yellow).

Fig. 11: Probability of detection in \(K\)-distributed environment for unknown \(\mathbf{M}\) (\(\nu=0.5\), \(PFA=10^{-2}\), \(2\times 2m\) secondary data). In solid lines, the detectors are built with the SCM estimate. In dotted lines, the detectors are built with Tyler's (single array) or the new (dual arrays) estimator.

Fig. 12: ANMF\({}_{SCM}\) detector outputs on real sonar data. A target is observed on array 1 (left) at coordinates (26, 2040) and on array 2 (right) at coordinates (37, 2040). The SCM is built from 256 secondary data.

Fig. 13: Outputs of the M-ANMF-R\({}_{SCM}\) (left) and M-ANMF-G\({}_{SCM}\) (right) detectors. The real target is at coordinates (37, 26). The SCM is built from 256 secondary data.

Figure 14 illustrates the robustness of the covariance matrix estimators to data corruption. A synthetic target is added 100 samples away from the real target and contaminates the secondary dataset. On the left side, the output of M-ANMF-R\({}_{SCM}\) is strongly degraded. The SCM is not robust to the presence of outliers. The target is hardly discernible, and the maximum is wrongly located. Under the same conditions, the response of the M-ANMF-R\({}_{TYL}\) detector is visualized in the right part. 
Although degraded compared to Figure 13, the behavior is still largely usable. The target presence is easily identified, and the new estimator (16) is much more resistant to data contamination.
## VII Conclusions
In this paper, we considered the problem of adaptive point target detection by a correlated multi-array Mills Cross sonar system. Using a 2-step approach, we first derived two new detectors that are robust to differences in target amplitudes and to unknown scaling factors on the covariances. Subsequently, we have introduced an innovative estimator of the covariance matrix suitable for any non-Gaussian MSG environment. Under these very general assumptions, the framework of the study can therefore cover systems with co-located or remote arrays. Experimental results show that the detection performance is up to 3 dB better than conventional approaches. The detectors cover a larger detection area and are particularly robust to spikes, impulsive noise, and data contamination. Future work will focus on establishing a theoretical demonstration of the matrix-CFAR behavior of these detectors, and on generalizing the solutions to different numbers and geometries of arrays.
## Acknowledgments
This work has been done thanks to the facilities offered by the Univ. Savoie Mont Blanc - CNRS/IN2P3 MUST computing center.
## Appendix A GLRT's derivation
For the following, and for ease of reading, the punctuation mark "tilde" will be omitted. Thus \(\tilde{\Sigma}\) and \(\tilde{\sigma}_{i}\) will simply be denoted as \(\Sigma\) and \(\sigma_{i}\). The same is true for their respective estimates.
### _Maximum Likelihood Estimator of \(\mathbf{\Sigma}\) under \(H_{0}\)_
The MLE \(\mathbf{\hat{\Sigma}}_{0}\) is derived from the log-likelihood function: \[\begin{array}{l}\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\Sigma},H_{0})=\\ -mL\ln\pi+2\ln|\mathbf{\Sigma}^{-1}|-\ln|\mathbf{M}|-\left(\mathbf{x}^{H}\mathbf{\Sigma}^{-1}\mathbf{M}^{-1}\mathbf{\Sigma}^{-1}\mathbf{x}\right)\,,\end{array}\] whose derivative relative to \(\mathbf{\Sigma}^{-1}\) is: \[\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\Sigma},H_{0})}{\partial\mathbf{\Sigma}^{-1}}=2\frac{\partial\ln|\mathbf{\Sigma}^{-1}|}{\partial\mathbf{\Sigma}^{-1}}-\frac{\partial\mathbf{x}^{H}\mathbf{\Sigma}^{-1}\mathbf{M}^{-1}\mathbf{\Sigma}^{-1}\mathbf{x}}{\partial\mathbf{\Sigma}^{-1}}.\] Knowing that ([52] (57)) \(\frac{\partial\ln|\mathbf{\Sigma}^{-1}|}{\partial\mathbf{\Sigma}^{-1}}=\mathbf{\Sigma}\), and ([52] (82)) \[\frac{\partial\mathbf{x}^{H}\mathbf{\Sigma}^{-1}\mathbf{M}^{-1}\mathbf{\Sigma}^{-1}\mathbf{x}}{\partial\mathbf{\Sigma}^{-1}}=2\mathrm{Re}\left(\mathbf{M}^{-1}\mathbf{\Sigma}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\,,\] we have: \[\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\Sigma},H_{0})}{\partial\mathbf{\Sigma}^{-1}}=2\,\mathbf{\Sigma}-2\,\mathrm{Re}\left(\mathbf{M}^{-1}\mathbf{\Sigma}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\,,\] which leads to: \[\mathbf{\hat{\Sigma}}_{0}=\mathrm{Re}\left(\mathbf{M}^{-1}\mathbf{\hat{\Sigma}}_{0}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\,.
\tag{21}\] Expanding this matrix product with \(\mathbf{\hat{\Sigma}}_{0}=\begin{bmatrix}\hat{\sigma}_{1_{0}}\mathbf{I}_{m}& \mathbf{0}\\ \mathbf{0}&\hat{\sigma}_{2_{0}}\mathbf{I}_{m}\end{bmatrix}\), we have: \[\hat{\sigma}_{1_{0}}\,\mathbf{I}_{m}=\mathrm{Re}\left(\mathbf{M}_{11}^{-1} \frac{\mathbf{x}_{1}\,\mathbf{x}_{1}^{H}}{\hat{\sigma}_{1_{0}}}+\mathbf{M}_{1 2}^{-1}\frac{\mathbf{x}_{2}\,\mathbf{x}_{1}^{H}}{\hat{\sigma}_{2_{0}}}\right)\,,\] and using the trace operator: \[\begin{array}{l}m\,\hat{\sigma}_{1_{0}}=\mathrm{tr}\left[\mathrm{Re}\left( \mathbf{M}_{11}^{-1}\frac{\mathbf{x}_{1}\,\mathbf{x}_{1}^{H}}{\hat{\sigma}_{1 _{0}}}+\mathbf{M}_{12}^{-1}\frac{\mathbf{x}_{2}\,\mathbf{x}_{1}^{H}}{\hat{ \sigma}_{2_{0}}}\right)\right]\\ =\frac{\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{\hat{\sigma}_{1 _{0}}}+\mathrm{Re}\left(\frac{\mathbf{x}_{1}^{H}\mathbf{M}_{12}^{-1}\mathbf{ x}_{2}}{\hat{\sigma}_{2_{0}}}\right)\,.\end{array}\] We then have: \[\hat{\sigma}_{1_{0}}=\frac{1}{\hat{\sigma}_{1_{0}}}\frac{\mathbf{x}_{1}^{H} \mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{m}+\frac{1}{\hat{\sigma}_{2_{0}}}\frac{ \mathrm{Re}\left(\mathbf{x}_{1}^{H}\mathbf{M}_{12}^{-1}\mathbf{x}_{2}\right) }{m}, \tag{22}\] and \[\begin{array}{l}\hat{\sigma}_{2_{0}}=\frac{1}{\hat{\sigma}_{1_{0}}}\frac{ \mathrm{Re}\left(\mathbf{x}_{2}^{H}\mathbf{M}_{21}^{-1}\mathbf{x}_{1}\right)} {m}+\frac{1}{\hat{\sigma}_{2_{0}}}\frac{\mathbf{x}_{2}^{H}\mathbf{M}_{22}^{-1 }\mathbf{x}_{2}}{m}.\end{array} \tag{23}\] Denoting \(a_{1}=\frac{\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{m}\), \(a_{12}=\frac{\mathrm{Re}\left(\mathbf{x}_{1}^{H}\mathbf{M}_{12}^{-1}\mathbf{ x}_{2}\right)}{m}\), \(a_{2}=\frac{\mathbf{x}_{2}^{H}\mathbf{M}_{22}^{-1}\mathbf{x}_{2}}{m}\) and using (22) and (23): \[\left\{\begin{array}{l}\hat{\sigma}_{1_{0}}^{2}=a_{1}+\frac{\hat{\sigma}_{1 _{0}}}{\hat{\sigma}_{2_{0}}}a_{12}\\ \hat{\sigma}_{2_{0}}^{2}=\frac{\hat{\sigma}_{2_{0}}}{\hat{\sigma}_{1_{0}}}a_{12} +a_{2}\,,\end{array}\right. \tag{24}\] or: \[\left\{\begin{array}{l}\hat{\sigma}_{1_{0}}^{2}=a_{1}+\frac{\hat{\sigma}_{1 _{0}}}{\hat{\sigma}_{2_{0}}}a_{12}\\ \hat{\sigma}_{1_{0}}^{2}=\frac{\hat{\sigma}_{1_{0}}}{\hat{\sigma}_{2_{0}}}a_{12} +\frac{\hat{\sigma}_{1_{0}}^{2}}{\hat{\sigma}_{2_{0}}^{2}}a_{2}\,.\end{array}\right.\] Fig. 14: Rao detector outputs based on the SCM (left) or Tyler (right) with secondary data corruption. The covariance matrix estimators are based on 256 secondary data. By equalization of right-hand terms: \[a_{1}=\frac{\hat{\sigma}_{1_{0}}^{2}}{\hat{\sigma}_{2_{0}}^{2}}a_{2}\,,\] and keeping the positive solution: \(\frac{\hat{\sigma}_{1_{0}}}{\hat{\sigma}_{2_{0}}}=\sqrt{\frac{a_{1}}{a_{2}}}\), we obtain from (24): \[\left\{\begin{array}{l}\hat{\sigma}_{1_{0}}^{2}=a_{1}+\sqrt{\frac{a_{1}}{a_{2 }}}a_{12}\\ \hat{\sigma}_{2_{0}}^{2}=a_{2}+\sqrt{\frac{a_{2}}{a_{1}}}a_{12}\,,\end{array}\right. \tag{25}\] and \[\hat{\sigma}_{1_{0}}^{2}\,\hat{\sigma}_{2_{0}}^{2}=\left(\sqrt{a_{1}\,a_{2}}+a _{12}\right)^{2}. 
\tag{26}\] ### _Maximum Likelihood Estimator of \(\boldsymbol{\alpha}\) under \(H_{1}\)_ The ML estimates \(\hat{\boldsymbol{\alpha}}=\left[\hat{\alpha}_{1}\,\hat{\alpha}_{2}\right]^{T}\) is found minimizing \(\left(\mathbf{x}-\mathbf{P}\boldsymbol{\alpha}\right)^{H}\mathbf{C}^{-1} \left(\mathbf{x}-\mathbf{P}\boldsymbol{\alpha}\right)\) with respect to \(\boldsymbol{\alpha}\) as ([53] (15.50)): \[\begin{split}\hat{\boldsymbol{\alpha}}&=\left( \mathbf{P}^{H}\mathbf{C}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{C}^{- 1}\mathbf{x}\,,\\ &=\left(\mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\boldsymbol{ \Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\,.\end{split} \tag{27}\] ### _Maximum Likelihood Estimator of \(\boldsymbol{\Sigma}\) under \(H_{1}\)_ The derivative of the log-likelihood function under \(H_{1}\) with respect to \(\boldsymbol{\Sigma}^{-1}\) is: \[\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\hat{\boldsymbol{ \alpha}},\boldsymbol{\Sigma},H_{1})}{\partial\boldsymbol{\Sigma}^{-1}}=2\, \frac{\partial\ln|\boldsymbol{\Sigma}^{-1}|}{\partial\boldsymbol{\Sigma}^{-1}} \\ -\frac{\partial\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{ \alpha}}\right)^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^ {-1}\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)}{\partial \boldsymbol{\Sigma}^{-1}}.\] Furthermore: \[\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)^{H} \boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\left(\mathbf{x }-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{x}-\hat{\boldsymbol{\alpha}}^{H}\mathbf{P}^{H }\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{x}-\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1} \mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{P}\] \[\left(\mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\boldsymbol{\Sigma }^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\,.\] If \(\mathbf{Z}=\begin{bmatrix}\sigma_{1}&0\\ 0&\sigma_{2}\end{bmatrix}\), we then notice that \(\boldsymbol{\Sigma}^{-1}\mathbf{P}=\mathbf{P}\mathbf{Z}^{-1}\) and \(\mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}=\mathbf{Z}^{-1}\mathbf{P}^{H}\). 
Thus: \[\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol {\Sigma}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^ {-1}\boldsymbol{\Sigma}^{-1}\mathbf{P}\right)^{-1}\\ \mathbf{P}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{ \Sigma}^{-1}\mathbf{x}\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \mathbf{P}\mathbf{Z}^{-1}\left(\mathbf{Z}^{-1}\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{P}\mathbf{Z}^{-1}\right)^{-1}\] \[\mathbf{Z}^{-1}\mathbf{P}^{H}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{- 1}\mathbf{x}\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \mathbf{P}\left(\mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{ H}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\,.\] By denoting \(\mathbf{D}^{-1}=\mathbf{M}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1}\): \[\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)^{H} \boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1}\left(\mathbf{x }-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)\,,\] \[=\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1} \boldsymbol{\Sigma}^{-1}\mathbf{x}-\mathbf{x}^{H}\boldsymbol{\Sigma}^{-1} \mathbf{D}^{-1}\boldsymbol{\Sigma}^{-1}\mathbf{x}\,.\] So: \[\frac{\partial\left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha} }\right)^{H}\boldsymbol{\Sigma}^{-1}\mathbf{M}^{-1}\boldsymbol{\Sigma}^{-1} \left(\mathbf{x}-\mathbf{P}\hat{\boldsymbol{\alpha}}\right)}{\partial \boldsymbol{\Sigma}^{-1}}\] \[=2\mathrm{Re}\left[\left(\mathbf{M}^{-1}-\mathbf{D}^{-1} \right)\boldsymbol{\Sigma}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right]\,.\] Finally: \[\frac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\hat{\boldsymbol{ \alpha}},\boldsymbol{\Sigma},H_{1})}{\partial\boldsymbol{\Sigma}^{-1}}=2 \boldsymbol{\Sigma}-2\mathrm{Re}\left[\left(\mathbf{M}^{-1}-\mathbf{D}^{-1} \right)\boldsymbol{\Sigma}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right]\,.\] The minimum is given by: \[\hat{\boldsymbol{\Sigma}}_{1}=\mathrm{Re}\left[\left(\mathbf{M}^{-1}-\mathbf{D} ^{-1}\right)\hat{\boldsymbol{\Sigma}}_{1}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right]\,. \tag{28}\] The rest is identical to paragraph A-A. ### _Expression of the GLRT_ As \(\mathbf{x}^{H}\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{M}^{-1}\hat{\boldsymbol{ \Sigma}}_{0}^{-1}\mathbf{x}\) is a real positive scalar, we have: \[\mathbf{x}^{H}\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{M}^{-1} \hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{x} =\mathrm{Re}\left[\mathrm{tr}\left(\hat{\boldsymbol{\Sigma}}_{0}^{-1} \mathbf{M}^{-1}\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{x}\,\mathbf{x}^{H} \right)\right]\,,\] \[=\mathrm{tr}\left[\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathrm{ Re}\left(\mathbf{M}^{-1}\hat{\boldsymbol{\Sigma}}_{0}^{-1}\mathbf{x}\,\mathbf{x}^{H} \right)\right]\,,\] \[=mL\,.\] So the two PDF take the form: \[p_{\mathbf{x}}(\mathbf{x};\hat{\boldsymbol{\Sigma}}_{0},H_{0}) = \frac{1}{\pi^{mL}|\hat{\boldsymbol{\Sigma}}_{0}|^{2}|\mathbf{M}|} \mathrm{exp}\left(-mL\right)\,,\] \[p_{\mathbf{x}}(\mathbf{x};\hat{\boldsymbol{\alpha}},\hat{ \boldsymbol{\Sigma}}_{1},H_{1}) = \frac{1}{\pi^{mL}|\hat{\boldsymbol{\Sigma}}_{1}|^{2}|\mathbf{M}|} \mathrm{exp}\left(-mL\right)\,.\] The GLRT is expressed as: \[L_{G}(\mathbf{x})=\frac{\frac{1}{|\hat{\boldsymbol{\Sigma}}_{1}|^{2}}}{|\hat{ \boldsymbol{\Sigma}}_{0}|^{2}}=\frac{|\hat{\boldsymbol{\Sigma}}_{0}|^{2}}{|\hat{ \boldsymbol{\Sigma}}_{1}|^{2}} \tag{29}\] which can be thought of as a generalized variance ratio. 
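As a numerical illustration of the closed-form \(H_{0}\) estimates (22)-(26), a short sketch is given below; as in the block expansion above, \(\mathbf{M}_{ij}^{-1}\) is read as the \((i,j)\) block of \(\mathbf{M}^{-1}\), and the variable names are illustrative.

```python
import numpy as np

def h0_scale_estimates(x1, x2, Minv11, Minv12, Minv22, m):
    """Closed-form H0 estimates of sigma_{1,0}^2 and sigma_{2,0}^2 from (22)-(26);
    Minv_ij denotes the (i, j) block of M^{-1}."""
    a1  = np.real(x1.conj() @ Minv11 @ x1) / m
    a2  = np.real(x2.conj() @ Minv22 @ x2) / m
    a12 = np.real(x1.conj() @ Minv12 @ x2) / m
    s1_sq = a1 + np.sqrt(a1 / a2) * a12          # eq. (25)
    s2_sq = a2 + np.sqrt(a2 / a1) * a12
    # consistency check of eq. (26): the product equals (sqrt(a1*a2) + a12)^2
    assert np.isclose(s1_sq * s2_sq, (np.sqrt(a1 * a2) + a12) ** 2)
    return s1_sq, s2_sq
```

The analogous quantities computed under \(H_{1}\) then enter the variance-ratio form (29) of the GLRT.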
As \(\ Replacing into the likelihood \(L_{G}(\mathbf{x})=\dfrac{|\hat{\mathbf{\Sigma}}_{0}|^{2}}{|\hat{\mathbf{\Sigma}}_{1}|^{2}}= \dfrac{\hat{\sigma}_{1_{0}}^{2m}}{\hat{\sigma}_{1_{1}}^{2m}}\) leads to: \[L_{G}(\mathbf{x})^{1/m} =\dfrac{\hat{\sigma}_{1_{0}}^{2}}{\hat{\sigma}_{1_{1}}^{2}}=\dfrac{ \dfrac{\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}}{m}}{\dfrac{1}{m} \Bigg{(}\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}-\dfrac{|\mathbf{ p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}|^{2}}{\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{- 1}\mathbf{p}_{1}}\Bigg{)}}\] \[=\Bigg{(}1-\dfrac{|\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{ x}_{1}|^{2}}{\left(\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{p}_{1}\right) \left(\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}\right)}\Bigg{)}^{- 1}\,.\] Defining \(l_{G}(\mathbf{x})=\dfrac{L_{G}(\mathbf{x})^{1/m}-1}{L_{G}(\mathbf{x})^{1/m}}= 1-L_{G}(\mathbf{x})^{-1/m}\), we obtain the well-known NMF detector [11]: \[l_{G}(\mathbf{x})=\dfrac{|\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{ 1}|^{2}}{\left(\mathbf{p}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{p}_{1}\right) \left(\mathbf{x}_{1}^{H}\mathbf{M}_{11}^{-1}\mathbf{x}_{1}\right)}. \tag{30}\] ### _Uncorrelated arrays case_ When the two arrays are fully uncorrelated, we have \(\hat{\sigma}_{i_{0}}^{2}=\dfrac{\mathbf{x}_{i}^{H}\mathbf{M}_{ii}^{-1} \mathbf{x}_{i}}{m}\) and \(\hat{\sigma}_{i_{1}}^{2}=\dfrac{1}{m}\Bigg{(}\mathbf{x}_{i}^{H}\mathbf{M}_{ii }^{-1}\mathbf{x}_{i}-\dfrac{|\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{x} _{i}|^{2}}{\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{p}_{i}}\Bigg{)}\). We obtain: \[L_{G}(\mathbf{x}) =\dfrac{|\hat{\mathbf{\Sigma}}_{0}|^{2}}{|\hat{\mathbf{\Sigma}}_{1}|^{2} }=\dfrac{\prod_{i=1}^{2}\hat{\sigma}_{i_{0}}^{2m}}{\prod_{i=1}^{ 2}\hat{\sigma}_{i_{1}}^{2m}}=\prod_{i=1}^{2}\left[\hat{\sigma}_{i_{0}}^{2} \right]^{m}\,, \tag{31}\] \[=\prod_{i=1}^{2}\left[1-\dfrac{|\mathbf{p}_{i}^{H}\mathbf{M}_{ii }^{-1}\mathbf{x}_{i}|^{2}}{\left(\mathbf{p}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{ p}_{i}\right)\left(\mathbf{x}_{i}^{H}\mathbf{M}_{ii}^{-1}\mathbf{x}_{i}\right)} \right]^{-m}\,. \tag{32}\] This corresponds to the MIMO ANMF detector on independent arrays presented in [40]. ### \(\mathbf{\Sigma}=\sigma\,\mathbf{I}_{2m}\) _case_ When \(\mathbf{\Sigma}\) is the identity matrix up to a scalar factor, \(\sigma_{1}=\sigma_{2}\), whose estimators are renamed \(\hat{\sigma}_{0}\) under \(H_{0}\) and \(\hat{\sigma}_{1}\) under \(H_{1}\). 
\[L_{G}(\mathbf{x}) =\dfrac{|\hat{\mathbf{\Sigma}}_{0}|^{2}}{|\hat{\mathbf{\Sigma}}_{1}|^{2}}, \text{with }\hat{\mathbf{\Sigma}}_{0}=\hat{\sigma}_{0}\mathbf{I}_{mL}\text{ and }\hat{\mathbf{\Sigma}}_{1}=\hat{\sigma}_{1}\mathbf{I}_{mL}\] \[=\dfrac{\hat{\sigma}_{0}^{2mL}}{\hat{\sigma}_{1}^{2mL}}\,.\] From \(\mathbf{\hat{\Sigma}}_{0}=\operatorname{Re}\left(\mathbf{M}^{-1}\mathbf{\hat{\Sigma}} _{0}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\), we have the following relations: \[\operatorname{tr}\left(\mathbf{\hat{\Sigma}}_{0}\right) =\operatorname{tr}\left[\operatorname{Re}\left(\mathbf{M}^{-1} \mathbf{\hat{\Sigma}}_{0}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\right]\,,\] \[\hat{\sigma}_{0}\operatorname{tr}\left(\mathbf{I}_{mL}\right) =\dfrac{1}{\hat{\sigma}_{0}}\!\operatorname{Re}\left[\operatorname{ tr}\left(\mathbf{M}^{-1}\mathbf{x}\,\mathbf{x}^{H}\right)\right]\,,\] \[\hat{\sigma}_{0}^{2} =\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}}{mL},\text{as }\mathbf{M}^{-1}\text{ is positive definite}\,.\] Identically, we have: \[\hat{\sigma}_{1}^{2} =\dfrac{\left(\mathbf{x}-\mathbf{P}\mathbf{\alpha}\right)^{H}\mathbf{ M}^{-1}\left(\mathbf{x}-\mathbf{P}\mathbf{\alpha}\right)}{mL},\] \[=\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}-\mathbf{x}^{H} \mathbf{M}^{-1}\mathbf{P}\left(\mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^ {-1}\mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{x}}{mL}.\] \[L_{G}(\mathbf{x})^{1/mL}=\dfrac{\hat{\sigma}_{0}^{2}}{\hat{ \sigma}_{1}^{2}},\] \[=\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}}{\mathbf{x}^{H} \mathbf{M}^{-1}\mathbf{x}-\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{P}\left( \mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{x}}\] \[=\left(1-\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{P}\left( \mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{x}}{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}}\right)^{-1}\,.\] By defining \(l_{G}(\mathbf{x})=\dfrac{L_{G}(\mathbf{x})^{1/mL}-1}{L_{G}(\mathbf{x})^{1/mL}}\) or \(L_{G}(\mathbf{x})^{1/mL}=[1-l_{G}(\mathbf{x})]^{-1}\), we obtain an equivalent test: \[l_{G}(\mathbf{x})=\dfrac{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{P}\left( \mathbf{P}^{H}\mathbf{M}^{-1}\mathbf{P}\right)^{-1}\mathbf{P}^{H}\mathbf{M}^{-1} \mathbf{x}}{\mathbf{x}^{H}\mathbf{M}^{-1}\mathbf{x}}, \tag{33}\] which corresponds to the subspace version of the ACE test presented in [54]. 
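The two special-case statistics derived above translate directly into code; a minimal sketch follows, where `Minv` stands for the relevant inverse covariance (or inverse-covariance block) appearing in (30) and (33).

```python
import numpy as np

def nmf(x, p, Minv):
    """Single-steering-vector NMF statistic of eq. (30)."""
    num = np.abs(p.conj() @ Minv @ x) ** 2
    den = np.real(p.conj() @ Minv @ p) * np.real(x.conj() @ Minv @ x)
    return num / den

def subspace_ace(x, P, Minv):
    """Subspace ACE statistic of eq. (33); the columns of P span the signal subspace."""
    y = P.conj().T @ Minv @ x                      # P^H M^-1 x
    G = P.conj().T @ Minv @ P                      # P^H M^-1 P
    num = np.real(y.conj() @ np.linalg.solve(G, y))
    return num / np.real(x.conj() @ Minv @ x)
```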
## Appendix C Rao's detector derivation The partial derivative of the log-likelihood function is defined as: \[\dfrac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{ \partial\mathbf{\xi}_{R}}=\begin{bmatrix}\dfrac{\partial\ln p_{\mathbf{x}}( \mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{\partial\mathrm{Re}\left(\mathbf{\alpha} \right)}\\ \dfrac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{ \partial\mathrm{Im}\left(\mathbf{\alpha}\right)}\end{bmatrix}.\] From [53] (15.60), we obtain: \[\dfrac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{ \partial\mathbf{\xi}_{R}}=\begin{bmatrix}2\operatorname{Re}\left[\mathbf{P}^{H} \mathbf{C}^{-1}(\mathbf{\xi}_{S})\left(\mathbf{x}-\mathbf{P}\mathbf{\alpha}\right) \right]\\ 2\operatorname{Im}\left[\mathbf{P}^{H}\mathbf{C}^{-1}(\mathbf{\xi}_{S})\left( \mathbf{x}-\mathbf{P}\mathbf{\alpha}\right)\right]\end{bmatrix}\,.\] Thus: \[\dfrac{\partial\ln p_{\mathbf{x}}(\mathbf{x};\mathbf{\xi}_{R},\mathbf{\xi}_{S})}{ \partial\mathbf{\xi}_{R}}\Bigg{|}\mathbf{\xi}_{R}=\mathbf{0}=\begin{bmatrix}2\ Finally, by replacing (34) and (35) into (12) leads to: \[L_{R}(\mathbf{x})=\\ \left[2\operatorname{Re}\left(\mathbf{x}^{H}\mathbf{C}^{-1}(\hat{ \boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right)-2\operatorname{Im}\left(\mathbf{x}^{ H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right)\right]\\ \left[\frac{\left(\mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol {\xi}}_{S_{0}})\mathbf{P}\right)^{-1}}{2}\mathbf{0}\right]\\ \mathbf{0}\] \[\left[\begin{matrix}2\operatorname{Re}\left(\mathbf{P}^{H} \mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{x}\right)\\ 2\operatorname{Im}\left(\mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_ {S_{0}})\mathbf{x}\right)\end{matrix}\right],\] which simplifies to: \[L_{R}(\mathbf{x})=2\,\left[\operatorname{Re}\left(\mathbf{x}^{ H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right)\left( \mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right) ^{-1}\right.\\ \left.\operatorname{Re}\left(\mathbf{P}^{H}\mathbf{C}^{-1}( \hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{x}\right)-\operatorname{Im}\left( \mathbf{x}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\right) \right.\\ \left.\left(\mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_ {S_{0}})\mathbf{P}\right)^{-1}\operatorname{Im}\left(\mathbf{P}^{H}\mathbf{C}^ {-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{x}\right)\right].\] Knowing that \(\mathbf{P}^{H}\mathbf{C}^{-1}(\hat{\boldsymbol{\xi}}_{S_{0}})\mathbf{P}\) is real and positive definite, it can be factorized and incorporated into the real and imaginary parts. After some algebraic manipulation, we obtain (14). 
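A direct NumPy transcription of the simplified Rao statistic displayed above is sketched below (the further manipulation leading to (14) is not repeated here); the plug-in covariance \(\mathbf{C}(\hat{\boldsymbol{\xi}}_{S_{0}})\) is assumed to be available.

```python
import numpy as np

def rao_statistic(x, P, C):
    """Rao statistic from the simplified expression above, with C = C(xi_hat_S0)."""
    Cinv = np.linalg.inv(C)
    y = P.conj().T @ Cinv @ x                  # P^H C^-1 x, so that x^H C^-1 P = y^H
    G = np.real(P.conj().T @ Cinv @ P)         # real and positive definite, as noted in the text
    Ginv = np.linalg.inv(G)
    re, im = np.real(y), np.imag(y)
    # Re(x^H C^-1 P) = Re(y)^T and Im(x^H C^-1 P) = -Im(y)^T, so the two quadratic
    # forms of the displayed expression add up with a plus sign.
    return 2.0 * (re @ Ginv @ re + im @ Ginv @ im)
```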
## Appendix D Maximum likelihood estimator of the covariance matrix in Compound-Gaussian clutter The likelihood of \(\left\{\mathbf{x}_{k}\right\}_{k\in[1,K]}\) under \(H_{0}\) can be rewritten as: \[p_{\mathbf{x}}\left(\left\{\mathbf{x}_{k}\right\}_{k\in[1,K]}; \mathbf{M},\mathbf{T}_{k},H_{0}\right)\\ =\prod_{k=1}^{K}\frac{1}{\pi^{2m}|\widetilde{\mathbf{C}}|} \exp\left(-\mathbf{x}_{k}^{H}\widetilde{\mathbf{C}}^{-1}\mathbf{x}_{k}\right) \,,\\ =\frac{1}{\pi^{2m\,K}|\mathbf{M}|^{K}}\prod_{k=1}^{K}\frac{1}{| \mathbf{T}_{k}|^{2}}\exp\left(-\mathbf{x}_{k}^{H}\mathbf{T}_{k}^{-1}\mathbf{M} ^{-1}\mathbf{T}_{k}^{-1}\mathbf{x}_{k}\right)\,,\] where \(\widetilde{\mathbf{C}}=\mathbf{T}_{k}\mathbf{M}\mathbf{T}_{k}\), and \(\mathbf{T}_{k}=\begin{bmatrix}\sqrt{\tau_{1_{k}}}&0\\ 0&\sqrt{\tau_{2_{k}}}\end{bmatrix}\otimes\mathbf{I}_{m}\). The log-likelihood can be written as: \[\ln p_{\mathbf{x}}\left(\left\{\mathbf{x}_{k}\right\}_{k\in[1,K] };\mathbf{M},\mathbf{T}_{k},H_{0}\right)=-2m\,K\ln\pi-K\,\ln|\mathbf{M}|\\ +2\sum_{k=1}^{K}\ln|\mathbf{T}_{k}^{-1}|-\sum_{k=1}^{K}\mathbf{x}_{ k}^{H}\mathbf{T}_{k}^{-1}\mathbf{M}^{-1}\mathbf{T}_{k}^{-1}\mathbf{x}_{k}\,. \tag{36}\] According to [52] (82), the derivative with respect to \(\mathbf{T}_{k}^{-1}\) is: \[\frac{\partial\ln p_{\mathbf{x}}(\left\{\mathbf{x}_{k}\right\}_ {k\in[1,K]};\mathbf{M},\mathbf{T}_{k},H_{0})}{\partial\mathbf{T}_{k}^{-1}}=2\, \mathbf{T}_{k}\\ -2\operatorname{Re}\left(\mathbf{M}^{-1}\mathbf{T}_{k}^{-1} \mathbf{x}_{k}\,\mathbf{x}_{k}^{H}\right)\,. \tag{37}\] Following the same approach as in Appendix A, we obtain the minimum for \(\mathbf{T}_{k}\) for a fixed \(\mathbf{M}\): \[\widehat{\mathbf{T}}_{k}=\operatorname{Re}\left(\mathbf{M}^{-1}\widehat{ \mathbf{T}}_{k}^{-1}\mathbf{x}_{k}\,\mathbf{x}_{k}^{H}\right)\,, \tag{38}\] where \[\widehat{\mathbf{T}}_{k}=\begin{bmatrix}\sqrt{\hat{\tau}_{1_{k}}}&0\\ 0&\sqrt{\hat{\tau}_{2_{k}}}\end{bmatrix}\otimes\mathbf{I}_{m}\,, \tag{39}\] and \[\hat{\tau}_{1_{k}} =\,t_{1}+\sqrt{\frac{t_{1}}{t_{2}}}t_{12}\,, \tag{40}\] \[\hat{\tau}_{2_{k}} =\,t_{2}+\sqrt{\frac{t_{2}}{t_{1}}}t_{12}\,, \tag{41}\] with \[t_{1} =\,\frac{1}{m}\mathbf{x}_{1,k}^{H}\mathbf{M}_{11}^{-1}\mathbf{x} _{1,k}\,, \tag{42}\] \[t_{2} =\,\frac{1}{m}\mathbf{x}_{2,k}^{H}\mathbf{M}_{22}^{-1}\mathbf{x}_ {2,k}\,,\] (43) \[t_{12} =\,\frac{1}{m}\operatorname{Re}\left(\mathbf{x}_{1,k}^{H}\mathbf{ M}_{12}^{-1}\mathbf{x}_{2,k}\right)\,. \tag{44}\] Replacing \(\mathbf{T}_{k}\) by \(\widehat{\mathbf{T}}_{k}\) in (36) and deriving with respect to \(\mathbf{M}^{-1}\) lead to: \[\frac{\partial\ln p_{\mathbf{x}}(\left\{\mathbf{x}_{k}\right\}_ {k\in[1,K]};\mathbf{M},\widehat{\mathbf{T}}_{k},H_{0})}{\partial\mathbf{M}^{-1}} =K\,\mathbf{M}\\ -\sum_{k=1}^{K}\left(\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x}_{k} \right)\left(\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x}_{k}\right)^{H}\,, \tag{45}\] and the minimum in \(\mathbf{M}\) is given by: \[\widehat{\mathbf{M}} =\frac{1}{K}\!\sum_{k=1}^{K}\left(\widehat{\mathbf{T}}_{k}^{-1} \mathbf{x}_{k}\right)\left(\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x}_{k}\right)^{H}\,, \tag{46}\] \[=\frac{1}{K}\!\sum_{k=1}^{K}\widehat{\mathbf{T}}_{k}^{-1}\mathbf{x }_{k}\mathbf{x}_{k}^{H}\widehat{\mathbf{T}}_{k}^{-1}\,. \tag{47}\] The estimator (47) is independent of the textures. This could be shown by substituting \(\mathbf{x}_{k}=\begin{bmatrix}\mathbf{x}_{1,k}\\ \mathbf{x}_{2,k}\end{bmatrix}\) by \(\begin{bmatrix}\sqrt{\tau_{1_{k}}}\,\mathbf{c}_{1,k}\\ \sqrt{\tau_{2_{k}}}\,\mathbf{c}_{2,k}\end{bmatrix}\) in (39) and (47).
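A minimal sketch of one way to solve the coupled fixed-point equations (38)-(47), alternating between the texture estimates (40)-(44) and the speckle update (47); the initialization, iteration count, trace normalization, and absence of numerical safeguards are illustrative choices, and the estimator (16) used in the experiments may be implemented differently.

```python
import numpy as np

def textures(x1, x2, Minv11, Minv12, Minv22, m):
    """Per-snapshot texture estimates (40)-(44); Minv_ij is the (i, j) block of M^{-1}."""
    t1  = np.real(x1.conj() @ Minv11 @ x1) / m
    t2  = np.real(x2.conj() @ Minv22 @ x2) / m
    t12 = np.real(x1.conj() @ Minv12 @ x2) / m
    return t1 + np.sqrt(t1 / t2) * t12, t2 + np.sqrt(t2 / t1) * t12

def dual_array_speckle(X, m, n_iter=30):
    """Alternating estimation of M via (47); X is 2m x K with array 1 stacked over array 2."""
    K = X.shape[1]
    M = np.eye(2 * m, dtype=complex)
    for _ in range(n_iter):
        Minv = np.linalg.inv(M)
        Minv11, Minv12, Minv22 = Minv[:m, :m], Minv[:m, m:], Minv[m:, m:]
        M_new = np.zeros_like(M)
        for k in range(K):
            x1, x2 = X[:m, k], X[m:, k]
            tau1, tau2 = textures(x1, x2, Minv11, Minv12, Minv22, m)
            d = np.r_[np.full(m, np.sqrt(tau1)), np.full(m, np.sqrt(tau2))]
            z = X[:, k] / d                                  # T_k^{-1} x_k
            M_new += np.outer(z, z.conj()) / K               # eq. (47)
        M = 2 * m * M_new / np.real(np.trace(M_new))         # fix the scale ambiguity
    return M
```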
2309.10095
A Semi-Supervised Approach for Power System Event Identification
Event identification is increasingly recognized as crucial for enhancing the reliability, security, and stability of the electric power system. With the growing deployment of Phasor Measurement Units (PMUs) and advancements in data science, there are promising opportunities to explore data-driven event identification via machine learning classification techniques. However, obtaining accurately-labeled eventful PMU data samples remains challenging due to its labor-intensive nature and uncertainty about the event type (class) in real-time. Thus, it is natural to use semi-supervised learning techniques, which make use of both labeled and unlabeled samples. We propose a novel semi-supervised framework to assess the effectiveness of incorporating unlabeled eventful samples to enhance existing event identification methodologies. We evaluate three categories of classical semi-supervised approaches: (i) self-training, (ii) transductive support vector machines (TSVM), and (iii) graph-based label spreading (LS) method. Our approach characterizes events using physically interpretable features extracted from modal analysis of synthetic eventful PMU data. In particular, we focus on the identification of four event classes whose identification is crucial for grid operations. We have developed and publicly shared a comprehensive Event Identification package which consists of three aspects: data generation, feature extraction, and event identification with limited labels using semi-supervised methodologies. Using this package, we generate and evaluate eventful PMU data for the South Carolina synthetic network. Our evaluation consistently demonstrates that graph-based LS outperforms the other two semi-supervised methods that we consider, and can noticeably improve event identification performance relative to the setting with only a small number of labeled samples.
Nima Taghipourbazargani, Lalitha Sankar, Oliver Kosut
2023-09-18T19:07:41Z
http://arxiv.org/abs/2309.10095v2
# A Semi-Supervised Approach for Power System Event Identification ###### Abstract Event identification is increasingly recognized as crucial for enhancing the reliability, security, and stability of the electric power system. With the growing deployment of Phasor Measurement Units (PMUs) and advancements in data science, there are promising opportunities to explore data-driven event identification via machine learning classification techniques. However, obtaining accurately-labeled eventful PMU data samples remains challenging due to its labor-intensive nature and uncertainty about the event type (class) in real-time. Thus, it is natural to use semi-supervised learning techniques, which make use of both labeled and unlabeled samples. We evaluate three categories of classical semi-supervised approaches: (i) self-training, (ii) transductive support vector machines (TSVM), and (iii) graph-based label spreading (LS) method. Our approach characterizes events using physically interpretable features extracted from modal analysis of synthetic eventful PMU data. In particular, we focus on the identification of four event classes (load loss, generation loss, line trip, and bus fault) whose identification is crucial for grid operations. We have developed and publicly shared a comprehensive Event Identification package which consists of three aspects: data generation, feature extraction, and event identification with limited labels using semi-supervised methodologies. Using this package, we generate and evaluate eventful PMU data for the South Carolina 500-Bus synthetic network. Our evaluation consistently demonstrates that graph-based LS outperforms the other two semi-supervised methods that we consider, and can noticeably improve event identification performance relative to the setting with only a small number of labeled samples. Event identification, machine learning, Semi-supervised learning, phasor measurement units, mode decomposition. ## I Introduction Power systems are inherently complex dynamical systems, primarily due to the involvement of diverse components such as generators, buses, lines, and loads with varying sizes, all exhibiting non-linear behavior and intricate interactions. Given their extensive geographical coverage and scale, power systems are susceptible to various classes of events (for example, generation loss, load loss, line trips). Event identification methods can be used in real-time operations to guide control actions, as well as for off-line analysis of past events to make the system more reliable, secure, and stable in the future. Numerous studies have explored event detection in power systems, focusing on determining whether an event has occurred [1, 2, 3]. However, event identification, which involves discerning the specific class of event that occurred, presents even greater challenges since it requires learning the unique signatures of different events. Our primary focus here is on addressing the event identification problem in power systems. To this end, our analysis in the sequel assumes that an event has been detected with knowledge of its precise time. The increasing deployment of Phasor Measurement Units (PMUs) throughout the grid, coupled with advancements in machine learning techniques, presents invaluable opportunities for exploring advanced data-driven event identification methods. 
These methods offer the distinct advantage of differentiating between various classes of power system events based on high-dimensional spatio-temporally correlated time-synchronized phasor measurements with high resolution, without heavily relying on dynamic modeling of power system components. The majority of the recent literature in the field of data-driven event identification (e.g., [4, 5, 6, 7, 8, 9, 10, 11, 12]) employs machine learning and pattern recognition techniques for making statistical inferences or decisions using system measurements. A significant portion of these studies predominantly falls within the supervised learning paradigm, which requires accurate labeled data. However, acquiring expert knowledge for labeling various classes of events can be expensive and laborious. Consequently, the manual labeling of events by operators constitutes only about 2% of the events [13]. Such limitations motivate researchers to leverage simulation-based synthetic eventful PMU data for investigating and evaluating the performance of their proposed event identification methods (e.g., [8, 14, 15]). Despite the availability of several resources providing access to synthetic test cases with transmission and/or distribution network models for dynamic simulations [16, 17], conducting a fair comparison of diverse event identification methods might pose a significant challenge. This challenge primarily stems from the numerous parameters associated with dynamic models of system components and simulation settings, coupled with the diverse characteristics of simulated events, such as class, duration, and location, among others. While certain recent publications may have access to significant real and/or synthetic PMU data (e.g., [18, 19]), the lack of publicly available properly labeled eventful PMU data continues to be a persistent concern for numerous researchers. Unsupervised and semi-supervised learning are common practices in machine learning when dealing with no or limited labeled data. Unsupervised learning aims to infer the under lying structure within the unlabeled data. Although they can distinguish between clusters of events [20, 21, 22, 23, 24], they do not possess the ground truth to associate each cluster with its real-world meaning. Furthermore, when there is access to even a small amount of labeled data, supervised learning has been shown to perform better than unsupervised learning methods [2, 24]. Semi-supervised learning approaches, on the other hand, aim to label unlabeled data points using knowledge learned from a small number of labeled data points which can significantly enhance the performance of a classification task [25]. Reference [18] presents a framework for event detection, localization, and classification in power grids based on semi-supervised learning. A pseudo labeling (PL) technique is adopted to classify events using the convolutional neural network (CNN) backbone with cross-entropy loss. A semi-supervised event identification framework is proposed in [26] which utilizes a hybrid machine learning-based method to reduce biases of different classifiers. In [27], the authors explore the application of deep learning techniques and PMU data to develop real-time event identification models for transmission networks. This is achieved by leveraging information from a large pool of unlabeled events, while also taking into account the class distribution mismatch problem. 
In [28, 29], the authors proposed hidden structure semi-supervised machine (HS3M), a novel data-driven event identification method that combines unlabeled and partially labeled data to address limitations in supervised, semi-supervised, and hidden structure learning. The approach introduces a parametric dual optimization procedure to enhance the learning objective and improve event identification accuracy. The learning problem involves optimizing a non-smooth function that may be convex or concave. The existing literature on neural network-based event identification methods is marked by certain limitations and challenges. These encompass the scarcity of suitable historical labeled event data, restricted interpretability in feature extraction, elevated computational intricacy, and the necessity for meticulous parameter calibration. Additionally, pseudo labeling approaches such as [28] confront uncertainty regarding the attainability of global optimality. Moreover, it is worth noting that, to the best of the authors' knowledge, a thorough investigation into the ramifications arising from the initial distribution of labeled and unlabeled samples has not been undertaken. Building upon the promising results of our previous work on event identification in the supervised setting [30], this paper introduces a semi-supervised event identification framework. The aim of this study is to explore the potential benefits of incorporating unlabeled samples in enhancing the performance of the event identification task. To this end, we thoroughly investigate and compare the performance of various semi-supervised algorithms, including: (i) self-training with different base classifiers (i.e., support vector machine with linear kernel (SVML) as well as with radial basis function kernel (SVMR), gradient boosting (GB), decision trees (DT), and k-Nearest Neighbors (kNN)), (ii) transductive support vector machines (TSVM), and (iii) graph-based label spreading (LS) to explore their effectiveness. We chose these classical semi-supervised models for two primary reasons: firstly, the wide array of proposed semi-supervised classification algorithms in the past two decades (see, [31], and references therein) necessitates a comprehensive understanding of which models are most suitable and efficient for event identification; and secondly, they provide a clearer illustration and intuition of the impact of incorporating unlabeled samples compared to more advanced methods. Although there may not be a one-size-fits-all solution, each method has its own advantages and disadvantages, and it is important to evaluate their suitability. Notably, our experiments consistently illustrate the superior performance of the graph-based LS method compared to other approaches. Even in worst-case scenarios where the initial distribution of labeled and unlabeled samples does not necessarily reflect the true distribution of event classes, the graph-based LS method stands out in robustly and significantly enhancing event identification performance. Our key contributions are as follows: * Introduction of a semi-supervised event identification framework that leverages physically interpretable features derived from modal analysis of PMU data. * Thorough exploration of the influence of the initial distribution of labeled and unlabeled samples, along with the quantity of unlabeled samples, on the efficacy of diverse semi-supervised event identification techniques. 
* Development of an all-inclusive Event Identification package comprising an event generation module based on the power system simulator for engineering (PSS®E) Python application programming interface (API), a feature extraction module utilizing methodologies from our previous research [30], and a semi-supervised classification module.
The remainder of the paper is organized as follows. Section II describes the simulation process to generate the synthetic eventful PMU data. We explain the proposed semi-supervised event identification framework in Section III. In Section IV, we further elaborate on the pseudo labeling process of the unlabeled samples, and the classification models. We discuss the simulation results in Section V. Finally, Section VI concludes the paper.
## II Generation of the Synthetic Eventful Time-series PMU Data
Consider an electric grid composed of a set of loads, generators, lines, and buses. We investigate four distinct event classes denoted as \(\mathcal{E}\in\{\text{LL},\,\text{GL},\,\text{LT},\,\text{BF}\}\), representing load loss, generation loss, line trip, and bus fault events, respectively. Each PMU provides multiple measurement channels relative to its installation bus. In this study, we focus on voltage magnitude (\(V_{m}\)), corresponding angle (\(V_{a}\)), and frequency (\(F\)) channels for clarity, with potential inclusion of other channels. For any channel \(c\in\mathcal{C}=\{V_{m},V_{a},F\}\), let \(y_{i}^{c}(n)\in\mathbb{R}\) represent the \(n^{\text{th}}\) measurement, \(n=0,\ldots,N-1\), from the \(i^{\text{th}}\) PMU, where \(N\) is the total number of samples. Assuming a PMU sampling period of \(T_{s}\), we thus collect eventful data for \(t_{s}=NT_{s}\) seconds. These measurements, for the \(c^{\text{th}}\) channel, are collated from \(m\) PMUs to form a matrix \(\mathbf{y}^{c}=[\cdots,\mathbf{y}_{i}^{c},\cdots]^{T}\in\mathbb{R}^{m\times N}\) where \(\mathbf{y}_{i}^{c}\) is an \(N\)-length (column) vector for the \(i^{\text{th}}\) PMU with entries \(y_{i}^{c}(n)\), for all \(n\). We use superscript \(T\) to denote the transpose operator. Finally, for each event, we define \(\mathcal{M}=[[\mathbf{y}^{V_{m}}]^{T},[\mathbf{y}^{V_{a}}]^{T},[\mathbf{y}^{F}]^{T}]^{T}\in\mathbb{R}^{|\mathcal{C}|m\times N}\) by aggregating all the phasor measurements from \(m\) PMUs, \(3\) channels, and \(N\) samples. Within this setting, we develop a publicly available Python code which leverages the PSS®E software Python Application Programming Interface (API) to generate synthetic eventful PMU data. To ensure a realistic and diverse dataset, we consider the following two steps: Firstly, we linearly adjust all loads within a range of 95% to 105% of their normal loading conditions. Secondly, we add zero-mean random fluctuations, ranging from \(\pm 2\%\) of the adjusted loads, to simulate unpredictable variations observed in real-world power systems.\({}^{1}\) To generate eventful data, for each system component and loading condition considered, we employ the following systematic approach: (i) We begin by applying a new initial loading condition to each load in the system; a power flow analysis for this setting then gives us the initial state conditions for the next step. (ii) We use this initial condition to initiate a \(t_{f}\)-second flat run dynamic simulation. (iii) At the \(t_{f}\) second, we introduce a disturbance (i.e., LL, GL, and LT) to a selected component. For BF events, we clear the disturbance after \(t_{\text{clr}}\) seconds.
(iv) Finally, we model the event simulation for an additional \(t_{s}\) seconds, which then allows us to create the data matrix \(\mathcal{M}\) representing the PMU measurements associated with the simulated event. We repeat this procedure to generate a desired number of events for each event type.
Footnote 1: The load change intervals specified in this paper can be adjusted depending on the stability of the system under study, ensuring that the system can return to an acceptable state of equilibrium following a disturbance.
### _Generating Event Features Using Modal Analysis_
The first step in identifying a system event is to extract a set of delineating features that are likely to contain information regarding the event class. Using the fact that temporal effects in a power system are driven by the interacting dynamics of system components, we use mode decomposition to extract features. More specifically, we assume that each PMU data stream after an event consists of a superposition of a small number of dominant dynamic modes. The resulting features then include the frequency and damping ratio of these modes, as well as the residual coefficients indicating the quantity of each mode present. We briefly summarize the mathematical model and refer readers to our recent work [30] for additional details. We assume that \(y_{i}^{c}(n)\) after an event consists of a superposition of \(p\) common damped sinusoidal modes as \[y_{i}^{c}(n)=\sum_{k=1}^{p}R_{k,i}^{c}\times(Z_{k}^{c})^{n}+e_{i}^{c}(n),\quad i\in\{1,\cdots,m\},\quad c\in\mathcal{C} \tag{1}\] where for any given channel \(c\in\mathcal{C}\), \(e_{i}^{c}(n)\) represents the noise in the \(i^{\text{th}}\) PMU measurement and \(Z_{k}^{c}\) is the \(k^{\text{th}}\) mode associated with the event. We represent each mode as \(Z_{k}^{c}=\exp(\lambda_{k}^{c}T_{s})\) where \(\lambda_{k}^{c}=\sigma_{k}^{c}\pm j\omega_{k}^{c}\), and \(\sigma_{k}^{c}\) and \(\omega_{k}^{c}\) are the damping factor and angular frequency of the \(k^{\text{th}}\) mode, respectively. The residue \(R_{k,i}^{c}\) of the \(k^{\text{th}}\) mode for the \(i^{\text{th}}\) PMU is defined by its magnitude \(|R_{k,i}^{c}|\) and angle \(\theta_{k,i}^{c}\). For any given channel \(c\), typically a small subset of the PMUs (\(m^{\prime}<m\)) capture the dynamic response of the system after an event. Thus, we only keep the residues of a set of \(m^{\prime}\) PMUs with the largest magnitudes. Note that the \(m^{\prime}\) PMUs are not necessarily the same PMUs for different events (see [30] for further details). Using the above procedure, for each channel \(c\), we define a row vector of features, \(\mathcal{F}^{c}\), of length \(2p(m^{\prime}+1)\) as: \[\mathcal{F}^{c}=\left[\{\omega_{k}^{c}\}_{k=1}^{p},\{\sigma_{k}^{c}\}_{k=1}^{p},\{|R_{k,i}^{c}|\}_{k=1}^{p},\{\theta_{k,i}^{c}\}_{k=1}^{p}\right]_{i\in\{1,\cdots,m^{\prime}\}} \tag{2}\] which consists of \(p\) angular frequencies, \(p\) damping factors, and the corresponding magnitude and angle of the residues for each of the \(m^{\prime}\) PMUs (with the largest residue magnitudes) and the \(p\) modes.
### _Generating the overall dataset_
Let \(n_{D}\) be the total number of events generated over all event classes. Following modal analysis on the PMU measurements as described above, we can represent the \(i^{\text{th}}\) event, \(i\in\mathcal{I}_{D}=\{1,...,n_{D}\}\), as a \(d=2p|\mathcal{C}|(m^{\prime}+1)\)-length vector \(x_{i}^{T}=[\mathcal{F}^{V_{m}},\mathcal{F}^{V_{a}},\mathcal{F}^{F}]\).
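A compact sketch of one way to realize the per-channel decomposition (1) and the feature construction (2) is shown below; it uses a basic Prony/linear-prediction fit, whereas the released package may rely on a different mode-decomposition algorithm (e.g., matrix pencil), so the code is purely illustrative.

```python
import numpy as np

def fit_modes(y, p):
    """Fit p damped-sinusoid modes to one post-event PMU channel (model (1)) via a
    basic Prony / linear-prediction step; returns the modes Z_k and residues R_k."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # linear prediction: y[n] = -(a_1 y[n-1] + ... + a_p y[n-p]),  n = p, ..., N-1
    A = np.column_stack([y[p - j:N - j] for j in range(1, p + 1)])
    a = np.linalg.lstsq(A, -y[p:], rcond=None)[0]
    Z = np.roots(np.r_[1.0, a])                              # Z_k = exp(lambda_k * Ts)
    V = np.vander(Z, N, increasing=True).T                   # Vandermonde basis [Z_k^n]
    R = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0] # residues R_k
    return Z, R

def mode_features(Z, R, Ts):
    """Angular frequencies, damping factors, residue magnitudes and angles, as in (2)."""
    lam = np.log(Z) / Ts                                     # lambda_k = sigma_k + j*omega_k
    return lam.imag, lam.real, np.abs(R), np.angle(R)
```

Stacking these quantities for the \(m^{\prime}\) PMUs with the largest residue magnitudes, and concatenating the three channels, yields the \(d\)-dimensional event vector \(x_{i}\).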
Each event is associated with a positive integer label \(y_{i}\in\{1,\cdots,|\mathcal{E}|\}\) where \(|\mathcal{E}|\) is the total number of event classes. Collating the events and labels from all event classes, we obtain a large data matrix \(\mathbf{D}=\{\mathbf{X}_{D},\mathbf{Y}_{D}\}\) where \(\mathbf{X}_{D}=[x_{1},...,x_{n_{D}}]^{T}\in\mathbb{R}^{n_{D}\times d}\) and \(\mathbf{Y}_{D}=[y_{1},...,y_{n_{D}}]^{T}\in\mathbb{R}^{n_{D}}\). Finally, to highlight the possible choices for labeled and unlabeled events from \(\mathbf{D}\), we henceforth write \(\mathbf{D}=\{(x_{i},y_{i})\}_{i\in\mathcal{I}_{D}}\).
## III Proposed Framework to Investigate the Impact of Unlabeled Data
The efficacy of numerous semi-supervised algorithms is significantly impacted by the initial distribution of labeled and unlabeled samples. Consequently, a thorough investigation of the robustness of diverse semi-supervised learning techniques in the face of various initial labeled and unlabeled sample distributions becomes imperative. Furthermore, the effectiveness of semi-supervised learning is not universally guaranteed to enhance supervised models; its success relies on specific foundational assumptions. Among these assumptions are the smoothness and cluster principles [31], which posit that densely populated regions tend to have similar labels, and that samples within the same cluster often belong to the same class. To investigate the impact of incorporating unlabeled samples on event identification performance, and to ensure a fair comparison among various inductive (i.e., self-training) and transductive semi-supervised approaches (i.e., TSVM, LS), we utilize the k-fold cross-validation technique. First, we shuffle the \(n_{D}\) samples in \(\mathbf{D}\) and partition the data into \(n_{K}\) equally sized folds. We use \(n_{K}-1\) folds as a training set, denoted as \(\mathbf{D}_{T}^{(k)}=\{(x_{i},y_{i})\}_{i\in\mathcal{I}_{T}^{(k)}}\) with \(n_{T}=[(n_{K}-1)n_{D}/n_{K}]\) samples, and reserve the remaining fold as a validation set, denoted as \(\mathbf{D}_{V}^{(k)}=\{(x_{i},y_{i})\}_{i\in\mathcal{I}_{V}^{(k)}}\) with \(n_{V}=n_{D}-n_{T}\) samples, and \(k=1,...,n_{K}\). Here, \(\mathcal{I}_{T}^{(k)}\) and \(\mathcal{I}_{V}^{(k)}\) denote the index sets of the training and validation samples of the \(k^{\text{th}}\) fold, respectively, and \(\mathcal{I}_{T}^{(k)}\cup\mathcal{I}_{V}^{(k)}=\mathcal{I}_{D}\). We repeat this process \(n_{K}\) times, with each fold serving as the validation set once. To further investigate how the distribution of labeled and unlabeled samples affects the performance of various semi-supervised algorithms, we shuffle the samples in the training set \(n_{Q}\) times and split it into a subset of \(n_{L}\) labeled samples, denoted as \(\mathbf{D}_{L}^{(k,q)}=\{(x_{i},y_{i})\}_{i\in\mathcal{I}_{L}^{(k,q)}}\), and a subset of \(n_{U}\) unlabeled samples obtained by ignoring their ground truth labels, denoted as \(\mathbf{D}_{U}^{(k,q)}=\{(x_{i},\cdot)\}_{i\in\mathcal{I}_{U}^{(k,q)}}\), where \(\mathcal{I}_{L}^{(k,q)}\cup\mathcal{I}_{U}^{(k,q)}=\mathcal{I}_{T}^{(k)}\), and \(q=1,\cdots,n_{Q}\). To ensure the inclusion of samples from every class within the labeled subset, we verify the condition \(B_{\min}\leq\frac{n_{L}^{c}}{n_{L}}\leq B_{\max}\) where \(n_{L}^{c}\) is the number of samples corresponding to class \(c\), and \(B_{\min},B_{\max}\) specify the allowed balance range.
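A sketch of this fold-and-split bookkeeping, assuming scikit-learn's KFold and a simple rejection loop to enforce the class-balance condition; the synthetic stand-ins for \(\mathbf{X}_{D}\), \(\mathbf{Y}_{D}\) and all parameter values below are illustrative.

```python
import numpy as np
from sklearn.model_selection import KFold

def balanced_label_split(y_train, n_L, B_min=0.2, B_max=0.8, rng=None, max_tries=1000):
    """Pick n_L labeled indices such that every class share n_L^c / n_L lies in [B_min, B_max];
    the remaining training indices form the unlabeled pool."""
    rng = np.random.default_rng(rng)
    classes = np.unique(y_train)
    for _ in range(max_tries):
        idx = rng.permutation(len(y_train))[:n_L]
        shares = np.array([(y_train[idx] == c).mean() for c in classes])
        if B_min <= shares.min() and shares.max() <= B_max:
            return idx, np.setdiff1d(np.arange(len(y_train)), idx)
    raise RuntimeError("balance condition could not be satisfied")

# stand-ins for the extracted features and labels (X_D, Y_D)
X_D = np.random.randn(200, 12)
Y_D = np.random.randint(1, 5, size=200)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for k, (train_idx, val_idx) in enumerate(kf.split(X_D)):
    lab, unlab = balanced_label_split(Y_D[train_idx], n_L=24)   # positions within train_idx
```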
To illustrate the impact of increasing the number of unlabeled samples, we propose the following procedure. Given the number of samples that we want to add at each step, denoted as \(\Delta_{U}\), we randomly select \(n_{U}^{(s)}=s\Delta_{U}\) samples from the pool of \(n_{U}\) samples, where \(s=0,\cdots,n_{S}\) and \(n_{S}=[n_{U}/\Delta_{U}]+1\) represents the number of steps. To further investigate the impact of the initial distribution of the labeled samples along with the unlabeled samples, the random selection of the \(n_{U}^{(s)}\) samples at each step \(1\leq s\leq n_{S}-1\) is performed \(n_{R}\) times. Concatenating the labeled training samples, \(\mathbf{D}_{L}^{(k,q)}\), in the \(k\)-th fold and \(q\)-th split, with a subset of \(n_{U}^{(s)}\) unlabeled samples in the \(s\)-th step and \(r\)-th random selection (\(r\leq n_{R}\)), denoted as \(\mathbf{D}_{U}^{(k,q,s,r)}=\{(x_{i},\cdot)\}_{i\in\mathcal{I}_{U}^{(k,q,s,r)}}\), where \(\mathcal{I}_{U}^{(k,q,s,r)}\subseteq\mathcal{I}_{U}^{(k,q)}\), we obtain a training dataset with mixed labeled and unlabeled samples, denoted as \(\mathbf{D}_{M}^{(k,q,s,r)}=\{(x_{i},y_{i})\}_{i\in\mathcal{I}_{L}^{(k,q)}}\cup\{(x_{i},\cdot)\}_{i\in\mathcal{I}_{U}^{(k,q,s,r)}}\). To account for the semi-supervised learning assumptions, we sort the \(n_{U}^{(s)}\) unlabeled samples in \(\mathbf{D}_{U}^{(k,q,s,r)}\) based on their proximity to the nearest labeled sample. To improve clarity, for the given \(k\), \(q\), and \(r\), we will simplify the superscripts of the training (labeled and unlabeled) and validation samples throughout the remainder of this paper, i.e., \(\mathbf{D}_{L}\), \(\mathbf{D}_{U}^{(s)}\), \(\mathbf{D}_{M}^{(s)}\), and \(\mathbf{D}_{V}\) represent the subsets of \(n_{L}\) labeled, \(n_{U}^{(s)}\) unlabeled, \(n_{M}^{(s)}=n_{L}+n_{U}^{(s)}\) mixed, and \(n_{V}\) validation samples, respectively. A visual representation of the outlined approach is depicted in Fig. 1. We can alternatively represent the labeled and unlabeled training samples in matrix format as described below. We define the matrix of event features with labeled samples as \(\mathbf{X}_{L}=[\ldots,x_{i},...]^{T}\) and the corresponding matrix of labels as \(\mathbf{Y}_{L}=[\ldots,y_{i},...]^{T}\) where \(i\in\mathcal{I}_{L}^{(k,q)}\). Similarly, for the subset of unlabeled samples, we define \(\mathbf{X}_{U}=[\ldots,x_{i},...]^{T}\), \(i\in\mathcal{I}_{U}^{(k,q,s,r)}\). For the sake of notation coherency as well as implementation considerations (e.g., learning the classification models), we assign the value \(-1\) to the unlabeled samples, i.e., \(\mathbf{Y}_{U}=[-1,...,-1]^{T}\in\mathbb{R}^{n_{U}^{(s)}}\). Hence, the mixed labeled and unlabeled training set can be expressed as \[\mathbf{D}_{M}=\{\mathbf{X}_{M},\mathbf{Y}_{M}\} \tag{3}\] where \[\mathbf{X}_{M} =[\mathbf{X}_{L}{}^{T},\mathbf{X}_{U}{}^{T}]^{T}, \tag{4}\] \[\mathbf{Y}_{M} =[\mathbf{Y}_{L}{}^{T},\mathbf{Y}_{U}{}^{T}]^{T}.\] Similarly, the validation set \(\mathbf{D}_{V}\) in the \(k^{\text{th}}\) fold can be represented in matrix format as \(\mathbf{D}_{V}=\{\mathbf{X}_{V},\mathbf{Y}_{V}\}\) where \(\mathbf{X}_{V}=[\ldots,x_{i},...]^{T}\) and \(\mathbf{Y}_{V}=[\ldots,y_{i},...]^{T}\), and \(i\in\mathcal{I}_{V}^{(k)}\).
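The assembly of \(\mathbf{D}_{M}^{(s)}\) in (3)-(4), including the \(-1\) convention for unlabeled samples and the proximity-based ordering, can be sketched as follows; the Euclidean nearest-labeled-sample distance used here is an assumption about how proximity is measured.

```python
import numpy as np
from scipy.spatial.distance import cdist

def mixed_training_set(X_L, Y_L, X_U_pool, delta_U, s, rng=None):
    """Build X_M, Y_M for step s: all labeled samples plus s*delta_U randomly chosen
    unlabeled samples (labeled -1), ordered by distance to the nearest labeled sample."""
    rng = np.random.default_rng(rng)
    n_U_s = min(s * delta_U, len(X_U_pool))
    pick = rng.choice(len(X_U_pool), size=n_U_s, replace=False)
    X_U = X_U_pool[pick]
    order = np.argsort(cdist(X_U, X_L).min(axis=1))          # closest-to-labeled first
    X_M = np.vstack([X_L, X_U[order]])
    Y_M = np.concatenate([Y_L, -np.ones(n_U_s, dtype=int)])  # -1 marks "unlabeled"
    return X_M, Y_M
```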
## IV Semi-supervised Event Identification: Model Learning and Validation
Our procedure to test semi-supervised methods consists of three steps: (i) pseudo-labeling of unlabeled samples in the training set with mixed labeled and unlabeled samples, \(\mathbf{D}_{M}^{(s)}\), (ii) training a classifier using the combined labeled and pseudo-labeled samples, and (iii) evaluating the classifier's performance on previously unseen data in the validation set, \(\mathbf{D}_{V}\). The overview of the proposed approach is shown in Fig. 1.
Fig. 1: Overview of the proposed semi-supervised pipeline.
Given a semi-supervised model \(\mathcal{F}_{1}\) and a classifier \(\mathcal{F}_{2}\), we start with the labeled samples within the \(k^{\text{th}}\) fold and the \(q^{\text{th}}\) split of the training set. Using these labeled samples, we perform grid search [32] to obtain hyper-parameters for the models \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\), denoted as \(\theta_{1}^{*}\) and \(\theta_{2}^{*}\). (Note that these hyper-parameters will differ based on \(k\) and \(q\).) Subsequently, we use the matrix of event features and the corresponding matrix of labels in \(\mathbf{D}_{M}^{(s)}\) to assign pseudo labels to the unlabeled samples using \(\mathcal{F}_{1}\). Utilizing the obtained labeled and pseudo labeled samples, \(\widehat{\mathbf{D}}_{M}^{(s)}\), we then use model \(\mathcal{F}_{2}\) to assign labels to the events in the validation dataset \(\mathbf{D}_{V}\). In the subsequent subsections, we describe which models we use as \(\mathcal{F}_{1},\mathcal{F}_{2}\) in this procedure.
### _Semi-Supervised Models for Pseudo Labeling (\(\mathcal{F}_{1}\))_
#### IV-A1 Self-training
Self-training, which dates back to the 1990s [33], has proven to be effective in leveraging unlabeled data to improve supervised classifiers [34, 35, 36, 37, 38]. Self-training works by assigning pseudo labels to unlabeled samples based on the model's predictions and then training the model iteratively with these pseudo labeled samples. More specifically, for any given base classifier, we learn a model \(F_{1}\in\{\text{SVMR, SVML, GB, DT, kNN}\}\) from the labeled samples in \(\mathbf{D}_{M}^{(s)}\). Then, using the learned model, we predict the labels of the \(n_{U}^{(s)}\) unlabeled samples to obtain the augmented labeled and pseudo labeled samples, denoted as \(\widehat{\mathbf{D}}_{M}^{(s)}\). Algorithm 1 outlines the steps involved in this procedure. Note that the parameter \(\delta_{U}\) in this algorithm specifies the number of unlabeled samples (among the \(n_{U}^{(s)}\) samples) that will be assigned pseudo-labels in each iteration.
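A compact version of this batch-wise pseudo-labeling loop (Algorithm 1, shown next) using scikit-learn estimators is sketched below; scikit-learn's built-in `SelfTrainingClassifier` is a confidence-threshold alternative to this fixed-order variant.

```python
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import GradientBoostingClassifier

def self_training(base, X_M, Y_M, delta_U=100):
    """Pseudo-label the unlabeled part of D_M (entries of Y_M equal to -1) in chunks of
    delta_U samples, re-fitting the base classifier F1 on labeled + pseudo-labeled data."""
    labeled = Y_M != -1
    X_lab, y_lab = X_M[labeled], Y_M[labeled]
    X_unl = X_M[~labeled]
    model = clone(base)
    for start in range(0, len(X_unl), delta_U):
        chunk = X_unl[start:start + delta_U]
        model.fit(X_lab, y_lab)                  # learn F1 on the current labels
        pseudo = model.predict(chunk)            # pseudo-label the next chunk
        X_lab = np.vstack([X_lab, chunk])
        y_lab = np.concatenate([y_lab, pseudo])
    return X_lab, y_lab                          # augmented set, i.e. D_hat_M^(s)

# example base classifier: self_training(GradientBoostingClassifier(), X_M, Y_M)
```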
``` 1:Input:\(\mathbf{D}_{M}^{(s)}\) 2:Output:\(\mathbf{D}_{M}^{(s)}\) 3:Initialize:\(\{\text{f : }t\}=[1:\delta_{U}]\)\(\triangleright\) from sample f to sample t 4:\(\hat{\mathbf{X}}_{L}\leftarrow\mathbf{X}_{L},\hat{\mathbf{Y}}_{L}\leftarrow \mathbf{Y}_{L},\hat{\mathbf{X}}_{U}\leftarrow\mathbf{X}_{U}[\text{f : t}]\) 5:while\(\text{t }\leq n_{U}^{(s)}\)do 6:\(F_{1}:\hat{\mathbf{Y}}_{L}\leftarrow\hat{\mathbf{X}}_{L}\)\(\triangleright\) Learning the model 7:\(\hat{\mathbf{Y}}_{U}=F_{1}(\hat{\mathbf{X}}_{U})\)\(\triangleright\) Pseudo Labeling 8:\(\hat{\mathbf{X}}_{L}\leftarrow[\hat{\mathbf{X}}_{L}^{T},\hat{\mathbf{X}}_{U}^{ T}]^{T}\), \(\hat{\mathbf{Y}}_{L}\leftarrow[\hat{\mathbf{Y}}_{L}^{T},\hat{\mathbf{Y}}_{U}^{T}]^{T}\)\(\triangleright\) Augmentation 9:\(f\gets f+\delta_{U},\quad t\gets t+\delta_{U}\) 10:if\(\text{t }>n_{U}^{(s)}\): 11:\(\text{t }=n_{U}^{(s)}\) 12:\(\hat{\mathbf{X}}_{U}-\mathbf{X}_{U}[\text{f : t}]\) 13:endwhile 14:\(\hat{\mathbf{Y}}_{M}\leftarrow\hat{\mathbf{Y}}_{L}\) 15:Return:\(\mathbf{D}_{M}^{(s)}=\{\mathbf{X}_{M},\hat{\mathbf{Y}}_{M}\}\) ``` **Algorithm 1** Self-Training (for a given \(k,q,s\), and \(r\)). #### Iii-A2 Transductive Support Vector Machine (TSVM) The TSVM approach is a modification of the SVM formulation that addresses the challenge of limited labeled data in classification tasks [39, 31, 40]. The TSVM optimization problem is given by \[\min_{\mathbf{w},b,\boldsymbol{\zeta},\mathbf{z}}\quad C\ \left[\sum_{i\in I_{L}}\eta_{i}+\sum_{j\in I_{U}}\min(\zeta_{j},z_{j}) \right]+\|\mathbf{w}\|\] (5a) subject to: \[y_{i}(\mathbf{w}^{T}x_{i}-b)+\eta_{i}\geq 1,\quad\eta_{i}\geq 0, \quad i\in I_{L} \tag{5b}\] \[\mathbf{w}^{T}x_{i}-b+\zeta_{j}\geq 1,\quad\zeta_{j}\geq 0, \quad j\in I_{U}\] (5c) \[-(\mathbf{w}^{T}x_{i}-b)+z_{j}\geq 1,\quad z_{j}\geq 0, \quad j\in I_{U} \tag{5d}\] where \(\mathbf{w}\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\) represent the direction of the decision boundary and the bias (or intercept) term, respectively. It introduces two constraints (i.e., (5c), and (5d)) for each sample in the training dataset calculating the misclassification error as if the sample belongs to one class or the other. The objective function aims to find \(\mathbf{w}\) and \(b\) that, while maximizing the margin and reducing the misclassification error of labeled samples (i.e., \(\boldsymbol{\eta}\)), minimize the minimum of these misclassification errors (i.e., \(\boldsymbol{\zeta}\) and \(\mathbf{z}\)). This enables the TSVM to utilize both labeled and unlabeled samples for constructing a precise classification model. Subsequently, it assigns pseudo labels to the unlabeled samples. For brevity, we refer readers to [40] for more comprehensive details. #### Iii-A3 Label Spreading (LS) In the realm of semi-supervised learning, label spreading (LS) falls within the category of graph-based semi-supervised (GSSL) models [41]. It involves constructing a graph and inferring labels for unlabeled samples where nodes represent samples and weighted edges reflect similarities. Consider a graph \(G_{\boldsymbol{M}}=(\mathcal{V}_{\boldsymbol{M}},\mathcal{W}_{\boldsymbol{M}})\) which is constructed over the mixed labeled and unlabeled training set. Each sample, \(x_{i},\forall i\in I_{L}\cup I_{U}\), in the \(\mathbf{X}_{\boldsymbol{M}}\) can be represented as a graph node, \(v_{i}\), i.e., \(v_{i}\in\mathcal{V}_{\boldsymbol{M}}\leftarrow x_{i}\). 
Furthermore, the edge weights matrix defined as \(\mathcal{W}_{\boldsymbol{M}}\in\mathbb{R}^{h_{\boldsymbol{M}}^{(s)}\mathcal{W}_{ \boldsymbol{M}}^{(s)}}\). Considering the Euclidean distance \(\mathfrak{D}_{ij}=||x_{i}-x_{j}||^{2}\), the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column of \(\mathcal{W}_{\boldsymbol{M}}\), denoted as \(w_{ij}\), can be obtained as \(w_{ij}=\exp(-\mathfrak{D}_{ij}/2\sigma^{2})\) if \(i\neq j\), and \(w_{ii}=0\). As a result, the closer the samples are, they will have larger weights. Then the intuition is that similar samples (i.e., with closer distance) have similar labels, and labels propagate from the labeled samples to unlabeled ones through weighted edges where the weights carry the notion of similarity. A pseudo code for the LS algorithm based on [42] is shown in Algorithm. 2. Note that, in updating the labels (line 7 in Algorithm. 2), samples receive information from their neighbors (first term) while preserving their initial information (second term). The parameter \(\alpha\) determines the weighting between neighbor-derived information and the sample's original label information. ``` 1:Input:\(G=(\mathcal{V},\mathcal{W})\leftarrow\mathbf{D}_{M}^{(s)}=\{\mathbf{X}_{ \boldsymbol{M}},\mathbf{Y}_{M}\}\) 2:Output:\(\widehat{\mathbf{D}}_{M}^{(s)}\) 3:Compute:\(\mathcal{D}_{ii}=\sum_{j}w_{ij},\quad\forall i\in I_{L}\cup I_{U}\) 4:Compute:\(\mathbf{Z}=\mathcal{D}^{-1/2}\mathcal{W}_{\boldsymbol{M}}\mathcal{D}^{-1/2}\) 5:Initialize:\(\begin{bmatrix}\mathbf{Y}_{L}|_{t=0}\\ \mathbf{Y}_{U}|_{t=0}\end{bmatrix}\leftarrow\begin{bmatrix}\mathbf{Y}_{L}\\ \mathbf{Y}_{U}\end{bmatrix}\) 6:while\(\begin{bmatrix}\mathbf{Y}_{L}|_{t}\\ \mathbf{Y}_{U}|_{t}\end{bmatrix}\) converges do\(\triangleright\) Based on some threshold 7:\(\begin{bmatrix}\mathbf{Y}_{L}|_{t+1}\\ \mathbf{Y}_{U}|_{t+1}\end{bmatrix}\gets a\mathbf{Z}\begin{bmatrix}\mathbf{Y}_{L}|_{t }\\ \mathbf{Y}_{U}|_{t}\end{bmatrix}+(1-\alpha)\begin{bmatrix}\mathbf{Y}_{L}\\ \mathbf{Y}_{U}\end{bmatrix}\) 8:\(t\gets t+1\) 9:endwhile 10:\(\widehat{\mathbf{Y}}_{M}\leftarrow\begin{bmatrix}\mathbf{Y}_{L}|_{t}\\ \mathbf{Y}_{U}|_{t}\end{bmatrix}\) 11:Return:\(\widehat{\mathbf{D}}_{M}^{(s)}=\{\mathbf{X}_{\boldsymbol{M}},\widehat{\mathbf{Y}}_{ \boldsymbol{M}}\}\) ``` **Algorithm 2** Label spreading (for a given \(k,q,s\), and \(r\)). ### _Pseudo Labeling Evaluation_ Using any of the abovementioned semi-supervised models, we obtain the augmented labeled and pseudo labeled samples, i.e., \(\widehat{\mathbf{D}}_{M}^{(s)}\), and learn a new classifier \(F_{2}\in\{\text{SVMR, SVML, GB, DT, kNN}\}\) to assess the model's performance on the previously unseen validation fold, \(\mathbf{D}_{V}\). Within this setting, to ensure a fair comparison among various inductive and transductive semi-supervised approaches, we consider two distinct approaches: * **Approach 1 (Inductive semi-supervised setting):** \(F_{1}\in\{\text{SVMR, SVML, GB, DT, kNN}\}\) represents the base classifier utilized in self-training for pseudo labeling, and the same type of classifier will be used as \(\mathcal{F}_{2}\). * **Approach 2 (Transductive semi-supervised setting): \(\mathcal{F}_{1}\in\{\text{TSVM, LS}\}\)** represents a semi-supervised method used for pseudo labeling, and \(\mathcal{F}_{2}\in\{\text{SVMR, SVML, GB, DT, kNN}\}\). ## V Simulation Results In order to investigate the performance of various semi-supervised learning algorithms, we first generate eventful synthetic PMU data, following the procedure described in Section II. 
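A sketch of the transductive variant of this evaluation (Approach 2), with scikit-learn's `LabelSpreading` playing the role of \(\mathcal{F}_{1}\) and an RBF-kernel SVM as \(\mathcal{F}_{2}\); the hyper-parameter values and the one-vs-rest AUC averaging are illustrative choices rather than the tuned settings used in the experiments.

```python
from sklearn.semi_supervised import LabelSpreading
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def evaluate_fold(X_M, Y_M, X_V, Y_V, gamma=20.0, alpha=0.2):
    """F1: spread labels over the mixed set (unlabeled entries marked -1);
    F2: train a classifier on labeled + pseudo-labeled samples and score the hold-out fold."""
    ls = LabelSpreading(kernel='rbf', gamma=gamma, alpha=alpha)
    ls.fit(X_M, Y_M)                         # -1 entries are treated as unlabeled
    Y_hat = ls.transduction_                 # original labels + pseudo-labels
    clf = SVC(kernel='rbf', probability=True).fit(X_M, Y_hat)
    return roc_auc_score(Y_V, clf.predict_proba(X_V), multi_class='ovr')
```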
Our simulations were carried out on the South-Carolina 500-Bus System [16, 17]. We allow the system to operate normally for \(t_{f}=1\) second and then we immediately apply a disturbance. We then run the simulation for an additional \(t_{s}=10\) seconds, and record the resulting eventful measurements at the PMU sampling rate of 30 samples/sec. We assume that 95 buses (which are chosen randomly) of the Carolina 500-bus system are equipped with PMU devices and extract features for each such bus from the \(V_{m}\), \(V_{a}\), and \(F\) channels. We thus collect \(N=300\) samples after the start of an event for each channel. We use the modal analysis methodology as outlined in our recent prior work [30] to extract features using modal analysis. In total, we simulated 1827 events including 500 LL, 500 GL, 500 LT, and 327 BF events. Figure 2 illustrates the measurements (i.e., \(V_{m}\), \(V_{a}\), and \(F\)) recorded from a single PMU after applying LL, GL, LT, and BF events, To quantitatively evaluate and compare the performance of different semi-supervised learning algorithms across various scenarios, we employ the area under curve (AUC) of the receiver operator characteristic (ROC). This metric enables the characterization of the accuracy of classification for different discrimination thresholds [43]. The ROC AUC value, which ranges from 0 to 1, provides an estimate of the classifier's ability to classify events. A value of AUC closer to 1 indicates a better classification performance. For a specified set of parameters \(k\), \(q\), \(s\), and \(r\), we evaluate the performance of a given classifier \(\mathcal{P}_{2}\) by assessing its ROC-AUC score in predicting event classes within the hold-out fold. This evaluation is based on the model learned from the augmented labeled and pseudo-labeled samples, which are obtained using the pseudo-labeling model \(\mathcal{F}_{1}\). Given that the aim of this study is to provide insight into the robustness of various semi-supervised models, we compare them by evaluating the average, \(5^{\text{th}}\) percentile, and \(95^{\text{th}}\) percentile of the AUC scores based on the accuracy of the assigned pseudo labels on the unlabeled samples and assess the impact of incorporating the assigned pseudo labels on the accuracy of a generalizable model in predicting the labels of validation samples. We use the \(5^{\text{th}}\) percentile of the AUC scores as our primary target performance metric for robustness, as it provides a (nearly) worst-case metric across different selections of the initial labeled and unlabeled samples. That is, if a method yields a high \(5^{\text{th}}\) percentile performance, then it is likely to lead to accurate results, even if the initial set of labeled and unlabeled samples are unfavorable. As discussed in Section IV-B, we investigate and compare the performance of various semi-supervised algorithms, including self-training with different base classifiers (SVML, SVMR, GB, DT, and kNN), TSVM, and LS to assess their effectiveness. In our evaluation process, we take into account \(n_{K}=10\) folds and \(n_{Q}=30\) random splits of the training samples into labeled and unlabeled subsets. Other simulation parameters are provided in Table. I. As depicted in Figure 3, the comparative performance of diverse classifiers (namely, SVML, SVMR, kNN, DT, and GB) is presented across distinct semi-supervised models (self-training, TSVM, and LS). 
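Before turning to these outcomes, note that the robustness summaries reported in Figure 3 reduce to simple percentile computations over the collection of AUC scores obtained across folds, random labeled/unlabeled splits, and random selections of unlabeled samples; a sketch is given below, where the shape of the score array is an assumption made only for the example.

```python
# Illustrative computation of the Figure 3 summaries; `auc_scores` is assumed
# to hold one ROC-AUC value per (fold, split, random selection) repetition.
import numpy as np

def summarize_auc(auc_scores: np.ndarray) -> dict:
    """Mean and 5th/95th percentile AUC across all repetitions."""
    flat = np.asarray(auc_scores).ravel()
    return {
        "mean": float(flat.mean()),
        "p05": float(np.percentile(flat, 5)),   # primary robustness metric
        "p95": float(np.percentile(flat, 95)),
    }

# Example with assumed dimensions: 10 folds x 20 splits x 20 selections.
rng = np.random.default_rng(0)
print(summarize_auc(rng.uniform(0.7, 1.0, size=(10, 20, 20))))
```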
The outcomes of this analysis highlight that the integration of additional unlabeled samples and the utilization of LS for pseudo labeling surpasses the outcomes achieved by the self-training and TSVM approaches. Moreover, the LS algorithm consistently enhances the performance of all classifiers more robustly. The following subsections provides further insight on the performance of each semi-supervised model. ### _Approach 1 - Inductive semi-supervised setting_ The simulation results for the \(5^{\text{th}}\) percentile of the AUC scores of the SVML, SVMR, kNN, DT, and GB classifiers in predicting the labels of validation samples are shown in 3a. It is clear that using a limited number of labeled samples, results in poor performance for the self-training method when utilizing SVMR, SMVL, and kNN base classifiers. Moreover, the utilization of GB and DT as base classifiers does not \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Parameter** & **Description** & **Value** \\ \hline \(n_{D}\) & Total No. of samples & 1827 \\ \hline \(n_{K}\) & No. of folds & 10 \\ \hline \(n_{T}\) & No. of training samples & 1644 \\ \hline \(n_{V}\) & No. of validation samples & 183 \\ \hline \(n_{Q}\) & No. of random splits of training samples into labeled and unlabeled samples & 20 \\ \hline \((B_{\text{mix}},B_{\text{max}})\) & Class balance range in the labeled samples & (0.2, 0.8) \\ \hline \(n_{L}\) & No. of labeled samples & 24 \\ \hline \(n_{U}\) & No. of Unlabeled samples & 1620 \\ \hline \(\tilde{v}_{U}\) & No. of unlabeled samples in each step & 100 \\ \hline \(n_{S}\) & Total No. of steps & 18 \\ \hline \(n_{R}\) & No. of random selection of \(n_{U}^{(a)}\) samples at each step & 20 \\ \hline \end{tabular} \end{table} TABLE I: Parameters used in the simulations for semi-supervised event identification Fig. 2: PMU measurements necessarily lead to an improvement in event identification accuracy. This primarily arises from the disparity between the pseudo labels and the initial subset of labeled samples. Training with biased and unreliable pseudo labels can result in the accumulation of errors. In essence, this pseudo label bias exacerbates particularly for classes that exhibit poorer behavior, such as when the distribution of labeled samples does not accurately represent the overall distribution of both labeled and unlabeled samples, and is further amplified as self-training continues. Another noteworthy observation is that self-training employing SVML or SVMR as the classifiers exhibits a high sensitivity to the distribution of both labeled and unlabeled samples. Due to the constraint of having a limited number of labeled samples, these techniques struggle to generate dependable pseudo-label assignments. On the other hand, although self-training with kNN as the base classifier performs better than SVML and SVMR cases, its performance deteriorates as we increase the number of the unlabeled samples. For the self-training with DT and GB base classifiers, it is evident that, although they exhibit more robust performance compared to other types of base classifiers, increasing the number of unlabeled samples does not enhance their performance. ### _Approach 2 - Transductive semi-supervised setting_ The simulation results for the second approach in which TSVM is employed as the semi-supervised method for pseudo-labeling are illustrated in Fig. 2(b). 
The weak performance of TSVM could be attributed to the specific characteristics of the dataset and the method's sensitivity to the distribution of labeled and unlabeled samples. If the distribution of these samples is unbalanced or exhibits complex patterns, the TSVM might struggle to accurately capture this distribution. As a result, it could assign inaccurate pseudo labels. Furthermore, it becomes evident that the integration of pseudo-labels acquired through the TSVM algorithm, although yielding an overall performance advantage for SVML and SVMR when compared to the same models utilizing pseudo-labels from the self-training algorithm involving SVMR and SVML, still exhibits substantial sensitivity. This sensitivity is particularly apparent when assessing the 5% AUC scores, highlighting that the accuracy of assigned pseudo-labels remains highly contingent on the initial distribution of labeled and unlabeled samples. This phenomenon is also observable in the diminish Fig. 3: Comparison between various classifiers based on the Pseudo labels obtained from (a) self-training method with various base classifiers, (b) TSVM, (C) LS. (d) comparison between the (GB, GB) and (LS, kNN). ing performance of the kNN, GB, and DT classifiers, which, surprisingly, deteriorates to a level worse than their utilization as base classifiers within the self-training framework. On the contrary, as shown in Fig. 2(c), the results demonstrate that utilizing the augmented labeled and pseudo labeled samples obtained from LS can significantly enhance the performance of event identification, as compared to the self-training and TSVM approaches. Furthermore, the performance of the event identification task improves with a higher number of unlabeled samples, which is particularly significant since labeled eventful PMU data is often scarce in practice. The principal advantage of the LS method, when compared to self-training and TSVM, primarily arises from its ability to leverage information from both labeled and unlabeled samples, as well as their inherent similarities, during the assignment of pseudo labels. For some classifiers (specifically GB and DT), we find that LS improves the 5th percentile line with more unlabeled samples, even though the average performance stays roughly unchanged. On the other hand, for the KNN classifier (as shown in Fig. 2(d)), the average, 5th, and 95th percentile lines all improve with more unlabeled samples. Indeed, LS with KNN seems to be the best overall classifier. ## VI Conclusion Given the practical scenario where a relatively small number of events are labeled in comparison to the total event count, we have introduced a semi-supervised event identification framework to explore the potential benefits of incorporating unlabeled samples in enhancing event identification performance. This framework comprises three core steps: (i) assigning pseudo-labels to unlabeled samples within the training set, which encompasses a mixture of labeled and unlabeled samples, (ii) training a classifier using the augmented set of labeled and pseudo-labeled samples, and (iii) evaluating the classifier's efficacy on the holdout fold. This proposed pipeline is deployed to scrutinize the effectiveness of three classical semi-supervised methods: self-training, TSVM, and LS. Our simulation results suggests that using a limited number of labeled samples, the self-training and TSVM methods perform poorly and does not necessary improve the accuracy of event identification. 
The study underscores the comparatively robust performance of the GB and DT base classifiers, although increasing the number of unlabeled samples does not enhance their performance. Conversely, using the augmented labeled and pseudo-labeled samples obtained from LS consistently outperforms the self-training and TSVM approaches and can significantly improve event identification performance. The performance also improves with a higher number of unlabeled samples, which is important given the scarcity of labeled eventful PMU data. ## Acknowledgments This work was supported by the National Science Foundation under Grants OAC-1934766, CCF-2048223, and CCF-2029044, in part by the Power System Engineering Research Center (PSERC) under Project S-87, and in part by the U.S.-Israel Energy Center managed by the Israel-U.S. Binational Industrial Research and Development (BIRD) Foundation.
2309.16382
RLLTE: Long-Term Evolution Project of Reinforcement Learning
We present RLLTE: a long-term evolution, extremely modular, and open-source framework for reinforcement learning (RL) research and application. Beyond delivering top-notch algorithm implementations, RLLTE also serves as a toolkit for developing algorithms. More specifically, RLLTE decouples the RL algorithms completely from the exploitation-exploration perspective, providing a large number of components to accelerate algorithm development and evolution. In particular, RLLTE is the first RL framework to build a complete and luxuriant ecosystem, which includes model training, evaluation, deployment, benchmark hub, and large language model (LLM)-empowered copilot. RLLTE is expected to set standards for RL engineering practice and be highly stimulative for industry and academia.
Mingqi Yuan, Zequn Zhang, Yang Xu, Shihao Luo, Bo Li, Xin Jin, Wenjun Zeng
2023-09-28T12:30:37Z
http://arxiv.org/abs/2309.16382v1
# RLLTE: Long-Term Evolution Project of Reinforcement Learning ###### Abstract We present RLLTE: a long-term evolution, extremely modular, and open-source framework for reinforcement learning (RL) research and application. Beyond delivering top-notch algorithm implementations, RLLTE also serves as a toolkit for developing algorithms. More specifically, RLLTE decouples the RL algorithms completely from the exploitation-exploration perspective, providing a large number of components to accelerate algorithm development and evolution. In particular, RLLTE is the first RL framework to build a complete and luxurian ecosystem, which includes model training, evaluation, deployment, benchmark hub, and large language model (LLM)-empowered copilot. RLLTE is expected to set standards for RL engineering practice and be highly stimulative for industry and academia. ## 1 Introduction Reinforcement learning (RL) has emerged as a highly significant research topic, garnering considerable attention due to its remarkable achievements in diverse fields, including smart manufacturing and autonomous driving (Mnih et al., 2015; Duan et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018; Yarats et al., 2021). However, the efficient and reliable engineering implementation of RL algorithms remains a long-standing challenge. These algorithms often possess sophisticated structures, where minor code variations can substantially influence their practical performance. Academia requires a stable baseline for algorithm comparison, while the industry seeks convenient interfaces for swift application development (Raffin et al., 2021). However, the design and maintenance of an RL library prove costly, involving substantial computing resources, making it prohibitive for most research teams. To tackle this problem, several open-source projects were proposed to offer reference implementations of popular RL algorithms (Liang et al., 2018; D'Eramo et al., 2021; Fujita et al., 2021; Raffin et al., 2021; Huang et al., 2022). For instance, Raffin et al. (2021) developed a stable-baselines3 (SB3) framework, which encompasses seven model-free deep RL algorithms, including proximal policy optimization (PPO) (Schulman et al., 2017) and asynchronous actor-critic (A2C) (Mnih et al., 2016). SB3 prioritizes stability and reliability, and rigorous code testing has been conducted to minimize implementation errors and ensure the reproducibility of results. Weng et al. (2022) introduced Tianshou, a highly modularized library emphasizing flexibility and training process standardization. Tianshou also provides a unified interface for various algorithms, such as offline and imitation learning. In contrast, Huang et al. (2022) introduced CleanRL, which focuses on single-file implementations to facilitate algorithm comprehension, new features prototyping, experiment analysis, and scalability. Despite their achievements, most of the existing benchmarks have not established a long-term evolution plan and have proven to be short-lived. Firstly, the consistent complexity of RL algorithms naturally results in distinct coding styles, posing significant obstacles to open-source collaborations. Complete algorithm decoupling and modularization have yet to be well achieved, making maintenance challenging and limiting extensibility. Secondly, these projects are deficient in establishing a comprehensive application ecosystem. They primarily concentrate on model training, disregarding vital aspects like model evaluation and deployment. 
Furthermore, they frequently lack exhaustive benchmark testing data, including essential elements like learning curves and trained models. This deficiency makes replicating algorithms demanding in terms of computational resources. Inspired by the discussions above, we propose **RLLTE**, a long-term evolution, extremely modular, and open-source framework of RL. We summarize the highlighted features of RLLTE as follows: * **Module-oriented**. RLLTE decouples RL algorithms from the _exploitation-exploration_ perspective and breaks them down into minimal primitives, such as _encoder_ for feature extraction and _storage_ for archiving and sampling experiences. RLLTE offers a rich selection of modules for each primitive, enabling developers to utilize them as building blocks for constructing algorithms. As a result, the focus of RLLTE shifts from specific algorithms to providing more handy modules like PyTorch. In particular, each module in RLLTE is customizable and plug-and-play, empowering users to develop their own modules. This decoupling process also contributes to advancements in interpretability research, allowing for a more in-depth exploration of RL algorithms. * **Long-term evolution**. RLLTE is a long-term evolution project, continually involving advanced algorithms and tools in RL. RLLTE will be updated based on the following tenet: (i) generality; (ii) improvements in generalization ability and sample efficiency; (iii) excellent performance on recognized benchmarks; (iv) promising tools for RL. Therefore, this project can uphold the right volume and high quality resources, thereby inspiring more subsequent projects. * **Data augmentation.** Recent approaches have introduced data augmentation techniques at the _observation_ and _reward_ levels to improve the sample efficiency and generalization ability of RL agents, which are cost-effective and highly efficient. In line with this trend, RLLTE incorporates built-in support for data augmentation operations and offers a wide range of observation augmentation modules and intrinsic reward modules. * **Abundant ecosystem**. RLLTE considers the needs of both academia and industry and develops an abundant project ecosystem. For instance, RLLTE designed an evaluation toolkit to provide statistical and reliable metrics for assessing RL algorithms. Additionally, the deployment toolkit enables the seamless execution of models on various inference devices. Figure 1: Overview of the architecture of RLLTE. In particular, RLLTE attempts to introduce the large language model (LLM) to build an intelligent copilot for RL research and applications. * **Comprehensive benchmark data.** Existing RL projects typically conduct testing on a limited number of benchmarks and often lack comprehensive training data, including learning curves and test scores. While this limitation is understandable, given the resource-intensive nature of RL training, it hampers the advancement of subsequent research. To address this issue, RLLTE has established a data hub utilizing the Hugging Face platform. This data hub provides extensive testing data for the included algorithms on widely recognized benchmarks. By offering complete and accessible testing data, RLLTE will facilitate and accelerate future research endeavors in RL. 
* **Multi-hardware support.** RLLTE has been thoughtfully designed to accommodate diverse computing hardware configurations, including graphic processing units (GPUs) and neural network processing units (NPUs), in response to the escalating global demand for computing power. This flexibility enables RLLTE to support various computing resources, ensuring optimal trade-off of performance and scalability for RL applications. ## 2 Architecture Figure 1 illustrates the overall architecture of RLLTE, which contains the core layer, application layer, and tool layer. The following sections will detail the design concepts and usage of the three layers. ### Core Layer In the core layer, we decouple an RL algorithm from the _exploitation-exploration_ perspective and break them down into minimal primitives. Figure 2 illustrates a typical forward and update workflow of RL training. At each time step, an encoder first processes the observation to extract features. Then, the features are passed to a policy module to generate actions. Finally, the transition will be inserted into the storage, and the agent will sample from the storage to perform the policy update. In particular, we can use data augmentation techniques such as observation augmentation and intrinsic reward shaping to improve the sample efficiency and generalization ability. We categorize these fundamental components into two parts: xploit and xplore, and Table 1 illustrates their architectures. The modules within the xploit component primarily focus on exploiting the current collected experiences. For instance, the storage module defines the methods for storing and sampling experiences, while the policy module is updated based on the sampled data. In contrast, modules in xplore focus on exploring unknown domains. When policy is stochastic, distribution specifies the methods for sampling actions from the action space. In the case of Figure 2: Forward and update workflow of an RL algorithm. **Aug.**: Augmentation. **Dist.**: Distribution for sampling actions. **Int.**: Intrinsic. **Obs.**: Observation. a deterministic policy, the distribution module introduces noise to the current action to enhance the exploration of the action space. The augmentation and reward modules contribute to exploring the state and action space by augmenting observations and providing additional intrinsic rewards, respectively. Each submodule in Table 1 is accompanied by many pre-defined components, which are listed in Appendix A. ### Application Layer Equipped with modules of the core layer, we can efficiently develop RL algorithms and applications with simple steps, and Table 2 illustrates the architecture of the application layer. See all the corresponding code examples in Appendix C. #### 2.2.1 Fast Algorithm Construction Developers only need three steps to implement an RL algorithm with RLLTE: (i) select an algorithm prototype; (ii) select desired modules; (iii) define an update function. Currently, RLLTE provides three algorithm prototypes: OnPolicyAgent, OffPolicyAgent, and DistributedAgent. Figure 3 demonstrates how to write an A2C agent for discrete control tasks with RLLTE: \begin{table} \begin{tabular}{l l l} \hline \hline **Module** & **Submodule** & **Remark** \\ \hline \multirow{3}{*}{rllte.xploit} & policy & Policies for interaction and learning. \\ & encoder & Encoders for feature extraction. \\ & storage & Storages for collected experiences. \\ \hline \multirow{3}{*}{rllte.xplore} & distribution & Distributions for sampling actions. 
\\ & augmentation & Observation augmentation modules. \\ & reward & Intrinsic reward modules. \\ \hline \hline \end{tabular} \end{table} Table 1: Six primitives in RLLTE. Note that the action noise is implemented via a distribution manner to keep unification in RLLTE. \begin{table} \begin{tabular}{l l} \hline \hline **Module** & **Remark** \\ \hline \multirow{3}{*}{rllte.agent} & Top-notch implementations of highly-recognized RL algorithms, in which convenient interfaces are designed to realize fast application construction. In particular, the module-oriented design allows developers to replace settled modules of implemented algorithms to make performance comparisons and algorithm improvements. \\ \hline \multirow{3}{*}{Pre-training} & Since RLLTE is designed to support intrinsic reward modules natively, developers can conveniently realize pre-training. The pre-trained weights will be saved automatically after training, and it suffices to perform fine-tuning by loading the weights in the.train() function. \\ \hline \multirow{3}{*}{Deployment} & A toolkit that helps developers run their RL models on inference devices, which consistently have lower computational power. RLLTE currently supports two inference frameworks: NVIDIA TensorRT and HUAWEI CANN. RLLTE provides a fast API for model transformation and inference, and developers can invoke it directly with their models. \\ \hline \multirow{3}{*}{Copilot} & A promising attempt to introduce the LLM into an RL framework. The copilot can help users reduce the time required for learning frameworks and assist in the design and development of RL applications. We are developing more advanced features to it, including RL-oriented code completion and training control. \\ \hline \hline \end{tabular} \end{table} Table 2: Architecture of the application layer in RLLTE. As shown in this example, developers can effortlessly choose the desired modules and create an.update() function to implement a new algorithm. At present, the framework includes a collection of 13 algorithms, such as data-regularized actor-critic (DrAC) (Raileanu et al., 2021) and data regularized Q-v2 (DrQ-v2), and the detailed introduction can be found in Appendix B. #### 2.2.2 Module Replacement For an implemented algorithm, developers can replace its settled modules using the.set() method to realize performance comparisons and algorithm improvements. Moreover, developers can utilize custom modules as long as they inherit from the base class, as demonstrated in the code example in Appendix C.2. By decoupling these elements, RLLTE also empowers developers to construct prototypes and perform quantitative analysis of algorithm performance swiftly. #### 2.2.3 Copilot Copilot is the first attempt to integrate an LLM into an RL framework, which aims to help developers reduce the learning cost and facilitate application construction. We follow the design of (Toro, 2023) that interacts privately with documents using the power of GPT, and Figure 4 illustrates its architecture. The source documents are first ingested by an instructor embedding tool to create a local vector database. After that, a local LLM is used to understand questions and create answers based on the database. In practice, we utilize Vicuna-7B (Chiang et al., 2023) as the base model and build the database using various corpora, including API documentation, tutorials, and RL references. 
The powerful understanding ability of the LLM model enables the copilot to accurately answer questions about the use of the framework and any other questions of RL. Moreover, no additional training is required, and users are free to replace the base model according to their computing power. In future work, we will further enrich the corpus and add the code completion function to build a more intelligent copilot for RL. Figure 4: **Left**: The workflow of the copilot. **Right**: A conversation example of training an PPO agent using RLLTE. Figure 3: **Left**: Implement A2C algorithm with dozens of lines of code, and the complete code example can be found in Appendix C.1. **Right**: Simple interface to invoke implemented RL algorithms. ### Tool Layer The tool layer provides practical toolkits for task design, model evaluation, and benchmark data. rllte.env allows users to design task environments following the natural Gymnasium pattern without additional effort. All the environments in RLLTE are set to be vectorized to guarantee sample efficiency, and many different observation and action spaces (e.g., box, discrete, multi-binary, etc.) are supported. In particular, users can also use EnvPool (Weng et al., 2022) to realize ultra-fast operational acceleration. See code example in Appendix D.1. Beyond providing efficient task design and training interfaces, RLLTE further investigates the model evaluation problem in RL and develops a simple evaluation toolkit. RLLTE reconstructs and improves the code of (Agarwal et al., 2021) to realize a more convenient and efficient interface. Figure 5 illustrates several metrics computed and visualized by the toolkit. Finally, rllte.hub can accelerate academic research by providing practically available benchmark data, including training data and trained models. This toolkit will save much time and computational resources for researchers, and the code example can be found in Appendix D.3. RLLTE is the first open-source RL project that aims to build a complete ecosystem. Developers can perform task design, model training, model evaluation, and model deployment within one framework. As a result, RLLTE is highly stimulative for both industry and academia. ## 3 Project Evolution As a long-term evolution project, RLLTE is expected to consistently provide high-quality and timely engineering standards and development components for RL. To that end, RLLTE sets the following tenet for updating new features: * Generality is the most important; * Improvements in sample efficiency or generalization ability; * Excellent performance on recognized benchmarks; * Promising tools for RL. Firstly, RLLTE only accepts general algorithms that can be applied in many distinct scenarios and tasks. For example, PPO is a general RL algorithm that can solve tasks with arbitrary action spaces, and random network distillation (RND) (Burda et al., 2019) is a general intrinsic reward module that can be combined with arbitrary RL agents. This rule can effectively control the volume of the \begin{table} \begin{tabular}{l l} \hline \hline **Toolkit** & **Remark** \\ \hline \multirow{6}{*}{rllte.env} & Provides a large number of packaged environments (e.g., Atari games) \\ & for fast invocation. RLLTE is designed to natively support Gymnasium (Towers et al., 2023), which is a maintained fork of the Gym library of OpenAI (Brockman et al., 2016). Moreover, developers are allowed to use their custom environments with built-in wrappers in RLLTE. 
\\ \hline \multirow{6}{*}{rllte.evaluation} & Provides reasonable and reliable metrics for algorithm evaluation following (Agarwal et al., 2021). Performance module for evaluating a single algorithm. Comparison module for comparing multiple algorithms. Visualization for visualizing computed metrics. \\ \hline \multirow{6}{*}{rllte.hub} & Provides a large number of reusable datasets (.datasets) and trained \\ & models (.models) of supported RL algorithms. Developers can also \\ \cline{1-1} & reproduce the training process via the pre-defined RL applications (. applications). \\ \hline \hline \end{tabular} \end{table} Table 3: Architecture of the tool layer in RLLTE. Code example for each toolkit can be found in Appendix D. project while ensuring its adaptability to a wide range of requirements. Moreover, generality exemplifies the potential for future enhancements (e.g., the various variants of PPO), which can also reduce the difficulty of open-source collaboration and maintain community vitality. Furthermore, the algorithm is expected to improve sample efficiency or generalization ability (e.g., better intrinsic reward shaping approaches), two long-standing and critical problems in RL. Accordingly, the algorithm must be evaluated on multiple recognized benchmarks like Atari (Bellemare et al., 2013) and Progen games (Cobbe et al., 2020) to guarantee practical performance across tasks. In particular, RLLTE also accepts various promising tools (e.g., operational efficiency optimization, model evaluation, and deployment) to maintain a comprehensive ecosystem. In summary, RLLTE will keep evolving to adapt to changing needs and produce a positive impact on the RL community. \begin{table} \begin{tabular}{c|c c c c c} \hline \hline **Framework** & **Modularized** & **Parallel** & **Decoupling** & **Backend** & **License** \\ \hline \hline Baselines & ✓ & ✗ & - & TensorFlow & MIT \\ SB3 & ✓ & ✗ & - & PyTorch & MIT \\ CleanRL & - & ✗ & ✗ & PyTorch & MIT \\ Ray/rllib & ✓ & ✓ & - & TF/PyTorch & Apache-2.0 \\ rlpyt & ✓ & ✓ & ✗ & PyTorch & MIT \\ Tianshou & ✓ & ✓ & - & PyTorch & MIT \\ ElegantRL & ✓ & ✓ & - & PyTorch & Apache-2.0 \\ SpinningUp & ✗ & ✗ & ✗ & PyTorch & MIT \\ ACME & ✗ & ✓ & ✗ & TF/JAX & Apache-2.0 \\ RLLTE & ✓ & ✓ & ✓ & PyTorch & MIT \\ \hline \hline \end{tabular} \end{table} Table 4: Architecture comparison with existing projects. **Modularized**: The project adopts a modular design with reusable components. **Parallel**: The project supports parallel learning. **Decoupling**: The project supports algorithm decoupling and module replacement. **Backend**: Which machine learning framework to use? **License**: Which open-source protocol to use? Note that the short line represents partial support. Figure 5: Performance metrics computed and visualized by rllte.evaluation, and the code example can be found in Appendix D.2. ## 4 Related Work We compare RLLTE with eleven representative open-source RL projects, namely Baselines (Dhariswal et al., 2017), SB3 (Raffin et al., 2021), CleanRL (Huang et al., 2022), Ray/rlib (Liang et al., 2018), and rlpyt (Stooke and Abbeel, 2019), Tianshou (Weng et al., 2022), ElegantRL (Liu et al., 2021), SpinningUp (Achiam, 2018), and ACME (Hoffman et al., 2020), respectively. The following comparison is conducted from three aspects: architecture, functionality, and engineering quality. This project references some other open-source projects and adheres to their open-source protocols. 
## 5 Discussion In this paper, we introduced a novel RL framework entitled RLLTE, which is a long-term evolution, extremely modular, and open-source project for advancing RL research and applications. With a rich and comprehensive ecosystem, RLLTE enables developers to accomplish task design, model training, evaluation, and deployment within one framework seamlessly, which is highly stimulative for both academia and industry. Moreover, RLLTE is an ultra-open framework where developers can freely use and try many built-in or custom modules, contributing to the research of decoupling \begin{table} \begin{tabular}{c|c c c c} \hline \hline **Framework** & **Documentation** & **Code Coverage** & **Type Hints** & **Last Update** & **Used by** \\ \hline Baselines & ✗ & ✗ & 01/2020 & 508 \\ SB3 & ✓ & 96\% & ✓ & 09/2023 & 3.3k \\ CleanRL & ✓ & - & ✗ & 09/2023 & 27 \\ Ray/rlib & ✓ & - & ✗ & 09/2023 & - \\ rlpyt & ✓ & 15\% & ✗ & 09/2020 & - \\ Tianshou & ✓ & 91\% & ✓ & 09/2023 & 169 \\ ElegantRL & ✓ & - & ✓ & 07/2023 & 256 \\ SpinningUp & ✓ & ✗ & ✗ & 02/2020 & - \\ ACME & ✓ & - & ✗ & 07/2023 & 149 \\ RLLTE & ✓ & 97\% & ✓ & 09/2023 & 2\(\nearrow\) \\ \hline \hline \end{tabular} \end{table} Table 6: Engineering quality comparison with existing projects. Note that the short line represents unknown. \begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline **Framework** & **Number** & **Custom** & **Custom** & **Data** & **Data** & **Dela** & **Deploy.** & **Eval.** & **Multi-Device** \\ \hline Baselines & 9 & ✓(gym) & - & ✗ & - & ✗ & ✗ & ✗ \\ SB3 & 7 & ✓(gymnasium) & - & - & ✓ & ✗ & ✗ & ✗ \\ CleanRL & 9 & ✗ & ✓ & - & ✓ & ✗ & ✗ & ✗ \\ Ray/rlib & 16 & ✓(gym) & - & - & - & ✗ & ✗ & ✗ \\ rlpyt & 11 & ✗ & - & ✗ & - & ✗ & ✗ & ✗ \\ Tianshou & 20 & ✓(gymnasium) & ✗ & - & - & ✗ & ✗ & ✗ \\ ElegantRL & 9 & ✓(gym) & ✗ & ✗ & - & ✗ & ✗ & ✗ \\ SpinningUp & 6 & ✓(gym) & ✗ & ✗ & - & ✗ & ✗ & ✗ \\ ACME & 14 & ✓(dm\_env) & ✗ & ✗ & - & ✗ & ✗ & ✗ \\ RLLTE & 13\(\nearrow\) & ✓(gymnasium) & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 5: Functionality comparison with existing projects. **Custom Env.**: Support custom environments? Since Gym (Brockman et al., 2016) is no longer maintained, it is critical to make the project adapt to Gymnasium (Towers et al., 2023). **Custom Module**: Support custom modules? **Data Aug.**: Support data augmentation techniques like intrinsic reward shaping and observation augmentation? **Data Hub**: Have a data hub to store benchmark data? **Deploy.**: Support model deployment? **Eval.**: Support model evaluation? **Multi-Device**: Support hardware acceleration of different computing devices (e.g., GPU and NPU)? Note that the short line represents partial support. and interpretability of RL. As a long-term evolution project, RLLTE will keep tracking the latest research progress and provide high-quality implementations to inspire more subsequent research. In particular, there are some remaining issues that we intend to work on in the future. Firstly, RLLTE plans to add more algorithm prototypes to meet the task requirements of different scenarios, including multi-agent RL, inverse RL, imitation learning, and offline RL. Secondly, RLLTE will enhance the functionality of the pre-training module, which includes more prosperous training methods and more efficient training processes, as well as providing downloadable model parameters. 
Thirdly, RLLTE will further explore the combination of RL and LLMs, including using LLMs to control the construction of RL applications and to improve the performance of existing algorithms (e.g., reward function design and data generation). Finally, RLLTE will optimize the operational efficiency of modules at the hardware level to reduce the computational power threshold, promoting the goal of RL for everyone. #### Acknowledgments This work is supported, in part, by NSFC under Grant No. 62102333 and Grant No. 62342246, and HKSAR RGC under Grant No. PolyU 25211321, and ZJNSFC under Grant LQ23F010008, and GDSTC under Grant No. 2023A1515010592. We thank the HPC center of the Eastern Institute for Advanced Study (EIAS) for providing their GPU computing platform for testing and HUAWEI Ascend for their NPU computing platform.
2310.00483
Prompting Code Interpreter to Write Better Unit Tests on Quixbugs Functions
Unit testing is a commonly-used approach in software engineering to test the correctness and robustness of written code. Unit tests are tests designed to test small components of a codebase in isolation, such as an individual function or method. Although unit tests have historically been written by human programmers, recent advancements in AI, particularly LLMs, have shown corresponding advances in automatic unit test generation. In this study, we explore the effect of different prompts on the quality of unit tests generated by Code Interpreter, a GPT-4-based LLM, on Python functions provided by the Quixbugs dataset, and we focus on prompting due to the ease with which users can make use of our findings and observations. We find that the quality of the generated unit tests is not sensitive to changes in minor details in the prompts provided. However, we observe that Code Interpreter is often able to effectively identify and correct mistakes in code that it writes, suggesting that providing it runnable code to check the correctness of its outputs would be beneficial, even though we find that it is already often able to generate correctly-formatted unit tests. Our findings suggest that, when prompting models similar to Code Interpreter, it is important to include the basic information necessary to generate unit tests, but minor details are not as important.
Vincent Li, Nick Doiron
2023-09-30T20:36:23Z
http://arxiv.org/abs/2310.00483v1
# Prompting Code Interpreter to Write Better Unit Tests on Quixbugs Functions ###### Abstract Unit testing is a commonly-used approach in software engineering to test the correctness and robustness of written code. Unit tests are tests designed to test small components of a codebase in isolation, such as an individual function or method. Although unit tests have historically been written by human programmers, recent advancements in AI, particularly LLMs, have shown corresponding advances in automatic unit test generation. In this study, we explore the effect of different prompts on the quality of unit tests generated by Code Interpreter, a GPT-4-based LLM, on Python functions provided by the Quixbugs dataset, and we focus on prompting due to the ease with which users can make use of our findings and observations. We find that the quality of the generated unit tests is not sensitive to changes in minor details in the prompts provided. However, we observe that Code Interpreter is often able to effectively identify and correct mistakes in code that it writes, suggesting that providing it runnable code to check the correctness of its outputs would be beneficial, even though we find that it is already often able to generate correctly-formatted unit tests. Our findings suggest that, when prompting models similar to Code Interpreter, it is important to include the basic information necessary to generate unit tests, but minor details are not as important. _Keywords: Large Language Models, Unit Testing, Code Interpreter, Quixbugs, Prompting_ ## 1 Introduction In software engineering, testing the correctness of written code, especially before deployment, is of utmost importance, since it greatly reduces the possibility of unexpected errors and crashes. A common approach to software testing is _unit testing_, in which code is broken down into smaller components whose correctness can be tested individually. Often, this is done by individually testing a _focal method_ or a _focal class_ in isolation. The advantage of such an approach is that breaking down code into smaller components reduces its complexity, making it easier for human programmers to construct a comprehensive unit test suite that includes a diverse set of edge cases. Furthermore, it allows human programmers to more easily pinpoint the location and cause of errors and discrepancies between the expected and actual output of the code, thus facilitating the debugging process. However, writing unit tests is often a time-consuming process that therefore demands a large portion of a developer's time and energy. In recent years, with the rise of Large Language Models (LLMs) such as ChatGPT [7], there has been an increasing focus on the application of LLMs to the task of writing unit tests, as they have the potential to drastically reduce the time necessary to properly and sufficiently test written code before deployment. Therefore, the study of the unit test generation capabilities of LLMs has the potential to provide a valuable tool for developers and to greatly increase the speed at which developers can produce correct code that handles edge cases well. ### Related Work In recent years, several previous works have attempted to tackle the problem of automatic unit test generation, with many opting to use LLMs for this purpose. For instance, AthenaTest [9] and A3Test [1] both use transformer-based language models to generate unit tests. 
Many previous methods, such as TestPilot (based on Codex) [8], ChatUnTest [10], and ChatTester [11], (both based on ChatGPT) make use of an iterative algorithm, in which a code generation model is initially prompted to generate unit testing code, and if there are errors in the generated code, it is repeatedly prompted with the goal of inducing it to fix the errors. Non-iterative methods include differential prompting [3], in which the model is prompted to find failing test cases for a given focal method by generating multiple reference implementations and finding the test cases in which the method under test and the reference implementations produce different results. **Common Failure Modes of Generated Unit Tests.** Although unit tests generated by LLMs have the advantage of being less time-consuming to generate, as compared to those generated by humans, they have the disadvantage of being more likely to contain errors that prevent successful compilation of the code. Indeed, such syntax and compilation errors are a significant source of ineffective unit testing code [8, 10, 11]. Other sources of ineffective unit tests include the unit testing code running for longer than the enforced timeout or asserting incorrect values, the latter of which may cause correct focal functions to be marked as incorrect and incorrect focal functions to be marked as correct [8]. However, due to the prevalence of compilation errors and the necessity of compilable code, it is imperative that a reliable method of fixing or correcting these errors be found. **Prompting.** There are several components of the prompts given to LLMs to generate unit tests that have been investigated by previous works. Due to the prevalence of iterative methods in the literature [8, 10, 11], the investigation of these methods sheds much light on the variety of different variables, enumerated below, that may be modified to alter the quality of the generated unit tests. Iterative methods work by first generating an initial prompt, and then subsequently repronting the model to correct errors or otherwise achieve a better result. Even though not all methods are iterative, we will, to the ends of readability and ease of characterization, simply treat them as iterative methods that only generate an initial prompt, without any reprompting. 1. _Amount of Code Context Given in the Initial Prompt._ It is important that the user provide the model with enough information (_i.e._ code context, such as the definition of the focal function or focal method, the signature of the focal function, etc.) to write proper unit tests, and in general, it is better to give more information, while keeping in mind the limited size of the model's context window [8, 9]. In particular, it is common practice to include information about the focal method's body in the initial prompt, whether that information be included directly, in the prompt itself [9, 10], or indirectly, such as by using that information to write an NL (natural language) description of the focal method's intention, which is then used as the actual prompt [11, 3]. It follows from the evidence in previous studies that this leads to better results. Aside from the information about the focal method's signature and body, other relevant pieces of information include the _focal class_ (the class that the focal method is a part of, if any), the fields and methods of the focal class, and the _dependencies_ of the focal method (the functions called by the focal method). 
Here, there is less consensus among previous studies. Some include that information in the initial prompt directly, whether it be with dependencies [10] or without [9]; some include it indirectly (via the generation of an intermediate NL prompt describing the intention) [11]; and others do not include it at all, in the initial prompt (although they may or may not include it in subsequent prompts) [3, 8]. 2. _Code Documentation._ TestPilot [8] also draws upon the documentation of the focal methods used. In particular, it, at times, prompts the model with doc comments (NL comments in the documentation that describe the focal method and its intention) and examples of the focal method's usage. However, the drawback of this approach is that it is only possible when there exists documentation from which to draw this information. For many focal methods, the relevant documentation does not exist. However, we may instead inquire about the effectiveness of including documentation information of the dependencies, especially if the dependencies include functions from well-known and widely-used libraries. Perhaps this may be an effective strategy. Alternatively, if a focal method or one or more of its dependencies does not have human-written documentation available online, it may possibly be beneficial to prompt the model (or a different model) to write documentation-style comments and usage examples for those functions itself, and then prompt the model with the output. Even though past studies have explored the possibility of having the model write an NL comment describing the focal method's purpose [11], the question of how the other prompting methods would affect the quality of the generated unit tests remains, though it is beyond the scope of this study. 3. _Followup Prompting._ In iterative methods, it is important to design the subsequent prompts to the model to maximize their effectiveness. Due to the prevalence of compilation errors in the generated unit test code, the primary objective of reprompting is to lead the model to repair errors in its code. In the literature, we have found three distinct factors that previous studies have experimented with: (1) the amount of information, or additional information, about the code context to include, (2) whether error messages are directly fed back into the model, or whether they are converted to feedback given in NL, and (3) the model involved in reprompting, if any. With respect to information included in subsequent prompts, previous models have either included only information related to the error message [11, 10], even for general coding applications outside of unit testing [5], or opted to include additional code context information due to finding it to be a more effective strategy [8]. Of the studies whose proposed reprompting algorithms include only the information about the error messages, some directly include the error message as part of the prompt [10, 11]. Other studies experiment with using LLMs or human feedback to turn the error message into NL before reprompting the code-generating model, albeit on general coding tasks [5]. If a subsequent prompt is constructed by taking the error message, as is, and taking that as part of the subsequent prompt, then it does not invoke an LLM to generate that prompt. However, evidence suggests that using more advanced models, such as GPT-4, to turn the error message into an NL explanation of the error may be more effective [5]. 
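The iterative schemes surveyed above share a common skeleton, sketched below for reference. The generate_tests and run_tests callables are hypothetical placeholders for an LLM call and a test runner, respectively; no particular system's API is implied.

```python
# Generic skeleton of the iterative generate-and-repair loop used by methods
# such as TestPilot, ChatUnTest, and ChatTester. Both callables are
# hypothetical placeholders supplied by the caller.
from typing import Callable, Tuple

def iterative_test_generation(
    focal_code: str,
    generate_tests: Callable[[str], str],          # prompt -> unit-test code
    run_tests: Callable[[str], Tuple[bool, str]],  # test code -> (passed?, error message)
    max_rounds: int = 3,
) -> str:
    prompt = f"Write unit tests for the following function:\n{focal_code}"
    tests = generate_tests(prompt)                 # initial prompt
    for _ in range(max_rounds):
        ok, error = run_tests(tests)               # try to compile/execute the suite
        if ok:
            break
        # Follow-up prompt: feed back the error message (optionally rephrased
        # in natural language) together with additional code context.
        prompt = (
            f"The tests below failed with this error:\n{error}\n\n"
            f"Focal function:\n{focal_code}\n\nTests:\n{tests}\n\n"
            "Return a corrected test suite."
        )
        tests = generate_tests(prompt)
    return tests
```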
**Prompt Factors.** Through a combination of reviewing the literature and independently brainstorming, we come up with and present a list of factors that may cause prompts to result in different outputs in Appendix A. It is our hope that future researchers may find this list useful during literature review; however, we include it in the appendix for the sake of readability. **The Present Study.** Although there exist a variety of different factors, such as model size, training data quality, and code quality, that affect the quality of generated unit tests, the present study focuses on the effect of prompting on the quality of the generated tests. There are several reasons for this. Firstly, the prompt is one of the elements of the model over which the user is able to exercise a great deal of control. Therefore, our findings may be applied by any user, rather than only those with access to the internals of the model and with the ability to change them. Secondly, the ease at which users can change prompts means that they are able to tailor the prompts to suit their own unique needs and preferences. Thirdly, a study of the kinds of prompts that are effective in eliciting high-quality unit tests may provide useful insight into the nature of the model, since it may elucidate helpful general principles for eliciting high-quality responses. Such studies are also relatively uncommon in the literature. In the present study, we study the ability of Code Interpreter [6]1, a GPT-4 model with the ability to generate code, to generate unit tests for functions in the Quixbugs dataset [4]. In particular, we focus on how to engineer the prompt to achieve optimal results. We find evidence to suggest that, when prompt-engineering with Code Interpreter, the quality of the model's outputs, according to multiple metrics, is not sensitive to changes in minor, less relevant details in the prompt. Therefore, our findings suggest that users of Code Interpreter need not consider the details, so long as the basic information necessary for the model to perform its task are contained in the prompt. Footnote 1: Note that the usage is capped like with the default version of GPT-4 (the message displayed is that the usage of GPT-4 has reached the limit), and that the user interface ([https://chat.openai.com/?model=gpt-4-code-interpreter](https://chat.openai.com/?model=gpt-4-code-interpreter), as of August 2, 2023) classifies it under GPT-4, despite the blog post calling the underlying model a ChatGPT model, so we assume it to be a GPT-4 model. ## 2 Methodology The purpose of this study is to investigate how using different prompts affects the quality of generated unit tests. Therefore, we first generate the contents of the prompt that is fed into Code Interpreter, and then evaluate the output. Rather than giving subsequent prompts after the initial prompt, we only give it an initial prompt and evaluate the output from that prompt. This noniterative workflow of giving one prompt and receiving one reply per conversation simplifies the evaluation process and allows us greater freedom in experimenting with having the model regenerate its response to the same prompt multiple times, thus giving a more balanced and comprehensive view of its response to any given prompt. ### Prompt Generation When generating the content of the prompts, we consider the following dimensions along which the prompts may vary (for more details, see Appendices B and C): 1. 
_Format of Code Context._ We experiment by giving it the code context in either one of two formats: NL format, as a description of the function, or as the code of the focal function itself. The NL description is generated by Code Interpreter itself from the code of the incorrect implementation of the function body and simply copied and pasted into the prompt for generating the unit tests. In order to prevent the model from using that code when generating unit tests, we ask it to generate unit tests in a separate conversation window. The code itself comes from the Quixbugs dataset [4], which gives both a correct and incorrect implementation of each focal function. Because unit tests are conducted for the purpose of finding implementation errors, we give the incorrect implementation of the function when we give context in code format. All code is given in Python. Regardless of the format of the code context, we always include the function signature, which includes the function name and the names of the inputs. 2. _Number of Example Unit Tests Provided._ We experiment with giving the model formatting examples of zero, one, or two separate unit test cases. Regardless of the number of unit test case examples provided, we always provide adequate instruction for the expected format of the unit test cases, and in the prompt, we encourage the model to generate unit tests that are comprehensive and cover edge cases. 3. _Different Focal Functions._ Although it would be ideal to prompt with all of the focal functions in the dataset, we removed some for various reasons, such as: to ensure that the remaining functions were amenable to testing solely their inputs and outputs (_i.e._ they did not require testing of the intermediate processes, as was the case in functions such as depth- and breadth-first search, where the order of search is relevant), to filter out those for which a given input could have multiple correct outputs, to exclude those for which we suspected that the given correct implementation may not have been correct, to avoid functions whose inputs or outputs contained formatting not compatible with the testing framework producing accurate results (such as difficult-to-parse tuple or string expressions), etc. The subset of functions that remains can be found in Appendix D. 4. _Miscellaneous NL Comments._ We test whether the model produces better-written unit tests and catches the mistake in the incorrect implementation more often if the prompt contains comments such as "You are an expert programmer", which thus creates two distinct possibilities for the kinds of NL comments in the prompt. To see the code that was used to generate the prompts, and to view the prompt-generation process in more detail, please refer to Appendices B and C. We probe the model using every combination of the above 4 dimensions to generate prompts. For each prompt, we sample the model's output a total of 5 times. In cases when the model's output length exceeds the length of the preset token limit for a single response, we simply allow it to continue generating until it finishes its response. We collect data from August 1-16, 2023, inclusive. ### Output Evaluation When evaluating the output produced by the model, we check several components of the output: 1. _Correctness of Format._ We check whether the format, whether given as a downloadable file or an embedded code block, conforms to the format specified in the prompt. 
If it does not, then we do not use those test cases, as they are incompatible with the testing framework. Therefore, all data about the provided test cases comes solely from those generated with the correct format. 2. _Whether the Mistake is Corrected._ We observe that the model will sometimes catch the mistake in the incorrect implementation. We consider the model to have corrected the mistake if and only if it either correctly re-implements the function or it points out the error and the exact changes necessary to fix it. 3. _Correctness of Test Cases._ We check whether the expected output that the model gives in the test cases matches the output of the correct implementation of the function. If so, then we consider the test case to be correct. 4. _Coverage of Test Cases._ We examine whether the given test cases are _failing_, which we define as a test case in which the correct and incorrect implementations give different outputs. Therefore, failing test cases are those that are capable of uncovering errors in the incorrect implementation, if the correct output is known. Note that a failing test case is failing, regardless of whether the expected output, given in the test case, is correct; the input is the important variable for determining whether a test case is failing. 5. _Correct Coverage._ Because it is optimal for failing test cases to also be correct, we also combine the above two points by checking which test cases are both correct and failing. _Error Handling_ When the function throws an error as a result of the input from a test case, we handle it as follows: if the correct implementation throws an error, regardless of the behavior of the incorrect implementation, then we assume that the input does not satisfy the function's precondition, which means the function's behavior is undefined. Therefore, we consider the test case to be correct, since any output is correct, and not failing, because causing undefined behavior does not reveal the incorrectness of the incorrect implementation. If the correct implementation does not throw an error, but the incorrect implementation throws, then we consider the test case to be a failing test case. ## 3 Results We present data about all of the test cases, as whole, in Table 1. Based on the data, in order to have at least 10 expected correct failing test cases, it is advisable to resample at least 4 times from the same prompt. Upon performing t-tests on the resultant data, we find that, with a few exceptions, between any two types of prompt, the variability from reprompting and the diversity of focal functions provided was substantially greater than that provided by differences in prompting style. In other words, for most prompts, there was no significant evidence of a difference in any of the measured quantities arising from changes to the prompting style2. Footnote 2: The exceptions are that prompts that included the code, did not include miscellaneous NL comments, and included 1 or 2 output examples showed significantly more correctly formatted outputs than prompts that include an NL function description, miscellaneous NL comments, and no output examples (p = 0.000039 if the former prompt has 1 example; p = 0.000042 if the former prompt has 2 examples, which is significant at the alpha = 0.05 level). 
However, despite the low p-values of these differences, we do not focus much attention on their significance, because the mean difference is small and because we do not believe that it will bring much of a change for the end user, since it is a difference in only a single metric. Nevertheless, in general we find that prompts that give the code context directly (_i.e._ as code), do not include miscellaneous NL comments, and include 2 output examples are generally associated with better performance metrics, while prompts that include the code context as an NL description, include miscellaneous NL comments, and do not have output examples have the opposite effect, though we note that the difference is not large. Thus, we advise that future Code Interpreter users employ the former prompting style when creating prompts. For the complete data collected, please see Appendix E. \begin{table} \begin{tabular}{l||c c c c} \hline \hline & Mean No. of & St. Dev. in No. of & Mean Frac. & St. Dev. in Frac. \\ & TCs per Response & TCs per Response & of TCs & of TCs \\ \hline CFORM & 7.78 & 3.95 & 1.00 & 0.00 \\ CORR & 6.13 & 4.18 & 0.79 & 0.54 \\ FAIL & 4.43 & 3.72 & 0.57 & 0.48 \\ CORR and FAIL & 3.11 & 3.59 & 0.40 & 0.46 \\ \hline \hline \end{tabular} \end{table} Table 1: Data collected about the test cases together, as a whole. The term fraction refers to the number of test cases with the given characteristic, per correctly-formatted test case. For example, the mean fraction of test cases per response for correct test cases refers to the proportion of test cases that were correct (on average, considering only correctly-formatted responses), which is computed by first aggregating the data and then averaging, rather than in the opposite order. Abbreviations used: CORR: correct test cases; FAIL: failing test cases; CFORM: correctly formatted; No.: number; frac.: fraction; st. dev.: standard deviation; TC: test case. ### Observations In addition to the data above, we also make several qualitative observations about the responses produced by Code Interpreter, especially with regard to the format of the responses generated. Firstly, we observe that, when the model generates test cases that are not in the format that we specified, the output is often in a similar, but more standard, format. In fact, the model response will sometimes point out that the format we specify is nonstandard JSON. It therefore appears that the model has an inclination towards working with more standard formats, suggesting that it would be beneficial to ask for standard, rather than nonstandard, formats, unless there is a strong reason to use a nonstandard format, such as testing the model's ability to generalize to nonstandard formats that do not appear often in the training data. Secondly, the model is often not able to format test cases with multiple or nested sets of brackets correctly. Often this happens in the test cases for sorting functions, since their input is a single list of numbers (_e.g._ [1, 2, 3]) rather than several separate numbers (_e.g._ 1, 2, 3). More generally, the model often produces test cases that are marked as incorrect by the testing framework, but that would have been correct if they were correctly formatted. Thus, it sometimes misunderstands the expected test case format, despite the formatting instructions that we provide.
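Formatting failures of this kind can be caught mechanically. The snippet below is a minimal sketch of such a check; the JSON schema it assumes (an object with "input" and "expected_output" fields) is a hypothetical stand-in for the nonstandard format used in the study, and the function names are our own.

```python
# Hedged sketch of a format check for generated test cases. The schema used
# here is hypothetical and not the exact nonstandard format from the study.
import json

def check_test_case_format(raw: str) -> list[str]:
    """Return a list of formatting problems found in one JSON test case."""
    problems = []
    try:
        case = json.loads(raw)
    except json.JSONDecodeError as err:
        return [f"not valid JSON: {err}"]
    if not isinstance(case, dict):
        return ["top-level value must be a JSON object"]
    if "input" not in case or "expected_output" not in case:
        problems.append('missing "input" or "expected_output" field')
    elif not isinstance(case["input"], list):
        # e.g. a sorting function takes one argument, a list of numbers, so
        # "input" should be a list containing that single list of arguments.
        problems.append('"input" must be a list of arguments')
    return problems

# Example: the argument list is missing its outer brackets.
print(check_test_case_format('{"input": 3, "expected_output": [1, 2, 3]}'))
# -> ['"input" must be a list of arguments']
```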
Thirdly, and most interestingly, when the model itself writes Python code containing an error, it will often be able to correctly explain the error message and make the corresponding correction because it has access to its own internal Python environment. Therefore, to increase the probability of the model producing correctly-formatted test cases, we conjecture that it would be helpful to provide Python code designed to check whether the test cases are correctly formatted, allowing the model to check and correct the format of the cases that it generates. Even though providing explicit and specific formatting instructions helps the model produce correctly-formatted test cases, we conjecture that providing the format-checking code will be more effective, especially considering that the model does not always follow the formatting instructions. Alternatively, asking the model to provide the unit test as Python code may also be effective, since it would be able to run that code in its integrated Python environment. However, when the focal function causes an assertion error in the unit test code (which could happen when the actual output does not match the expected output) it would be important for the model to be able to differentiate that kind of error from an error in the unit test code itself. ## 4 Discussion Although previous works with previous LLMs have found that the quality of those models' outputs are noticeably influenced by the modification of details in the prompts, our findings are to the contrary, suggesting that the present model under test is more robust to such changes. Even though we do not test for all of the factors suggested in previous studies, we posit that Code Interpreter will not be as sensitive to such details as previous models are. However, it is important to keep in mind that there exist several basic elements that have always been present in all of our prompts. For example, we give clear instructions for how we expect the output to be formatted, a description of the focal function derived from its code (if not the code of the focal function itself), and resampled multiple outputs from the same initial prompt. We believe these elements to be necessary for the model to produce an output that conforms to the user's requirements. We speculate that this change from previous models is due to a difference in the training process that GPT-4 underwent, as compared to previous models, which thus allowed it to become robust to less relevant details and only be affected by more important details, much like a human programmer would. Because unit test generation is a well-defined and relatively common task for programmers, the abundance of available training data would only have served to accelerate this effect. However, without knowing more about the training process, it is difficult to make a definitive statement. As AI nears or surpasses human levels of intelligence in increasingly many domains, it is ever more important to pay close attention to the risks inherent in AI. ### Limitations As with any study, there exist limitations to ours, which we describe and address below. First, we focus primarily on resampling outputs from the same initial prompt, rather than using followup prompts to correct or enhance the existing output. However, we note that our approach does not require that we tailor a followup prompt specifically suited to induce the model to correct the existing output. 
Furthermore, the need to generate a followup response in order to repair the original response is also fulfilled by regenerating more responses from the initial prompt. This is because, of the multiple responses generated, it is exceedingly likely that at least one is of the correct format, since generating more responses will increase the chances that at least one is in the correct format. Second, our testing framework is limited by the kinds of unit tests that it can run. Therefore, we picked a subset of the functions that were compatible with the testing framework. This subset was characterized by functions that were amenable to simple input/output testing. However, a study on how the results might change if a more diverse set of functions (such as those requiring an in-depth examination of the intermediate stages, like breadth- and depth-first search) and test cases (such as those in which one of multiple possible answers could be correct) were included would be quite informative. Furthermore, an improvement in the way that our testing framework handled errors thrown by the focal function (either the correct or incorrect implementation) would also have provided a more accurate view of the behavior of both implementations under the generated test cases. ### Future Work An interesting future research direction to explore would be to investigate how followup prompts affect the quality of generated unit tests, and how such prompts can be designed to most efficiently do so. Furthermore, it would be worth investigating how models fare against more complex codebases containing multiple, interdependent functions or functions that are not commonly in the training data, especially if their doc comments are sparse or nonexistent. Additionally, a study on the degree to which the test cases were memorized from the training data, or how the prompt could be improved if the model were asked to improve it, would be informative. ### Alignment and Risks With the rise of AI capabilities, AI developers and researchers must take care to prevent the misuse and misalignment of increasingly powerful AI systems, especially as they near the level of artificial general intelligence (AGI). If AGI is realized, then misaligned agents have the potential to cause great, possibly even existential, harm to society, especially considering the increasing reliance on AI systems. Our work seeks to address this by shedding light on best practices for prompting AI models to construct more rigorous and comprehensive unit tests. Although there is still much room for improvement before certifiably safe and aligned AGI systems can be made, it is our hope that our work can make and inspire progress towards reaching that goal by laying the foundations for the automatic generation of safety checks through disseminating widely-applicable knowledge of unit test prompting best practices. However, it is also important to be mindful of unexpected capabilities that emerge from large models and to be prepared for the unforeseen challenges that may arise when attempting to make AI systems aligned. To deal with these challenges as they arise will be crucial for successfully working towards the goal of creating safe and aligned AGI systems. ## 5 Conclusion In this study, we investigate the effect of prompting on the quality of unit tests generated by Code Interpreter. 
In particular, we vary the format in which the code context is provided, the number of formatting examples for the expected output, and whether we include miscellaneous NL comments in the prompt telling the model that it is an expert coder. Although these factors do not have a significant effect on the quality of the generated unit tests, we make several observations about the outputs of Code Interpreter, the most interesting of which is that its ability to run and correct its own Python code is quite effective and suggests that including code to check for the correctness of the output format would be useful. Future work could explore the effect of followup prompts on the quality of generated unit tests. However, as AI continues to advance, it is important that researchers increasingly focus their attention on preventing harms from AI-related issues, such as misalignment. ## 6 Acknowledgements We would like to express gratitude to Zachary Rudolph and the University of Chicago Existential Risk Laboratory (XLab) for providing the funding and facilities necessary to conduct this research. In particular, Zack's mentorship, feedback, and support proved invaluable for this project, and VL feels that the XLab Summer Research Fellowship has imparted to him a substantial amount of knowledge about existential risk, which is an important concern today, even if he also thinks that Zack calls on him too often.
2309.05473
Machine learning the dimension of a Fano variety
Fano varieties are basic building blocks in geometry - they are `atomic pieces' of mathematical shapes. Recent progress in the classification of Fano varieties involves analysing an invariant called the quantum period. This is a sequence of integers which gives a numerical fingerprint for a Fano variety. It is conjectured that a Fano variety is uniquely determined by its quantum period. If this is true, one should be able to recover geometric properties of a Fano variety directly from its quantum period. We apply machine learning to the question: does the quantum period of X know the dimension of X? Note that there is as yet no theoretical understanding of this. We show that a simple feed-forward neural network can determine the dimension of X with 98% accuracy. Building on this, we establish rigorous asymptotics for the quantum periods of a class of Fano varieties. These asymptotics determine the dimension of X from its quantum period. Our results demonstrate that machine learning can pick out structure from complex mathematical data in situations where we lack theoretical understanding. They also give positive evidence for the conjecture that the quantum period of a Fano variety determines that variety.
Tom Coates, Alexander M. Kasprzyk, Sara Veneziale
2023-09-11T14:13:30Z
http://arxiv.org/abs/2309.05473v1
# Machine learning the dimension of a Fano variety ###### Abstract. Fano varieties are basic building blocks in geometry - they are 'atomic pieces' of mathematical shapes. Recent progress in the classification of Fano varieties involves analysing an invariant called the quantum period. This is a sequence of integers which gives a numerical fingerprint for a Fano variety. It is conjectured that a Fano variety is uniquely determined by its quantum period. If this is true, one should be able to recover geometric properties of a Fano variety directly from its quantum period. We apply machine learning to the question: does the quantum period of \(X\) know the dimension of \(X\)? Note that there is as yet no theoretical understanding of this. We show that a simple feed-forward neural network can determine the dimension of \(X\) with \(98\%\) accuracy. Building on this, we establish rigorous asymptotics for the quantum periods of a class of Fano varieties. These asymptotics determine the dimension of \(X\) from its quantum period. Our results demonstrate that machine learning can pick out structure from complex mathematical data in situations where we lack theoretical understanding. They also give positive evidence for the conjecture that the quantum period of a Fano variety determines that variety. Key words and phrases: Fano varieties, quantum periods, mirror symmetry, machine learning 2020 Mathematics Subject Classification: 14J45 (Primary); 68T07 (Secondary) ## 1. Introduction Algebraic geometry describes shapes as the solution sets of systems of polynomial equations, and manipulates or analyses a shape \(X\) by manipulating or analysing the equations that define \(X\). This interplay between algebra and geometry has applications across mathematics and science; see e.g. [3, 57, 22, 53]. Shapes defined by polynomial equations are called _algebraic varieties_. Fano varieties are a key class of algebraic varieties. They are, in a precise sense, atomic pieces of mathematical shapes [45, 46]. Fano varieties also play an essential role in string theory. They provide, through their 'anticanonical sections', the main construction of the Calabi-Yau manifolds which give geometric models of spacetime [6, 55, 30]. The classification of Fano varieties is a long-standing open problem. The only one-dimensional example is a line; this is classical. The ten smooth two-dimensional Fano varieties were found by del Pezzo in the 1880s [19]. The classification of smooth Fano varieties in dimension three was a triumph of 20th century mathematics: it combines work by Fano in the 1930s, Iskovskikh in the 1970s, and Mori-Mukai in the 1980s [51, 52, 38, 24, 39, 40]. Beyond this, little is known, particularly for the important case of Fano varieties that are not smooth. A new approach to Fano classification centres around a set of ideas from string theory called Mirror Symmetry [31, 35, 7, 15]. From this perspective, the key invariant of a Fano variety is its _regularized quantum period_ [8] \[\widehat{G}_{X}(t)=\sum_{d=0}^{\infty}c_{d}t^{d} \tag{1}\] This is a power series with coefficients \(c_{0}=1\), \(c_{1}=0\), and \(c_{d}=r_{d}d!\), where \(r_{d}\) is a certain Gromov-Witten invariant of \(X\). Intuitively speaking, \(r_{d}\) is the number of rational curves in \(X\) of degree \(d\) that pass through a fixed generic point and have a certain constraint on their complex structure.
In general \(r_{d}\) can be a rational number, because curves with a symmetry group of order \(k\) are counted with weight \(1/k\), but in all known cases the coefficients \(c_{d}\) in (1) are integers. It is expected that the regularized quantum period \(\widehat{G}_{X}\) uniquely determines \(X\). This is true (and proven) for smooth Fano varieties in low dimensions, but is unknown in dimensions four and higher, and for Fano varieties that are not smooth. In this paper we will treat the regularized quantum period as a numerical signature for the Fano variety \(X\), given by the sequence of integers \((c_{0},c_{1},\ldots)\). _A priori_ this looks like an infinite amount of data, but in fact there is a differential operator \(L\) such that \(L\widehat{G}_{X}\equiv 0\); see e.g. [8, Theorem 4.3]. This gives a recurrence relation that determines all of the coefficients \(c_{d}\) from the first few terms, so the regularized quantum period \(\widehat{G}_{X}\) contains only a finite amount of information. Encoding a Fano variety \(X\) by a vector in \(\mathbb{Z}^{m+1}\) given by finitely many coefficients \((c_{0},c_{1},\ldots,c_{m})\) of the regularized quantum period allows us to investigate questions about Fano varieties using machine learning. In this paper we ask whether the regularized quantum period of a Fano variety \(X\) knows the dimension of \(X\). There is currently no viable theoretical approach to this question. Instead we use machine learning methods applied to a large dataset to argue that the answer is probably yes, and then prove that the answer is yes for toric Fano varieties of low Picard rank. The use of machine learning was essential to the formulation of our rigorous results (Theorems 5 and 6 below). This work is therefore proof-of-concept for a larger program, demonstrating that machine learning can uncover previously unknown structure in complex mathematical datasets. Thus the Data Revolution, which has had such impact across the rest of science, also brings important new insights to pure mathematics [18, 21, 34, 49, 58, 59]. This is particularly true for large-scale classification questions, e.g. [1, 10, 14, 17, 47], where these methods can potentially reveal both the classification itself and structural relationships within it. ## 2. Results ### Algebraic varieties can be smooth or have singularities Depending on their equations, algebraic varieties can be smooth (as in Figure 1(a)) or have singularities (as in Figure 1(b)). In this paper we consider algebraic varieties over the complex numbers. The equations in Figures 1(a) and 1(b) therefore define complex surfaces; however, for ease of visualisation, we have plotted only the points on these surfaces with co-ordinates that are real numbers. Most of the algebraic varieties that we consider below will be singular, but they all have a class of singularities called _terminal quotient singularities_. This is the most natural class of singularities to allow from the point of view of Fano classification [46]. Terminal quotient singularities are very mild; indeed, in dimensions one and two, an algebraic variety has terminal quotient singularities if and only if it is smooth. ### The Fano varieties that we consider The fundamental example of a Fano variety is projective space \(\mathbb{P}^{N-1}\). 
This is a quotient of \(\mathbb{C}^{N}\setminus\{0\}\) by the group \(\mathbb{C}^{\times}\), where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \((x_{1},x_{2},\ldots,x_{N})\) and \((\lambda x_{1},\lambda x_{2},\ldots,\lambda x_{N})\). The resulting algebraic variety is smooth and has dimension \(N-1\). We will consider generalisations of projective spaces called _weighted projective spaces_ and _toric varieties of Picard rank two_. A detailed introduction to these spaces is given in SSA. To define a weighted projective space, choose positive integers \(a_{1},a_{2},\ldots,a_{N}\) such that any subset of size \(N-1\) has no common factor, and consider \[\mathbb{P}(a_{1},a_{2},\ldots,a_{N})=(\mathbb{C}^{N}\setminus\{0\})/\mathbb{ C}^{\times}\] Figure 1. Algebraic varieties and their equations: (a) a smooth example; (b) an example with a singular point. where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \[(x_{1},x_{2},\ldots,x_{N})\quad\text{and}\quad(\lambda^{a_{1}}x_{1},\lambda^{a_{ 2}}x_{2},\ldots,\lambda^{a_{N}}x_{N})\] in \(\mathbb{C}^{N}\setminus\{0\}\). The quotient \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is an algebraic variety of dimension \(N-1\). A general point of \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is smooth, but there can be singular points. Indeed, a weighted projective space \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is smooth if and only if \(a_{i}=1\) for all \(i\), that is, if and only if it is a projective space. To define a toric variety of Picard rank two, choose a matrix \[\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N}\\ b_{1}&b_{2}&\cdots&b_{N}\end{pmatrix} \tag{2}\] with non-negative integer entries and no zero columns. This defines an action of \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) on \(\mathbb{C}^{N}\), where \((\lambda,\mu)\in\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) identifies the points \[(x_{1},x_{2},\ldots,x_{N})\quad\text{and}\quad(\lambda^{a_{1}}\mu^{b_{1}}x_{1 },\lambda^{a_{2}}\mu^{b_{2}}x_{2},\ldots,\lambda^{a_{N}}\mu^{b_{N}}x_{N})\] in \(\mathbb{C}^{N}\). Set \(a=a_{1}+a_{2}+\cdots+a_{N}\) and \(b=b_{1}+b_{2}+\cdots+b_{N}\), and suppose that \((a,b)\) is not a scalar multiple of \((a_{i},b_{i})\) for any \(i\). This determines linear subspaces \[S_{+}=\{(x_{1},x_{2},\ldots,x_{N})\mid x_{i}=0\text{ if }b_{i}/a_{i}<b/a\}\] \[S_{-}=\{(x_{1},x_{2},\ldots,x_{N})\mid x_{i}=0\text{ if }b_{i}/a_{i}>b/a\}\] of \(\mathbb{C}^{N}\), and we consider the quotient \[X=(\mathbb{C}^{N}\setminus S)/(\mathbb{C}^{\times}\times\mathbb{C}^{\times}) \tag{3}\] where \(S=S_{+}\cup S_{-}\). The quotient \(X\) is an algebraic variety of dimension \(N-2\) and second Betti number \(b_{2}(X)\leq 2\). If, as we assume henceforth, the subspaces \(S_{+}\) and \(S_{-}\) both have dimension two or more then \(b_{2}(X)=2\), and thus \(X\) has Picard rank two. In general \(X\) will have singular points, the precise form of which is determined by the weights in (2). There are closed formulas for the regularized quantum period of weighted projective spaces and toric varieties [9]. 
We have \[\widehat{G}_{\mathbb{P}}(t)=\sum_{k=0}^{\infty}\frac{(ak)!}{(a_{1}k)!(a_{2}k )!\cdots(a_{N}k)!}t^{ak} \tag{4}\] where \(\mathbb{P}=\mathbb{P}(a_{1},\ldots,a_{N})\) and \(a=a_{1}+a_{2}+\cdots+a_{N}\), and \[\widehat{G}_{X}(t)=\!\!\!\sum_{(k,l)\in\mathbb{Z}^{2}\cap\mathbb{C}}\!\!\frac {(ak+b)!}{(a_{1}k+b_{1}l)!\cdots(a_{N}k+b_{N}l)!}t^{ak+bl} \tag{5}\] where the weights for \(X\) are as in (2), and \(C\) is the cone in \(\mathbb{R}^{2}\) defined by the equations \(a_{i}x+b_{i}y\geq 0\), \(i\in\{1,2,\ldots,N\}\). Formula (4) implies that, for weighted projective spaces, the coefficient \(c_{d}\) from (1) is zero unless \(d\) is divisible by \(a\). Formula (5) implies that, for toric varieties of Picard rank two, \(c_{d}=0\) unless \(d\) is divisible by \(\gcd\{a,b\}\). _Data generation: weighted projective spaces._ The following result characterises weighted projective spaces with terminal quotient singularities; this is [43, Proposition 2.3]. **Proposition 1**.: _Let \(X=\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) be a weighted projective space of dimension at least three. Then \(X\) has terminal quotient singularities if and only if_ \[\sum_{i=1}^{N}\{ka_{i}/a\}\in\{2,\ldots,N-2\}\] _for each \(k\in\{2,\ldots,a-2\}\). Here \(a=a_{1}+a_{2}+\cdots+a_{N}\) and \(\{q\}\) denotes the fractional part \(q-\lfloor q\rfloor\) of \(q\in\mathbb{Q}\)._ A simpler necessary condition is given by [42, Theorem 3.5]: **Proposition 2**.: _Let \(X=\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) be a weighted projective space of dimension at least two, with weights ordered \(a_{1}\leq a_{2}\leq\ldots\leq a_{N}\). If \(X\) has terminal quotient singularities then \(a_{i}/a<1/(N-i+2)\) for each \(i\in\{3,\ldots,N\}\)._ Weighted projective spaces with terminal quotient singularities have been classified in dimensions up to four [41, 43]. Classifications in higher dimensions are hindered by the lack of an effective upper bound on \(a\). We randomly generated \(150\,000\) distinct weighted projective spaces with terminal quotient singularities, and with dimension up to \(10\), as follows. We generated random sequences of weights \(a_{1}\leq a_{2}\leq\ldots\leq a_{N}\) with \(a_{N}\leq 10N\) and discarded them if they failed to satisfy any one of the following: 1. for each \(i\in\{1,\ldots,N\}\), \(\gcd\{a_{1},\ldots,\widehat{a}_{i},\ldots,a_{N}\}=1\), where \(\widehat{a}_{i}\) indicates that \(a_{i}\) is omitted; 2. \(a_{i}/a<1/(N-i+2)\) for each \(i\in\{3,\ldots,N\}\); 3. \(\sum_{i=1}^{N}\{ka_{i}/a\}\in\{2,\ldots,N-2\}\) for each \(k\in\{2,\ldots,a-2\}\). Condition (i) here was part of our definition of weighted projective spaces above; it ensures that the set of singular points in \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) has dimension at most \(N-2\), and also that weighted projective spaces are isomorphic as algebraic varieties if and only if they have the same weights. Condition (ii) is from Proposition 2; it efficiently rules out many non-terminal examples. Condition (iii) is the necessary and sufficient condition from Proposition 1. We then deduplicated the sequences. The resulting sample sizes are summarised in Table 1. _Data generation: toric varieties._ Deduplicating randomly-generated toric varieties of Picard rank two is harder than deduplicating randomly generated weighted projective spaces, because different weight matrices in (2) can give rise to the same toric variety. Toric varieties are uniquely determined, up to isomorphism, by a combinatorial object called a _fan_[25]. 
A fan is a collection of cones, and one can determine the singularities of a toric variety \(X\) from the geometry of the cones in the corresponding fan. We randomly generated \(200\,000\) distinct toric varieties of Picard rank two with terminal quotient singularities, and with dimension up to \(10\), as follows. We randomly generated weight matrices, as in (2), such that \(0\leq a_{i},b_{j}\leq 5\). We then discarded the weight matrix if any column was zero, and otherwise formed the corresponding fan \(F\). We discarded the weight matrix unless: 1. \(F\) had \(N\) rays; 2. each cone in \(F\) was simplicial (i.e. has number of rays equal to its dimension); 3. the convex hull of the primitive generators of the rays of \(F\) contained no lattice points other than the rays and the origin. Conditions (i) and (ii) together guarantee that \(X\) has Picard rank two, and are equivalent to the conditions on the weight matrix in (2) given in our definition. Conditions (ii) and (iii) guarantee that \(X\) has terminal quotient singularities. We then deduplicated the weight matrices according to the isomorphism type of \(F\), by putting \(F\) in normal form [48, 32]. See Table 1 for a summary of the dataset. \begin{table} \begin{tabular}{c r r r r r} \hline \hline \multicolumn{3}{c}{Weighted projective spaces} & \multicolumn{3}{c}{Rank-two toric varieties} \\ \hline Dimension & Sample size & Percentage & Dimension & Sample size & Percentage \\ \hline 1 & 1 & 0.001 & & & \\ 2 & 1 & 0.001 & 2 & 2 & 0.001 \\ 3 & 7 & 0.005 & 3 & 17 & 0.009 \\ 4 & 8 936 & 5.957 & 4 & 758 & 0.379 \\ 5 & 23 584 & 15.723 & 5 & 6 050 & 3.025 \\ 6 & 23 640 & 15.760 & 6 & 19 690 & 9.845 \\ 7 & 23 700 & 15.800 & 7 & 35 395 & 17.698 \\ 8 & 23 469 & 15.646 & 8 & 42 866 & 21.433 \\ 9 & 23 225 & 15.483 & 9 & 47 206 & 23.603 \\ 10 & 23 437 & 15.625 & 10 & 48 016 & 24.008 \\ \hline Total & 150 000 & & Total & 200 000 & \\ \hline \hline \end{tabular} \end{table} Table 1. The distribution by dimension in our datasets. _Data analysis: weighted projective spaces._ We computed an initial segment \((c_{0},c_{1},\dots,c_{m})\) of the regularized quantum period for all the examples in the sample of \(150\,000\) terminal weighted projective spaces, with \(m\approx 100\,000\). The non-zero coefficients \(c_{d}\) appeared to grow exponentially with \(d\), and so we considered \(\{\log c_{d}\}_{d\in S}\) where \(S=\{d\in\mathbb{Z}_{\geq 0}\mid c_{d}\neq 0\}\). To reduce dimension we fitted a linear model to the set \(\{(d,\log c_{d})\mid d\in S\}\) and used the slope and intercept of this model as features; see Figure 2(a) for a typical example. Plotting the slope against the \(y\)-intercept and colouring datapoints according to the dimension we obtain Figure 3(a): note the clear separation by dimension. A Support Vector Machine (SVM) trained on \(10\%\) of the slope and \(y\)-intercept data predicted the dimension of the weighted projective space with an accuracy of \(99.99\%\). Full details are given in SSSB-C. _Data analysis: toric varieties._ As before, the non-zero coefficients \(c_{d}\) appeared to grow exponentially with \(d\), so we fitted a linear model to the set \(\{(d,\log c_{d})\mid d\in S\}\) where \(S=\{d\in\mathbb{Z}_{\geq 0}\mid c_{d}\neq 0\}\). We used the slope and intercept of this linear model as features. 
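The feature extraction is the same for both families of varieties. As a minimal sketch (not the pipeline used to build the dataset), the snippet below computes the non-zero coefficients \(c_{d}\) of formula (4) for a single weighted projective space, using log-factorials to avoid overflow, and fits a linear model to the pairs \((d,\log c_{d})\) to produce the slope and \(y\)-intercept features. The example weights and the number of coefficients computed are arbitrary illustrative choices.

```python
# Hedged sketch: slope/intercept features from the regularized quantum period
# of a weighted projective space P(a_1, ..., a_N), via formula (4). The
# coefficients are handled through lgamma (log-factorials), so the very large
# integers c_d are never stored exactly.
import numpy as np
from math import lgamma

def log_period_coefficients(weights, k_max=1000):
    """Return (degrees d = a*k, log c_d) for the non-zero coefficients c_{ak}."""
    a = sum(weights)
    ds, log_cs = [], []
    for k in range(1, k_max + 1):
        # log c_{ak} = log (ak)! - sum_i log (a_i k)!
        log_c = lgamma(a * k + 1) - sum(lgamma(w * k + 1) for w in weights)
        ds.append(a * k)
        log_cs.append(log_c)
    return np.array(ds), np.array(log_cs)

def slope_intercept_features(weights, k_max=1000):
    """Fit log c_d ~ slope * d + intercept and return (slope, intercept)."""
    d, log_c = log_period_coefficients(weights, k_max)
    slope, intercept = np.polyfit(d, log_c, deg=1)
    return slope, intercept

# Example with arbitrary weights (not necessarily terminal):
print(slope_intercept_features([1, 1, 2, 3, 5]))
```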
**Example 3**.: In Figure 2(b) we plot a typical example: the logarithm of the regularized quantum period sequence for the nine-dimensional toric variety with weight matrix \[\left(\begin{array}{cccccccccc}1&2&5&3&3&3&0&0&0&0&0\\ 0&0&0&3&4&4&1&2&2&3&4\end{array}\right)\] Figure 3. The slopes and \(y\)-intercepts from the linear models: (a) for weighted projective spaces with terminal quotient singularities. The colour records the dimension of the weighted projective space and the circled points indicate projective spaces. (b) for toric varieties of Picard rank two with terminal quotient singularities. The colour records the dimension of the toric variety. Figure 2. The logarithm of the non-zero period coefficients \(c_{d}\): (a) for a typical weighted projective space; (b) for the toric variety of Picard rank two from Example 3. along with the linear approximation. We see a periodic deviation from the linear approximation; the magnitude of this deviation decreases as \(d\) increases (not shown). To reduce computational costs, we computed pairs \((d,\log c_{d})\) for \(1000\leq d\leq 20\,000\) by sampling every \(100\)th term. We discarded the beginning of the period sequence because of the noise it introduces to the linear regression. In cases where the sampled coefficient \(c_{d}\) is zero, we considered instead the next non-zero coefficient. The resulting plot of slope against \(y\)-intercept, with datapoints coloured according to dimension, is shown in Figure 3(b). We analysed the standard errors for the slope and \(y\)-intercept of the linear model. The standard errors for the slope are small compared to the range of slopes, but in many cases the standard error \(s_{\text{int}}\) for the \(y\)-intercept is relatively large. As Figure 4 illustrates, discarding data points where the standard error \(s_{\text{int}}\) for the \(y\)-intercept exceeds some threshold reduces apparent noise. This suggests that the underlying structure is being obscured by inaccuracies in the linear regression caused by oscillatory behaviour in the initial terms of the quantum period sequence; these inaccuracies are concentrated in the \(y\)-intercept of the linear model. Note that restricting attention to those data points where \(s_{\text{int}}\) is small also greatly decreases the range of \(y\)-intercepts that occur. As Example 4 and Figure 5 suggest, this reflects both transient oscillatory behaviour and also the presence of a subleading term in the asymptotics of \(\log c_{d}\) which is missing from our feature set. We discuss this further below. **Example 4**.: Consider the toric variety with Picard rank two and weight matrix \[\begin{pmatrix}1&10&5&13&8&12&0\\ 0&0&3&8&5&14&1\end{pmatrix}\] This is one of the outliers in Figure 3(b). The toric variety is five-dimensional, and has slope \(1.637\) and \(y\)-intercept \(-62.64\). The standard errors are \(4.246\times 10^{-4}\) for the slope and \(5.021\) for the \(y\)-intercept. We computed the first \(40\,000\) coefficients \(c_{d}\) in (1). As Figure 5 shows, as \(d\) increases the \(y\)-intercept of the linear model increases to \(-28.96\) and \(s_{\text{int}}\) decreases to \(0.7877\). At the same time, the slope of the linear model remains more or less unchanged, decreasing to \(1.635\). This supports the idea that computing (many) more coefficients \(c_{d}\) would significantly reduce noise in Figure 3(b). In this example, even \(40\,000\) coefficients may not be enough. 
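The classification step can be sketched as follows, assuming the slope and \(y\)-intercept features and the dimension labels have already been assembled into arrays. The train/test split, kernel, and hyperparameters shown are illustrative defaults rather than the exact configuration behind the reported accuracies.

```python
# Hedged sketch: predicting the dimension from the (slope, y-intercept)
# features with an SVM and a Random Forest, as in the analysis above.
# X has shape (n_samples, 2); y holds the integer dimensions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def evaluate_classifiers(X, y, train_fraction=0.7, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_fraction, random_state=seed, stratify=y)

    svm = make_pipeline(StandardScaler(), SVC())   # RBF kernel by default
    rfc = RandomForestClassifier(random_state=seed)

    results = {}
    for name, model in [("SVM", svm), ("RFC", rfc)]:
        model.fit(X_tr, y_tr)
        results[name] = model.score(X_te, y_te)    # mean accuracy on held-out data
    return results

# Usage with random placeholder data (replace with the real feature arrays):
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(500, 2))
y_demo = rng.integers(6, 11, size=500)
print(evaluate_classifiers(X_demo, y_demo))
```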
Computing many more coefficients \(c_{d}\) across the whole dataset would require impractical amounts of computation time. In the example above, which is typical in this regard, increasing the number of coefficients computed from \(20\,000\) to \(40\,000\) increased the computation time by a factor of more than \(10\). Instead we restrict to those toric varieties of Picard rank two such that the \(y\)-intercept standard error \(s_{\text{int}}\) is less than \(0.3\); this retains \(67\,443\) of the \(200\,000\) datapoints. We used \(70\%\) of the slope and \(y\)-intercept data in the restricted dataset for model training, and the rest for validation. An SVM model predicted the dimension of the toric variety with an accuracy of \(87.7\%\), and a Random Forest Classifier (RFC) predicted the dimension with an accuracy of \(88.6\%\). Figure 4. The slopes and \(y\)-intercepts from the linear model. This is as in Figure 3(b), but plotting only data points for which the standard error \(s_{\text{int}}\) for the \(y\)-intercept satisfies \(s_{\text{int}}<0.3\). The colour records the dimension of the toric variety. _Neural networks._ Neural networks do not handle unbalanced datasets well. We therefore removed the toric varieties of dimensions \(3\), \(4\), and \(5\) from our data, leaving \(61\,164\) toric varieties of Picard rank two with terminal quotient singularities and \(s_{\mathrm{int}}<0.3\). This dataset is approximately balanced by dimension. A Multilayer Perceptron (MLP) with three hidden layers of sizes \((10,30,10)\) using the slope and intercept as features predicted the dimension with \(89.0\%\) accuracy. Since the slope and intercept give good control over \(\log c_{d}\) for \(d\gg 0\), but not for small \(d\), it is likely that the coefficients \(c_{d}\) with \(d\) small contain extra information that the slope and intercept do not see. Supplementing the feature set by including the first \(100\) coefficients \(c_{d}\) as well as the slope and intercept increased the accuracy of the prediction to \(97.7\%\). Full details can be found in SSSB-C. _From machine learning to rigorous analysis._ Elementary "out of the box" models (SVM, RFC, and MLP) trained on the slope and intercept data alone already gave a highly accurate prediction for the dimension. Furthermore even for the many-feature MLP, which was the most accurate, sensitivity analysis using SHAP values [50] showed that the slope and intercept were substantially more important to the prediction than any of the coefficients \(c_{d}\): see Figure 6. This suggested that the dimension of \(X\) might be visible from a rigorous estimate of the growth rate of \(\log c_{d}\). In SS3 we establish asymptotic results for the regularized quantum period of toric varieties with low Picard rank, as follows. These results apply to any weighted projective space or toric variety of Picard rank two: they do not require a terminality hypothesis. Note, in each case, the presence of a subleading logarithmic term in the asymptotics for \(\log c_{d}\). **Theorem 5**.: _Let \(X\) denote the weighted projective space \(\mathbb{P}(a_{1},\ldots,a_{N})\), so that the dimension of \(X\) is \(N-1\). Let \(c_{d}\) denote the coefficient of \(t^{d}\) in the regularized quantum period \(\widehat{G}_{X}(t)\) given in (4). Let \(a=a_{1}+\cdots+a_{N}\) and \(p_{i}=a_{i}/a\). 
Then \(c_{d}=0\) unless \(d\) is divisible by \(a\), and non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[A =-\sum_{i=1}^{N}p_{i}\log p_{i}\] \[B =-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{N}\log p_{i}\] Note, although it plays no role in what follows, that \(A\) is the Shannon entropy of the discrete random variable \(Z\) with distribution \((p_{1},p_{2},\ldots,p_{N})\), and that \(B\) is a constant plus half the total self-information of \(Z\). Figure 5. Variation as we move deeper into the period sequence. The \(y\)-intercept and its standard error \(s_{\mathrm{int}}\) for the toric variety from Example 4, as computed from pairs \((k,\log c_{k})\) such that \(d-20\,000\leq k\leq d\) by sampling every \(100\)th term. We also show LOWESS-smoothed trend lines. **Theorem 6**.: _Let \(X\) denote the toric variety of Picard rank two with weight matrix_ \[\begin{pmatrix}a_{1}&a_{2}&a_{3}&\cdots&a_{N}\\ b_{1}&b_{2}&b_{3}&\cdots&b_{N}\end{pmatrix}\] _so that the dimension of \(X\) is \(N-2\). Let \(a=a_{1}+\cdots+a_{N}\), \(b=b_{1}+\cdots+b_{N}\), and \(\ell=\gcd\{a,b\}\). Let \([\mu:\nu]\in\mathbb{P}^{1}\) be the unique root of the homogeneous polynomial_ \[\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu)^{a_{i}b}-\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu )^{b_{i}a}\] _such that \(a_{i}\mu+b_{i}\nu\geq 0\) for all \(i\in\{1,2,\ldots,N\}\), and set_ \[p_{i}=\frac{\mu a_{i}+\nu b_{i}}{\mu a+\nu b}\] _Let \(c_{d}\) denote the coefficient of \(t^{d}\) in the regularized quantum period \(\widehat{G}_{X}(t)\) given in (5). Then non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[A =-\sum_{i=1}^{N}p_{i}\log p_{i}\] \[B =-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{N}\log p_{i}- \frac{1}{2}\log\left(\sum_{i=1}^{N}\frac{(a_{i}b-b_{i}a)^{2}}{\ell^{2}p_{i}}\right)\] Theorem 5 is a straightforward application of Stirling's formula. Theorem 6 is more involved, and relies on a Central Limit-type theorem that generalises the De Moivre-Laplace theorem. _Theoretical analysis._ The asymptotics in Theorems 5 and 6 imply that, for \(X\) a weighted projective space or toric variety of Picard rank two, the quantum period determines the dimension of \(X\). Let us revisit the clustering analysis from this perspective. Recall the asymptotic expression \(\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\) and the formulae for \(A\) and \(B\) from Theorem 5. Figure 7(a) shows the values of Figure 6. Model sensitivity analysis using SHAP values. The model is an MLP with three hidden layers of sizes (10,30,10) applied to toric varieties of Picard rank two with terminal quotient singularities. It is trained on the slope, \(y\)-intercept, and the first 100 coefficients \(c_{d}\) as features, and predicts the dimension with 97.7% accuracy. and \(B\) for a sample of weighted projective spaces, coloured by dimension. Note the clusters, which overlap. Broadly speaking, the values of \(B\) increase as the dimension of the weighted projective space increases, whereas in Figure 3(a) the \(y\)-intercepts decrease as the dimension increases. This reflects the fact that we fitted a linear model to \(\log c_{d}\), omitting the subleading \(\log d\) term in the asymptotics. As Figure 8 shows, the linear model assigns the omitted term to the \(y\)-intercept rather than the slope. Figure 8. For weighted projective spaces, the asymptotic coefficients \(A\) and \(B\) are closely related to the slope and \(y\)-intercept. 
(a) Comparison between \(A\) and the slope from the linear model, for weighted projective spaces that occur in both Figure 3(a) and Figure 7(a), coloured by dimension. The line slope \(=A\) is indicated. (b) Comparison between \(B\) and the \(y\)-intercept from the linear model, for weighted projective spaces that occur in both Figure 3(a) and Figure 7(a), coloured by dimension. In each case the line \(y\)-intercept \(=B-\frac{9}{2}\dim X\) is shown. Figure 7. The values of the asymptotic coefficients \(A\) and \(B\): (a) for all weighted projective spaces \(\mathbb{P}(a_{1},\ldots,a_{N})\) with terminal quotient singularities and \(a_{i}\leq 25\) for all \(i\). The colour records the dimension of the weighted projective space. (b) for toric varieties of Picard rank two in our dataset. The colour records the dimension of the toric variety. The slope of the linear model is approximately equal to \(A\). The \(y\)-intercept, however, differs from \(B\) by a dimension-dependent factor. The omitted \(\log\) term does not vary too much over the range of degrees (\(d<100\,000\)) that we considered, and has the effect of reducing the observed \(y\)-intercept from \(B\) to approximately \(B-\frac{9}{2}\dim X\), distorting the clusters slightly and translating them downwards by a dimension-dependent factor. This separates the clusters. We expect that the same mechanism applies in Picard rank two as well: see Figure 7(b). We can show that each cluster in Figure 7(a) is linearly bounded using constrained optimisation techniques. Consider for example the cluster for weighted projective spaces of dimension five, as in Figure 9. **Proposition 7**.: _Let \(X\) be the five-dimensional weighted projective space \(\mathbb{P}(a_{1},\ldots,a_{6})\), and let \(A\), \(B\) be as in Theorem 5. Then \(B+\frac{5}{2}A\geq\frac{41}{8}\). If in addition \(a_{i}\leq 25\) for all \(i\) then \(B+5A\leq\frac{41}{40}\)._ Fix a suitable \(\theta\geq 0\) and consider \[B+\theta A=-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{N}\log p_{i}- \theta\sum_{i=1}^{N}p_{i}\log p_{i}\] with \(\dim X=N-1=5\). Solving \[\min(B+\theta A)\quad\text{subject to}\quad p_{1}+\cdots+p_{6}=1\] \[p_{1},\ldots,p_{6}\geq 0\] on the five-simplex gives a linear lower bound for the cluster. This bound does not use terminality: it applies to any weighted projective space of dimension five. The expression \(B+\theta A\) is unbounded above on the five-simplex (because \(B\) is) so we cannot obtain an upper bound this way. Instead, consider \[\max(B+\theta A)\quad\text{subject to}\quad p_{1}+\cdots+p_{6}=1\] \[e\leq p_{1}\leq p_{2}\leq\cdots\leq p_{6}\] for an appropriate small positive \(e\), which we can take to be \(1/a\) where \(a\) is the maximum sum of the weights. For Figure 9, for example, we can take \(a=124\), and in general such an \(a\) exists because there are only finitely many terminal weighted projective spaces. This gives a linear upper bound for the cluster. The same methods yield linear bounds on each of the clusters in Figure 7(a). As the Figure shows however, the clusters are not linearly separable. Figure 9. Linear bounds for the cluster of five-dimensional weighted projective spaces in Figure 7(a). The bounds are given by Proposition 7. **Discussion.** We developed machine learning models that predict, with high accuracy, the dimension of a Fano variety from its regularized quantum period. These models apply to weighted projective spaces and toric varieties of Picard rank two with terminal quotient singularities. 
We then established rigorous asymptotics for the regularized quantum period of these Fano varieties. The form of the asymptotics implies that, in these cases, the regularized quantum period of a Fano variety \(X\) determines the dimension of \(X\). The asymptotics also give a theoretical underpinning for the success of the machine learning models. Perversely, because the series involved converge extremely slowly, reading the dimension of a Fano variety directly from the asymptotics of the regularized quantum period is not practical. For the same reason, enhancing the feature set of our machine learning models by including a \(\log d\) term in the linear regression results in less accurate predictions. So although the asymptotics in Theorems 5 and 6 determine the dimension in theory, in practice the most effective way to determine the dimension of an unknown Fano variety from its quantum period is to apply a machine learning model. The insights gained from machine learning were the key to our formulation of the rigorous results in Theorems 5 and 6. Indeed, it might be hard to discover these results without a machine learning approach. It is notable that the techniques in the proof of Theorem 6 - the identification of generating functions for Gromov-Witten invariants of toric varieties with certain hypergeometric functions - have been known since the late 1990s and have been studied by many experts in hypergeometric functions since then. For us, the essential step in the discovery of the results was the feature extraction that we performed as part of our ML pipeline. This work demonstrates that machine learning can uncover previously unknown structure in complex mathematical data, and is a powerful tool for developing rigorous mathematical results; cf. [18]. It also provides evidence for a fundamental conjecture in the Fano classification program [8]: that the regularized quantum period of a Fano variety determines that variety. ## 3. Methods In this section we prove Theorem 5 and Theorem 6. The following result implies Theorem 5. **Theorem 8**.: _Let \(X\) denote the weighted projective space \(\mathbb{P}(a_{1},\ldots,a_{N})\), so that the dimension of \(X\) is \(N-1\). Let \(c_{d}\) denote the coefficient of \(t^{d}\) in the regularized quantum period \(\widetilde{G}_{X}(t)\) given in (4). Let \(a=a_{1}+\ldots+a_{N}\). Then \(c_{d}=0\) unless \(d\) is divisible by \(a\), and_ \[\log c_{ka}\sim ka\left[\log a-\frac{1}{a}\sum_{i=1}^{N}a_{i}\log a_{i}\right] -\frac{\dim X}{2}\log(ka)+\frac{1+\dim X}{2}\log a-\frac{\dim X}{2}\log(2\pi) -\frac{1}{2}\sum_{i=1}^{N}\log a_{i}\] _That is, non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[A=-\sum_{i=1}^{N}p_{i}\log p_{i}\quad B=-\frac{\dim X}{2}\log(2\pi)-\frac{1}{ 2}\sum_{i=1}^{N}\log p_{i}\] _and \(p_{i}=a_{i}/a\)._ Proof.: Combine Stirling's formula \[n!\sim\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\] with the closed formula (4) for \(c_{ka}\). _Toric varieties of Picard rank 2._ Consider a toric variety \(X\) of Picard rank two and dimension \(N-2\) with weight matrix \[\begin{pmatrix}a_{1}&a_{2}&a_{3}&\cdots&a_{N}\\ b_{1}&b_{2}&b_{3}&\cdots&b_{N}\end{pmatrix}\] as in (2). Let us move to more invariant notation, writing \(\alpha_{i}\) for the linear form on \(\mathbb{R}^{2}\) defined by the transpose of the \(i\)th column of the weight matrix, and \(\alpha=\alpha_{1}+\cdots+\alpha_{N}\). 
Equation 5 becomes \[\widehat{G}_{X}(t)=\sum_{k\in\mathbb{Z}^{2}\cap C}\frac{(\alpha\cdot k)!}{\prod_ {i=1}^{N}(\alpha_{i}\cdot k)!}t^{\alpha\cdot k}\] where \(C\) is the cone \(C=\{x\in\mathbb{R}^{2}\mid\alpha_{i}\cdot x\geq 0\text{ for }i=1,2,\ldots,N\}\). As we will see, for \(d\gg 0\) the coefficients \[\frac{(\alpha\cdot k)!}{\prod_{i=1}^{N}(\alpha_{i}\cdot k)!}\quad\text{where }k \in\mathbb{Z}^{2}\cap C\text{ and }\alpha\cdot k=d\] are approximated by a rescaled Gaussian. We begin by finding the mean of that Gaussian, that is, by minimising \[\prod_{i=1}^{N}(\alpha_{i}\cdot k)!\quad\text{where }k\in\mathbb{Z}^{2}\cap C \text{ and }\alpha\cdot k=d.\] For \(k\) in the strict interior of \(C\) with \(\alpha\cdot k=d\), we have that \[(\alpha_{i}\cdot k)!\sim\left(\frac{\alpha_{i}\cdot k}{e}\right)^{\alpha_{i} \cdot k}\] as \(d\to\infty\). **Proposition 9**.: _The constrained optimisation problem_ \[\min\prod_{i=1}^{N}(\alpha_{i}\cdot x)^{\alpha_{i}\cdot x}\quad\text{subject to }\begin{cases}x\in C\\ \alpha\cdot x=d\end{cases}\] _has a unique solution \(x=x^{*}\). Furthermore, setting \(p_{i}=(\alpha_{i}\cdot x^{*})/(\alpha\cdot x^{*})\) we have that the monomial_ \[\prod_{i=1}^{N}p_{i}^{\alpha_{i}\cdot k}\] _depends on \(k\in\mathbb{Z}^{2}\) only via \(\alpha\cdot k\)._ Proof.: Taking logarithms gives the equivalent problem \[\min\sum_{i=1}^{N}(\alpha_{i}\cdot x)\log(\alpha_{i}\cdot x) \text{subject to }\begin{cases}x\in C\\ \alpha\cdot x=d\end{cases} \tag{6}\] The objective function \(\sum_{i=1}^{N}(\alpha_{i}\cdot x)\log(\alpha_{i}\cdot x)\) here is the pullback to \(\mathbb{R}^{2}\) of the function \[f(x_{1},\ldots,x_{N})=\sum_{i=1}^{N}x_{i}\log x_{i}\] along the linear embedding \(\varphi:\mathbb{R}^{2}\to\mathbb{R}^{N}\) given by \((\alpha_{1},\ldots,\alpha_{N})\). Note that \(C\) is the preimage under \(\varphi\) of the positive orthant \(\mathbb{R}^{N}_{+}\), so we need to minimise \(f\) on the intersection of the simplex \(x_{1}+\cdots+x_{N}=d\), \((x_{1},\ldots,x_{N})\in\mathbb{R}^{N}_{+}\) with the image of \(\varphi\). The function \(f\) is convex and decreases as we move away from the boundary of the simplex, so the minimisation problem in (6) has a unique solution \(x^{*}\) and this lies in the strict interior of \(C\). We can therefore find the minimum \(x^{*}\) using the method of Lagrange multipliers, by solving \[\sum_{i=1}^{N}\alpha_{i}\log(\alpha_{i}\cdot x)+\alpha=\lambda\alpha \tag{7}\] for \(\lambda\in\mathbb{R}\) and \(x\) in the interior of \(C\) with \(\alpha\cdot x=d\). Thus \[\sum_{i=1}^{N}\alpha_{i}\log(\alpha_{i}\cdot x^{*})=(\lambda-1)\alpha\] and, evaluating on \(k\in\mathbb{Z}^{2}\) and exponentiating, we see that \[\prod_{i=1}^{N}(\alpha_{i}\cdot x^{*})^{\alpha_{i}\cdot k}\] depends only on \(\alpha\cdot k\). The result follows. Given a solution \(x^{*}\) to (7), any positive scalar multiple of \(x^{*}\) also satisfies (7), with a different value of \(\lambda\) and a different value of \(d\). Thus the solutions \(x^{*}\), as \(d\) varies, lie on a half-line through the origin. The direction vector \([\mu:\nu]\in\mathbb{P}^{1}\) of this half-line is the unique solution to the system \[\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu)^{a,b} =\prod_{i=1}^{N}(a_{i}\mu+b_{i}\nu)^{b_{i}a} \tag{8}\] \[\begin{pmatrix}\mu\\ \nu\end{pmatrix} \in C\] Note that the first equation here is homogeneous in \(\mu\) and \(\nu\); it is equivalent to (7), by exponentiating and then eliminating \(\lambda\). 
Any two solutions \(x^{*}\), for different values of \(d\), differ by rescaling, and the quantities \(p_{i}\) in Proposition 9 are invariant under this rescaling. They also satisfy \(p_{1}+\cdots+p_{N}=1\). We use the following result, known in the literature as the "Local Theorem" [29], to approximate multinomial coefficients. **Local Theorem**.: _For \(p_{1},\ldots,p_{n}\in[0,1]\) such that \(p_{1}+\cdots+p_{n}=1\), the ratio_ \[d^{\frac{n+1}{2}}\begin{pmatrix}d\\ k_{1}\cdots k_{n}\end{pmatrix}\prod_{i=1}^{n}p_{i}^{k_{i}}:\frac{\exp(-\frac{1}{ 2}\sum_{i=1}^{n}q_{i}x_{i}^{2})}{(2\pi)^{\frac{n+1}{2}}\sqrt{p_{1}\cdots p_{n} }}\to 1\] _as \(d\to\infty\), uniformly in all \(k_{i}\)'s, where_ \[q_{i} =1-p_{i} x_{i} =\frac{k_{i}-dp_{i}}{\sqrt{dp_{i}q_{i}}}\] _and the \(x_{i}\) lie in bounded intervals._ Let \(B_{r}\) denote the ball of radius \(r\) about \(x^{*}\in\mathbb{R}^{2}\). Fix \(R>0\). We apply the Local Theorem with \(k_{i}=\alpha_{i}\cdot k\) and \(p_{i}=(\alpha_{i}\cdot x^{*})/(\alpha\cdot x^{*})\), where \(k\in\mathbb{Z}^{2}\cap C\) satisfies \(\alpha\cdot k=d\) and \(k\in B_{R\sqrt{d}}\). Since \[x_{i}=\frac{\alpha_{i}\cdot(k-x^{*})}{\sqrt{dp_{i}q_{i}}}\] the assumption that \(k\in B_{R\sqrt{d}}\) ensures that the \(x_{i}\) remain bounded as \(d\to\infty\). Note that, by Proposition 9, the monomial \(\prod_{i=1}^{N}p_{i}^{k_{i}}\) depends on \(k\) only via \(\alpha\cdot k\), and hence here is independent of \(k\): \[\prod_{i=1}^{N}p_{i}^{k_{i}}=\prod_{i=1}^{N}p_{i}^{a_{i}\cdot x^{*}}=\prod_{i= 1}^{N}p_{i}^{dp_{i}}\] Furthermore \[\sum_{i=1}^{N}q_{i}x_{i}^{2}=\frac{(k-x^{*})^{T}A\left(k-x^{*}\right)}{d}\] where \(A\) is the positive-definite \(2\times 2\) matrix given by \[A=\sum_{i=1}^{N}\frac{1}{p_{i}}\alpha_{i}^{T}\alpha_{i}\] Thus as \(d\to\infty\), the ratio \[\frac{(\alpha\cdot k)!}{\prod_{i=1}^{N}(\alpha_{i}\cdot k)!}:\frac{\exp\left( -\frac{1}{2d}(k-x^{*})^{T}A\left(k-x^{*}\right)\right)}{(2\pi d)^{\frac{N-1}{2 }}\prod_{i=1}^{N}p_{i}^{dp_{i}+\frac{1}{2}}}\to 1 \tag{9}\] for all \(k\in\mathbb{Z}^{2}\cap C\cap B_{R\sqrt{d}}\) such that \(\alpha\cdot k=d\). **Theorem 6**.: _Let \(X\) be a toric variety of Picard rank two and dimension \(N-2\) with weight matrix_ \[\begin{pmatrix}a_{1}&a_{2}&a_{3}&\cdots&a_{N}\\ b_{1}&b_{2}&b_{3}&\cdots&b_{N}\end{pmatrix}\] _Let \(a=a_{1}+\cdots+a_{N}\) and \(b=b_{1}+\cdots+b_{N}\), let \(\ell=\gcd(a,b)\), and let \([\underline{\mu}:v]\in\mathbb{P}^{1}\) be the unique solution to (8). Let \(c_{d}\) denote the coefficient of \(t^{d}\) in the regularized quantum period \(\widetilde{G}_{X}(t)\). Then non-zero coefficients \(c_{d}\) satisfy_ \[\log c_{d}\sim Ad-\frac{\dim X}{2}\log d+B\] _as \(d\to\infty\), where_ \[A =-\sum_{i=1}^{N}p_{i}\log p_{i}\] \[B =-\frac{\dim X}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{N}\log p_{i}- \frac{1}{2}\log\left(\sum_{i=1}^{N}\frac{(a_{i}b-b_{i}a)^{2}}{\ell^{2}p_{i}}\right)\] _and \(p_{i}=\frac{\mu a_{i}+vb_{i}}{\mu a+vb}\)._ Proof.: We need to estimate \[c_{d}=\sum_{\begin{subarray}{c}k\in\mathbb{Z}^{2}\cap C\\ \text{with }\alpha\cdot k=d\end{subarray}}\frac{(\alpha\cdot k)!}{\prod_{i=1}^{N}( \alpha_{i}\cdot k)!}\] Consider first the summands with \(k\in\mathbb{Z}^{2}\cap C\) such that \(\alpha\cdot k=d\) and \(k\notin B_{R\sqrt{d}}\). For \(d\) sufficiently large, each such summand is bounded by \(cd^{-\frac{1+\dim X}{2}}\) for some constant \(c\) - see (9). Since the number of such summands grows linearly with \(d\), in the limit \(d\to\infty\) the contribution to \(c_{d}\) from \(k\notin B_{R\sqrt{d}}\) vanishes. 
As \(d\to\infty\), therefore \[c_{d}\sim\frac{1}{(2\pi d)^{\frac{N-1}{2}}\prod_{i=1}^{N}p_{i}^{dp_{i}+\frac{ 1}{2}}}\sum_{\begin{subarray}{c}k\in\mathbb{Z}^{2}\cap C\cap B_{R\sqrt{d}}\\ \text{with }\alpha\cdot k=d\end{subarray}}\exp\left(-\frac{(k-x^{*})^{T}A \left(k-x^{*}\right)}{2d}\right)\] Writing \(y_{k}=(k-x^{*})/\sqrt{d}\), considering the sum here as a Riemann sum, and letting \(R\to\infty\), we see that \[c_{d}\sim\frac{1}{(2\pi d)^{\frac{N-1}{2}}\prod_{i=1}^{N}p_{i}^{dp_{i}+\frac{ 1}{2}}}\sqrt{d}\int_{L_{\alpha}}\exp\left(-\tfrac{1}{2}y^{T}Ay\right)dy\] where \(L_{\alpha}\) is the line through the origin given by \(\ker\alpha\) and \(dy\) is the measure on \(L_{\alpha}\) given by the integer lattice \(\mathbb{Z}^{2}\cap L_{\alpha}\subset L_{\alpha}\). To evaluate the integral, let \[\alpha^{\perp}=\frac{1}{\ell}\begin{pmatrix}b\\ -a\end{pmatrix}\quad\text{where }\ell=\gcd\{a,b\}\] and observe that the pullback of \(dy\) along the map \(\mathbb{R}\to L_{\alpha}\) given by \(t\mapsto t\alpha^{\perp}\) is the standard measure on \(\mathbb{R}\). Thus \[\int_{L_{\alpha}}\exp\left(-\tfrac{1}{2}y^{T}Ay\right)dy=\int_{-\infty}^{\infty }\exp\left(-\tfrac{1}{2}\theta x^{2}\right)dx=\sqrt{\frac{2\pi}{\theta}}\] where \(\theta=\sum_{i=1}^{N}\frac{1}{\ell p_{i}}(\alpha_{i}\cdot\alpha^{\perp})^{2}\), and \[c_{d}\sim\frac{1}{(2\pi d)^{\frac{\dim X}{2}}\prod_{i=1}^{N}p_{i}^{dp_{i}+\frac {1}{2}}\sqrt{\theta}}\] Taking logarithms gives the result. ## Appendix A Supplementary Notes We begin with an introduction to weighted projective spaces and toric varieties, aimed at non-specialists. ### Projective spaces and weighted projective spaces The fundamental example of a Fano variety is two-dimensional projective space \(\mathbb{P}^{2}\). This is a quotient of \(\mathbb{C}^{3}\setminus\{0\}\) by the group \(\mathbb{C}^{\times}\), where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \((x,y,z)\) and \((\lambda x,\lambda y,\lambda z)\) in \(\mathbb{C}^{3}\setminus\{0\}\). The variety \(\mathbb{P}^{2}\) is smooth: we can see this by covering it with three open sets \(U_{x}\), \(U_{y}\), \(U_{z}\) that are each isomorphic to the plane \(\mathbb{C}^{2}\): \[U_{x} =\{(1,Y,Z)\}\quad\text{given by rescaling $x$ to $1$}\] \[U_{y} =\{(X,1,Z)\}\quad\text{given by rescaling $y$ to $1$}\] \[U_{z} =\{(X,Y,1)\}\quad\text{given by rescaling $z$ to $1$}\] Here, for example, in the case \(U_{x}\) we take \(x\neq 0\) and set \(Y=y/x\), \(Z=z/x\). Although the projective space \(\mathbb{P}^{2}\) is smooth, there are closely related Fano varieties called weighted projective spaces [20, 36] that have singularities. For example, consider the weighted projective plane \(\mathbb{P}(1,2,3)\): this is the quotient of \(\mathbb{C}^{3}\setminus\{0\}\) by \(\mathbb{C}^{\times}\), where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \((x,y,z)\) and \((\lambda x,\lambda^{2}y,\lambda^{3}z)\). Let us write \[\mu_{n}=\{e^{2\pi k1/n}\mid k\in\mathbb{Z}\}\] for the group of \(n\)th roots of unity. The variety \(\mathbb{P}(1,2,3)\) is once again covered by open sets \[U_{x} =\{(1,Y,Z)\}\quad\text{given by rescaling $x$ to $1$}\] \[U_{y} =\{(X,1,Z)\}\quad\text{given by rescaling $y$ to $1$}\] \[U_{z} =\{(X,Y,1)\}\quad\text{given by rescaling $z$ to $1$}\] but this time we have \(U_{x}\cong\mathbb{C}^{2}\), \(U_{y}\cong\mathbb{C}^{2}/\mu_{2}\), and \(U_{z}=\mathbb{C}^{2}/\mu_{3}\). 
This is because, for example, when we choose \(\lambda\in\mathbb{C}^{\times}\) to rescale \((x,y,z)\) with \(z\neq 0\) to \((X,Y,1)\), there are three possible choices for \(\lambda\) and they differ by the action of \(\mu_{3}\). In particular this lets us see that \(\mathbb{P}(1,2,3)\) is singular. For example, functions on the chart \(U_{y}\cong\mathbb{C}^{2}/\mu_{2}\) are polynomials in \(X\) and \(Z\) that are invariant under \(X\mapsto-X\), \(Z\mapsto-Z\), or in other words \[U_{y} =\operatorname{Spec}\mathbb{C}[X^{2},XZ,Z^{2}]\] \[=\operatorname{Spec}\mathbb{C}[a,b,c]/(ac-b^{2})\] Thus the chart \(U_{y}\) is the solution set for the equation \(ac-b^{2}=0\), as pictured in Figure 10(a). Similarly, the chart \(U_{z}\cong\mathbb{C}^{2}/\mu_{3}\) can be written as \[U_{z} =\operatorname{Spec}\mathbb{C}[X^{3},XY,Y^{3}]\] \[=\operatorname{Spec}\mathbb{C}[a,b,c]/(ac-b^{3})\] and is the solution set to the equation \(ac-b^{3}=0\), as pictured in Figure 10(b). The variety \(\mathbb{P}(1,2,3)\) has singular points at \((0,1,0)\in U_{y}\) and \((0,0,1)\in U_{z}\), and away from these points it is smooth. Figure 10. Singular charts on the weighted projective space \(\mathbb{P}(1,2,3)\): (a) the real-valued points in the chart \(U_{y}\). (b) the real-valued points in the chart \(U_{z}\). There are weighted projective spaces of any dimension. Let \(a_{1},a_{2},\ldots,a_{N}\) be positive integers such that any subset of size \(N-1\) has no common factor, and consider \[\mathbb{P}(a_{1},a_{2},\ldots,a_{N})=(\mathbb{C}^{N}\setminus\{0\})/\mathbb{C}^ {\times}\] where the action of \(\lambda\in\mathbb{C}^{\times}\) identifies the points \[(x_{1},x_{2},\ldots,x_{N})\qquad\text{and}\qquad(\lambda^{a_{1}}x_{1},\lambda^ {a_{2}}x_{2},\ldots,\lambda^{a_{N}}x_{N})\] in \(\mathbb{C}^{N}\setminus\{0\}\). The quotient \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is an algebraic variety of dimension \(N-1\). A general point of \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is smooth, but there can be singular points. Indeed, \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\) is covered by \(N\) open sets \[U_{i}=\{(X_{1},\ldots,X_{i-1},1,X_{i+1},\ldots,X_{N})\}\qquad i\in\{1,2,\ldots,N\}\] given by rescaling \(x_{i}\) to \(1\); here we take \(x_{i}\neq 0\) and set \(X_{j}=x_{j}/x_{i}\). The chart \(U_{i}\) is isomorphic to \(\mathbb{C}^{N-1}/\mu_{a_{i}}\), where \(\mu_{a_{i}}\) acts on \(\mathbb{C}^{N-1}\) with weights \(a_{j}\), \(j\neq i\). In Reid's notation, this is the cyclic quotient singularity \(\frac{1}{a_{i}}(a_{1},\ldots,\widehat{a}_{i},\ldots,a_{N})\); it is smooth if and only if \(a_{i}=1\). The topology of weighted projective space is very simple, with \[H^{k}(\mathbb{P}(a_{1},a_{2},\ldots,a_{N});\mathbb{Q})=\begin{cases}\mathbb{Q} &\text{if $0\leq k\leq 2N-2$ and $k$ is even;}\\ 0&\text{otherwise.}\end{cases}\] Hence every weighted projective space has second Betti number \(b_{2}=1\). There is a closed formula [9, Proposition D.9] for the regularized quantum period of \(X=\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\): \[\widehat{G}_{X}(t)=\sum_{k=0}^{\infty}\frac{(ak)!}{(a_{1}k)!(a_{2}k)!\cdots(a_ {N}k)!}t^{ak} \tag{10}\] where \(a=a_{1}+a_{2}+\cdots+a_{N}\). #### Toric varieties of Picard rank 2 As well as weighted projective spaces, which are quotients of \(\mathbb{C}^{N}\setminus\{0\}\) by an action of \(\mathbb{C}^{\times}\), we will consider varieties that arise as quotients of \(\mathbb{C}^{N}\setminus S\) by \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\), where \(S\) is a union of linear subspaces. 
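Before developing this rank-two construction, the closed formula (10) is easy to check numerically. The short sketch below computes the non-zero coefficients \(c_{d}\) with exact integer arithmetic for the two examples discussed above, \(\mathbb{P}^{2}=\mathbb{P}(1,1,1)\) and \(\mathbb{P}(1,2,3)\). It is a minimal illustration written for this text, not the code used to generate the datasets, and it assumes formula (10) exactly as stated.

```python
from math import factorial, prod

def wps_period_coefficients(weights, num_terms):
    """Non-zero coefficients of the regularized quantum period of
    P(a_1, ..., a_N), from formula (10): c_{ak} = (ak)! / prod_i (a_i k)!,
    returned as a dict {degree d: coefficient c_d}."""
    a = sum(weights)
    return {a * k: factorial(a * k) // prod(factorial(ai * k) for ai in weights)
            for k in range(num_terms)}

# P^2 = P(1, 1, 1): here c_{3k} = (3k)!/(k!)^3.
print(wps_period_coefficients((1, 1, 1), 5))  # {0: 1, 3: 6, 6: 90, 9: 1680, 12: 34650}
# The singular weighted projective plane P(1, 2, 3) discussed above.
print(wps_period_coefficients((1, 2, 3), 5))
```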
Quotients of the form \((\mathbb{C}^{N}\setminus S)/(\mathbb{C}^{\times}\times\mathbb{C}^{\times})\) are examples of _toric varieties_ [16, 25]. Specifically, consider a matrix \[\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N}\\ b_{1}&b_{2}&\cdots&b_{N}\end{pmatrix} \tag{11}\] with non-negative integer entries and no zero columns. This defines an action of \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) on \(\mathbb{C}^{N}\), where \((\lambda,\mu)\in\mathbb{C}^{\times}\times\mathbb{C}^{\times}\) identifies the points \[(x_{1},x_{2},\ldots,x_{N})\quad\text{and}\quad(\lambda^{a_{1}}\mu^{b_{1}}x_{1},\lambda^{a_{2}}\mu^{b_{2}}x_{2},\ldots,\lambda^{a_{N}}\mu^{b_{N}}x_{N})\] in \(\mathbb{C}^{N}\). Set \(a=a_{1}+a_{2}+\cdots+a_{N}\) and \(b=b_{1}+b_{2}+\cdots+b_{N}\), and suppose that \((a,b)\) is not a scalar multiple of \((a_{i},b_{i})\) for any \(i\). This determines linear subspaces \[S_{+}=\{(x_{1},x_{2},\ldots,x_{N})\mid x_{i}=0\text{ if $b_{i}/a_{i}<b/a$}\}\] \[S_{-}=\{(x_{1},x_{2},\ldots,x_{N})\mid x_{i}=0\text{ if $b_{i}/a_{i}>b/a$}\}\] of \(\mathbb{C}^{N}\), and we consider the quotient \[X=(\mathbb{C}^{N}\setminus S)/(\mathbb{C}^{\times}\times\mathbb{C}^{\times}) \tag{12}\] where \(S=S_{+}\cup S_{-}\). See e.g. [5, §A.5]. These quotients behave in many ways like weighted projective spaces. Indeed, if we take the weight matrix (11) to be \[\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N}&0\\ 0&0&\cdots&0&1\end{pmatrix}\] then \(X\) coincides with \(\mathbb{P}(a_{1},a_{2},\ldots,a_{N})\). We will consider only weight matrices such that the subspaces \(S_{+}\) and \(S_{-}\) both have dimension two or more; this implies that the second Betti number \(b_{2}(X)=2\), and hence \(X\) is not a weighted projective space. We will refer to such quotients (12) as _toric varieties of Picard rank two_, because general theory implies that the Picard lattice of \(X\) has rank two. The dimension of \(X\) is \(N-2\). As for weighted projective spaces, toric varieties of Picard rank two can have singular points, the precise form of which is determined by the weights (11). There is also a closed formula [9, Proposition C.2] for the regularized quantum period. Let \(C\) denote the cone in \(\mathbb{R}^{2}\) defined by the equations \(a_{i}x+b_{i}y\geq 0\), \(i\in\{1,2,\ldots,N\}\). Then \[\widehat{G}_{X}(t)=\sum_{(k,l)\in\mathbb{Z}^{2}\cap C}\frac{(ak+bl)!}{(a_{1}k+b_{1}l)!(a_{2}k+b_{2}l)!\cdots(a_{N}k+b_{N}l)!}t^{ak+bl} \tag{13}\]

_Classification results._ Weighted projective spaces with terminal quotient singularities have been classified in dimensions up to four; see Table 2 for a summary. There are 35 three-dimensional Fano toric varieties with terminal quotient singularities and Picard rank two [41]. There is no known classification of Fano toric varieties with terminal quotient singularities in higher dimension, even when the Picard rank is two.

## Appendix B Supplementary Methods 1

_Data analysis: weighted projective spaces._ We computed an initial segment \((c_{0},c_{1},\ldots,c_{m})\) of the regularized quantum period, with \(m\approx 100\,000\), for all the examples in the sample of \(150\,000\) weighted projective spaces with terminal quotient singularities. We then considered \(\{\log c_{d}\}_{d\in S}\) where \(S=\{d\in\mathbb{Z}_{\geq 0}\mid c_{d}\neq 0\}\). To reduce dimension we fitted a linear model to the set \(\{(d,\log c_{d})\mid d\in S\}\) and used the slope and intercept of this model as features. The linear fit produces a close approximation of the data.
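The same slope-and-intercept extraction can also be checked against the asymptotics of Theorem 6 on a small Picard-rank-two example. The sketch below computes \(c_{d}\) for \(X=\mathbb{P}^{1}\times\mathbb{P}^{1}\), whose weight matrix has rows \((1,1,0,0)\) and \((0,0,1,1)\), directly from formula (13), and then fits a line to \((d,\log c_{d})\). By symmetry we take \(p_{i}=\tfrac{1}{4}\) for this example; that value is our assumption rather than a computation from equation (8), which is not reproduced here. This is an illustration only, using just the Python standard library.

```python
from math import comb, factorial, log

def p1xp1_coefficient(d):
    """c_d for X = P^1 x P^1, computed from formula (13).  For this weight
    matrix a = b = 2, the cone C is the positive quadrant, and the lattice
    points contributing in degree d are the (k, l) with k, l >= 0, 2k + 2l = d."""
    if d % 2:
        return 0
    m = d // 2
    return sum(factorial(d) // (factorial(k) ** 2 * factorial(m - k) ** 2)
               for k in range(m + 1))

# Sanity check: for this variety the sum collapses to binom(2m, m)^2.
assert all(p1xp1_coefficient(2 * m) == comb(2 * m, m) ** 2 for m in range(1, 30))

# Fit a line to (d, log c_d), exactly as in the feature extraction above.
pts = [(d, log(p1xp1_coefficient(d))) for d in range(2, 401, 2)]
n = len(pts)
md = sum(d for d, _ in pts) / n
my = sum(y for _, y in pts) / n
slope = sum((d - md) * (y - my) for d, y in pts) / sum((d - md) ** 2 for d, _ in pts)
intercept = my - slope * md

# With p_i = 1/4 the growth coefficient of Theorem 6 is A = log 4; the fitted
# slope is slightly smaller because the -(dim X / 2) log d term is not exactly
# linear, but it approaches A as more coefficients are included.
print(f"fitted slope     {slope:.4f}   (A = log 4 = {log(4):.4f})")
print(f"fitted intercept {intercept:.4f}")
```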
Figure 11 shows the distribution of the standard errors for the slope and the \(y\)-intercept: the errors for the slope are between \(3.9\times 10^{-8}\) and \(1.4\times 10^{-5}\), and the errors for the \(y\)-intercept are between \(0.0022\) and \(0.82\). As we will see below, the standard error for the \(y\)-intercept is a good proxy for the accuracy of the linear model. This accuracy decreases as the dimension grows - see Figure 11(c) - but we will see below that this does not affect the accuracy of the machine learning classification. _Data analysis: toric varieties of Picard rank 2._ We fitted a linear model to the set \(\{(d,\log c_{d})\mid d\in S\}\) where \(S=\{d\in\mathbb{Z}_{\geq 0}\mid c_{d}\neq 0\}\), and used the slope and intercept of this linear model as features. The distribution of standard errors for the slope and \(y\)-intercept of the linear model are shown in Figure 12. The standard errors for the slope are small compared to the range of slopes, but in many cases the standard error for the \(y\)-intercept is relatively large. As Figure 13 illustrates, discarding data points where the standard error \(s_{\text{int}}\) for the \(y\)-intercept exceeds some threshold reduces apparent noise. As discussed above, we believe that this reflects inaccuracies in the linear regression caused by oscillatory behaviour in the initial terms of the quantum period sequence. _Weighted projective spaces._ We excluded dimensions one and two from the analysis, since there is only one weighted projective space in each case (namely \(\mathbb{P}^{1}\) and \(\mathbb{P}^{2}\)). Therefore we have a dataset of \(149\,998\) slope-intercept pairs, labelled by the dimension which varies between three and ten. We standardised the features, by translating the means to zero and scaling to unit variance, and applied a Support Vector Machine (SVM) with linear kernel and regularisation parameter \(C=10\). By looking at different train-test splits we obtained the learning curves shown in Figure 15. The figure displays the mean accuracies for both training and validation data obtained by performing five random test-train splits each time: the shaded areas around the lines correspond to the \(1\sigma\) region, where \(\sigma\) denotes the Figure 11. Standard errors for the slope and \(y\)-intercept. The distribution of standard errors for the slope and \(y\)-intercept from the linear model applied to weighted projective spaces \(X\) with terminal quotient singularities: (a) standard error for the slope. (b) standard error for the \(y\)-intercept. (c) standard error for the \(y\)-intercept by dimension. Figure 12. Standard errors for the slope and \(y\)-intercept. The distribution of standard errors for the slope and \(y\)-intercept from the linear model applied to toric varieties of Picard rank two with terminal quotient singularities: (a) standard error for the slope. (b) standard error for the \(y\)-intercept. Figure 14. The logarithm of the non-zero coefficients \(c_{d}\) for Example 3: (a) the first 250 terms. (b) terms between \(d=1000\) and \(d=1250\). In each case, the linear approximation is also shown. Figure 13. The slopes and \(y\)-intercepts from the linear model applied to toric varieties of Picard rank two with terminal quotient singularities. Data points are selected according to the standard error \(s_{\text{int}}\) for the \(y\)-intercept. The colour records the dimension of the toric variety. (a) All data points. (b) Points with \(s_{\text{int}}<1\): \(101\,183/200000\) points. 
(c) Points with \(s_{\text{int}}<0.3\): \(67\,445/200000\) points. standard deviation. Using 10% (or more) of the data for training we obtained an accuracy of 99.99%. In Figure 16 we plot the decision boundaries computed by the SVM between neighbouring dimension classes. Toric varieties of Picard rank 2In light of the discussion above, we restricted attention to toric varieties with Picard rank two such that the \(y\)-intercept standard error \(s_{\text{int}}\) is less than 0.3. We also excluded dimension two from the analysis, since in this case there are only two varieties (namely, \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and the Hirzebruch surface \(\mathbb{F}_{1}\)). The resulting dataset contains 67 443 slope-intercept pairs, labelled by dimension; the dimension varies between three and ten, as shown in Table 3. Support Vector MachineWe used a linear SVM with regularisation parameter \(C=50\). By considering different train-test splits we obtained the learning curves shown in Figure 17, where the means and the standard deviations were obtained by performing five random samples for each split. Note that the model did not overfit. We obtained a validation accuracy of 88.2% using 70% of the data for training. Figure 18 shows the decision boundaries computed by the SVM between neighbouring dimension classes. Figure 19 shows the confusion matrices for the same train-test split. Random Forest ClassifierWe used a Random Forest Classifier (RFC) with 1500 estimators and the same features (slope and \(y\)-intercept for the linear model). By considering different train-test splits we obtained the learning curves shown in Figure 20; note again that the model did not overfit. Using 70% of the data for training, the RFC gave a validation accuracy of 89.4%. Figure 21 on page 22 shows confusion matrices for the same train-test split. Figure 16. Decision boundaries computed from a Support Vector Machine with linear kernel trained on 70% of the dataset of weighted projective spaces. Note that the data has been standardised. Figure 15. Learning curves for a Support Vector Machine with linear kernel applied to the dataset of weighted projective spaces. The plot shows the means of the training and validation accuracies for five different random train–test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. Figure 19. Confusion matrices for a Support Vector Machine with linear kernel trained on 70% of the dataset of toric varieties of Picard rank two. Figure 17. Learning curves for a Support Vector Machine with linear kernel applied to the dataset of toric varieties of Picard rank two. The plot shows the means of the training and validation accuracies for five different random train-test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. Figure 18. Decision boundaries computed from a Support Vector Machine with linear kernel trained on 70% of the dataset of toric varieties of Picard rank two. Note that the data has been standardised. Figure 21. Confusion matrices for a Random Forest Classifier trained on 70% of the dataset of toric varieties of Picard rank two. Figure 20. Learning curves for a Random Forest Classifier applied to the dataset of toric varieties of Picard rank two. The plot shows the means of the training and validation accuracies for five different random train-test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. 
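The classification pipelines described above use standard scikit-learn components, so they are straightforward to reproduce once the slope and intercept features have been computed. The sketch below is a minimal illustration and not the authors' code: the feature matrix and dimension labels are random placeholders to be replaced by the real data, and it uses the regularisation parameter \(C=50\) and the 1500-estimator Random Forest quoted above for the rank-two dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: replace with the (slope, intercept) features and the
# dimension labels extracted from the regularized quantum period sequences.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = rng.integers(3, 11, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

# Standardise the features, then fit a linear-kernel SVM.
svm = make_pipeline(StandardScaler(), SVC(kernel="linear", C=50))
svm.fit(X_tr, y_tr)
print("SVM validation accuracy:", svm.score(X_te, y_te))

# Random Forest Classifier with 1500 estimators on the same features.
rfc = RandomForestClassifier(n_estimators=1500, random_state=0)
rfc.fit(X_tr, y_tr)
print("RFC validation accuracy:", rfc.score(X_te, y_te))
```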
\begin{table} \begin{tabular}{c r r} \hline \hline \multicolumn{3}{c}{Rank-two toric varieties with \(s_{\text{int}}<0.3\)} \\ \hline Dimension & \multicolumn{1}{c}{Sample size} & \multicolumn{1}{c}{Percentage} \\ \hline 3 & 17 & 0.025 \\ 4 & 758 & 1.124 \\ 5 & 5 504 & 8.161 \\ 6 & 12 497 & 18.530 \\ 7 & 16 084 & 23.848 \\ 8 & 13 701 & 20.315 \\ 9 & 10 638 & 15.773 \\ 10 & 8 244 & 12.224 \\ \hline Total & 67 443 & \\ \hline \hline \end{tabular} \end{table} Table 3. The distribution by dimension among toric varieties of Picard rank two in our dataset with \(s_{\text{int}}<0.3\). Feed-forward neural networkAs discussed above, neural networks do not handle unbalanced datasets well, and therefore we removed the toric varieties with dimensions three, four, and five from our dataset: see Table 3. We trained a Multilayer Perceptron (MLP) classifier on the same features, using an MLP with three hidden layers \((10,30,10)\), Adam optimiser [44], and rectified linear activation function [2]. Different train-test splits produced the learning curve in Figure 22; again the model did not overfit. Using 70% of the data for training, the MLP gave a validation accuracy of 88.7%. One could further balance the dataset, by randomly undersampling so that there are the same number of representatives in each dimension (8244 representatives: see Table 3). This resulted in a slight decrease in accuracy: the better balance was outweighed by loss of data caused by undersampling. Feed-forward neural network with many featuresWe trained an MLP with the same architecture, but supplemented the features by including \(\log c_{d}\) for \(1\leq d\leq 100\) (unless \(c_{d}\) was zero in which case we set that feature to zero), as well as the slope and \(y\)-intercept as before. We refer to the previous neural network as MLP\({}_{2}\), because it uses 2 features, and refer to this neural network as MLP\({}_{102}\), because it uses 102 features. Figure 23 shows the learning curves obtained for different train-test splits. Using 70% of the data for training, the MLP\({}_{102}\) model gave a validation accuracy of 97.7%. We do not understand the reason for the performance improvement between MLP\({}_{102}\) and MLP\({}_{2}\). But one possible explanation is the following. Recall that the first 1000 terms of the period sequence were excluded when calculating the slope and intercept, because they exhibit irregular oscillations Figure 23. Learning curves for a Multilayer Perceptron classifier MLP\({}_{102}\) applied to the dataset of toric varieties of Picard rank two and dimension at least six, using as features the regression data as well as \(\log c_{d}\) for \(1\leq d\leq 100\). The plot shows the means of the training and validation accuracies for five different random train–test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. Figure 22. Learning curves for a Multilayer Perceptron classifier MLP\({}_{2}\) applied to the dataset of toric varieties of Picard rank two and dimension at least six, using just the regression data as features. The plot shows the means of the training and validation accuracies for five different random train–test splits. The shaded regions show the \(1\sigma\) interval, where \(\sigma\) is the standard deviation. that decay as \(d\) grows. These oscillations reduce the accuracy of the linear regression. 
The oscillations may, however, carry information about the toric variety, and so including the first few values of \(\log(c_{d})\) potentially makes more information available to the model. For example, examining the pattern of zeroes at the beginning of the sequence (\(c_{d}\)) sometimes allows one to recover the values of \(a\) and \(b\) - see (13) for the notation. This information is relevant to estimating the dimension because, as a very crude approximation, larger \(a\) and \(b\) go along with larger dimension. Omitting the slope and intercept, however, and training on the coefficients \(\log c_{d}\) for \(1\leq d\leq 100\) with the same architecture gave an accuracy of only 62%. Comparison of models.The validation accuracies of the SVM, RFC, and the neural networks \(\mathrm{MLP}_{2}\) and \(\mathrm{MLP}_{102}\), on the same data set (\(s_{\mathrm{int}}<0.3\), dimension between six and ten), are compared in Table 4. Their confusion matrices are shown in Table 5. All models trained on only the regression data performed well, with the RFC slightly more accurate than the SVM and the neural network \(\mathrm{MLP}_{2}\) slightly more accurate still. Misclassified examples are generally in higher dimension, which is consistent with the idea that misclassification is due to convergence-related noise. The neural network trained on the supplemented feature set, \(\mathrm{MLP}_{102}\), outperforms all other models. However, as discussed above, feature importance analysis using SHAP values showed that the slope and the intercept were the most influential features in the prediction. ## Appendix D Supplementary Discussion Comparison with Principal Component Analysis.An alternative approach to dimensionality reduction, rather than fitting a linear model to \(\log c_{d}\), would be to perform Principal Component Analysis (PCA) on this sequence and retain only the first few principal components. Since the vectors (\(c_{d}\)) have different patterns of zeroes - \(c_{d}\) is non-zero only if \(d\) is divisible by the Fano index \(r\) of \(X\) - we need to perform PCA for Fano varieties of each index \(r\) separately. We analysed this in the weighted projective space case, finding that for each \(r\) the first two components of PCA are related to the growth coefficients (\(A,B\)) from Theorem 5 by an invertible affine-linear transformation. That is, our analysis suggests that the coefficients (\(A,B\)) contain exactly the same information as the first two components of PCA. Note, however, that the affine-linear transformation that relates PCA to (\(A,B\)) varies with the Fano index \(r\). Using \(A\) and \(B\) as features therefore allows for meaningful comparison between Fano varieties of different index. Furthermore, unlike PCA-derived values, the coefficients (\(A,B\)) can be computed for a single Fano variety, rather than requiring a sufficiently large collection of Fano varieties of the same index. Towards more general Fano varieties.Weighted projective spaces and toric varieties of Picard rank two are very special among Fano varieties. It is hard to quantify this, because so little is known about Fano classification in the higher-dimensional and non-smooth cases, but for example this class includes only 18% of the \(\mathbb{Q}\)-factorial terminal Fano toric varieties in three dimensions. On the other hand, one can regard weighted projective spaces and toric varieties of Picard rank two as representative of a much broader class of algebraic varieties called toric complete intersections. 
Toric complete intersections share the key properties that we used to prove Theorems 5 and 6 - geometry that is tightly controlled by combinatorics, including explicit expressions for genus-zero Gromov-Witten invariants in terms of hypergeometric functions - and we believe that the rigorous results of this paper will generalise to the toric complete intersection case. All smooth two-dimensional Fano varieties and 92 of the 105 smooth three-dimensional Fano varieties are toric complete intersections [9]. Many theorems in algebraic geometry were first proved for toric varieties and later extended to toric complete intersections and more general algebraic varieties; cf. [26, 27, 33] and [28, 56]. The machine learning paradigm presented here, however, applies much more broadly. Since our models take only the regularized quantum period sequence as input, we expect that whenever we can calculate \(\widehat{G}_{X}\) - which is the case for almost all known Fano varieties - we should be able to apply a machine learning pipeline to extract geometric information about \(X\).

\begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{ML models} \\ SVM & RFC & \(\mathrm{MLP}_{2}\) & \(\mathrm{MLP}_{102}\) \\ \hline 87.7\% & 88.6\% & 88.7\% & 97.7\% \\ \hline \hline \end{tabular} \end{table} Table 4. Comparison of model accuracies. Accuracies for various models applied to the dataset of toric varieties of Picard rank two and dimension at least six: a Support Vector Machine with linear kernel, a Random Forest Classifier, and the neural networks \(\mathrm{MLP}_{2}\) and \(\mathrm{MLP}_{102}\).

\begin{table} \begin{tabular}{l c c} \hline \hline Model & True confusion matrix & Predicted confusion matrix \\ \hline \multicolumn{3}{l}{[Confusion matrices for the SVM, RFC, \(\mathrm{MLP}_{2}\) and \(\mathrm{MLP}_{102}\) models; the graphical entries are not reproducible in this text-only extraction.]} \\ \hline \hline \end{tabular} \end{table} Table 5. Confusion matrices for the models compared in Table 4.

**Data availability.** Our datasets [11, 12] and the code for the Magma computer algebra system [4] that was used to generate them are available from Zenodo [23] under a CC0 license. The data was collected using Magma V2.25-4. **Code availability.** All code required to replicate the results in this paper is available from Bitbucket under an MIT license [13]. **Acknowledgements.** TC is funded by ERC Consolidator Grant 682603 and EPSRC Programme Grant EP/N03189X/1. AK is funded by EPSRC Fellowship EP/N022513/1. SV is funded by the EPSRC Centre for Doctoral Training in Geometry and Number Theory at the Interface, grant number EP/L015234/1. We thank Giuseppe Pitton for conversations and experiments that began this project, and thank John Aston and Louis Christie for insightful conversations and feedback. We also thank the anonymous referees for their careful reading of the text and their insightful comments, which substantially improved both the content and the presentation of the paper.
2308.04444
Changes in Policy Preferences in German Tweets during the COVID Pandemic
Online social media have become an important forum for exchanging political opinions. In response to COVID measures citizens expressed their policy preferences directly on these platforms. Quantifying political preferences in online social media remains challenging: The vast amount of content requires scalable automated extraction of political preferences -- however fine grained political preference extraction is difficult with current machine learning (ML) technology, due to the lack of data sets. Here we present a novel data set of tweets with fine grained political preference annotations. A text classification model trained on this data is used to extract policy preferences in a German Twitter corpus ranging from 2019 to 2022. Our results indicate that in response to the COVID pandemic, expression of political opinions increased. Using a well established taxonomy of policy preferences we analyse fine grained political views and highlight changes in distinct political categories. These analyses suggest that the increase in policy preference expression is dominated by the categories pro-welfare, pro-education and pro-governmental administration efficiency. All training data and code used in this study are made publicly available to encourage other researchers to further improve automated policy preference extraction methods. We hope that our findings contribute to a better understanding of political statements in online social media and to a better assessment of how COVID measures impact political preferences.
Felix Biessmann
2023-07-31T16:07:28Z
http://arxiv.org/abs/2308.04444v1
# Changes in Policy Preferences in German ###### Abstract Online social media have become an important forum for exchanging political opinions. In response to COVID measures citizens expressed their policy preferences directly on these platforms. Quantifying political preferences in online social media remains challenging: The vast amount of content requires scalable automated extraction of political preferences - however fine grained political preference extraction is difficult with current machine learning (ML) technology, due to the lack of data sets. Here we present a novel data set of tweets with fine grained political preference annotations. A text classification model trained on this data is used to extract policy preferences in a German Twitter corpus ranging from 2019 to 2022. Our results indicate that in response to the COVID pandemic, expression of political opinions increased. Using a well established taxonomy of policy preferences we analyse fine grained political views and highlight changes in distinct political categories. These analyses suggest that the increase in policy preference expression is dominated by the categories pro-welfare, pro-education and pro-governmental administration efficiency. All training data and code used in this study are made publicly available to encourage other researchers to further improve automated policy preference extraction methods. We hope that our findings contribute to a better understanding of political statements in online social media and to a better assessment of how COVID measures impact political preferences. Keywords:Policy Preference extraction text classification social media ## 1 Introduction The past decades have shown two trends that are becoming increasingly interdependent: Political campaigns take place online in social media. And at the same time online content for individual users is recommended using automated machine learning (ML) systems that are often optimized for user engagement or other proxy metrics for economic profit. These mechanisms can increase visibility of polarizing content and simultaneously enforce a bias towards existing user preferences. During the COVID pandemic, global platforms such as online social media allowed users to directly express their preferences for or against the measures taken by governments, such as lockdowns or vaccination programs. Analysing these policy preferences can yield valuable insights that could help to improve governmental policies. The large amount of content requires methods for automated extraction of policy preferences. Recent trends in machine learning (ML) towards bigger and more powerful language models could help to improve policy preference extraction. However there are few training data sets that contain annotations for fine grained policy preferences [9]. The lack of high quality annotated data sets with political information impedes the development of better models for automated detection of policy preferences. Here we present a data set of online social media content, Twitter posts, with fine grained political annotations as defined in [24]. The data set is used to train a text classification model that predicts policy preferences from social network posts. On a larger corpus of tweets collected from 2019 to 2022 the model is used to predict policy preferences before and during the COVID pandemic. Analyses of automatically extracted policy preferences suggest that the amount of policy preferences expressed on Twitter increased after the first lockdown. 
Leveraging a fine grained political viewpoint taxonomy we can investigate which policy preferences were expressed in those political tweets. To summarize, the main contributions of this study are: * A data set of German tweets with fine grained political preference annotation * A novel text classification model * An analysis of policy preferences before and during the COVID pandemic ## 2 Related Work The general topic of automated information extraction from online social media has been widely studied and different approaches have been proposed, including supervised ML methods, such as text classification [11], and unsupervised methods, such as topic models, or extensions thereof [1, 7, 8]. Many of these methods are dedicated to trending topic extraction. Since not all trending topics are related to the political discourse a large fraction of these methods do not lend themselves easily to the investigation of policy preferences. A number of studies have explored automated extraction of policy preferences, for a comprehensive overview we refer the interested reader to [9]. There have been many studies exploring traditional ML techniques for ideology detection and policy preference extraction [21] as well as approaches based on more recent natural language processing models, such as Recurrent Neural Networks [12] or more recently also Transformers [17]. The authors of [9] highlight that training ML models for automated extraction of fine grained policy preferences expressed in online social media content remains challenging. Primarily this is due to the fact that annotating this data requires expertise that can not as easily be crowdsourced, as the annotation of hate speech for instance. Annotation of policy preferences requires domain expertise and in particular experience with policy preferences as expressed in online media. There are some publicly available data sets that can be used for training ML models that detect policy preferences in text data. One of the largest and best curated data sets is the corpus of the Manifesto Project [23] which contains over 1,500,000 quasi-sentences, extracted from over 1,500 party manifestos, and annotated according to a well established category scheme of 56 policy categories [24]. This data has been used by researchers to investigate policy preferences [15] and there have been efforts to train ML models on this data to make predictions on online social media texts [6, 18, 16]. However the texts of party manifestos are written in a different style than posts in online social media. Hence models trained on the manifesto data usually do not work well on online social media texts. Other data sets focus more on texts in online social media but these often focus on a small set of political policy preferences [4, 13, 2, 10]. ## 3 Training Data Set For annotating training data with fine grained policy preferences we sampled tweets from a corpus of German tweets [14]. The tweets were sampled between August 2019 and March 2022 and filtered using the following criteria: User InteractionWe selected tweets that were interacted with in some form (likes, retweets, quotes) at least once. 
#### Relevance We used an ML model (see below) trained on the Manifesto Project corpus [23] to estimate the political relevance of each tweet. To increase the usefulness of the annotated data set we tried to cover all labels of the Manifesto Project's category scheme by selecting for each week only the top 5 tweets that were predicted as the most likely for each political category by an ML model trained on German party manifestos [23]. The filtered set of tweets was then annotated by two experts trained by researchers of the Manifesto Project. The annotation was performed in a custom-written web app and later using labelstudio [22]. Annotators were instructed to label a tweet with one of the 56 political categories of the Manifesto Project codebook [24]. Additionally annotators provided the label undefined for tweets that could not be associated with any of the relevant political categories. If the tweet contained an image, annotators also considered the image content for the annotation. Context beyond the actual tweet content was not taken into account. Exceptions were tweets that replied to or commented on another tweet. In that case the original tweet was also considered. These replied-to tweets are, to keep the data set simpler, not part of the data set but can be retrieved via the URL of the annotated tweet. In the current version of the data set there are 6097 unique tweets and the most frequent political categories annotated are shown in Table 2 (Appendix). Note that the majority of tweets is labeled as undefined, despite the filtering with the ML model. This is an indication that the data set contains useful negative examples for training better models. The data set is released and available for research purposes [5].

## 4 Evaluation of Policy Preference Predictors

To establish a simple baseline for policy preference extraction on the PoliTweet data set we used the TextPredictor module of the autoML package AutoGluon [3, 19]. The model was trained on a V100 NVIDIA GPU with a pretrained BERT model checkpoint (bert-base-german-cased) on the entire German part of the manifesto corpus [23] and 4883 annotated tweets from the training data set in Section 3; 1214 annotated tweets were held out for testing the model. In Table 1 we list the results for the top 10 political categories that could be predicted with highest F1 score by the model; the full list of results for all categories is listed in the Appendix, Table 3. Note that while the overall prediction performance is below 0.50 F1 score (macro or class-frequency weighted), these results are still encouraging.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & precision & recall & f1-score & support \\ \hline controlled economy + & 1.00 & 0.67 & 0.80 & 3.0 \\ europe - & 0.80 & 0.75 & 0.77 & 16.0 \\ environmentalism + & 0.76 & 0.70 & 0.73 & 90.0 \\ democracy + & 0.63 & 0.74 & 0.68 & 77.0 \\ anti-imperialism + & 1.00 & 0.50 & 0.67 & 2.0 \\ economic orthodoxy + & 0.57 & 0.67 & 0.62 & 6.0 \\ europe + & 0.56 & 0.64 & 0.60 & 14.0 \\ undefined & 0.58 & 0.55 & 0.57 & 271.0 \\ infrastructure + & 0.43 & 0.80 & 0.56 & 20.0 \\ foreign special + & 0.50 & 0.55 & 0.52 & 11.0 \\ ... & & & & \\ \hline accuracy & & & 0.46 & 1214 \\ macro avg & 0.30 & 0.31 & 0.30 & 1214 \\ weighted avg & 0.46 & 0.46 & 0.46 & 1214 \\ \hline \hline \end{tabular} \end{table} Table 1: F1 scores for tweets in the test set for the top 10 (according to F1) political categories. The complete list can be found in the Appendix, Table 3.
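A compact baseline for this kind of per-category evaluation can be assembled with scikit-learn; the sketch below trains a TF-IDF bag-of-words classifier and prints a per-class report with the same precision/recall/F1/support columns as Table 1. This is deliberately not the AutoGluon TextPredictor setup with the bert-base-german-cased checkpoint described above; the file name and the column names `text` and `label` are assumptions made only for illustration.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Assumed layout of the annotated tweets: one row per tweet with the tweet
# text and its Manifesto-codebook category (or "undefined").
df = pd.read_csv("politweet_annotations.csv")   # hypothetical file name
train_df, test_df = train_test_split(df, test_size=1214, random_state=0)

model = make_pipeline(
    TfidfVectorizer(min_df=2, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
model.fit(train_df["text"], train_df["label"])

pred = model.predict(test_df["text"])
print(classification_report(test_df["label"], pred, zero_division=0))
```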
Fine grained political viewpoint extraction is a challenging task and even when trained on the manifesto corpus, the classification performance for all categories with extensive tuning and leveraging state-of-the-art ML models often stays below an F1 score of 0.5 [20]. ## 5 Policy Preferences after COVID lockdown The model as trained in section 4 was then applied to the entire twitter corpus [14] between 2019 and 2022 and filtered using the relevance and activity criteria as mentioned in section 3. We applied additional relevance filters to extract only tweets expressing political views. All tweets for which the category undefined was amongst the top 3 predictions of the text classification model were considered irrelevant and filtered out. The histograms of policy preferences in the remaining tweets were then compared before and after the COVID lockdown onset in Germany. In Figure 1 we show histograms of political views expressed in tweets before and after onset of the first lockdown. Overall our results suggest that the number of political tweets increased after the first lockdown. Investigating the fine grained political categories we find that this increase is driven by an increased number of tweets categorized as _pro education, pro welfare_ and _pro government administration efficiency_. These changes in policy preferences of tweets could reflect the negative impact that COVID measures such as lockdowns had: many employes lost their jobs, many needed to teach their children at home and all administrational processes were substantially slowed down due to the poor digitalization in German administration. In Figure 2 timelines are shown for the political categories _pro education, pro welfare_ and _pro government administration efficiency_, which exhibit the largest change after onset of the COVID lockdown as shown in Figure 1. The bottom panel in Figure 2 shows the onsets of lockdowns and COVID case numbers. The strongest impact of lockdown measures with respect to political policy preferences on Twitter appears to develop during the second wave of the pandemic. Figure 1: Increases in political tweets after the first COVID lockdown in Germany. Policy preferences were extracted with a text classifier. _Left:_ After the first lockdown the total number of political tweets per day increases. _Middle:_ Strong increases were observed in the broad political category of political system and welfare; note the log scale on the x-axis. _Right:_ Fine grained policy preferences show a strong increase in _pro education, pro welfare_ and _pro government administration efficiency_ ## 6 Conclusion This study presents three main contributions, a) a data set of German tweets with fine grained political preference annotation, b) a novel text classification model trained on that data and c) an analysis of policy preferences before and during the COVID pandemic. Our preliminary analyses of tweets during the COVID pandemic showed a pronounced increase in political tweets overall and in particular also in certain fine grained political categories. These findings are not from a representative sample and have several other limitations, such as the predictive performance of the text classification model for some classes, especially in rare categories. Nonetheless we believe the data set, the model and the experimental results are an important step towards more scalable and more fine grained policy preference extraction in German online social media. 
We hope that our contributions will encourage other researchers to improve current state-of-the-art models for policy preference extraction in research and applications.

Figure 2: Number of tweets over time in those political categories that exhibit a strong increase after the first lockdown in Germany. Bottom panel shows an overview of COVID cases reported by the Robert Koch Institute; lockdown starts are indicated in blue. While the first lockdown did not result in strong increases of tweets with political preferences, during the second COVID wave political preferences in the categories _pro education, pro welfare_ and _pro government administration efficiency_ were expressed more often than before.

## Acknowledgements

We thank Jonas Bauer for conceptualizing, implementing and maintaining the first data annotation setup, Teo Chiaburu for setting up labelstudio, Marvin Muller and Maren Krumbein for annotating tweets, Pola Lehmann for training the annotators and valuable feedback on the analyses, Johannes Hoster for analyses and Philipp Staab for valuable discussions on sociological aspects.
2309.07264
Small error algorithms for tropical group testing
We consider a version of the classical group testing problem motivated by PCR testing for COVID-19. In the so-called tropical group testing model, the outcome of a test is the lowest cycle threshold (Ct) level of the individuals pooled within it, rather than a simple binary indicator variable. We introduce the tropical counterparts of three classical non-adaptive algorithms (COMP, DD and SCOMP), and analyse their behaviour through both simulations and bounds on error probabilities. By comparing the results of the tropical and classical algorithms, we gain insight into the extra information provided by learning the outcomes (Ct levels) of the tests. We show that in a limiting regime the tropical COMP algorithm requires as many tests as its classical counterpart, but that for sufficiently dense problems tropical DD can recover more information with fewer tests, and can be viewed as essentially optimal in certain regimes.
Vivekanand Paligadu, Oliver Johnson, Matthew Aldridge
2023-09-13T18:56:38Z
http://arxiv.org/abs/2309.07264v2
# Small error algorithms for tropical group testing ###### Abstract We consider a version of the classical group testing problem motivated by PCR testing for COVID-19. In the so-called tropical group testing model, the outcome of a test is the lowest cycle threshold (Ct) level of the individuals pooled within it, rather than a simple binary indicator variable. We introduce the tropical counterparts of three classical non-adaptive algorithms (COMP, DD and SCOMP), and analyse their behaviour through both simulations and bounds on error probabilities. By comparing the results of the tropical and classical algorithms, we gain insight into the extra information provided by learning the outcomes (Ct levels) of the tests. We show that in a limiting regime the tropical COMP algorithm requires as many tests as its classical counterpart, but that for sufficiently dense problems tropical DD can recover more information with fewer tests, and can be viewed as essentially optimal in certain regimes. ## 1 Introduction Group testing is the problem of reliably recovering a subset \(\mathcal{K}\) of 'defective' items from a population of \(N\) items, using a relatively small number \(T\) of so-called pooled tests to test multiple items at the same time. In the classical noiseless setting, the outcome of such tests is binary, indicating whether or not there is at least one defective item in the pool. Group testing was initially introduced by Dorfman [10] in the context of testing for syphilis. It has now developed into a combinatorial and algorithmic problem with a rich history [11, 4], where one seeks to understand the trade-offs between \(N\), \(|\mathcal{K}|\) and \(T\), and to understand how \(\mathcal{K}\) can be efficiently recovered using computationally feasible algorithms. Group testing has applications to many fields such as biology, manufacturing, communications and information technology, as described in [4, Section 1.7]. This framework also recently gained considerable attention during the COVID-19 pandemic - see [3] for a review of its use in this context, and [21] for an early proof of concept that pooled testing can detect the presence of COVID-positive individuals. In general, the efficiency of group testing makes it useful when using PCR machines to test large populations for rare conditions. In the PCR test (see for example [20]), the viral genetic material is extracted from a sample and amplified in 'cycles' of a process using polymerase enzymes. In each cycle, the quantity of this material is approximately doubled. The presence of the viral genetic material is detected by fluorescence, indicating a positive result if the quantity present exceeds a certain amount. The _cycle threshold_ (Ct) value is the number of cycles after which fluorescence is observed - this represents the number of doublings required to achieve detection. Hence when using PCR to test for COVID [18], a lower Ct value indicates a higher concentration of viral genetic material in the sample. Since classical group testing only considers a binary outcome of tests, it fails to take advantage of all the available information regarding strength of infection. However, some COVID-inspired pooled testing schemes, such as Tapestry [14, 13] and the two-stage adaptive scheme of Heidarzadeh and Narayanan [15] are designed to take account of quantitative information through numerical values of test outcomes. 
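Since the Ct value is just a count of doublings, the arithmetic behind it, and behind pooling samples, is easy to make concrete. The sketch below computes the number of doublings a sample needs to reach a detection threshold (lower Ct corresponding to more starting material) and checks the approximation \(-\log_{2}(2^{-x}+2^{-y})\approx\min\{x,y\}\) that underlies the tropical model described in the next paragraph. All numerical values are illustrative, not taken from the paper.

```python
import math

def ct_value(initial_quantity: float, detection_threshold: float) -> float:
    """Number of doublings needed before the amount of viral genetic material
    reaches the fluorescence threshold; lower Ct means more starting material.
    Returns infinity if there is nothing to amplify."""
    if initial_quantity <= 0:
        return math.inf
    return max(0.0, math.log2(detection_threshold / initial_quantity))

# Two samples, the second with 1000x less viral material: its Ct is ~10 higher.
print(ct_value(1e-3, 1e6))   # ~29.9
print(ct_value(1e-6, 1e6))   # ~39.9

# Pooling two positive samples: the combined material is the sum of the two
# quantities, so the pooled Ct is approximately the minimum of the two Cts.
x, y = 29.9, 39.9
pooled = -math.log2(2 ** -x + 2 ** -y)
print(pooled, min(x, y))     # 29.898..., 29.9
```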
An alternative perspective comes through the way that Ct values are explicitly combined together in the so-called _tropical group testing_ model of Wang _et al._[19]. The Ct value \(z\) of two pooled samples with individual Ct values \(x\) and \(y\) satisfies \(z\approx\min\{x,y\}\). This is because the combined pool will contain an amount of viral genetic material proportional to \(2^{-x}+2^{-y}\), and require \(-\log_{2}(2^{-x}+2^{-y})\approx\min\{x,y\}\) doublings to fluoresce. Uninfected samples will never fluoresce given any number of doublings, so we can think of those as having Ct value \(\infty\). The tropical group testing model (see Definition 2.1 below) simply takes the outcome of the pooled test to be the minimum Ct value of the infected individuals contained within it. Consequently, strongly infected items with low Ct values tend to dominate the test outcomes, potentially concealing weakly infected individuals with high Ct values. To address this limitation, Wang _et al._ introduce the concept of a tropical code, which involves a 'delay matrix'. With this approach, Wang describes adaptive and non-adaptive constructions. The key contribution of this paper is the development and analysis of non adaptive algorithms in this tropical setting to recover the Ct values of defective items under a small-error criterion, and to demonstrate gains in performance relative to the classical group testing setting. These algorithms are tropical generalisations of the classical COMP, DD and SCOMP algorithms [2, 4]. Our algorithms and results do not require the use of a delay matrix, meaning that all the tests can be run in parallel simultaneously, making the resulting schemes easy to implement in practice on a PCR machine. In particular, we identify a sharp threshold for the performance of the tropical DD algorithm (see Section 3.4) in certain parameter regimes. In Theorem 6.1 we give an achievability result by showing that in a limiting regime where the number of tests \[T\geq(1+\delta)\max\{T_{\infty},T_{d},T_{d-1},\ldots,T_{1}\} \tag{1}\] for some \(\delta\) then the error probability of this algorithm tends to zero. Here the \(T_{r}\) are explicit expressions in terms of the total number of items with particular defectivity levels. Roughly speaking, \(T_{\infty}\) tests are required to find most (but not necessarily all) of the non-defective items, while \(T_{r}\) tests are required to find all the defective items with Ct value \(r\). Further in Remark 6.2, we argue that in a certain 'uniform' case, this result represents an explicit (albeit second-order) improvement over the performance of classical DD. In contrast, in Theorems 7.1 and 7.4 we show that in the regime where \[T\leq(1-\delta)\max\{T_{d},T_{d-1},\ldots,T_{1}\}\] for any \(\delta\) then the error probability of tropical DD and even of an optimal algorithm tends to \(1\). Since apart from the sign of the \(\delta\) term and the absence of \(T_{\infty}\) this is identical to the expression (1), we can conclude that our tropical DD algorithm is asymptotically optimal in parameter regimes where \(T_{\infty}\) does not give the maximum in (1). The structure of the rest of the paper is as follows. In Section 2 we introduce the notation used in the paper, and formalise the tropical group testing model. In Section 3 we describe the three tropical algorithms we will study, and briefly mention some of their basic properties. Section 4 gives simulation results indicating the performance of these algorithms. 
We analyse the theoretical performance of the tropical COMP algorithm in Section 5 and of the tropical DD algorithm in Sections 6 and 7. ## 2 Notation and tropical model We adapt the classical group testing notation and algorithms of [2, 4] to the tropical group testing model of [19]. The tropical model replaces the 'OR' operation of standard group testing with a'min' operation, in a way motivated by the use of PCR testing for COVID-19. In more detail, for a fixed positive integer value \(d\) we define the set \(\mathcal{D}=\{1,2,\ldots,d,\infty\}\) of possible defectivity (or infection) levels. Here level \(\infty\) represents the state of being not defective, and levels \(1,2,\ldots,d\) represent different levels of defectivity. As with Ct values in PCR testing, the lower the numerical value of the defectivity level, the stronger the infection; and the higher the numerical value of the defectivity level, the weaker the infection. The exact values represented in the set do not matter from a mathematical point of view - while it may be closer to medical practice to use Ct values such as \(\{20,21,\ldots,40,\infty\}\), the choice \(\{1,2,\ldots,d,\infty\}\) provides notational convenience. Given \(N\) items, we represent the defectivity level \(U_{i}\in\mathcal{D}\) of each item \(i\) as a vector \(\mathbf{U}=(U_{1},\ldots,U_{N})\in\mathcal{D}^{N}\). We write \(\mathcal{K}_{r}=\{j:U_{j}=r\}\) for the set of items at each level \(r\in\mathcal{D}\), and write \(\mathcal{K}=\bigcup_{r=1}^{d}\mathcal{K}_{r}\) for the total set of defective items, with finite \(U_{i}\). We write \(K_{r}=|\mathcal{K}_{r}|\) for the size of each set, \(K=\sum_{r=1}^{d}K_{r}=|\mathcal{K}|\) for the total number of defective items, and adopt the notation \(\mathbf{K}=(K_{1},\ldots,K_{d})\). For \(1\leq r\leq d\) and \(1\leq s\leq K_{r}\), we will write \(i(r,s)\) for the \(s\)th item in set \(\mathcal{K}_{r}\) (labelled arbitrarily within \(\mathcal{K}_{r}\)). We assume a combinatorial model: that is, we fix set sizes \(K_{1},\ldots,K_{d}\) in advance and assume that the sets \(\mathcal{K}_{r}\) are disjoint and chosen uniformly at random among sets which satisfy these constraints. We will sometimes consider a limiting sequence of problems where \(N\to\infty\) with \(K\simeq N^{\alpha}\) for some fixed \(\alpha\in(0,1)\) and \(K_{i}\simeq\theta_{i}K\) for some \(\theta_{i}\) with \(\sum_{i=1}^{d}\theta_{i}=1\). We use a non-adaptive testing strategy, where we fix the whole test design in advance. We represent the test design in a binary \(T\times N\) test matrix \(\mathbf{x}\), with the standard convention that \(x_{ti}=1\) means that item \(i\) appears in test \(t\) and \(x_{ti}=0\) means that item \(i\) does not appear in test \(t\). Our use of non-adaptive strategies in this context is motivated by the fact that PCR tests can be performed in parallel using plates with large numbers of wells (such as 96 or 384) - see for example [12] - meaning that the test strategy needs to be designed in advance. We now describe the outcome of a so-called tropical group test. **Definition 2.1**.: _Tropical group testing_ is defined by the outcome \(Y_{t}\) of test \(t\) being given by the lowest defectivity level \(U_{i}\) among items \(i\) that appear in the test: \[Y_{t}=\min_{i}\{U_{i}:x_{ti}=1\}. \tag{2}\] For \(d=1\), there are only two defectivity levels possible for an item \(i\), namely \(U_{i}=1\) (defective) and \(U_{i}=\infty\) (non-defective). 
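Definition 2.1 is straightforward to simulate. The sketch below draws a Bernoulli test design with \(p=\nu/K\), as used in the theoretical analysis later in this section, assigns defectivity levels to a few items, and computes each tropical outcome \(Y_{t}\) as the minimum level appearing in the test, with \(\infty\) encoded as `np.inf`. It is an illustrative sketch rather than the authors' simulation code, and the specific parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

N, d = 500, 5                        # items and finite defectivity levels
K_r = [2, 3, 3, 4, 4]                # K_1, ..., K_d  (illustrative sizes)
K = sum(K_r)

# Defectivity levels U_i: level r for K_r randomly chosen items, infinity otherwise.
U = np.full(N, np.inf)
defectives = rng.choice(N, size=K, replace=False)
U[defectives] = np.repeat(np.arange(1, d + 1), K_r)

# Bernoulli test design with p = nu / K (here nu = 1) and T tests.
T, nu = 200, 1.0
x = rng.random((T, N)) < nu / K

# Tropical outcomes (Definition 2.1): Y_t = min{ U_i : x_ti = 1 }.  A test
# containing no defective item has outcome infinity; we use the same value,
# by convention, for a test that happens to contain no item at all.
Y = np.array([U[x[t]].min() if x[t].any() else np.inf for t in range(T)])
print("negative tests:", int(np.sum(np.isinf(Y))), "of", T)
```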
In this case, Definition 2.1 reduces to Dorfman's standard binary group testing model [10], with the outcome of a negative test \(t\) denoted by \(Y_{t}=\infty\) (rather than the usual \(Y_{t}=0\)). We refer to this as 'classical group testing'. For any value of \(d\), if a test contains no defective items (that is, if \(U_{i}=\infty\) for all items \(i\) in the test) then the outcome is \(Y_{t}=\infty\), which we regard as a negative test, just as in classical group testing. However, unlike classical group testing, we also receive information about the defectivity levels of the items through the outcomes of positive tests being a number from \(1\) to \(d\). In order to analyse tropical group testing, we make some definitions that will be useful, and which extend the definitions and terminology of [2]. **Definition 2.2**.: Write \(\mu_{i}\) for the highest outcome of any test that item \(i\) appears in: \[\mu_{i}\coloneqq\max_{t}\{Y_{t}:X_{ti}=1\}. \tag{3}\] If item \(i\) is not tested, so that \(\{Y_{t}:X_{ti}=1\}=\emptyset\), we use the convention \(\mu_{i}\coloneqq 1\). A key deduction is that \(\mu_{i}\) is the lowest possible defectivity level for item \(i\). **Lemma 2.3**.: _For each item \(i\), we have \(U_{i}\geq\mu_{i}\)._ _In particular, if \(\mu_{i}=\infty\) (that is, if the item appears in a negative test) then we can recover with certainty that \(U_{i}=\infty\)._ Proof.: If an item \(i\) is not tested at all, then by Definition 2.2 we know that \(\mu_{i}=1\), and so the result trivially holds. Otherwise, if an item \(i\) is tested, then for each \(t\) such that \(x_{ti}=1\), by Definition 2.1, we know that \(U_{i}\geq Y_{t}\). So \(U_{i}\geq\max_{t}\{Y_{t}:x_{ti}=1\}=\mu_{i}\). **Definition 2.4**.: We define the following: 1. For each \(1\leq r\leq d\), we refer to an item \(i\) that has \(\mu_{i}=r\) as \(\mathrm{PD}(r)\) ('Possibly Defective at levels \(\{r,\ldots,d,\infty\}\)') and an item \(i\) with \(\mu_{i}>r\) as \(\mathrm{DND}(r)\) ('Definitely Not Defective at levels \(\{1,\ldots,r\}\)'). 2. For \(r\in\mathcal{D}\), we say that an item of defectivity level \(r\) is 1. _intruding_ if it never appears in a test of outcome \(r\) (in which case strict inequality \(U_{r}>\mu_{r}\) holds in Lemma 2.3), 2. _masked_ if it never appears in a test without some other item of level \(\leq r\) also present. 3. For \(r\in\mathcal{D}\), write \(H_{r}\) for the number of tested non-defective items in \(\mathrm{PD}(r)\) (those that have \(\mu_{i}=r\)). For convenience, also define \(H_{0}\) to be the number of untested non-defective items, and define \(G_{r}=\sum_{j=0}^{r}H_{j}\). We note that there are \((N-K)-G_{r}=H_{r+1}+\ldots+H_{d}+H_{\infty}\) non-defective items in \(\mathrm{DND}(r)\). The notion of'masked' items in Definition 2.4.2 generalizes the one given in [4, Proof of Theorem 2.2]. If \(d=1\), then, in the notation of [2], the number \(G\) of intruding non-defectives (i.e. non-defectives that don't appear in any negative tests) corresponds here to those items \(i\) with \(\mu_{i}=1\), tested or untested; so \(G\) in [2] corresponds to \(G_{1}=H_{0}+H_{1}\) here. To aid understanding, it can be helpful to sort the rows and columns of the test matrix as illustrated in Figure 1. The algorithms we describe in Section 3 will be effective for a variety of matrix designs. 
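Continuing in the same vein, the statistic \(\mu_{i}\) of Definition 2.2 and the resulting classification of items can be computed directly from \((\mathbf{x},\mathbf{Y})\). The function below is a sketch; the commented usage assumes the arrays `x`, `Y` and `U` from the previous sketch.

```python
import numpy as np

def mu_statistics(x: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """mu_i = max{ Y_t : x_ti = 1 } (Definition 2.2), with the convention
    mu_i = 1 for items that are never tested."""
    T, N = x.shape
    mu = np.ones(N)
    for i in range(N):
        outcomes_i = Y[x[:, i]]
        if outcomes_i.size:
            mu[i] = outcomes_i.max()
    return mu

# Applied to the arrays x, Y, U from the previous sketch:
#   * Lemma 2.3 guarantees U >= mu elementwise;
#   * items with mu_i = infinity appeared in a negative test, so U_i = infinity;
#   * for finite r, {i : mu_i = r} is PD(r) and {i : mu_i > r} is DND(r)
#     in the sense of Definition 2.4.
# Example usage:
#   mu = mu_statistics(x, Y)
#   assert np.all(U >= mu)
#   print("items certainly non-defective:", int(np.isinf(mu).sum()))
```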
However, as in [2], in the theoretical analysis in Sections 5-7 we assume that the matrix \(\boldsymbol{x}\) is sampled according to a Bernoulli design with parameter \(p\); that is, that the elements \(x_{ti}\) are equal to \(1\) independently of one another with a fixed probability \(p\). As in [4, Section 2.1], we consider a probability \(p=\nu/K\) for some fixed \(\nu\). In fact, as justified in [2] and in Section 4, it is often reasonable to take \(\nu=1\). It remains possible that some other choice may be better in some situations, although simulation evidence in Figure 3 shows that the performance of our algorithms is relatively robust to choices of \(\nu\) close to \(1\). This means that while in theory we need to know the number of defective items in the sample to design the matrix, for practical purposes it is enough to have a good estimate of this number. The paper [16] proves that performance is improved in the classical case when using matrix designs with near-constant column weights \(L=\lfloor\nu T/K\rfloor\), and simulation evidence in Figure 6 suggests that the same might well be true in the tropical case. However, the analysis involved in [16] is significantly more complicated than that in [2], so here we restrict ourselves to the Bernoulli case for the sake of simplicity of exposition, and leave alternate matrix designs for future work.

Figure 1: Schematic illustration of test matrix and outcomes sorted into block form. Here a \(0\) represents a submatrix of all zeroes, a \(+1\) represents a submatrix which has at least one entry equal to \(1\) in each column, and \(?\) represents a submatrix which could be of any form. The defective items are sorted by level to the left of the vertical line. The column labels above the matrix represent the number of elements of each type; the vector represents the outcomes of the test.

## 3 Description of tropical algorithms ### General remarks concerning algorithms In this section, we describe three algorithms which estimate the true vector of defectivity levels \(\mathbf{U}\), given the test design matrix \(\mathbf{x}\) and the vector of test outcomes \(\mathbf{Y}\). These are the tropical COMP, tropical DD and tropical SCOMP algorithms, adapted from the classical algorithms of the same names in [6, 2] (see also [4, Chapter 2] for a more detailed description). We first define what is meant by an algorithm in this setting. **Definition 3.1**.: A decoding (or detection) algorithm is a function \(\widehat{\mathbf{U}}:\{0,1\}^{T\times N}\times\mathcal{D}^{T}\to\mathcal{D}^{N}\) which estimates the defectivity level of each of the items, based only on knowledge of the test design \(\mathbf{x}\) and outcomes \(\mathbf{Y}\). We write \(\mathbb{P}(\mathrm{err})\) for the error probability of an algorithm, and \(\mathbb{P}(\mathrm{suc})=1-\mathbb{P}(\mathrm{err})\) for the success probability. We define \[\mathbb{P}(\mathrm{err})=\mathbb{P}(\widehat{\mathbf{U}}\neq\mathbf{U}) \tag{4}\] to be the probability that the algorithm fails to recover all the defectivity levels exactly, where the randomness comes through the design of the matrix and the value of \(\mathbf{U}\) itself. Sometimes for emphasis we will include the name of the algorithm and the number of tests, for example by writing \(\mathbb{P}(\mathrm{err};\mathrm{DD},T)\). Recovering \(\mathbf{U}\) exactly represents a strong success criterion for this problem. For example, in some cases, we might be happy to simply recover the defective set \(\mathcal{K}=\{i:U_{i}<\infty\}\).
We later show that recovering \(\mathbf{U}\) and recovering \(\mathcal{K}\) represent equivalent success criteria for tropical DD and tropical SCOMP, but not for tropical COMP. From a clinical point of view, since lower Ct levels are generally associated with higher infectiousness [18], it might be sufficient to recover all the items with defectivity level below a certain threshold \(t\), that is to find \(\bigcup_{r<t}\mathcal{K}_{r}=\{i:U_{i}<t\}\). In this setting, we say that a _false positive error_ is an error of underestimating the defectivity level \(U_{i}\) of an item \(i\) - that is, of setting \(\widehat{U}_{i}<U_{i}\) - and a _false negative error_ is an error of overestimating the defectivity level \(U_{i}\) of an item \(i\) - that is, of setting \(\widehat{U}_{i}>U_{i}\). In the remainder of this section, we define the tropical COMP (Subsection 3.3), tropical DD (Subsection 3.4) and tropical SCOMP (Subsection 3.5) algorithms as tropical equivalents of their established classical counterparts. All of these algorithms are relatively simple: they do not require exhaustive search over possible values of \(\mathbf{U}\) (in contrast to the classical SSS algorithm [2], for example), can be implemented with a small number of passes through the data, and require an amount of storage which is proportional to the number of items and tests. Despite this simplicity, in the classical case, the DD algorithm has performance close to optimal for certain parameter ranges. This can be seen by comparing [9, Eq. (1.1), (1.2)], which show that DD under a constant column weight design achieves an asymptotic performance which matches that achievable by any algorithm and any test design in the case where \(K\simeq N^{\alpha}\) and \(1/2\leq\alpha<1\). Also, note that while simulations show that classical SCOMP outperforms classical DD for a range of finite size problems, Coja-Oghlan _et al._[8] prove that it requires the same number of tests in an asymptotic sense, with SCOMP having the same rate (in the sense of [2]) as classical DD. ### Counting bounds For classical group testing, a lower bound on the number of tests required is given by the so-called'magic number' \(T^{*}_{\text{class}}:=\log_{2}\binom{N}{K}\), which can be justified on information-theoretic grounds. In fact below this number of tests there is exponential decay in performance of any algorithm, adaptive or non-adaptive, and for any test strategy. Specifically, [5, Theorem 3.1] shows that if \(T\) tests are used then in any scenario the success probability for classical group testing satisfies \[\mathbb{P}(\text{suc})\leq 2^{-(T^{*}_{\text{class}}-T)}=\frac{2^{T}}{\binom{N}{K }}, \tag{5}\] sometimes referred to as the counting bound. It may not be _a priori_ obvious how the difficulty of the tropical decoding problem with success criterion (4) compares with the corresponding classical problem. In the tropical setting, we receive more information from each test through the more diverse test outcomes, which suggests the problem could be easier; but we also need to recover more information (to find the levels \(\mathbf{U}\)), which suggests the problem could be harder. Nonetheless, if for given parameters any tropical algorithm can demonstrate performance exceeding the classical counting bound (5) then we can be sure that the corresponding tropical problem is easier than its classical counterpart. By closely mimicking the proof of the classical counting bound (5) given in [5] we can prove its tropical counterpart. 
**Theorem 3.2**.: _Write \(T^{*}_{\mathrm{trop}}:=\log_{d+1}\binom{N}{\mathbf{K}}\), where_ \[\binom{N}{\mathbf{K}}=\binom{N}{K_{1},K_{2},\ldots,K_{d},N-K}=\frac{N!}{K_{1}!K_{2}! \cdots K_{d}!(N-K)!}\] _is the multinomial coefficient. Then_ \[\mathbb{P}(\mathrm{suc})\leq(d+1)^{-(T^{*}_{\mathrm{trop}}-T)}=\frac{(d+1)^{T} }{\binom{N}{\mathbf{K}}}. \tag{6}\] Proof.: See Appendix A. Writing \(\binom{K}{\mathbf{K}}=K!/(K_{1}!K_{2}!\ldots K_{d}!)\) and \(H(\mathbf{\theta})=-\sum_{i=1}^{d}\theta_{i}\log_{2}(\theta_{i})\), we expand \[T^{*}_{\mathrm{trop}}=\log_{d+1}\binom{N}{K}+\log_{d+1}\binom{K}{\mathbf{K}}\simeq \frac{T^{*}_{\mathrm{class}}}{\log_{2}(d+1)}+K\frac{H(\mathbf{\theta})}{\log_{2}(d +1)}. \tag{7}\] Compared with the classical case, the scaling factor \(1/\log_{2}(d+1)<1\) on the first term of (7) represents the fact that we potentially gain more information through each test, while the second additive term represents the extra information we are required to recover. ### Tropical COMP We now describe the tropical COMP algorithm, which extends the classical COMP algorithm described in [6] (see also [7]) - although the idea of the algorithm dates back at least to the work of Kautz and Singleton [17]. We first describe the classical COMP algorithm, which simply declares any item that appears in a negative test as non-defective. All other items are declared defective. In the notation of this paper, classical COMP can be described in the following way. For each item \(i\) with \(\mu_{i}=\infty\), we set \(\widehat{U}_{i}=\infty\); otherwise, \(\mu_{i}=1\) and we set \(\widehat{U}_{i}=1\). In other words, we set \(\widehat{U}_{i}=\mu_{i}\) for each item \(i\). The same rule \(\widehat{U}_{i}=\mu_{i}\) can also be used in tropical group testing. This is what we call the tropical COMP algorithm. ``` Input: Test design matrix \(\mathbf{x}\) and vector of test outcomes \(\mathbf{Y}\) Output: Estimated vector of defectivity levels \(\widehat{\mathbf{U}}\) for each item \(i\)do set \(\widehat{U}_{i}=\mu_{i}\); ``` **Algorithm 1**Tropical COMP algorithm While both classical and tropical COMP mark items appearing in negative tests as non-defective, the tropical COMP algorithm further classifies items into estimated defectivity levels. Note that the two algorithms operate identically when \(d=1\), and have some analogous properties in general. To aid terminology, we first define the notion of unexplained tests in this setting. **Definition 3.3**.: Fix a test matrix \(\mathbf{x}\) and an estimate \(\widehat{\mathbf{U}}\) of \(\mathbf{U}\). Write \[\widehat{Y}_{t}=\min_{i}\{\widehat{U}_{i}:x_{ti}=1\}\] to be the outcome of test \(t\) using matrix \(\mathbf{x}\) if the true defectivity vector were equal to \(\widehat{\mathbf{U}}\). We say that test \(t\) is _unexplained_ by \(\widehat{\mathbf{U}}\) if \(\widehat{Y}_{t}\neq Y_{t}\), where \(Y_{t}\) is the actual test outcome, or _explained_ if \(\widehat{Y}_{t}=Y_{t}\). We call an estimate vector \(\widehat{\mathbf{U}}\) a _satisfying vector_ if it explains all \(T\) tests. The terminology'satisfying vector' here is the tropical group testing equivalent of the classical group testing notion of a satisfying set [2, 4]. For classical COMP, the estimate given is a satisfying set [4, Lemma 2.3]) - indeed, the largest satisfying set. We have a similar result for tropical COMP. 
**Lemma 3.4**.: _The estimate \(\widehat{\mathbf{U}}^{\mathrm{COMP}}\) given by tropical COMP is a satisfying vector._ _Further, \(\widehat{\mathbf{U}}^{\mathrm{COMP}}\) is the least satisfying vector, in that if \(\mathbf{V}\in\mathcal{D}^{N}\) is also a satisfying vector, then \(U_{i}^{\mathrm{COMP}}\leq V_{i}\) for all items \(i\)._ Proof.: For the first part, take an arbitrary test \(t\) with outcome \(Y_{t}\). All items \(i\) included in this test have \(U_{i}\geq\mu_{i}\geq Y_{t}\). Further, there must be an item \(j\) with \(U_{j}=Y_{t}\), otherwise the test outcome would be greater than \(Y_{t}\). For that item, \(\mu_{j}=Y_{t}\). Hence, \[\widehat{Y}_{t}=\min_{i}\{\mu_{i}:x_{ti}=1\}=\mu_{j}=U_{j}=\min_{i}\{U_{i}:x_{ ti}=1\}=Y_{t},\] and the test is explained. Since the test \(t\) was arbitrary, we have \(\widehat{\mathbf{Y}}=\mathbf{Y}\), and hence \(\widehat{\mathbf{U}}^{\mathrm{COMP}}\) explains all the tests. For the second part, note that any satisfying vector \(\mathbf{V}\) must have \(V_{i}\geq\mu_{i}=U_{i}^{\mathrm{COMP}}\) for all \(i\). To see this, consider a vector \(\mathbf{V}\) and item \(j\) with \(V_{j}<\mu_{j}\). Then let \(t\) be a test containing item \(j\) for which \(Y_{t}=\mu_{j}\). There must be at least one such test, by the definition of \(\mu_{j}\), unless \(j\) is never tested. If \(j\) is never tested, then by assumption \(V_{j}\in\mathcal{D}\) has \(V_{j}\geq 1=\mu_{j}\). For this test \(t\), \[\min_{i}\{V_{i}:x_{ti}=1\}\geq\mu_{j}>V_{j},\] so \(\mathbf{V}\) is not satisfying. We know that classical COMP never makes false negative errors. The same is true for tropical COMP - recall that we use this terminology to refer to an error of the form \(\widehat{U}_{i}>U_{i}\). **Lemma 3.5**.: _Tropical COMP never makes false negative errors._ Proof.: This follows directly from Lemma 2.3, which tells us that \(U_{i}\geq\mu_{i}\), where \(\mu_{i}\) is the tropical COMP estimate. For tropical COMP, the success criterion given by (4) to recover the whole vector \(\mathbf{U}\) is not equivalent to the success criterion of merely recovering the defective set \(\mathcal{K}\). It is true that if any algorithm correctly recovers \(\mathbf{U}\), so that \(\widehat{\mathbf{U}}=\mathbf{U}\), then it also recovers \(\mathcal{K}\) as \(\widehat{\mathcal{K}}=\{i:\widehat{U}_{i}<\infty\}=\mathcal{K}\). But the following example shows that for tropical COMP the converse does not hold; that is, just because COMP fails to recover \(\mathbf{U}\), that does not necessarily mean it also fails to recover \(\mathcal{K}\): **Example 3.6**.: Suppose we have two items, with true defectivity levels \(\mathbf{U}=(1,2)\). Suppose further that we run just one test, which contains both items, so \(\mathbf{x}=(1,1)\) and \(\mathbf{Y}=(1).\) Then both items are in just one test with outcome \(Y_{1}=1\), so have \(\mu_{1}=\mu_{2}=1\). Tropical COMP therefore incorrectly estimates \(\widehat{\mathbf{U}}=(1,1)\neq\mathbf{U}\). However, it does succeed in recovering the defective set \(\widehat{\mathcal{K}}=\mathcal{K}=\{1,2\}\). Despite this, we show in Section 5 that tropical COMP asymptotically requires the same number of tests to recover \(\mathbf{U}\) as classical COMP does to recover \(\mathcal{K}\). ### Tropical DD We now describe the tropical DD algorithm. This extends the classical DD algorithm introduced in [2], which works in three steps: 1. Every item appearing in a negative test is non-defective. (All other items are 'possibly defective'.) 2. 
If a positive test contains a single possibly defective item, that item is defective. 3. All remaining items are assumed non-defective. Tropical DD works the same way, except in step 2, it takes account of the different levels of defectivity in the tropical testing. Recalling that a PD(\(r\)) item is one with \(\mu_{i}=r\), the tropical DD algorithm is as follows: ``` Input: Test design matrix \(\mathbf{x}\) and vector of test outcomes \(\mathbf{Y}\) Output: Estimated vector of defectivity levels \(\widehat{\mathbf{U}}\) for each item \(i\) with \(\mu_{i}=\infty\)do set \(\widehat{U}_{i}=\infty\); for each test \(t\) with \(Y_{t}=r<\infty\)do if there exists only one \(\text{PD}(r)\) item \(i\) in test \(t\)then set \(\widehat{U}_{i}=r\); end Declare all remaining unclassified items to have \(\widehat{U}_{i}=\infty\); ``` **Algorithm 2**Tropical DD algorithm To understand why this algorithm works, consider a test \(t\) with outcome \(Y_{t}=r\). Observe that (by Definitions 2.1 and 2.2 respectively): 1. test \(t\) cannot contain any items \(i\) with \(U_{i}<r\), and must contain at least one'special item' \(j\) with \(U_{j}=r\); 2. every item \(i\) appearing in test \(t\) has \(\mu_{i}\geq r\) and so is either \(\text{PD}(r)\) or \(\text{DND}(r)\). Suppose all but one item \(j\) in test \(t\) are \(\text{DND}(r)\) (i.e. have \(\mu_{i}>r\)), so none of those other items are in \(\text{PD}(r)\). Then we know (by Lemma 2.3) that each of those items has \(U_{i}\geq\mu_{i}>r\), and cannot be a special item. Hence the remaining item \(j\) must be the special item that we seek. In other words, the sole \(\text{PD}(r)\) item in the test is marked as definitely defective at level \(r\). This mirrors the classical case where if there is a single PD item in a test, it is marked as definitely defective. We can think of the problem of finding \(\mathcal{K}\) in the classical case as now being split into sub-problems of finding \(\mathcal{K}_{1},\ldots,\mathcal{K}_{d}\) in the tropical case. It is helpful to think of moving upwards through the rows in the block formulation of Figure 1: 1. By examining the tests with outcome \(\infty\), we can identify \(H_{\infty}\) non-defective items and remove them from consideration for tests of outcome \(r<\infty\). 2. In general, for \(r=d,d-1,\ldots,1\), by examining all the tests with outcome \(r\), we hope to find all the defective items \(i(r,1),\ldots,i(r,K_{r})\) and to find the \(H_{r}\) non-defective items that are in \(\text{PD}(r)\) and remove them from consideration for tests of outcome lower than \(r\). We note that the operation of classical DD is the same as the operation of tropical DD when \(d=1\). We know that classical DD never makes false positive errors [4, Lemma 2.2]. The same is true for tropical DD: **Lemma 3.7**.: _Tropical DD never makes false positive errors. Indeed the only errors it can make is wrongly declaring a defective items of some finite level \(U_{i}=r\) to be non-defective \(\widehat{U}_{i}=\infty\)._ Proof.: The first step finds some non-defective items from negative tests, and so is error-free. The second step identifies the sole defective item that can explain the outcome of the test it is in. It is thus also error-free. The final step is the only point at which errors can be made; specifically, false negative errors where a defective item is marked non-defective can occur. For tropical DD, the success criteria of recovering the vector \(\mathbf{U}\) and recovering the defective set \(\mathcal{K}\) are equivalent. 
We know that if an algorithm recovers \(\mathbf{U}\), then it recovers \(\mathcal{K}\). To prove equivalence of the success criteria, it suffices to show that if tropical DD fails to recover \(\mathbf{U}\), then it fails to recover \(\mathcal{K}\). This is done in the following paragraph. Suppose that tropical DD fails to recover \(\mathbf{U}\). Then by Lemma 3.7, the only errors that could have been made are false negative errors where a defective item is wrongly marked non-defective. Hence, tropical DD also fails to recover \(\mathcal{K}\). ### Tropical SCOMP We now describe the tropical SCOMP algorithm, which extends the classical SCOMP algorithm introduced in [2]. Classical SCOMP starts with the estimate given by the DD algorithm. It then greedily adds items to the estimated defective set \(\hat{\mathcal{K}}\) until all tests are explained. Similarly, the tropical SCOMP algorithm starts with the estimate given by tropical DD. It then greedily adds items to the sets \(\hat{\mathcal{K}}_{r}\), for each \(r\) such that there are unexplained tests of outcome \(r\). This is done until all tests are explained. ``` Input: Test design matrix, \(\mathbf{x}\), and vector of test outcomes \(\mathbf{Y}\) Output: Estimated vector of defectivity levels, \(\widehat{\mathbf{U}}\) Initialize \(\widehat{\mathbf{U}}\) as the estimate \(\widehat{\mathbf{U}}_{\text{DD}}\) of \(\mathbf{U}\) produced by the DD algorithm; while unexplained tests exist do Choose a test outcome \(r\) from the unexplained tests; Retrieve all the tests with outcome \(r\); Find the \(\text{PD}(r)\) item \(i\) that occurs the most times in those tests (ties can be broken arbitrarily); Set \(\widehat{U}_{i}=r\) and update the list of unexplained tests; end ``` **Algorithm 3** Tropical SCOMP algorithm Note that tropical SCOMP attempts to solve the sub-problems of finding \(\mathcal{K}_{1},\ldots,\mathcal{K}_{d}\) that are not solved by tropical DD. The action of classical SCOMP is the same as that of tropical SCOMP when \(d=1\). _Remark 3.8_.: If tropical DD succeeds, then so does tropical SCOMP. This is because tropical SCOMP starts with the estimate produced by tropical DD. If tropical DD succeeds, then no tests are unexplained and tropical SCOMP also succeeds (cf. [4, Theorem 2.5]). We show that the success criteria of recovering \(\mathbf{U}\) and recovering \(\mathcal{K}\) are equivalent for tropical SCOMP. Similar to the case of tropical DD, it suffices to show that if tropical SCOMP fails to recover \(\mathbf{U}\), then it fails to recover \(\mathcal{K}\). This is done in the following paragraph. Suppose that tropical SCOMP fails to recover \(\mathbf{U}\). Then necessarily, tropical DD also fails to recover \(\mathbf{U}\). Then there exists an item \(i\in\mathcal{K}\) such that there are no tests in which it is the only \(\mathrm{PD}(\mu_{i})\) item. Since tropical SCOMP fails to recover \(\mathbf{U}\), at least one such item \(i\) was not chosen to explain the test outcomes of any of the tests that it is in and is marked as non-defective. Hence, \(\widehat{\mathcal{K}}\neq\mathcal{K}\). ### Comparison of tropical algorithms Table 1 summarises the features of the tropical algorithms, while comparing them to the classical algorithms (cf. [4, Table 2.1]).
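To complement Table 1, the three decoders can also be written out compactly. The sketch below is an illustrative Python implementation following the pseudocode above (it is not the authors' reference code); \(\infty\) is represented by floating-point infinity, and in the SCOMP step the greedy choice is made among the currently unexplained tests of outcome \(r\).

```python
import numpy as np

INF = np.inf

def tropical_comp(x, Y):
    """Tropical COMP (Algorithm 1): U_hat_i = mu_i = max{Y_t : x_ti = 1}."""
    x, Y = np.asarray(x), np.asarray(Y, dtype=float)
    U_hat = np.ones(x.shape[1])                    # untested items: mu_i = 1 by convention
    for i in range(x.shape[1]):
        t_i = np.flatnonzero(x[:, i])
        if t_i.size:
            U_hat[i] = Y[t_i].max()
    return U_hat

def tropical_dd(x, Y):
    """Tropical DD (Algorithm 2)."""
    x, Y = np.asarray(x), np.asarray(Y, dtype=float)
    mu = tropical_comp(x, Y)
    U_hat = np.full(x.shape[1], INF)               # steps 1 and 3: default to non-defective
    for t in np.flatnonzero(np.isfinite(Y)):       # positive tests only
        r = Y[t]
        pd_in_test = np.flatnonzero((x[t] == 1) & (mu == r))
        if pd_in_test.size == 1:                   # step 2: a unique PD(r) item must be defective at level r
            U_hat[pd_in_test[0]] = r
    return U_hat

def tropical_scomp(x, Y):
    """Tropical SCOMP (Algorithm 3): greedily extend the DD estimate until every test is explained."""
    x, Y = np.asarray(x), np.asarray(Y, dtype=float)
    mu = tropical_comp(x, Y)
    U_hat = tropical_dd(x, Y)
    while True:
        Y_hat = np.array([U_hat[x[t] == 1].min() if x[t].any() else INF
                          for t in range(x.shape[0])])
        unexplained = np.flatnonzero(Y_hat != Y)
        if unexplained.size == 0:
            return U_hat
        r = Y[unexplained[0]]                      # an outcome level with an unexplained test
        tests_r = [t for t in unexplained if Y[t] == r]
        counts = sum(x[t] for t in tests_r) * (mu == r)   # occurrences of each PD(r) item
        U_hat[np.argmax(counts)] = r               # ties broken arbitrarily (here: lowest index)
```

Running these functions on the design of Example 3.9 below reproduces the COMP and DD estimates given there; for SCOMP, the tie between items 3 and 7 may be broken either way, and both resulting vectors explain all the tests.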
\begin{table} \begin{tabular}{l c c c} \hline \hline & **satisfying** & **no false +** & **no false \(-\)** \\ \hline COMP & ✓ & ✗ & ✓ \\ DD & ✗ & ✓ & ✗ \\ SCOMP & ✓ & ✗ & ✗ \\ \hline tropical COMP & ✓ & ✗ & ✓ \\ tropical DD & ✗ & ✓ & ✗ \\ tropical SCOMP & ✓ & ✗ & ✗ \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of features of algorithms in the classical and tropical case: (i) whether the output estimate \(\widehat{\mathbf{U}}\) is guaranteed to explain all test outcomes; (ii)–(iii) guarantees on false positives or false negatives.

We now present a worked example which illustrates the operation of the various tropical algorithms: **Example 3.9**.: Suppose we use the test design \(\mathbf{x}\) and receive the outcomes \(\mathbf{Y}\) as follows: \[\mathbf{x}=\begin{bmatrix}1&0&0&0&0&0&0\\ 1&0&1&0&0&0&1\\ 0&1&0&1&1&0&0\\ 0&1&0&0&1&1&0\\ 1&0&0&0&1&0&0\end{bmatrix}\qquad\mathbf{Y}=\begin{bmatrix}\infty\\ 37\\ \infty\\ 29\\ \infty\end{bmatrix}.\] It is convenient to first calculate \(\mathbf{\mu}\). For example, item 1 occurs in tests \(1,2,5\). We then deduce \[\mu_{1}=\max_{t\in\{1,2,5\}}Y_{t}=\max\{\infty,37,\infty\}=\infty.\] Proceeding similarly for the other items, we obtain \[\mathbf{\mu}=\big{(}\infty,\infty,37,\infty,\infty,29,37\big{)}.\] **Tropical COMP:**: We set \(\widehat{\mathbf{U}}=\mathbf{\mu}\), obtaining the following: \[\widehat{\mathbf{U}}^{\rm COMP}=\big{(}\infty,\infty,37,\infty,\infty,29,37\big{)}.\] **Tropical DD:**: In the first step, we find the items with \(\mu_{i}=\infty\). These are items \(1,2,4\) and \(5\). We declare these to be non-defective, so \(\widehat{U}_{1}^{\rm DD}=\widehat{U}_{2}^{\rm DD}=\widehat{U}_{4}^{\rm DD}= \widehat{U}_{5}^{\rm DD}=\infty\). In the second step, we check each positive test \(t\) to see whether it contains a single \({\rm PD}(Y_{t})\) item. For test 2, there are two \({\rm PD}(Y_{2})={\rm PD}(37)\) items in the test, items 3 and 7, so DD does nothing. For test 4, items \(2,5\) and \(6\) appear, but only item 6 is a \({\rm PD}(Y_{4})={\rm PD}(29)\) item. Hence, the tropical DD algorithm sets \(\widehat{U}_{6}^{\rm DD}=29\). Finally, in the third step, items 3 and 7, which have not yet been classified, get assigned a defectivity level of \(\widehat{U}_{3}^{\rm DD}=\widehat{U}_{7}^{\rm DD}=\infty\). Hence, the output of the tropical DD algorithm is: \[\widehat{\mathbf{U}}_{\rm DD}=\big{(}\infty,\infty,\infty,\infty,\infty,29,\infty \big{)}.\] **Tropical SCOMP:**: The algorithm initializes with the tropical DD estimate \(\widehat{\mathbf{U}}=\widehat{\mathbf{U}}_{\rm DD}.\) The corresponding outcome would be (written as the transpose, a row vector) \[\widehat{\mathbf{Y}}=(\infty,\infty,\infty,29,\infty)^{\top},\] where \(\widehat{Y}_{2}=\infty\neq 37=Y_{2}\). Hence, test 2 is the only unexplained test. We retrieve the \({\rm PD}(37)\) items in test 2. These are items 3 and 7. Because these items both appear in the same number of tests with outcome 37, namely one, the tropical SCOMP algorithm chooses between them arbitrarily - let's say it chooses item 7 - and assigns the defectivity level of \(\widehat{U}_{7}^{\mathrm{SCOMP}}=37\) to it. Now no tests remain unexplained, and the algorithm terminates. Hence the algorithm returns \[\widehat{\mathbf{U}}^{\mathrm{SCOMP}}=\big{(}\infty,\infty,\infty,\infty,\infty,29, 37\big{)}.\] ## 4 Simulation results In this section, we present some simulation results.
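Each of the experiments below follows the same basic recipe: sample the defectivity levels under the combinatorial model, draw a Bernoulli test design, compute the tropical outcomes, run a decoder, and record whether \(\widehat{\mathbf{U}}=\mathbf{U}\). A minimal Monte Carlo sketch of this loop is given below; it reuses the decoder functions from the sketch above, the helper names are ours, the parameter values are illustrative, and the bound helper simply evaluates (6).

```python
import numpy as np
from math import comb, factorial

rng = np.random.default_rng(0)

def sample_instance(N, Kvec, T, p):
    """Levels U under the combinatorial model, a Bernoulli(p) design x, and outcomes Y."""
    U = np.full(N, np.inf)
    defectives = rng.choice(N, size=sum(Kvec), replace=False)
    U[defectives] = np.repeat(np.arange(1, len(Kvec) + 1), Kvec)   # K_r items at each level r
    x = (rng.random((T, N)) < p).astype(int)
    Y = np.array([U[x[t] == 1].min() if x[t].any() else np.inf for t in range(T)])
    return x, Y, U

def success_rate(decoder, N, Kvec, T, p, reps=1000):
    """Monte Carlo estimate of the success probability P(U_hat = U) from (4)."""
    hits = 0
    for _ in range(reps):
        x, Y, U = sample_instance(N, Kvec, T, p)
        hits += np.array_equal(decoder(x, Y), U)
    return hits / reps

def tropical_counting_bound(N, Kvec, T):
    """The counting bound (6): (d+1)^T divided by the multinomial coefficient (N; K_1,...,K_d, N-K)."""
    K = sum(Kvec)
    multinom = comb(N, K) * factorial(K)
    for Kr in Kvec:
        multinom //= factorial(Kr)
    return min(1.0, (len(Kvec) + 1) ** T / multinom)

# Roughly the setting of Figure 2 (with fewer repetitions than the 10^4 used in the paper):
# success_rate(tropical_dd, N=500, Kvec=(2, 2, 2, 2, 2), T=150, p=0.1, reps=200)
```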
We empirically compare the performance of the tropical and classical algorithms, and investigate how changing the probability \(p\) and the sequence \(\mathbf{K}=(K_{1},\ldots,K_{d})\) affects their performance. We also investigate the effect of using a combinatorial model with random defectivity levels for defective items, as opposed to the model with fixed \(K_{r}\) introduced in Section 2. Finally, we compare the Bernoulli design and the near-constant column weight design, described in [4, Section 2.7]. Figure 2 shows the performance of the tropical algorithms, relative to the performance of the classical algorithms and to the counting bounds (5) and (6).

Figure 2: Empirical performance of the classical COMP, DD and SCOMP algorithms together with their tropical counterparts, through simulation with a Bernoulli design. For comparison, we plot the classical and tropical counting bounds of (5) and (6). The parameters chosen are \(N=500,K=10,p=0.1,\mathbf{K}=(2,2,2,2,2).\) Each point is obtained through \(10^{4}\) simulations.

Figure 2 shows for the chosen set of parameters that the tropical COMP algorithm performs almost identically to its classical counterpart (the lines are so close that they are difficult to distinguish), and the tropical DD and SCOMP algorithms perform better than their classical counterparts. We also notice that in this case tropical SCOMP beats the classical counting bound (5) for small values of \(T\), showing that the tropical model can allow genuine performance gains over even adaptive classical group testing algorithms. Figure 3 shows how the performance of the tropical algorithms varies with \(p\), for fixed \(N\), \(T\) and \(\mathbf{K}\). Figure 3 shows that the performance of tropical DD and tropical SCOMP have a relatively wide plateau near their peak, indicating some robustness to misspecification of \(p\), and showing that in general the choice \(p=1/K\) is close to optimal for each algorithm.

Figure 3: Simulation of the tropical COMP, DD and SCOMP algorithms with a Bernoulli design to investigate the effect of changing the parameter \(p\) for the Bernoulli design. The parameters are \(N=500,K=10,T=125\) and \(\mathbf{K}=(2,2,2,2,2)\). Each point is obtained through \(10^{4}\) simulations.

Figure 4 shows how the performance of the tropical algorithms varies as \(\mathbf{K}\) is varied, for fixed \(N,K,T\) and \(d\). We note that there are \(\binom{K-1}{d-1}\) possible vectors \(\mathbf{K}\) that sum to \(K\) while having each \(K_{i}>0\). Also, a \(d\)-dimensional plot is required to visualize the performance of the algorithms for all the \(\mathbf{K}\) simultaneously. Hence, for simplicity of exposition, we only present the case \(d=2\). Figure 4 shows that changing \(\mathbf{K}\) has very little effect on the performance of tropical COMP. This is quantified later in Section 5, where we find that, for tropical COMP, the error contribution of the \(K\) defective items is small compared to that of the \(N-K\) non-defective items. Figure 4 also shows that the performance of tropical DD and tropical SCOMP improves as \(K_{1}\) increases, until reaching a peak. Figure 5 shows the effect on the performance of the tropical algorithms when the defective set \(\mathcal{K}\) is chosen with a combinatorial prior, and the defectivity level for each defective item is drawn uniformly and independently from \(\{1,\ldots,d\}\).
We note that the performance of tropical DD and tropical SCOMP improves as \(d\) increases, until reaching a peak, while the performance of tropical COMP does not change. Finally, Figure 6 compares the performance of the tropical algorithms with the Bernoulli design and with a near-constant column weight design. We see that the performance mirrors the classical case (cf. [4, Figure 2.3]).

Figure 4: Simulation of the tropical COMP, DD and SCOMP algorithms with a Bernoulli design to investigate the effect of changing \(\mathbf{K}\) on the performance. The parameters are \(N=1000,K=20,T=400,p=0.1\) and \(\mathbf{K}=(K_{1},20-K_{1})\). Each point is obtained through \(10^{4}\) simulations.

Figure 5: Simulation of the tropical COMP, DD and SCOMP algorithms to investigate their performance with a Bernoulli design when the set of defective items, \(\mathcal{K}\), is chosen with a combinatorial prior, and the defectivity level for each defective item is drawn uniformly and independently from \(\{1,\ldots,d\}\). The parameters are \(N=500,K=10,T=120,p=0.1\). Each point is obtained through \(10^{4}\) simulations.

Figure 6: Simulation of the tropical COMP, DD and SCOMP algorithms to investigate their performance with a Bernoulli design as well as with a near-constant column weight design. The parameters are \(N=500,K=10,p=0.1,\nu=\ln 2,L=\lfloor\nu T/K\rfloor,\mathbf{K}=(2,2,2,2,2)\). Each point is obtained through \(10^{4}\) simulations.

## 5 Analysis of tropical COMP algorithm In this section, we give an analysis of the performance of tropical COMP. The main result of this section (Theorem 5.1) shows that the number of tests needed to ensure a vanishing error probability using a Bernoulli design is asymptotically identical to that needed in the classical case. ### Achievability result Our main result for tropical COMP is the following (cf. [4, Eq. (2.9)]): **Theorem 5.1**.: _Let \(\delta>0\). Let \(p=\nu/K\), for \(0<\nu<K\). Taking_ \[T\geq(1+\delta)\,\frac{\mathrm{e}^{\nu}}{\nu}K\ln N\] _ensures that the tropical COMP error probability \(\mathbb{P}(\mathrm{err})\) is asymptotically at most \(N^{-\delta}\)._ _Remark 5.2_.: Note that \(T=(1+\delta)\frac{\mathrm{e}^{\nu}}{\nu}K\ln N\) is minimised over \(\nu\) when \(\nu=1\). This corresponds to choosing the same optimal \(p\) as in the classical case. We note that tropical COMP, similar to classical COMP, is reasonably robust to misspecification of \(p\) (cf. [4, Remark 2.3]). To reach the result of Theorem 5.1, we find a bound on the error probability of tropical COMP using a Bernoulli design. This bound, given below, extends the corresponding bound by Chan _et al._ for classical COMP, given in [6, Eq. (8)], to the tropical setting. **Lemma 5.3**.: _For a Bernoulli test design with parameter \(p\), we can bound the error probability of tropical COMP from above by_ \[\mathbb{P}(\mathrm{err};\mathrm{COMP},T)\leq\sum_{r\in\mathcal{D}}K_{r}(1-p(1- p)^{\sum_{i<r}K_{i}})^{T}. \tag{8}\] Proof.: To obtain an upper bound on the error probability of tropical COMP, we consider each item in turn, using the fact that the union bound \[\mathbb{P}(\mathrm{err})=\mathbb{P}\left(\bigcup_{i}\{\widehat{U}_{i}\neq U_{ i}\}\right)\leq\sum_{i}\mathbb{P}(\widehat{U}_{i}\neq U_{i}), \tag{9}\] tells us that we only need to control the individual probabilities that an item is misclassified. Any given item \(i\) is misclassified only if it is intruding. This happens if every test which it appears in contains at least one of the \(\sum_{i<r}K_{i}\) items of lower level.
For a given test, the chance that it contains \(i\) but doesn't contain such an item is \(p(1-p)^{\sum_{i<r}K_{i}}\). Hence using independence between tests, we have that \[\mathbb{P}(\widehat{U}_{i}\neq U_{i})\leq(1-p(1-p)^{\sum_{i<r}K_{i}})^{T}. \tag{10}\] The result follows on substituting (10) in (9). We can now prove the main result. Proof of Theorem 5.1.: This proof is adapted from the one given in the classical case by Chan _et al._ in [6]. Let \(T=\beta K\ln N\). The key is to observe for a given \(T\) and \(p\) that the function \(f(\ell)=(1-p(1-p)^{\ell})^{T}\) is increasing in \(\ell\). Hence we can write (8) as \[\mathbb{P}(\mathrm{err})\leq\sum_{r\in\mathcal{D}}K_{r}f\left(\sum_{i<r}K_{i} \right)\leq\sum_{r\in\mathcal{D}}K_{r}f\left(K\right)=Nf(K). \tag{11}\] Then, setting \(p=\nu/K\) in Lemma 5.3, we have \[\mathbb{P}(\mathrm{err}) \leq N\exp(-Tp(1-p)^{K})\] \[=N\exp(-\beta\nu(1-\nu/K)^{K}\ln N)\] \[\simeq N\exp(-\beta\nu\mathrm{e}^{-\nu}\ln N)\qquad\text{as }K\to\infty\] \[=N^{1-\beta\nu\mathrm{e}^{-\nu}}.\] Hence, taking \(\beta=(1+\delta)\frac{\mathrm{e}^{\nu}}{\nu}\) ensures that \(\mathbb{P}(\mathrm{err})\) is asymptotically at most \(N^{-\delta}\). ### Contributions to the error probability Figure 7 illustrates the contribution of each summand to the bound (8) for a range of values of \(T\). It is clear that the dominant term contributing to the error bound is \(r=\infty\) (that is, the dominant error event is for a non-defective item to be wrongly classified as defective, while the defective items are more typically correctly classified).

Figure 7: Plot illustrating the variation of \(\min\{1,K_{r}f(\sum_{i<r}K_{i})\}\) with \(T\), for each \(r\in\mathcal{D}\). The parameters chosen are \(N=500,K=10,p=0.1\) and \(\mathbf{K}=(2,2,2,2,2)\).

Indeed, (11) implies that the proportion of the bound (8) arising from the \(r=\infty\) term is at least \(1-K/N\), since this result gives \[\frac{(N-K)f(K)}{\sum_{r\in\mathcal{D}}K_{r}f\left(\sum_{i<r}K_{i}\right)}\geq \frac{(N-K)f(K)}{Nf(K)}=1-\frac{K}{N}.\] In fact in the 'uniform' case where \(\mathbf{K}=(K/d,\ldots,K/d)\), the contributions to the bound \[K_{r}f\left(\sum_{i<r}K_{i}\right)\simeq K_{r}\exp\left(-Tp(1-p)^{(r-1)K/d} \right)\simeq\frac{K}{d}\exp\left(-Tp\mathrm{e}^{-p(r-1)K/d}\right) \tag{12}\] decay doubly-exponentially as \(r\) gets smaller, meaning that the errors are overwhelmingly likely to arise from wrongly classifying items with high \(r\). ### Error probability for different defectivity sequences For a fixed number of defective items \(K\), it would be interesting to know what defectivity sequences \(\mathbf{K}\) make the tropical group testing problem hardest or easiest. This is explored via simulation in Figure 4, but we would also like to understand which sequences \(\mathbf{K}\) lead to the largest and smallest error probability for tropical COMP. Unfortunately, we cannot directly control the error probability in this way. However, we can use the COMP error bound (8) to induce a partial order on sequences \(\mathbf{K}\) with the same sum. This will show that the error bound is smallest in the classical case where all items have the same level and largest in the case where each item has distinct levels. Given a sequence \(\mathbf{K}=(K_{1},\ldots,K_{d})\), we can sort the items in increasing order of level, and for each item \(k\) write \(L_{k}\) for the number of items with a strictly lower level.
For example, with \(K=8\), the sequence \(\mathbf{K}^{(1)}=(2,2,2,2)\) induces the sequence \(\mathbf{L}^{(1)}=(0,0,2,2,4,4,6,6)\), while the sequence \(\mathbf{K}^{(2)}=(1,1,1,\ldots,1)\) induces the sequence \(\mathbf{L}^{(2)}=(0,1,2,3,4,5,6,7)\). We can compare two sequences \(\mathbf{K}^{(i)}=\left(K_{1}^{(i)},\ldots,K_{d}^{(i)}\right)\), for \(i=1,2\), and define a partial order \(\mathbf{K}^{(1)}\preceq\mathbf{K}^{(2)}\) if the corresponding sequences \(L_{k}^{(1)}\leq L_{k}^{(2)}\) for all \(k\). Hence in the example above, \(\mathbf{K}^{(1)}\preceq\mathbf{K}^{(2)}\). In this partial ordering, for a given \(K\) the minimal sequence will be \(\mathbf{K}=(K)\), giving \(\mathbf{L}=(0,0,\ldots,0)\), and the sequence \(\mathbf{K}=(1,1,\ldots,1)\) as seen above will be maximal. Now, note that the bound on the RHS of (8) respects this partial order. That is, since the \(r=\infty\) term will be the same for all such sequences, we can regard the variable part of the bound (11) as a sum over defective items \[\sum_{k\in\mathcal{K}}f(L_{k}), \tag{13}\] and use the fact that the function \(f(\ell)\) is increasing in \(\ell\) to deduce that: **Lemma 5.4**.: _If \(\mathbf{K}^{(1)}\preceq\mathbf{K}^{(2)}\) then the corresponding error bound (13) is lower for \(\mathbf{K}^{(1)}\) than \(\mathbf{K}^{(2)}\)._ Hence, for fixed \(K\) the error bound (13) is smallest for the minimal sequence \(\mathbf{K}=(K)\) corresponding to the classical case and largest for the sequence \(\mathbf{K}=(1,1,\ldots,1)\) where each defective item has its own unique level. ## 6 Analysis of tropical DD algorithm: achievability In this section we give an analysis of the performance of tropical DD, which extends that given in [2] for the classical \(d=1\) case by taking advantage of the information provided by the more varied test outcomes of the tests. Our main achievability result ensures success with high probability when we have a number of tests above a certain threshold. **Theorem 6.1**.: _For \(\nu>0\), write_ \[\psi_{r}:=\psi_{r}(\nu)=\Big{(}1-\frac{\nu}{K}\Big{)}^{\sum_{t\leq r}K_{t}}\,.\] _Also define_ \[T_{\infty}(\nu):=\frac{1}{\nu\psi_{d}}K\ln\frac{N}{K}\] _and_ \[T_{r}(\nu):=\frac{1}{\nu\psi_{r}}K\ln K_{r}.\] _If we take_ \[T\geq(1+\delta)\max\big{\{}T_{\infty}(\nu),T_{d}(\nu),T_{d-1}(\nu),\ldots,T_{1 }(\nu)\big{\}},\] _tests, then the error probability of the tropical DD algorithm for a Bernoulli design with \(p=\nu/K\) tends to \(0\)._ _Remark 6.2_.: Note that in the uniform case where \(K_{r}=K/d\) for all \(r\), in regimes where \(K=N^{\alpha}\) and \(\alpha>1/2\), the dominant term in Theorem 6.1 is \[T_{d}(\nu)=\frac{1}{\nu(1-\nu/K)^{K}}\,K\ln\frac{K}{d};\] that is, the maximum over \(r\) is achieved at \(r=d\), since \(\psi_{r}\) is decreasing in \(r\) and \(\ln K_{r}\) is constant. Further, in this case, since we are able to choose \(\nu\) to minimise \(T_{\mathrm{fin}}(\nu)\), we can maximise \(\nu(1-\nu/K)^{K}\) by taking \(\nu=K/(K+1)\) or \(p=1/(K+1)\). Note that this does not necessarily mean that we minimise the error probability, since we are minimising the number of tests to control a bound on \(\mathbb{P}(\mathrm{err})\), not \(\mathbb{P}(\mathrm{err})\) itself. In other words, asymptotically we require \(\mathrm{e}K\ln(K/d)\) tests. This means that the tropical performance bound of Theorem 6.1 represents an asymptotic reduction of \(\mathrm{e}K\ln d\) tests over the classical bound of \(\mathrm{e}K\ln K\) tests (see [1, Theorem 1]). 
While this is a second-order asymptotic term compared with the leading order \(\mathrm{e}K\ln K\) term, it may still represent a valuable improvement in problems of finite size. In the next section, we will see a converse result Theorem 7.1, showing that success probability can't be high with a number of tests below a threshold. Further, we show that for certain parameter regimes these two thresholds coincide, showing that we have sharp performance bounds on tropical DD. ### Proof outline To prove Theorem 6.1, we need a general upper bound on the error probability of DD. The key idea is that DD succeeds if and only if each defective item is proven to be such in the second stage of the algorithm. **Definition 6.3**.: For any \(1\leq s\leq K_{r}\), we write \(L_{r,s}\) for the number of tests that contain item \(i(r,s)\), no other defective item \(i(t,u)\) with \(t\leq r\), and also no non-defective \(\mathrm{PD}(r)\) item. A test that counts towards \(L_{r,s}\) is precisely one that discovers \(i(r,s)\) to be defective at level \(r\). So with this definition, we can say that the tropical DD algorithm succeeds if and only if \(L_{r,s}\geq 1\) for all \((r,s)\). Hence we have \[\mathbb{P}(\mathrm{err})=\mathbb{P}\left(\bigcup_{r,s}\{L_{r,s}=0\}\right) \leq\sum_{r=1}^{d}\mathbb{P}\left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\right), \tag{14}\] One way we could get \(L_{r,s}=0\) is if there is a large number of potentially intruding non-defectives at this outcome level. Recall from Definition 2.4.3 that this number of intruding non-defectives is \(G_{r}\). However, provided we use sufficiently many tests, \(G_{r}\) is unlikely to be large. Hence, we will condition on \(G_{r}\) being no larger than some threshold level \(g_{r}^{*}\), to be chosen later. So for each level \(r\), the summand in (14) can be bound as \[\mathbb{P}\left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\right)=\mathbb{P} \left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\left|\ G_{r}\leq g_{r}^{*} \right.\right)\mathbb{P}(G_{r}\leq g_{r}^{*})\] \[\qquad+\mathbb{P}\left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\left|\ G_ {r}>g_{r}^{*}\right.\right)\mathbb{P}(G_{r}>g_{r}^{*})\] \[\leq\mathbb{P}\left(\bigcup_{s=1}^{K_{r}}\{L_{r,s}=0\}\left|\ G_ {r}\leq g_{r}^{*}\right.\right)+\mathbb{P}(G_{r}>g_{r}^{*})\] \[\leq K_{r}\,\mathbb{P}\left(L_{r,s}=0\mid G_{r}\leq g_{r}^{*} \right)+\mathbb{P}(G_{r}>g_{r}^{*}), \tag{15}\] where we used the union bound in the last line. We need to show that both terms in (15) are small. The first term being small tells us we're likely to find the level-\(r\) defectives provided \(G_{r}\) is not too big; we will show this happens when \(T\geq(1+\delta)\max\{T_{\infty},T_{r}\}\). The second term being small tells us that \(G_{r}\) is unlikely to be too big; we will show this happens when \(T\) is big; for example, \(T\geq(1+\delta)T_{r}\) will be plenty. In Subsection 6.2 we will analyse the first term \(\mathbb{P}\left(L_{r,s}=0\mid G_{r}\leq g_{r}^{*}\right)\). In Subsection 6.3 we will bound the second term \(\mathbb{P}(G_{r}>g_{r}^{*})\). Then in Subsection 6.4 we will put the pieces together to prove Theorem 6.1. ### Finding defectives We first describe the joint distribution of certain random variables arising in the analysis of DD. We provide additional notation to that used in Section 2. **Definition 6.4**.: We define the following random variables: 1. Write \(M_{\infty}\) for the number of tests which contain no defectives - and so are negative tests with outcome \(\infty\). 2. 
For \(1\leq r\leq d\), write \(M_{r}\) for the total number of positive tests with outcome \(r\). 3. Further, decompose \(M_{r}=\sum_{s=1}^{K_{r}}M_{r,s}+M_{r,+}\) as follows: 1. For \(1\leq s\leq K_{r}\), write \(M_{r,s}\) for the number of tests that contain a single item \(i(r,s)\) at level \(r\) and no other defective item \(i(t,u)\) with \(t\leq r\); note that each such test has outcome \(r\). 2. Write \(M_{r,+}\) for the number of tests which have outcome \(r\) but contain multiple defective items at level \(r\). Write \(\mathbf{M}\) for the collection of random variables \[\mathbf{M}=(M_{1,1},M_{1,2},\ldots M_{1,K_{1}},M_{1,+},\ldots,M_{d,1},M_{d,2},\ldots, M_{d,K_{d}},M_{d,+},M_{\infty})\] (noting this includes the terms in the decompositions of the \(M_{r}\)s, but not the \(M_{r}\)s themselves). Note that (using Definition 2.4.2) item \(i(r,s)\) is masked if and only if \(M_{r,s}=0\). In particular \(M_{r,s}=0\) means necessarily that \(L_{r,s}=0\), and this is the event we wish to avoid. But first, let us note the joint distribution of \(\mathbf{M}\). **Lemma 6.5**.: _The random vector \(\mathbf{M}\) is multinomial with parameters \(T\) and \(\mathbf{q}\), where_ \[\mathbf{q}=(q_{1,1},q_{1,2},\ldots q_{1,K_{1}},q_{1,+},\ldots,q_{d,1},q_{d,2}, \ldots,q_{d,K_{d}},q_{d,+},q_{\infty})\,.\] _Here for each \(r\) and \(s\):_ \[q_{\infty} :=(1-p)^{K},\] \[q_{r,s} :=\prod_{t<r}(1-p)^{K_{t}}\left((1-p)^{K_{r}-1}p\right),\] \[q_{r} :=\prod_{t<r}(1-p)^{K_{t}}\left(1-(1-p)^{K_{r}}\right),\] \[q_{r,+} :=q_{r}-K_{r}q_{r,s}.\] Proof.: First, a test is negative if all defective items are absent, which happens with probability \(q_{\infty}=(1-p)^{K}\). Second, \(q_{r,s}\) is the probability that all items at levels \(t<r\) are absent, that item \(i(r,s)\) is present, and that the \(K_{r}-1\) other items at level \(r\) are absent. Third, \(q_{r}\) is the probability of outcome \(r\), which happens when all items at levels \(t<r\) are absent and it is not the case that all items at level \(r\) are absent. Fourth, \(q_{r,+}\) is the probability \(q_{r}\) of an outcome \(r\) minus the probabilities of a single level-\(r\) item being the cause. Although the distribution of the crucial variable \(L_{r,s}\) seems tricky to derive from first principles, it is much easier once we know \(M_{r,s}\) and the number of potentially intruding non-defectives. This is because a test counting towards \(M_{r,s}\) will count also towards \(L_{r,s}\) provided that no non-defectives intrude on the test too. **Lemma 6.6**.: _The conditional distribution of \(L_{r,s}\), given \(M_{r,s}\) and \(G_{r}\), is_ \[L_{r,s}\mid\{M_{r,s}=m_{r,s},G_{r}=g_{r}\}\sim\mathrm{Bin}(m_{r,s},(1-p)^{g_{r }}). \tag{16}\] Proof.: There are \(m_{r,s}\) tests which contain item \(i(r,s)\) and no other defective item \(i(t,u)\) with \(t\leq r\). Each of these \(m_{r,s}\) tests independently contributes to \(L_{r,s}\) if and only if none of the \(g_{r}\) potentially intruding non-defective items appear in the test. Because of the Bernoulli design, each of those \(g_{r}\) non-defectives appears in the test with probability \(p\), so the probability none of them appear is \((1-p)^{g_{r}}\), independent over the \(m_{r,s}\) tests. We can now bound the probability of the undesirable event that \(L_{r,s}=0\).
**Lemma 6.7**.: _Using a Bernoulli design with parameter \(p\), for any \(g_{r}^{*}\), we have the bound_ \[\mathbb{P}(L_{r,s}=0\mid G_{r}\leq g_{r}^{*})\leq\exp\big{(}-q_{r,1}(1-pg_{r}^ {*})T\big{)}, \tag{17}\] _where as in Lemma 6.5 we write \(q_{r,1}=p(1-p)^{\sum_{t\leq r}K_{t}-1}\)._ Proof.: We start by conditioning on equality \(G_{r}=g_{r}\). Noting that by Lemma 6.5\(M_{r,s}\sim\operatorname{Bin}(T,q_{r,1})\) and that \(\mathbb{P}(\operatorname{Bin}(m,q)=0)=(1-q)^{m}\), we can write \[\mathbb{P}(L_{r,s}=0\mid G_{r}=g_{r}) =\sum_{m=0}^{T}\mathbb{P}(M_{r,s}=m)\,\mathbb{P}(L_{r,s}=0\mid G_ {r}=g_{r},M_{r,s}=m)\] \[=\sum_{m=0}^{T}\binom{T}{m}q_{r,1}^{m}(1-q_{r,1})^{T-m}\,(1-(1-p) ^{g_{r}})^{m}\] \[\leq(1-q_{r,1}(1-p)^{g_{r}})^{T}\] \[\leq\exp\left(-q_{r,1}(1-p)^{g_{r}}T\right)\] \[\leq\exp\left(-q_{r,1}(1-pg_{r})T\right). \tag{18}\] From the second to the third line, we used the binomial theorem, and then we used Bernoulli's inequality in the form \((1-p)^{g}\geq 1-pg\). Note that (18) is increasing in \(g_{r}\). Thus we can bound (17) by the worst-case conditioning, where \(G_{r}=g_{r}^{*}\). ### Intruding non-defectives Recall from Definition 2.4.3 that \(G_{r}\) is the number of non-defectives that could intrude into tests with outcome \(r\). Our goal is to bound that tail probability of \(G_{r}\) in (15). **Lemma 6.8**.: _Write_ \[\overline{M}_{r}:=M_{\infty}+M_{d}+M_{d-1}+\cdots+M_{r+1}\] _for the number of tests with outcomes higher than \(r\). Then \(\overline{M}_{r}\) has distribution_ \[\overline{M}_{r}\sim\operatorname{Bin}\left(T,\ \prod_{t\leq r}(1-p)^{K_{t}}\right) \tag{19}\] _Further, the conditional distribution of \(G_{r}\) given \(\boldsymbol{M}\) is_ \[G_{r}\ |\ \{\boldsymbol{M}=\boldsymbol{m}\}\sim\operatorname{Bin}(N-K,(1-p)^{m_{r }^{*}}), \tag{20}\] _where \(m_{r}^{*}=m_{r+1}+\dots+m_{d}+m_{\infty}\)._ Proof.: By standard properties of the multinomial (see [2, Lemma 30]), \[\overline{M}_{r}\sim\operatorname{Bin}\left(T,q_{r+1}+\dots+q_{d}+q_{\infty} \right).\] But \[q_{r+1}+\dots+q_{d}+q_{\infty}=\prod_{t\leq r}(1-p)^{K_{t}}, \tag{21}\] since it forms a collapsing sum. This proves the first distribution. Given \(\overline{M}_{r}=m_{r}^{*}\), each of the \(N-K\) non-defectives will be independently counted in \(G_{r}\) provided they don't appear in any of the \(m_{r}^{*}\) tests with outcomes higher than \(r\). By the Bernoulli design structure, each item is independently counted with probability \((1-p)^{m_{r}^{*}}\). This proves the second distribution. We can calculate the expectation of \(G_{r}\) by conditioning on \(\overline{M}_{r}\). **Lemma 6.9**.: \(\mathbb{E}G_{r}\leq(N-K)\exp(-p\psi_{r}T)\)_._ Proof.: We use the facts that (19) gives that \(\overline{M}_{r}\sim\operatorname{Bin}(T,\psi_{r})\) and Lemma 6.8 gives that \(G_{r}\ |\ \{\overline{M}_{r}=m_{r}^{*}\}\sim\operatorname{Bin}\left(N-K,(1-p)^{m_{r }^{*}}\right)\). Hence we can use the binomial theorem to write \[\mathbb{E}G_{r} =\sum_{m=0}^{T}\mathbb{P}(\overline{M}_{r}=m)\,\mathbb{E}[G_{r}\ |\ \overline{M}_{r}=m]\] \[=\sum_{m=0}^{T}\binom{T}{m}\psi_{r}^{m}(1-\psi_{r})^{T-m}\left(N -K\right)(1-p)^{m}\] \[=(N-K)(1-\psi_{r}+\psi_{r}(1-p))^{T},\] and the result follows. We will choose the threshold \(g_{r}^{*}\) to be just slightly bigger than this expectation; specifically, we take \(g_{r}^{*}=(N-K)\exp(-p\psi_{r}T(1-\epsilon))\) for some \(\epsilon>0\) to be determined later. 
**Lemma 6.10**.: _With \(g_{r}^{*}=(N-K)\exp(-p\psi_{r}T(1-\epsilon))\), we have_ \[\mathbb{P}(G_{r}>g_{r}^{*})\leq\exp(-p\psi_{r}T\epsilon).\] Proof.: This is a simple application of Markov's inequality. Using Lemma 6.9, we get \[\mathbb{P}(G_{r}>g_{r}^{*}) \leq\frac{\mathbb{E}G_{r}}{g_{r}^{*}}\] \[\leq\frac{(N-K)\exp(-p\psi_{r}T)}{(N-K)\exp(-pT\psi_{r}(1-\epsilon ))}\] \[=\exp(-p\psi_{r}T\epsilon).\qed\] ### Completing the proof We are now ready to complete the proof of our main result. Proof of Theorem 6.1.: From (14) and (15), we had got as far as the bound \[\mathbb{P}(\mathrm{err})\leq\sum_{r=1}^{d}\big{[}K_{r}\,\mathbb{P}\left(L_{r, s}=0\mid G_{r}\leq g_{r}^{*}\right)+\mathbb{P}(G_{r}>g_{r}^{*})\big{]}, \tag{22}\] and in Subsection 6.3 we had chosen \(g_{r}^{*}=(N-K)\exp(-p\psi_{r}T(1-\epsilon))\), with \(\epsilon\) still to be fixed. We need to show that, in each summand of (22), both the first and second terms tend to \(0\). We begin with the first term. From Lemma 6.7, we have the bound \[K_{r}\,\mathbb{P}(L_{r,s}=0\mid G_{r}\leq g_{r}^{*}) \leq K_{r}\exp\big{(}-q_{r,1}(1-pg_{r}^{*})T\big{)}\] \[=\exp\left(\ln K_{r}-T\psi_{r}p\frac{1-pg_{r}^{*}}{1-p}\right),\] where we have used that \(q_{r,1}=\psi_{r}p/(1-p)\). The condition \(T\geq(1+\delta)T_{r}\) means that \(T\psi_{r}p\geq(1+\delta)\ln K_{r}\), so we get \[K_{r}\,\mathbb{P}(L_{r,s}=0\mid G_{r}\leq g_{r}^{*})\leq\exp\left(\ln K_{r} \left(1-(1+\delta)\frac{1-pg_{r}^{*}}{1-p}\right)\right).\] This tends to \(0\) so long as \(pg_{r}^{*}\) tends to \(0\), which we will now check. Since \(T\geq(1+\delta)T_{\infty}\) and \(\psi_{r}\geq\psi_{d}\), we know that \(Tp\psi_{r}\geq(1+\delta)\ln(N/K)\). With \(g_{r}^{*}=(N-K)\exp(-p\psi_{r}T(1-\epsilon))\), we therefore have \[pg_{r}^{*}\leq\left(\frac{N}{K}\right)\exp\left(-Tp\psi_{r}(1-\epsilon) \right)\leq\left(\frac{N}{K}\right)^{1-(1-\epsilon)(1+\delta)}.\] This means that \(pg_{r}^{*}\to 0\) is indeed guaranteed by choosing \(\epsilon<\delta/(1+\delta)\). Now the second term. From Lemma 6.10, we have \[\mathbb{P}(G_{r}>g_{r}^{*})\leq\exp(-p\psi_{r}T\epsilon)<\exp(-p\psi_{r}T\delta/( 1+\delta)),\] since we have just chosen \(\epsilon<\delta/(1+\delta)\). The condition \(T\geq(1+\delta)T_{r}\) gives us \(p\psi_{r}T/(1+\delta)=\ln K_{r}\), so this term does indeed tend to \(0\). Since we have shown that all the terms in (22) tend to zero, the proof is complete. ## 7 Converse results Our achievability result Theorem 6.1 shows that tropical DD can succeed with \[T\geq(1+\delta)\max\{T_{\infty}(\nu),T_{d}(\nu),T_{d-1}(\nu),\ldots,T_{1}(\nu )\}\qquad\text{tests}. \tag{23}\] We can use similar ideas to provide a converse result for the tropical DD algorithm. **Theorem 7.1**.: _For a given \(\nu>0\), in the limiting regime where_ \[T\leq(1-\delta)\max\{T_{d}(\nu),T_{d-1}(\nu),\ldots,T_{1}(\nu)\}\] _then the error probability of the tropical DD algorithm for a Bernoulli design with \(p=\nu/K\) tends to \(1\)._ Note that the difference between the achievability and the converse results is the lack of the \(T_{\infty}\) term in the converse. We will prove Theorem 7.1 for tropical DD in Subsection 7.1. In addition, we will show in Subsection 7.2 that this same bound acts as a more general 'algorithm-independent' converse for Bernoulli designs. 
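Both the achievability condition of Theorem 6.1 and this converse are stated in terms of the quantities \(T_{r}(\nu)\) and \(T_{\infty}(\nu)\), which are easy to evaluate numerically. The following is a small illustrative Python sketch (the function name is ours), using the parameters of the simulations in Section 4:

```python
import numpy as np

def dd_thresholds(N, Kvec, nu=1.0):
    """T_r(nu) and T_inf(nu) from Theorem 6.1, with psi_r = (1 - nu/K)^(K_1 + ... + K_r)."""
    K = sum(Kvec)
    psi = (1.0 - nu / K) ** np.cumsum(Kvec)          # psi_1, ..., psi_d
    T_r = K * np.log(np.array(Kvec)) / (nu * psi)    # T_r(nu) = K ln(K_r) / (nu * psi_r)
    T_inf = K * np.log(N / K) / (nu * psi[-1])       # T_inf(nu) = K ln(N/K) / (nu * psi_d)
    return T_r, T_inf

T_r, T_inf = dd_thresholds(N=500, Kvec=(2, 2, 2, 2, 2), nu=1.0)
# Tropical DD succeeds with high probability once T exceeds (1 + delta) * max(max(T_r), T_inf)
# (Theorem 6.1), and fails with high probability once T is below (1 - delta) * max(T_r)
# (Theorem 7.1).  For these parameters, T_inf dominates the achievability threshold.
```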
### Proof for tropical DD The key to proving Theorem 7.1 is to observe that tropical DD will definitely fail if any \(M_{r,s}=0\), since that means that item \(i(r,s)\) never appears without at least one other defective item with which it could be confused, so \(L_{r,s}\) is certainly \(0\) too. Thus we start with the bound \[\mathbb{P}(\mathrm{err})\geq\mathbb{P}\left(\bigcup_{r,s}\{M_{r,s}=0\}\right). \tag{24}\] By picking out just the defective items at a given level \(r=r^{*}\), we have \[\mathbb{P}(\mathrm{err})\geq\mathbb{P}\left(\bigcup_{s=1}^{K_{r^{*}}}\{M_{r^ {*},s}=0\}\right) \tag{25}\] As in [2, Eq. (13)], we define the function \[\phi_{K}(q,T):=\sum_{j=0}^{K}(-1)^{j}\binom{K}{j}(1-jq)^{T}. \tag{26}\] We will bound the error probability in terms of \(\phi_{K}\) as follows. **Lemma 7.2**.: _For \(1\leq r^{*}\leq d\), the error probability of tropical DD is bounded below by_ \[\mathbb{P}(\mathrm{err})\geq 1-\phi_{K^{*}_{r}}(q_{r^{*},1},T), \tag{27}\] Proof.: We follow the general idea from [2]. We can calculate (25) using the inclusion-exclusion formula \[\mathbb{P}\left(\bigcup_{s=1}^{K_{r^{*}}}\{M_{r^{*},s}=0\}\right)=\sum_{|S|=0 }^{K_{r^{*}}}(-1)^{|S|}\,\mathbb{P}\left(\bigcap_{s\in S}\{M_{r^{*},s}=0\} \right), \tag{28}\] where the sum is over subsets \(S\) of \(\mathcal{K}_{r}\). By the multinomial distribution form of Lemma 6.5, we have \[\mathbb{P}\left(\bigcap_{s\in S}\{M_{r^{*},s}=0\}\right) =\binom{T}{0,0,\ldots,0,T}\left(\prod_{s\in S}q_{r^{*},s}^{0} \right)\left(1-\sum_{s\in S}q_{r^{*},s}\right)^{T}\] \[=\left(1-\sum_{s\in S}q_{r^{*},s}\right)^{T}\] \[=(1-|S|q_{r^{*},1})^{T}\,. \tag{29}\] Substituting (29) into (28) gives \[\mathbb{P}\left(\bigcup_{s=1}^{K_{r^{*}}}\{M_{r^{*},s}=0\}\right)=\sum_{|S|=0 }^{K_{r^{*}}}(-1)^{|S|}\,\left(1-|S|q_{r^{*},1}\right)^{T}.\] Collecting together the summands according to the value of \(|S|=j\) gives the result. We bound this quantity from below by deducing an upper bound on \(\phi_{K}\). **Lemma 7.3**.: _For all \(K\), \(q\) and \(T\) we can bound_ \[\phi_{K}(q,T)\leq\exp\left(-\frac{K(1-q)^{T+1}}{1+Kq(1-q)^{T}}\right) \tag{30}\] Proof.: See Appendix B.1. We can now finish our proof of the converse for tropical DD. Proof of Theorem 7.1.: By hypothesis, \(T\leq(1-\delta)\max_{r}T_{r}\). So pick some level \(r^{*}\) such that \(T\leq(1-\delta)T_{r^{*}}\). We had already reached the bound (27): \[\mathbb{P}(\mathrm{err})\geq 1-\phi_{K_{r}^{*}}(q_{r^{*},1},T).\] We now combine this with Lemma 7.3. We deduce that \[\mathbb{P}(\mathrm{err})\geq 1-\exp\left(-\frac{K_{r^{*}}(1-q_{r^{*},1})^{T+1}}{1 +K_{r^{*}}q_{r^{*},1}(1-q_{r^{*},1})^{T}}\right). \tag{31}\] The exponential term here is of the form \(\exp(-(1-q)u/(1+qu))\), with \(u=K_{r^{*}}(1-q_{r^{*},1})^{T}\). Since \(\exp(-(1-q)u/(1+qu))\) increases as \(u\) decreases (for fixed \(q\)), it suffices to bound \(K_{r^{*}}(1-q_{r^{*},1})^{T}\) from below, which we do now. Since \(q_{r^{*},1}=p\psi_{r^{*}}/(1-p)\) we know that \[\frac{q_{r^{*},1}}{1-q_{r^{*},1}}=\frac{p\psi_{r^{*}}}{1-p(1+\psi_{r^{*}})}.\] Combining this with \(T\leq(1-\delta)T_{r^{*}}=(1-\delta)K\ln K_{r^{*}}/(\nu\psi_{r^{*}})\) gives \[\frac{Tq_{r^{*},1}}{1-q_{r^{*},1}} \leq\frac{(1-\delta)P}{\nu\psi_{r^{*}}}\frac{p\psi_{r^{*}}}{1-p(1 +\psi_{r^{*}})}\ln K_{r^{*}}\] \[=\frac{(1-\delta)}{1-p(1+\psi_{r}^{*})}\ln K_{r}^{*}\] \[\leq(1-c)\ln K_{r}^{*}, \tag{32}\] for some \(c>0\), for \(K_{r^{*}}\) sufficiently large. 
This gives us the lower bound \[K_{r^{*}}(1-q_{r^{*},1})^{T} =K_{r^{*}}\exp(T\log(1-q_{r^{*},1}))\] \[\geq K_{r^{*}}\exp\left(-\frac{Tq_{r^{*},1}}{1-q_{r^{*},1}}\right)\] \[\geq K_{r^{*}}^{c}, \tag{33}\] where we used \(\log(1-q)\geq-q/(1+q)\) for \(q>-1\) and (32). Using the bound (33) in (31), we get \[\mathbb{P}(\mathrm{err}) \geq 1-\exp\left(-\frac{K_{r^{*}}(1-q_{r^{*},1})^{T+1}}{1+q_{r^{*}, 1}K(1-q_{r^{*},1})^{T}}\right)\] \[\geq 1-\exp\left(-\frac{K_{r^{*}}^{c}(1-q_{r^{*},1})}{1+q_{r^{*},1 }K_{r^{*}}^{c}}\right)\] \[=1-\exp\left(-\frac{(1-\psi_{r^{*}}/(K-1))K_{r^{*}}^{c}}{1+\psi_{ r^{*}}K_{r^{*}}^{c}/(K-1)}\right). \tag{34}\] where (34) follows since \(q_{r^{*},1}=p\psi_{r^{*}}/(1-p)=\psi_{r^{*}}/(K-1)\). Finally, that bound (34) is asymptotically equivalent to \(1-\exp(-K_{r^{*}}^{c})\), which tends to \(1\) as \(K\to\infty\). This completes the proof. ### Algorithm-independent converse In fact, our DD-specific converse, Theorem 7.1, helps give a converse bound for _all_ algorithms with a Bernoulli design. We can write \(\mathbb{P}(\mathrm{err};\mathrm{optimal},T)\) for the minimum error probability that can be achieved by any algorithm. The key observation is that in Theorem 7.1 we find that with too few tests there is a good chance that some item \(i(r^{*},s)\) appears in no tests without other items of the same level, so a satisfying vector can be formed without it. **Theorem 7.4**.: _For a given \(\nu>0\) and any \(T\leq(1-\delta)T_{\mathrm{fin}}(\nu)\), the error probability of the optimal algorithm \(\mathbb{P}(\mathrm{err};\mathrm{optimal},T)\) for a Bernoulli design with parameter \(\nu/K\) is bounded away from zero, even for algorithms which are given access to the values of \(K_{i}\)._ Proof.: First note that for any \(T^{\prime}\geq 1\) the \(\mathbb{P}(\mathrm{err};\mathrm{optimal},T)\geq\mathbb{P}(\mathrm{err}; \mathrm{optimal},T+T^{\prime})\), since there exists a (possibly suboptimal) algorithm using \(T+T^{\prime}\) tests which simply ignores the last \(T^{\prime}\) tests and applies the optimal \(T\)-test algorithm to the remaining tests. Hence it will be sufficient to bound \(\mathbb{P}(\mathrm{err};\mathrm{optimal},T)\) away from zero for \(T=(1-\delta)\max_{r}T_{r}\), as the same bound will hold for all \(T\leq(1-\delta)\max_{r}T_{r}\). We argue as in [1]. Recall that \(M_{r,s}\) is the number of tests that have a chance of proving \(i(r,s)\) is defective at level \(r\), and \(H_{r}\) is the number of non-defective items in \(\mathrm{PD}(r)\). The key idea is this: Suppose for some \(r^{*}\) that both \(A=\bigcup_{s=1}^{K_{r^{*}}}\{M_{r^{*},s}=0\}\) and \(B=\{H_{r^{*}}\geq 1\}\) occur. The event \(A\) would mean that there is some item \(i(r^{*},t)\) that is masked and the event \(B\) would mean that there is some non-defective item which is a possible defective at that level. So we could form an alternative satisfying vector from the true vector \(\mathbf{U}\) by swapping the entries in \(\mathbf{U}\) of these two items. Hence, if \(A\cap B\) occurs, then there are at least two satisfying vectors with the correct number of items at each level, so we the success probability can only be at most \(1/2\). 
The probability of an intersection can always be bounded with \(\mathbb{P}(A\cap B)\geq\mathbb{P}(A)-\mathbb{P}(B^{c})\), so the error probability for any algorithm satisfies \[\mathbb{P}(\mathrm{err})\geq\frac{1}{2}\,\mathbb{P}\left(\bigcup_{s=1}^{K_{r ^{*}}}\{M_{r^{*},s}=0\}\right)-\frac{1}{2}\,\mathbb{P}(H_{r^{*}}=0).\] Now the first term involves exactly the term we have controlled in Theorem 7.1, so we know it is larger than \(1/4\) in the regime of interest for \(K\) sufficiently large. Hence, to bound the error probability away from zero it will be sufficient to prove that \(\mathbb{P}(H_{r^{*}}=0)\leq 1/4\). We will prove this in a series of technical lemmas in Appendix B.2: 1. In Lemma B.4, we will show that \[\mathbb{P}(H_{r^{*}}=0)\leq\mathbb{P}(G_{r^{*}}=0)+\mathbb{E}(1-p)^{M_{r^{*}}}.\] We deal with the two terms separately. 2. In Lemma B.5, we will show that the first term is bounded by \[\mathbb{P}(G_{r^{*}}=0)\leq\left(1-(1-p)^{m_{r^{*}}^{*}}\right)^{N-K}+\exp \left(-\frac{\delta^{2}T\psi_{r^{*}}}{2}\right),\] (35) where \(m_{\ell}^{*}=T\psi_{\ell}(1+\delta)\). 3. In Lemma B.6, we show that the second term is bounded by \[\mathbb{E}(1-p)^{M_{r^{*}}}\leq\exp(-p\psi_{r^{*}}d_{r^{*}}T),\] (36) where \(d_{\ell}=(1-p)^{-K_{\ell}}-1\). Recall that we consider \(T=(1-\delta)T_{r^{*}}=(1-\delta)K\ln K_{r^{*}}/(\nu\psi_{r^{*}})\), for the maximising \(r^{*}\). Since \(p\psi_{r^{*}}T=(1-\delta)\ln K_{r}^{*}\), we know that \((1-p)^{m_{r^{*}}}\simeq K^{-(1-\delta^{2})}\), so that both terms in (35) tend to zero. Similarly, (36) also tends to zero for this choice of \(\ell\) and \(T\). This completes the proof. ## 8 Discussion In this paper, we have considered the tropical group testing model of Wang _et al._[19] in a small-error setting. We have described small-error algorithms in Section 3. We demonstrated the empirical performance of these algorithms in Section 4, showing that tropical DD and tropical SCOMP outperform their classical counterparts. We performed theoretical analysis of the tropical COMP algorithm in Section 5 and of the DD algorithm in Sections 6 and 7, proving that in certain parameter regimes the tropical DD algorithm is asymptotically optimal. We briefly mention some open problems. Further work could explore test designs with near-constant column weights in the tropical setting, as these designs show a gain in performance in the classical case (see [4]), and Figure 6 suggests the same may well be true here. The results could be made more practically valuable by developing bounds in a noisy setting, under a variety of noise models similar to those described in [4, Chapter 4]. Also, there is potential to extend the results in this paper by considering models with random defectivity levels, as illustrated in Figure 5. It may also be mathematically interesting to develop small-error algorithms and bounds using the delay matrix approach of [19]. ## Acknowledgements This work was carried out while Vivekanand Paligadu was on a University of Bristol School of Mathematics undergraduate summer bursary placement, funded by Mark Williams Alumni Funds.
2309.10718
DRIVE: Data-driven Robot Input Vector Exploration
An accurate motion model is a fundamental component of most autonomous navigation systems. While much work has been done on improving model formulation, no standard protocol exists for gathering empirical data required to train models. In this work, we address this issue by proposing Data-driven Robot Input Vector Exploration (DRIVE), a protocol that enables characterizing uncrewed ground vehicles (UGVs) input limits and gathering empirical model training data. We also propose a novel learned slip approach outperforming similar acceleration learning approaches. Our contributions are validated through an extensive experimental evaluation, cumulating over 7 km and 1.8 h of driving data over three distinct UGVs and four terrain types. We show that our protocol offers increased predictive performance over common human-driven data-gathering protocols. Furthermore, our protocol converges with 46 s of training data, almost four times less than the shortest human dataset gathering protocol. We show that the operational limit for our model is reached in extreme slip conditions encountered on surfaced ice. DRIVE is an efficient way of characterizing UGV motion in its operational conditions. Our code and dataset are both available online at this link: https://github.com/norlab-ulaval/DRIVE.
Dominic Baril, Simon-Pierre Deschênes, Luc Coupal, Cyril Goffin, Julien Lépine, Philippe Giguère, François Pomerleau
2023-09-19T16:02:23Z
http://arxiv.org/abs/2309.10718v2
# DRIVE: Data-driven Robot Input Vector Exploration ###### Abstract An accurate motion model is a fundamental component of most autonomous navigation systems. While much work has been done on improving model formulation, no standard protocol exists for gathering empirical data required to train models. In this work, we address this issue by proposing Data-driven Robot Input Vector Exploration (DRIVE), a protocol that enables characterizing uncrewed ground vehicles (UGVs) input limits and gathering empirical model training data. We also propose a novel learned slip approach outperforming similar acceleration learning approaches. Our contributions are validated through an extensive experimental evaluation, cumulating over 7 km and 1.8 h of driving data over three distinct UGVs and four terrain types. We show that our protocol offers increased predictive performance over common human-driven data-gathering protocols. Furthermore, our protocol converges with 46 s of training data, almost four times less than the shortest human dataset gathering protocol. We show that the operational limit for our model is reached in extreme slip conditions encountered on surfaced ice. DRIVE is an efficient way of characterizing UGV motion in its operational conditions. Our code and dataset are both available online at this link: [https://github.com/norlab-ulaval/DRIVE](https://github.com/norlab-ulaval/DRIVE). ## I Introduction The ability to model the motion of uncrewed ground vehicles (UGVs) is fundamental to enabling localization [1], path planning [2] and path following [3]. Poor vehicle-terrain characterization will lead to significant modeling errors, potentially causing system failure [4]. With limited available information and sensory measurements on vehicle and ground properties, generating a reliable UGV motion model remains challenging. For most models, training on empirical data is required to reduce modeling error [5]. This task requires deploying a UGV in its operational environment and manually drive it for an extended period [6]. Since energy consumption and deployment time are critical for various UGV applications, facilitating this task is of high importance. Additionally, standardizing this process could help engineers to ensure that their systems are satisfactory to norm ISO 34502:2022(E) on autonomous navigation.2 Footnote 2: “ISO 34502:2022(E): Road vehicles — Test scenarios for automated driving systems — Scenario-based safety evaluation framework*, 2022 Most work on UGV motion modeling relies on manual driving to gather a training dataset, with little to no details on the driving protocol. Thus, we propose the _Data-driven Robot Input Vector Exploration (DRIVE)_, a protocol aiming to facilitate and standardize vehicle characterization with respect to the terrain, as illustrated in Figure 1. We start by identifying the true vehicle's input space, differing from the manufacturer's specifications. We then automatically send commands to the UGV to cover the entire true input space. This differs from the common manual driving approach, which tends to cover only forward driving, as shown by the red dots representing our previous work [7]. We show that broad input-space coverage offers significant modeling error reduction compared to narrow coverage. With this dataset, we train a learned vehicle slip model that maps UGV commands to resulting body velocities. 
The resulting trained parameters vary significantly depending on terrain, as highlighted by the green and blue diamond areas in Figure 1, representing navigation on gravel and snow respectively. Fig. 1: Vehicle and terrain characterization done through DRIVE. The manufacturer-defined Naive input-space region is drawn in gray. The vehicle’s true input-space, characterized through internal measurements, is shown in orange. Typical human driving is shown in red. The resulting body velocities are represented in green for gravel and blue for snow. The specific contributions of this paper are (i) DRIVE, a standardized UGV characterization and motion data generation protocol allowing to train motion models on the entire vehicle input space; (ii) A novel slip-based UGV motion prediction model, leveraging the accuracy of model-based approaches and the minimal system characterization requirement of learning-based approaches. We validate our contributions with an extensive experimental evaluation featuring three distinct UGVs, with weights ranging from \(75\,\mathrm{kg}\) to \(470\,\mathrm{kg}\), two types of ground interaction (i.e., wheels and tracks) and four different terrain types. Our observations rely on driving data totaling \(7\,\mathrm{km}\) and \(1.8\,\mathrm{h}\). ## II Related Work Most vehicle motion modeling approaches can be divided into two distinct categories: model-based and learning-based. Both categories share the requirement of using empirical driving data to train their parameters and reduce modeling errors. For both categories, there exists no standardized protocol for training dataset generation. **Model-based approaches** can be split into two distinct categories: _kinematics_ and _dynamics_. Kinematic models remain the most popular for UGVs due to their low computational complexity and number of parameters to train. For skid-steering mobile robots (SSMRs), Mandow _et al._[8] reduced model prediction error by \(15\,\mathrm{\char 37}\) compared to the manufacturer's model using a kinematic model empirically identifying vehicle slip and skid. Segemiller _et al._[9] proposed a similar additive slip approach, computing slip based on kinematic quantities, yielding prediction error reduction between \(70\,\mathrm{\char 37}\) and \(90\,\mathrm{\char 37}\) depending on terrain type, again compared to the manufacturer's model. Bussmann _et al._[10] extended the experimental validation for additive slip approaches and showed similar performance for a \(900\,\mathrm{m}\) experiment on off-road terrain. On the other hand, dynamic models account for various forces acting on the vehicle's body. Segemiller _et al._[4] proposed a multi-body full dynamic motion model with a generic formulation based on vehicle geometry and properties. This work has been extended by Yang _et al._[11], showing simulation errors of less than \(3.4\,\mathrm{\char 37}\) for vehicle slip ratio. While being more accurate than kinematic models, dynamic models require extensive vehicle characterization effort and expertise. For all the work mentioned above, empirical training data is acquired through a human driving the UGV with little to no guidelines, which motivates our standardized protocol. Alternatively, **learning-based approaches** have been explored in the literature, yielding more accurate models for extreme UGV motion. In these approaches, part of the prediction is done through a nominal model, often represented by a unicycle model, with a module allowing to learning system dynamics. 
Gaussian processes (GPs) have become a popular approach to learn system dynamics, both for vehicle slip in off-road driving [12] and tire forces in high-speed road racing [13]. McKinnon _et al._[14] have proposed a similar learning approach, however replacing GP learning with Bayesian linear regression (BLR). The lower computational complexity of BLR, when compared to GPs, makes it a more suitable approach for real-time UGV motion prediction. Alternatively, Djeumou _et al._[15] have proposed a tire-force learning framework that enables autonomous drifting with \(3\,\mathrm{min}\) of driving data. Deep learning has also been explored for motion prediction in off-road terrain. Williams _et al._[6] have shown the ability to perform aggressive driving when relying on a \(30\,\mathrm{min}\) training dataset. For increased resilience to sensor failure, Tremblay _et al._[16] have proposed a multi-modal learned-dynamics model that leverages the various sensor measurements available for UGVs. Due to the importance of prediction uncertainty in enabling robust control for UGVs [3], this work focuses on BLR, which provides prediction uncertainty estimations [14]. Our novel _slip-based BLR_ model allows us to leverage the minimal system characterization requirements of learning-based approaches [14], as well as the better accuracy of model-based approaches [9]. In this work, the approach of McKinnon _et al._[14] is used as a comparison point, as it is the closest to our model formulation. Although both model-based and learning-based approaches require empirical training data, only a few **dataset-gathering protocols** have been published. Voser _et al._[17] have proposed to maintain a steady forward velocity while slowly increasing angular velocity, enabling generation of a quasi-steady-state empirical dataset. Wang _et al._[18] have proposed a similar approach with a varying commanded curvature radius to empirically identify the relation between angular velocity and SSMR skid. These approaches only cover a small subset of the vehicle's input space. One can also find large, multimodal datasets for training and evaluating models for off-road and extreme driving [19]. However, such datasets overrepresent forward motion, are limited to a specific UGV and would require new training data for any new vehicle configuration. Manual training data gathering guidelines have been proposed by Williams _et al._[6], asking the driver to vary their driving style. However, these remain time-consuming and subject to input-space coverage bias. We demonstrate that training a motion model with the DRIVE protocol allows increased motion prediction performance and fast training dataset gathering. ## III Methodology and Theory In this section, we provide details on DRIVE, our automated vehicle characterization and training dataset-gathering protocol. We then describe our proposed slip-based BLR motion model. Due to the limited number of UGVs accessible to us, we focus on SSMRs. The involved model variables are depicted in Figure 2. We limit the states of the vehicle to planar motion, such that the robot's state \(\mathbf{q}=[x,y,\theta]^{T}\) represents the pose of the vehicle in the global coordinate frame \(\mathcal{G}\). The robot's body frame \(\mathcal{R}\) has its \(x\) and \(y\) axes aligned with the vehicle's longitudinal and lateral directions, respectively.
For most SSMRs, the input vector is defined as \(\mathbf{u}=[\omega_{l},\omega_{r}]^{T}\), representing the left and right wheel angular velocities. State propagation, which computes the next state \(\mathbf{q}_{t+dt}\) based on the current state \(\mathbf{q}_{t}\) and the input \(\mathbf{u}\), is performed as \[\mathbf{q}_{t+dt}=\mathbf{q}_{t}+dt\begin{bmatrix}\cos\theta&-\sin\theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1\end{bmatrix}{}^{\mathcal{R}}\dot{\mathbf{q}}_{t}, \tag{2}\] where \({}^{\mathcal{R}}\dot{\mathbf{q}}_{t}\) is the body velocity resulting from the input, which we model as the commanded body velocity \({}^{\mathcal{R}}\mathbf{f}_{t}\) corrected by a learned slip velocity \({}^{\mathcal{R}}\mathbf{s}_{t}\), both defined below. #### III-B1 Powertrain model We model the relation between the commanded and resulting wheel velocities on each side of the vehicle with a first-order plus time delay model as used by Seegmiller _et al._[5]: \[\begin{split}\hat{\omega}_{t}&=e^{\beta}\,\omega_{t_{0}}+\left(1-e^{\beta}\right)\tilde{\omega}_{t-\tau_{d}},\\ \beta&=-\frac{t-\tau_{d}}{\tau_{c}},\end{split} \tag{3}\] where \(\hat{\omega}\), \(\tilde{\omega}\) and \(\omega\) are the predicted, commanded and measured wheel velocities, respectively. We also define the initial time \(t_{0}\) and prediction horizon at time \(t\). Here, the parameters that require characterization are the time constant \(\tau_{c}\) and the time delay \(\tau_{d}\). One should note that these parameters are not considered symmetrical in our protocol and are trained independently for both sides of SSMRs. Thus, our protocol can identify vehicle powertrain asymmetries. #### III-B2 Body slip model Next, we define a model to compute both the commanded body velocity \({}^{\mathcal{R}}\mathbf{f}_{t}\) and the resulting slip velocity \({}^{\mathcal{R}}\mathbf{s}_{t}\) with respect to the predicted input \(\tilde{\mathbf{u}}_{t}\). For SSMRs, the commanded body velocity \({}^{\mathcal{R}}\mathbf{f}_{t}(\tilde{\mathbf{u}}_{t})\) can be modeled through the ideal differential-drive model [8] as \[{}^{\mathcal{R}}\mathbf{f}_{t}(\tilde{\mathbf{u}}_{t})=\begin{bmatrix}f_{x}\\ f_{y}\\ f_{\omega}\end{bmatrix}=r\begin{bmatrix}\frac{1}{2}&\frac{1}{2}\\ 0&0\\ -\frac{1}{b}&\frac{1}{b}\end{bmatrix}\begin{bmatrix}\tilde{\omega}_{l}\\ \tilde{\omega}_{r}\end{bmatrix}, \tag{4}\] where \(r\) and \(b\) are the SSMR's wheel or track sprocket radius and vehicle width, respectively, as shown in Figure 2. We use the wheel velocities estimated through Equation 3 as the input vector \(\tilde{\mathbf{u}}_{t}\). We consider slip in each dimension of the vehicle separately, \({}^{\mathcal{R}}\mathbf{s}_{t}=[s_{x},s_{y},s_{\omega}]^{T}\), with the form \[s_{t}=\mathbf{\gamma}^{T}\,{}^{\mathcal{R}}\mathbf{x}_{t}+\eta, \tag{5}\] where \(\mathbf{\gamma}\in\mathbb{R}^{k}\) are the weights associated to each slip input and \(\eta\sim\mathcal{N}(0,\sigma^{2})\).
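To make this chain concrete, the following minimal Python sketch performs one prediction step with the powertrain model (3), the differential-drive model (4) and the pose propagation. The function names are ours rather than from the DRIVE code base, and subtracting the predicted slip from the commanded body velocity is an assumed sign convention.

```python
import numpy as np

def powertrain_step(omega_meas_t0, omega_cmd_delayed, t, tau_c, tau_d):
    """First-order plus time delay wheel response, applied per side."""
    beta = -(t - tau_d) / tau_c
    return np.exp(beta) * omega_meas_t0 + (1.0 - np.exp(beta)) * omega_cmd_delayed

def diff_drive_body_velocity(omega_l, omega_r, r, b):
    """Ideal differential-drive model: wheel speeds to commanded body velocity."""
    return np.array([r * (omega_l + omega_r) / 2.0,   # f_x
                     0.0,                             # f_y
                     r * (omega_r - omega_l) / b])    # f_omega

def propagate_pose(q, f_body, slip, dt):
    """Pose propagation in the global frame from the slip-corrected body velocity."""
    _, _, theta = q
    v_body = f_body - slip  # assumption: realized velocity = commanded - predicted slip
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return q + dt * rot @ v_body
```

Iterating these three steps over a window of commands yields the multi-step predictions evaluated in Section IV.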
We draw inspiration from off-road vehicles dynamics work in the literature to define dynamics-aware basis functions for vehicle slip [9]. As shown by Seegmiller _et al._[4], the following set of basis functions to estimate vehicle slip shows similar performance as fully dynamic models in off-road terrain. Firstly, for longitudinal slip \({}^{\mathcal{R}}\mathbf{s}_{x}\), we use the vehicle's rolling resistance, proportional to commanded body longitudinal velocity \({}^{\mathcal{R}}\mathbf{x}_{x}=f_{x}\). Secondly, for lateral slip \({}^{\mathcal{R}}\mathbf{s}_{y}\), we use centrifugal force \({}^{\mathcal{R}}\mathbf{x}_{y}=\psi=(f_{x}f_{w})\), proportional to commanded longitudinal and angular velocities. Thirdly, for angular slip \({}^{\mathcal{R}}\mathbf{s}_{w}\), we use three distinct slip learning inputs \({}^{\mathcal{R}}\mathbf{x}_{w}=[\psi,f_{x},f_{\omega}]\). The first angular slip input is the vehicle's centrifugal force \(\psi\). We then add UGV asymmetry, which can be caused by manufacturing imperfections and mechanical wear, causing angular velocity error proportional to commanded longitudinal velocity \(f_{x}\). Finally, we account for the vehicle's skid, leading to an error between commanded angular velocity and actual angular velocity \(f_{w}\). It should be noted that the vehicle gravity-dependent parameters, used by Seegmiller _et al._[9], are missing in this work. The reason is that we simplify our calibration protocol to be executed on planar terrain. The remainder of this section describes how we learn slip for a single dimension, but the process is the same for all dimensions of slip. We use Bayesian linear regression (BLR) to estimate the values for \(\mathbf{\gamma}\) and \(\sigma^{2}\). For a more in-depth explanation of BLR, refer to the book written by Murphy [22]. It can be shown that the posterior for learned parameters \(p(\mathbf{\gamma},\sigma^{2}|\mathcal{D}_{\text{d}})\) is distributed according to a Normal Inverse Gamma distribution \(\text{NIG}(\mathbf{\gamma},\sigma^{2}|\mathbf{\gamma},\mathbf{K},a,b)\), where \[\begin{split}\mathbf{\gamma}&=\mathbf{K}\left(\mathbf{K}_{0}^{-1} \mathbf{\gamma}_{0}+\mathbf{X}^{T}\mathbf{s}\right),\\ \mathbf{K}&=(\mathbf{K}_{0}^{-1}+\mathbf{X}^{T}\mathbf{X})^{-1},\\ a&=a_{0}+\frac{n}{2},\\ b&=b_{0}+\frac{1}{2}\left(\mathbf{\gamma}_{0}^{T}\mathbf{K}_{0}^{-1} \mathbf{\gamma}_{0}+\mathbf{s}^{T}\mathbf{s}-\mathbf{\gamma}^{T}\mathbf{K}^{-1}\mathbf{\gamma}\right), \end{split} \tag{6}\] where the estimated covariance of the distribution is represented by \(\mathbf{K}\in\mathbb{R}^{k\times k}\). Priors for all parameters are defined by the \((\cdot)_{0}\) subscript. We define \(\mathcal{D}_{\text{d}}=\{\mathbf{X},\mathbf{s}\}\) as a training dataset consisting of vectors of \(n\) concatenated observed values for slip inputs \(\mathbf{X}\) and observed slip velocities \(\mathbf{s}\) for a specific dimension. The posterior equations can be used to train the BLR slip model for each dimension based on a training dataset \(\mathcal{D}_{\text{d}}\). 
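A compact sketch of the posterior update in Equation 6 for a single slip dimension is shown below; the variable names are ours and not taken from the released implementation.

```python
import numpy as np

def blr_posterior(X, s, gamma_0, K_0, a_0, b_0):
    """Normal-Inverse-Gamma posterior for one slip dimension.

    X: (n, k) observed slip inputs, s: (n,) observed slip velocities.
    """
    K_0_inv = np.linalg.inv(K_0)
    K = np.linalg.inv(K_0_inv + X.T @ X)              # posterior covariance factor
    gamma = K @ (K_0_inv @ gamma_0 + X.T @ s)         # posterior weight mean
    a = a_0 + X.shape[0] / 2.0
    b = b_0 + 0.5 * (gamma_0 @ K_0_inv @ gamma_0 + s @ s
                     - gamma @ np.linalg.inv(K) @ gamma)
    return gamma, K, a, b
```

The same update is run once per slip dimension, with the longitudinal, lateral and angular basis functions described above as the columns of X.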
Once the model is trained, we can predict vehicle slip based on \(m\) test inputs \(\tilde{\mathbf{X}}\in\mathbb{R}^{m\times k}\): \[p\left(\tilde{\mathbf{s}}|\tilde{\mathbf{X}},\mathcal{D}_{\text{d}}\right)=\mathcal{T}\left(\tilde{\mathbf{s}}|\tilde{\mathbf{X}}\mathbf{\gamma},\frac{b}{a}\left(\mathbf{I}_{m}+\tilde{\mathbf{X}}\mathbf{K}\tilde{\mathbf{X}}^{T}\right),2a\right), \tag{7}\] where \(\mathcal{T}\) is a Student's t-distribution and \(\tilde{\mathbf{s}}\) represents a vector of \(m\) concatenated predicted slip velocities for a specific direction. In this work, we use an uninformative prior to ensure our protocol requires as little expertise as possible to execute. This consists of setting \(a_{0}=b_{0}=0\), \(\mathbf{\gamma}_{0}=\mathbf{0}\) and \(\mathbf{K}_{0}=\phi(\mathbf{X}^{T}\mathbf{X})^{-1}\) for any positive value \(\phi\). This allows us to initialize our slip-based training model with little knowledge of the UGV except for the wheel radius \(r\) and vehicle width \(b\). ## IV Results In this section, we evaluate the improvement of motion prediction accuracy when training models with the DRIVE protocol. We also analyze the amount of training data required to reach convergence with our model. Finally, we demonstrate that for off-road navigation of SSMRs, learning vehicle slip based on dynamics-aware basis functions is more accurate than learning on vehicle acceleration. Fig. 3: Commanded, encoder-measured and modeled wheel velocities for both sides of an SSMR during two DRIVE training intervals. The powertrain model is described in Section III-B1. Each training step consists of one transient-state window (in light gray) and two steady-state windows (in dark gray). Commands and measurements on the x-axis are acquired at a rate of \(20\,\mathrm{Hz}\). ### _Experimental Setup_ We have conducted an extensive experimental evaluation of our calibration protocol and novel slip-based BLR model. Three distinct UGV platforms were used, as shown in Figure 4. First, we tested on a _Clearpath Robotics_ Warthog on wheels, weighing \(470\,\mathrm{kg}\), on gravel-covered terrain and an ice rink. The ice rink was leveled and recently resurfaced, leading to extreme vehicle slip. Next, we tested on smaller platforms, namely a wheeled _Clearpath Robotics_ Husky, weighing \(75\,\mathrm{kg}\), and a tracked _Superdroid_ HD2, weighing \(80\,\mathrm{kg}\), both on indoor tile and snow-covered terrain. The Warthog has a top speed of \(5\,\mathrm{m}\mathrm{/}\mathrm{s}\), which is around five times that of the HD2 at \(1.2\,\mathrm{m}\mathrm{/}\mathrm{s}\) and of the Husky at \(1\,\mathrm{m}\mathrm{/}\mathrm{s}\). These platforms and terrains were selected to maximize the difference in properties between experiments. For all platforms, localization ground truth is estimated through point-cloud registration with the iterative closest point (ICP) algorithm. This localization approach was selected to use a common, centimeter-accurate ground truth [23] across all indoor and outdoor experiments. The localization system for the Husky and HD2 robots is described in [24] and for the Warthog in [25]. For every experiment, the recorded data was split into two halves, the training dataset and the evaluation dataset, to enable extensive model evaluation. Our experimental dataset totals over \(7\,\mathrm{km}\) and \(1.8\,\mathrm{h}\) of driving data across all platforms and terrain types. ### _Protocol performance analysis_ First, we define a function to evaluate model prediction performance.
While learned models are trained on single-step vehicle slips or accelerations, our goal is to use them to predict vehicle motion over a specific horizon. One should note that we train our model on single-step slip velocities to simplify the learning problem. Williams _et al._[6] showed that this simplification allows sufficient prediction performance for high-speed UGV path following. Thus, we use the multi-step root mean squared error (MRMSE) \(\epsilon\) to evaluate prediction errors [14], with our localization as ground truth: \[\epsilon=\frac{1}{h}\sum_{j=1}^{h}\sqrt{\left(\hat{\mathbf{q}}_{t+j}-\mathbf{q}_{t+j}\right)^{T}\left(\hat{\mathbf{q}}_{t+j}-\mathbf{q}_{t+j}\right)}, \tag{8}\] where \(h\) is the number of steps in the prediction horizon, \(\hat{\mathbf{q}}_{t+j}\) is the predicted state and \(\mathbf{q}_{t+j}\) is the ground-truth state estimated by our localization, with translational and rotational errors evaluated separately. Figure 6 presents the resulting prediction errors for every combination of platform and terrain, namely gravel, tile, snow, and ice. The rightmost results combine the prediction errors for all experiments conducted in this work. We also show the performance of the model provided by manufacturers (i.e., Naive) and the improvement done through powertrain modeling. When accounting for all datasets, we observe a \(34\,\mathrm{\char 37}\) decrease in translation prediction error median and a \(38\,\mathrm{\char 37}\) decrease in rotation prediction error median when comparing the naive model with the powertrain-aware model. Also, our slip-based BLR approach leads to a \(22\,\mathrm{\char 37}\) decrease in rotation prediction error median and a \(6\,\mathrm{\char 37}\) decrease in translation prediction error median when compared to acceleration-based BLR. Looking at specific experiments, the Warthog in gravel shows the largest improvement between our slip BLR and acceleration BLR, with \(71\,\mathrm{\char 37}\) in rotation error median and \(23\,\mathrm{\char 37}\) in translation error median. In contrast, the HD2 on tile experiment shows a performance decrease for acceleration BLR and similar performance for slip BLR when compared to the powertrain model. Indeed, the indoor tile ground already had low prediction error for the powertrain-aware model. Lastly, the ice rink experiment shows similar performance between slip and acceleration BLR. This experiment corresponds to extreme slip, similar to a UGV driving over black ice for an extended duration. This result shows the limit of our slip-based BLR model, which still performs similarly to or better than the other models. In this case, dynamic modeling could improve performance. Overall, we conclude that slip-based BLR offers improved performance for rotation prediction and similar performance in translation prediction over acceleration-based BLR, especially for driving at higher velocities on off-road terrains. For SSMRs in particular, rotation motion is the highest source of error due to the complexity of wheel-terrain skidding interactions [7], justifying the significance of our model. Moreover, generating the training data is time- and energy-consuming, which leads us to look for a trade-off between calibration duration and model prediction accuracy. Thus, we evaluated the relationship between training driving time and prediction accuracy. The results are shown in Figure 7. Three distinct experiments are presented, notably Husky on snow, HD2 on tile and Warthog on gravel. No other experiment is shown, to avoid cluttering, but similar results were observed. As specified in Section III-B2, an uninformative prior is used for every platform, explaining the initially high errors.
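For reference, the multi-step error metric \(\epsilon\) can be computed along the lines of the following sketch; the array shapes and names are our assumptions.

```python
import numpy as np

def mrmse(pred, gt):
    """Multi-step RMSE over one prediction window.

    pred, gt: arrays of shape (h, d), e.g. d=2 for the translational part
    [x, y] of the state, or d=1 for the heading, evaluated separately.
    """
    err = np.asarray(pred) - np.asarray(gt)
    return float(np.mean(np.sqrt(np.sum(err ** 2, axis=1))))
```

Reporting the median of this value over all windows of an evaluation split gives per-experiment error statistics of the kind discussed above.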
As shown by the red vertical line in Figure 7, the prediction accuracy stabilizes after \(46\,\mathrm{s}\) of driving time for all robots. To compute this time, we evaluated the error gradient with respect to calibration time. We then evaluated the maximum time for which the translational and rotational error values for all shown experiments was under \(0.01\,\mathrm{m}\mathrm{/}\mathrm{s}\) or \(0.01\,\mathrm{rad}\mathrm{/}\mathrm{s}\), indicating all models have converged. Thus, users of the DRIVE protocol SSMRs can expect that the slip-based BLR motion model has converged after \(46\,\mathrm{s}\) of training data, which is almost four times shorter than \(180\,\mathrm{s}\), the shortest training time observed in other work [15]. ## V Conclusion In this paper, we proposed _Data-driven Robot Input Vector Exploration (DRIVE)_, an automated vehicle characterization and training data generation protocol. We also propose a novel UGV prediction model called slip-based BLR. We show that training our model with our protocol offers improved prediction performances when comparing common training approaches and similar learning-based models. We also show that with our protocol, model convergence is reached with four times less driving time than the shortest similar protocol. We conclude that our protocol represents an efficient option for generating an initial motion model for UGVs. Future work would include generalizing our protocol to any vehicle geometry (e.g., Ackermann steering) and adapting our model formulation for complete dynamic models for extreme slip situations such as driving on surface ice. Adaptive modeling, relying on DRIVE to provide the initial training, should also be investigated. Fig. 6: Translational and rotational prediction errors for all models studied in this work. In yellow is the manufactured-defined naive model, in orange is the powertrain-aware model described in Section III-B1, in red is the acceleration-based BLR model and in purple is our slip-based BLR model. Fig. 7: The relation between training time and our slip-based BLR model prediction performance, for translation and rotation. Three datasets are shown, namely the Husky on snow in real, the HD2 on tile in pink and the wheeled Warthog on gravel, in blue. We highlight at \(46\,\mathrm{s}\) the converged value with the red line, for which our model converges for all UGVs tested. For all subplots, both axes are in log scale.
2309.16369
Bringing the Discussion of Minima Sharpness to the Audio Domain: a Filter-Normalised Evaluation for Acoustic Scene Classification
The correlation between the sharpness of loss minima and generalisation in the context of deep neural networks has been subject to discussion for a long time. Whilst mostly investigated in the context of selected benchmark data sets in the area of computer vision, we explore this aspect for the acoustic scene classification task of the DCASE2020 challenge data. Our analysis is based on two-dimensional filter-normalised visualisations and a derived sharpness measure. Our exploratory analysis shows that sharper minima tend to show better generalisation than flat minima -even more so for out-of-domain data, recorded from previously unseen devices-, thus adding to the dispute about better generalisation capabilities of flat minima. We further find that, in particular, the choice of optimisers is a main driver of the sharpness of minima and we discuss resulting limitations with respect to comparability. Our code, trained model states and loss landscape visualisations are publicly available.
Manuel Milling, Andreas Triantafyllopoulos, Iosif Tsangko, Simon David Noel Rampp, Björn Wolfgang Schuller
2023-09-28T12:13:23Z
http://arxiv.org/abs/2309.16369v2
# Bringing the Discussion of Minima Sharpness to the Audio Domain: ###### Abstract The correlation between the sharpness of loss minima and generalisation in the context of deep neural networks has been subject to discussion for a long time. Whilst mostly investigated in the context of selected benchmark data sets in the area of computer vision, we explore this aspect for the audio scene classification task of the DCASE2020 challenge data. Our analysis is based on two-dimensional filter-normalised visualisations and a derived sharpness measure. Our exploratory analysis shows that sharper minima tend to show better generalisation than flat minima -even more so for out-of-domain data, recorded from previously unseen devices-, thus adding to the dispute about better generalisation capabilities of flat minima. We further find that, in particular, the choice of optimisers is a main driver of the sharpness of minima and we discuss resulting limitations with respect to comparability. Our code, trained model states and loss landscape visualisations are publicly available. Manuel Milling\({}^{1}\), Andreas Triantafyllopoulos\({}^{1}\), Iosif Tsangko\({}^{1}\), Simon David Noel Rampp\({}^{1}\), Bjorn Wolfgang Schuller\({}^{1,2}\)\({}^{1}\)Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany \({}^{2}\)GLAM - Group on Language, Audio, & Music, Imperial College London, UK audio scene classification, sharp minima, loss landscape, generalisation, deep neural networks ## 1 Introduction When training _artificial neural networks_ (ANNs) on a specific task, one of the key challenges lies in the network's ability to generalise to unseen data. As can be interpreted from the universal approximation theorem [1], ANNs are well capable of representing the underlying data distribution of any task. In practice -especially given a network with enough depth- good fits of the training data with converging loss values and perfect evaluation metrics are often easy to find. However, this does not translate to unseen data, as the generalisation error can vary hugely for almost perfect training loss and can be influenced by the amount of training data, the choice of network architecture, optimiser or batch size [2], among other things. Models with a high generalisation gap are considered to be overfitted and often perform even worse if the unseen data is _out-of-domain_ (OOD). This can, for instance, be observed in the yearly DCASE _audio scene classification_ (ASC) challenge, in which the organisers added new recording conditions, such as different recording devices or cities, only to the test data. Critically, model selection, in the form of choosing hyperparameters or 'early stopping', is predominantly performed based on validation performance, which on its own can bring quite some limitations as, for instance, reported for OOD performance [3]. An alternative perspective on model states can be gained by examining the behaviour of loss functions. Specifically, some characteristics of a model state's minimum have been pointed out to show an important connection to the generalisation error. _Flatness_ and _sharpness_ play a particular role here, with flatter minima often believed to have better generalisation [4], at least since the work of Hochreiter and Schmidhuber [5]. 
Intuitively, these terms are related to the Hessian matrix, which contains all second-order derivatives at a given point of a function, for all directions and can thus represent the local curvature behaviour of the function. Yet, an undisputed definition of flatness and sharpness in the high-dimensional parameter space of ANNs is still lacking. Nevertheless, several approaches to quantify flatness and sharpness have been developed over the years, but they have failed to paint a complete picture of the generalisation capabilities based on geometry, as a universal correlation between flatness and generalisation has been disputed [6, 7]. In particular authors in [8] claim that the conclusion that flat minima should generalise better than sharp ones cannot be applied as is without further context. Likewise, Andriushchenko et al. [9] recently observed in multiple cases that sharper minima can generalise better in some modern experimental settings. Arguably, the most impactful sharpness measure, the \(\epsilon\)-sharpness, was introduced by Keskar et al. [2]. It decodes the information from the eigenvalues of the Hessian matrix, while at the same time avoiding the computation-heavy calculation of the Hessian matrix itself. Li et al. [10], however, show that a problem in the interpretability of sharpness measures, such as the \(\epsilon\)-sharpness, may lie in the scaling of the weights. An apparent example is optimisers with weight penalties, which enforce smaller parameters, and are thus more prone to disturbance, leading to sharp minima with good generalisation. In order to overcome this limitation, they suggest to use filter-normalisation for the visualisation of loss landscapes and argue that flatter minima in low-dimensional visualisations with filter-normalised directions go hand-in-hand with better generalisation capabilities, even when compared across different ANN architectures. Even though this relationship is made evident in several instances on a qualitative level, a quantitative measure of the sharpness in the context of filter-normalisation and a corresponding analysis are not provided. Beyond, a core weakness with respect to the universal validity of the results in most previously mentioned contributions is that experiments are limited to established benchmark data sets for image classification, such as CIFAR-10 [11] or ImageNet [12], and should thus be further verified in different research areas and contexts. In this work, we focus on exploring the ASC task of the DCASE2020 challenge, which belongs to the same category of tasks as CIFAR-10 (10-class classification problem), but comprises a different modality (audio instead of images) and more challenges of real-world data. The DCASE ASC challenge has seen tremendous influence on the computer audition community [13]. The yearly updated data sets have been the basis for ASC studies ranging from the development of new model architectures [14] and the evaluation of model robustness [15, 16], to investigations of fairness in performance amongst different recording devices and locations [3]. In this contribution, we suggest a new approach to quantitatively measure the sharpness of a local minimum -or at least of the neighbourhood of a 'well-trained' model state- and find correlations to the generalisability of ASC models. 
We design our experiments considering different architectures, training parameters, and optimisation algorithms in order to address the following research questions: * Is the sharpness derived from a two-dimensional filter-normalised visualisation stable across random directions? * How does the sharpness of ASC models correlate with the generalisation error for _in-domain_ (ID) and OOD data? * Which hyperparameters of model training are drivers for sharp minima? These investigations might give insights relevant to the selection of models that generalise better to OOD data, as well as drive the understanding of different factors affecting this generalisation for computer audition, which are both important open questions for ASC. ## 2 Methodology ### Filter-Normalisation The basis for our characterisation of minima is low-dimensional filter-normalised visualisations of the loss minima as introduced in [10]. The prerequisite for such a visualisation is an ANN with parameters \(\theta\), which was trained to a model state \(\theta^{\star}\), close to a local minimum of the loss function, given a training set \(X\). The precise minimum, however, will most likely not be reached in practice, given a finite time for training, finite numerical precision, and in particular, through techniques such as early stopping. The loss function around the trained model state will nevertheless in most cases increase, when varying any of the parameters \(\theta_{i}\) of the network. With common ANNs having millions or even billions of parameters, this leads to very high-dimensional loss landscapes. The immediate surroundings of the minimum can best be described with the Hessian matrix. The high dimensionality however makes the calculation of the Hessian matrix very computation-heavy and thus not practical [17], whereas significant attempts are addressed in this direction [18]. Instead, a common approach to look at the loss landscape is through low-dimensional visualisations. In two dimensions, this can be realised through the choice of random Gaussian vectors \(\delta\) and \(\eta\), both of the same dimension as \(\theta\), which are in the following used to project the loss function as \[f(\alpha,\beta)=L(\theta^{\star}+\alpha\delta+\beta\eta). \tag{1}\] By varying the scalar variables \(\alpha\) and \(\beta\), we can depict a 2-dimensional projection of the loss landscape. However, Li et al. point out some weaknesses of the visualisation, as different models -and even different model states of the same architecture- can have differently scaled parameters, thus making them more or less vulnerable to perturbations of the same magnitude [10]. Therefore, they suggest adjusting the perturbations relative to the magnitude of the weights, thus rescaling the random gaussian directions \(\delta\) and \(\eta\) choosing a filter-level normalisation. This can be formulated as \[\delta_{i,j}\leftarrow\frac{\delta_{i,j}}{||\delta_{i,j}||}||\theta_{i,j}||, \tag{2}\] where the indices of \(\delta_{i,j}\) and \(\theta_{i,j}\) refer to the components of \(\delta\) corresponding to the \(j\)th filter of the \(i\)th layer in a convolutional neural network. Figure 1 shows two examples of filter-normalised loss landscapes in 2D around a minimum with \(\alpha\) and \(\beta\) ranging from -1 to 1, thus varying the filters of the network by around \(\pm 100\%\). We will use plots of this kind for the following analyses with the adapted code provided by the authors [10]. 
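Conceptually, the filter-normalised directions of Equation (2) and the projected loss of Equation (1) can be sketched in PyTorch as follows. This is a minimal illustration with our own function names, not the adapted code referenced above; biases and normalisation parameters are simply left unperturbed here, which is one possible convention.

```python
import copy
import torch

def filter_normalised_direction(model):
    """Random Gaussian direction, rescaled filter-wise to the parameter norms (Eq. 2)."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:                       # conv/linear weights: per output filter
            for d_f, p_f in zip(d, p):
                d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
        else:                                 # biases / norm parameters: not perturbed
            d.zero_()
        direction.append(d)
    return direction

def projected_loss(model, loss_fn, batch, delta, eta, alpha, beta):
    """Evaluate f(alpha, beta) = L(theta* + alpha*delta + beta*eta) (Eq. 1) on one batch."""
    perturbed = copy.deepcopy(model)
    with torch.no_grad():
        for p, p0, d, e in zip(perturbed.parameters(), model.parameters(), delta, eta):
            p.copy_(p0 + alpha * d + beta * e)
        x, y = batch
        return loss_fn(perturbed(x), y).item()
```

Evaluating projected_loss on a grid of (alpha, beta) values yields the two-dimensional surfaces visualised in Figure 1.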
As the filter-normalised plots are solving the problem of different scales of filters, the authors claim that flatter minima in this representation, despite the heavy reduction in dimensionality, indicate better generalisation, which is underlined with a qualitative analysis of several model states, trained on the CIFAR-10 dataset. ### Sharpness In order to quantitatively evaluate these claims for our ASC problem, we base our analysis on the \(\epsilon\)-sharpness, which is prominently used in the literature. This measure focuses on a small neighbourhood of a minimum and computes the largest value potentially attained by the loss function and is considered a good approximation of the curvature of the minimum and thus, of the sharpness or flatness of the minimum. Formally, it is defined as \[s_{\epsilon}=\frac{\max_{\theta\in B(\epsilon,\theta^{\star})}(L(\theta)-L(\theta^{\star}))}{1+L(\theta^{\star})}\times 100, \tag{3}\] where \(\epsilon\) determines the radius of the ball \(B(\epsilon,\theta^{\star})\) around \(\theta^{\star}\). Alternative measures of sharpness include the consideration of local entropy around a minimum [19] or of the size of the connected region around the minimum where the loss is relatively similar [5]. Inspired by the \(\epsilon\)-sharpness, we calculate the sharpness for our two-dimensional visualisation based on the largest value of \(L(\theta)\) with a maximum distance of \(\epsilon\) to the minimum of the visualisation. We will utilise this sharpness measure in the following to analyse the influences certain experimental settings have on the sharpness of minima and, further, what sharpness can tell us about the generalisation of an ASC model on unseen data. ## 3 Experiments and Discussion ### Dataset As our dataset, we use the development partition of the DCASE 2020 Acoustic Scene Classification dataset [20] and evaluate the experiments based on the standard metric accuracy, which is defined as the ratio of correctly classified samples over all samples. The dataset includes 64 hours of audio segments from 10 different acoustic scenes, recorded in 10 European cities with 3 real devices (denoted as \(A\), \(B\), _C_), as well as data from 6 simulated devices (denoted as _S1-S6_). We use the official training/evaluation splits with devices S4-S6 only appearing in the test set (OOD). The data is evenly distributed across cities, whereas device A (Soundman OKM II Klassik/studio A3) is dominating over B, C, and the simulated devices. Figure 1: Visualisation of the two-dimensional filter-normalised loss landscape for two different model states with different architectures and training paradigms. We extract 64-bin log-Mel spectrograms with a hop size of 10 ms and a window size of 32 ms, additionally resampling the 10 s long audio segments to 16 kHz. ### Model training Our initial experiments involved two _convolutional neural network_ (CNN)-based architectures, the _pre-trained audio neural networks_ (PANNs) CNN10 and CNN14 [21] with random initialisation and around 5.2 million and 80.8 million parameters, respectively, which have frequently been applied to computer audition tasks, including the DCASE ASC task [3, 22, 23]. Their convolutional nature is well in line with the CNNs for which the filter-normalisation was developed.
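For reproducibility, the 64-bin log-Mel front end described in the dataset subsection can be reproduced, for instance, with torchaudio; since only the window and hop sizes are stated above, the FFT size below is our assumption.

```python
import torch
import torchaudio

SR = 16000  # audio resampled to 16 kHz
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SR,
    n_fft=512,        # assumed equal to the 32 ms window (512 samples at 16 kHz)
    win_length=512,
    hop_length=160,   # 10 ms hop
    n_mels=64,
)
to_db = torchaudio.transforms.AmplitudeToDB()

def log_mel(waveform: torch.Tensor) -> torch.Tensor:
    """waveform: (1, 160000) for a 10 s segment -> (1, 64, frames) log-Mel spectrogram."""
    return to_db(mel(waveform))
```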
We explored widely-used optimisers, such as _Adaptive Moment Estimation_ (Adam) and _stochastic gradient descent_ (SGD) with momentum, as well as less common optimisation algorithms, such as the second-order _Kronecker-factored approximate curvature_ (KFAC) and _gradient descent: the ultimate optimiser_ (GDTUO). KFAC utilises approximations to the Hessian matrix to improve convergence speed, while GDTUO automatically adjusts hyperparameters using a stack of multiple optimisers, which in this case involves two stacked Adam optimisers, called hyperoptimisers. However, both KFAC and GDTUO resulted in higher computational costs in terms of runtime and memory requirements per optimisation step. We ran a grid-search for hyperparameters as manifested in Table 1. We additionally applied a learning rate of 1e-5 for the KFAC optimiser and excluded the learning rate 1e-4 for CNN14 with the SGD optimiser in order to prevent suboptimal convergence. Given some hardware limitations for the experiments, we only utilised the second-order optimisers for the CNN10 architecture, leading to overall 38 trained model states. Besides the learning rate, we used default parameters for the optimisers, with SGD using a momentum of 0.9. In all cases, the training was stopped after 50 epochs and the best model state of the epoch with the highest accuracy on the development set used for testing. The training is implemented in PyTorch 1.13.1+cu117 and models were trained on a NVIDIA GeForce GTX TITAN X and a NVIDIA TITAN X (Pascal), both with 12GB RAM. The training time per epoch mostly varied depending on the chosen optimiser, ranging from approximately four minutes for the SGD and Adam optimisers to slightly over six minutes for KFAC, and up to around 18 minutes for GDTUO. Our code and trained model states are publicly available.1 Footnote 1: [https://github.com/EIHW/ASC_Sharpness](https://github.com/EIHW/ASC_Sharpness) ### On the robustness towards random directions Even though not emphasised by the authors of the filter-normalisation method, the choice of the random Gaussian direction should have some impact on the measured or perceived sharpness of a given minimum. To mitigate this impact the authors in [24] use more directions in the parameter space, while in [25], it is suggested to analyse projections along Hessian directions as an alternative method. Nevertheless, most interpretations of the sharpness of minima are limited to (statistics of) a low-dimensional analysis and often show consistent trends across different random directions [26, 27, 28]. We tested the robustness of our sharpness measure by calculating it based on three plots with different random directions. In order to stay in line with the visual argumentation of the plots, as well as the characteristics of the filter-normalisation, we chose a neighbourhood of radius \(0.25\) to calculate the sharpness. Due to the high computational costs of such visualisations, the resolution was set to 0.025 in each direction, leading to 121 loss values per visualisation. The time required to compute one sharpness value in this scenario is around 45 minutes on a single NVIDIA A40 GPU with 16GB RAM. Figure 2 shows the mean sharpness and standard deviation for each trained model based on three different plots per model. Most model states show a relatively low standard deviation compared to the mean sharpness, allowing us to further interpret the sharpness in different settings. 
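A rough sketch of how a single sharpness value can be derived from one such grid of loss values, using the neighbourhood radius given above, is shown below; the grid layout and names are assumptions on our side.

```python
import numpy as np

def grid_sharpness(losses, alphas, betas, radius=0.25):
    """Largest normalised loss increase within `radius` of the grid minimum.

    losses: 2-D array with losses[i, j] = L(theta* + alphas[i]*delta + betas[j]*eta).
    """
    i0, j0 = np.unravel_index(np.argmin(losses), losses.shape)
    l_min = losses[i0, j0]
    aa, bb = np.meshgrid(alphas, betas, indexing="ij")
    dist = np.sqrt((aa - alphas[i0]) ** 2 + (bb - betas[j0]) ** 2)
    neighbourhood = losses[dist <= radius]
    return (neighbourhood.max() - l_min) / (1.0 + l_min) * 100.0
```

Averaging this value over plots generated with several independent random directions gives the mean sharpness and standard deviation reported per model.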
A few exceptions with high standard deviations indicate some limitations of this approach, which might, however, be mitigated by sampling more sharpness-measures per model. Similar analyses of the stability of sharpness-measures with respect to different random directions have previously been reported [27]. ### On the impact of sharpness on generalisation In order to gain insights into the generalisation capabilities of flat and sharp minima in ASC, we plot the test accuracies of the trained model states against their mean sharpness value in Figure 3. We thereby consider the accuracy for ID and OOD separately. To that end, we define OOD performance as the accuracy evaluated on the devices not represented in the training data, namely S4, S5, and S6, whilst ID performance is evaluated on the devices A, B, C, S1, S2 and S3, which are known at training time. Note that all discussed model states show a nearly 100% accuracy on the training data, such that one minus the test accuracy can be interpreted as the generalisation gap. Firstly, we note a tendency that, in our experiments, sharper minima show a better generalisation than flat minima. This is a rather surprising finding, as most of the existing literature reports preferable characteristics of flat minima in the computer vision domain, e.g., [5, 19, 2, 29, 30, 31, 32], whilst only a few studies report on good generalisation in the context of sharp minima [33, 9]. \begin{table} \begin{tabular}{l c} Parameter & Values \\ Network & CNN10, CNN14 \\ Optimiser & SGD, Adam, GDTUO2, KFAC \\ Learning Rate & \(1\mathrm{e}-3\), \(1\mathrm{e}-4\), \(1\mathrm{e}-5\) \\ Batch Size & \(16\), \(32\) \\ Random Seeds & \(42\), \(43\) \\ \end{tabular} \end{table} Table 1: Overview of the grid search parameters for model training. Figure 2: Distribution of sharpness-measures. Each bar indicates the mean sharpness value with the standard deviation of a trained model state in three two-dimensional plots with different random directions.
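The strength of this tendency can be quantified, for instance, as a Pearson correlation between per-model mean sharpness and test accuracy; the following minimal sketch assumes the per-model values have already been collected.

```python
import numpy as np
from scipy.stats import pearsonr

def sharpness_accuracy_correlation(mean_sharpness, accuracies):
    """Pearson correlation between per-model mean sharpness and test accuracy."""
    r, p = pearsonr(np.asarray(mean_sharpness), np.asarray(accuracies))
    return r, p

# Run once with ID accuracies (devices A-C, S1-S3) and once with OOD
# accuracies (devices S4-S6) to compare the two settings.
```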
Figure 4 suggests that both sharpness and accuracy are similarly affected by the training parameters. Certain hyperparameters lead to a higher value in both subplots compared to the other hyperparameters in the group, except for the batch size. This result is in line with our previous findings of sharper minima tending to have better generalisation. However, upon closer examination, it becomes apparent that the amount by which both subplots are affected by a certain group can vary considerably, as the selection of optimisers seems to have the highest impact on sharpness, which is not the case for the test accuracy. This provides us with some insights about when a deduction of generalisation from sharpness might be more reasonable, as, for instance, different optimisers seem to bring different tendencies in sharpness, which might not fully translate to generalisation. A remarkable similarity between average mean sharpness and average test accuracy can, however, be observed for the two model architectures, whose sharpness derives from a different(-dimensional) loss landscape. Note that the choice of learning rates and optimisers were not independent of each other, which limits their separate expressiveness. ### Limitations One of the limitations of our approach lies in the robustness of the sharpness measure, which might, however, be overcome by more efficient implementations, allowing for the consideration of additional random directions. Beyond that, a more thorough analysis of the convergence status of models and its impact on the sharpness measure and generalisation seems desirable. Especially, considering that not all experimental details could be investigated in depth, this contribution can only be a piece in the debate about flat versus sharp minima in ASC in particular and computer audition in general. Beyond, the reasons for good generalisation capabilities of sharp minima in our exploratory study need to be further investigated as the impact of individual hyperparameters on the training needs to be better understood. ## 4 Conclusions In this contribution, we explored the sharpness of minima in the loss function for acoustic scene classification models and its impact on the generalisation capabilities in different, practice-relevant, experimental settings. We found that for our trained models, sharper minima generalised better to unseen (in particular to OOD) data, which has rarely been observed in the computer vision domain. Our approach shows some limitations, as for instance, the choice of optimisers has a higher impact on the sharpness of minima than on the generalisation. In future work, we plan to focus on more efficient and interpretable implementations of sharpness measures and to better understand individual effects of hyperparameters before our findings can be put into practice. ## 5 Acknowledgements This work was partially funded by the DFG's Reinhart Koselleck project No. 442218748 (AUDIONMOUS). Figure 4: Disaggregated distribution of mean sharpness and accuracy across hyperparameters. Each bar averages the mean sharpness or accuracy of all trained models states, grouped by the different types of hyperparameters. Figure 3: Correlation plot between sharpness of minima (the higher, the sharper) and test accuracy for all trained models. Showing best-fit line and 95% confidence intervals for different models.
2309.09249
LiteTrack: Layer Pruning with Asynchronous Feature Extraction for Lightweight and Efficient Visual Tracking
The recent advancements in transformer-based visual trackers have led to significant progress, attributed to their strong modeling capabilities. However, as performance improves, running latency correspondingly increases, presenting a challenge for real-time robotics applications, especially on edge devices with computational constraints. In response to this, we introduce LiteTrack, an efficient transformer-based tracking model optimized for high-speed operations across various devices. It achieves a more favorable trade-off between accuracy and efficiency than the other lightweight trackers. The main innovations of LiteTrack encompass: 1) asynchronous feature extraction and interaction between the template and search region for better feature fusion and cutting redundant computation, and 2) pruning encoder layers from a heavy tracker to refine the balance between performance and speed. As an example, our fastest variant, LiteTrack-B4, achieves 65.2% AO on the GOT-10k benchmark, surpassing all preceding efficient trackers, while running over 100 fps with ONNX on the Jetson Orin NX edge device. Moreover, our LiteTrack-B9 reaches a competitive 72.2% AO on GOT-10k and 82.4% AUC on TrackingNet, and operates at 171 fps on an NVIDIA 2080Ti GPU. The code and demo materials will be available at https://github.com/TsingWei/LiteTrack.
Qingmao Wei, Bi Zeng, Jianqi Liu, Li He, Guotian Zeng
2023-09-17T12:01:03Z
http://arxiv.org/abs/2309.09249v1
# LiteTrack: Layer Pruning with Asynchronous Feature Extraction ###### Abstract The recent advancements in transformer-based visual trackers have led to significant progress, attributed to their strong modeling capabilities. However, as performance improves, running latency correspondingly increases, presenting a challenge for real-time robotics applications, especially on edge devices with computational constraints. In response to this, we introduce LiteTrack, an efficient transformer-based tracking model optimized for high-speed operations across various devices. It achieves a more favorable trade-off between accuracy and efficiency than the other lightweight trackers. The main innovations of LiteTrack encompass: 1) asynchronous feature extraction and interaction between the template and search region for better feature fusion and cutting redundant computation, and 2) pruning encoder layers from a heavy tracker to refine the balance between performance and speed. As an example, our fastest variant, LiteTrack-B4, achieves 65.2% AO on the GOT-10k benchmark, surpassing all preceding efficient trackers, while running over 100 _fps_ with ONNX on the Jetson Orin NX edge device. Moreover, our LiteTrack-B9 reaches competitive 72.2% AO on GOT-10k and 82.4% AUC on TrackingNet, and operates at 171 _fps_ on an NVIDIA 2080Ti GPU. The code and demo materials will be available at [https://github.com/TsingWei/LiteTrack](https://github.com/TsingWei/LiteTrack). ## I Introduction Visual object tracking is a fundamental task in computer vision, which aims to track an arbitrary object given its initial state in a video sequence. In recent years, with the development of deep neural networks [1, 2, 3, 4], tracking has made significant progress. In particular, the utilization of transformers [4] has played a pivotal role in the development of several high-performance trackers [5, 6, 7, 8, 9, 10, 11]. Unfortunately, a majority of recent research efforts [5, 12, 13] has concentrated solely on achieving high performance without considering tracking speed. While these state-of-the-art trackers might deliver real-time performance on powerful GPUs, their efficiency diminishes on devices with limited computational resources. For instance, ARTrack [14], considered as a top-tier tracker, reaches a tracking speed of 37 frames per second (_fps_) on the NVIDIA RTX 2080Ti GPU but drops to 5 _fps_ on the Nvidia Jetson Orin NX, a common edge device. This underscores the pressing need for trackers that effectively strike a balance between performance and speed. The one-stage structure has gained popularity in tracking applications [10, 16, 9, 17]. This structure combines feature extraction and fusion as a joint process as pictured in Fig. 2 (a), leveraging the capabilities of the transformer network, expectedly the ViT [18] that has been pre-trained by mask-image-modeling(MIM) [19, 20]. Conversely, two-stage trackers [5, 6, 21], operating by sequentially extracting features and then fusing them, benefit from caching template features during the testing phase, as shown in Fig. 2 (b). However, the two-stage trackers who extract feature first then perform feature fusion, can cache the template feature during testing, while the one-stage trackers can not. Even though most of one-stage trackers are running faster than the two-stage, we can further accelerate the former by the similar caching technique. 
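To make the caching argument concrete before the design is described, the following back-of-the-envelope sketch counts query-key interactions per search frame for three designs: a one-stage tracker that re-processes template and search tokens jointly in every layer, a two-stage tracker that caches the template and fuses late, and a hybrid that runs several search-only layers before letting search queries attend to the cached template. The token counts and layer split are illustrative placeholders, not values taken from the paper.

```python
# Rough per-frame cost comparison (query-key interactions in attention) for the
# three tracking designs discussed above. All counts below are a simplified,
# illustrative model, not measurements from LiteTrack.

def attn_pairs(n_query: int, n_key: int) -> int:
    """Number of query-key interactions in one attention layer."""
    return n_query * n_key

def per_frame_cost(n_z: int, n_x: int, n_layers: int, n_fe: int) -> dict:
    """Compare per-frame attention work when the template (n_z tokens) is
    processed once and cached, versus re-processed for every search frame
    (n_x tokens). n_fe is the number of search-only layers in the hybrid."""
    one_stage = n_layers * attn_pairs(n_z + n_x, n_z + n_x)            # joint, no caching
    two_stage = n_layers * attn_pairs(n_x, n_x) + attn_pairs(n_x, n_z)  # cached template, single late fusion
    hybrid = (n_fe * attn_pairs(n_x, n_x)                               # search-only layers
              + (n_layers - n_fe) * attn_pairs(n_x, n_x + n_z))         # search queries, keys incl. cached template
    return {"one_stage_joint": one_stage,
            "two_stage_cached": two_stage,
            "hybrid_cached": hybrid}

if __name__ == "__main__":
    print(per_frame_cost(n_z=64, n_x=256, n_layers=9, n_fe=6))
```

With these placeholder sizes, the joint design pays the quadratic cost of the concatenated sequence in every layer, whereas both caching variants avoid re-encoding the template; the hybrid retains several layers of joint search-template interaction at only a modest extra cost over pure late fusion.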
Inspired by ViTDet [22], we find that only the last layer of template feature is sufficient and better for fusion with the search feature of various earlier layer, which can be cached in the testing like two-stage trackers. Therefore, this naturally decides our overall design: the feature extraction of template is performed first and individually, then the extracted last-layer template features interact with the feature extraction of the search region, as shown in Fig. 2(c). Traditional efficient trackers have primarily sought to achieve faster runtimes by directly incorporating an initially lightweight-designed network as their backbone. These lightweight networks are designed for efficiency, which results in relatively mediocre performance in their upstream tasks like image classification. Consequently, when such Fig. 1: Performance comparison of LiteTrack against state-of-the-art trackers on GOT-10k in terms of Average Overlap and RTX 2080Ti Speed. \(\mathtt{o}\) and \(\mathtt{o}\) represent for non-real-time and real-time trackers respectively, based on Nvidia Jetson Orin NX speed (see Tab.2). Our LiteTrack (\(\mathtt{o}\)) family offers comparable accuracy to all other trackers, significantly outpacing them in inference speed. Notably, LiteTrack-B4 achieves over 300 _fps_ on 2080Ti and 100 _fps_ (ONNNX) on edge device. Notice that our LiteTrack delivering the best real-time accuracy trained without extra data, unlike the other efficient trackers. networks are utilized in visual tracking, their performance leaves much to be desired. In contrast, our approach derive efficient model by scaling down a high-performing heavy tracker, instead of starting with a lightweight architecture. This strategy is inspired by our observation, as depicted in Fig.3, that early layers pay sufficient attention to the target. By pruning network layers and integrating our novel asynchronous feature extraction technique, we ensure only a marginal drop in performance even when multiple layers are excised. Consequently, LiteTrack not only rivals the performance of its heavyweight peers but also competes in runtime with lightweight models, presenting an optimal trade-off. Fig.1 reinforces this assertion, showcasing LiteTrack's commendable performance on the challenging GOT-10k [23] benchmark, standing shoulder-to-shoulder with state-of-the-art (SOTA) trackers. Our contributions are summarized as follows: * A efficient tracking achitechture which feature extractions of template and search region are asynchronous is proposed for reducing redundant computation. * A novel scaling principle of tracking model is introduced by adjusting encoder layers for trade-off between accuracy and speed. * Comprehensive evaluations on authoritative generic visual tracking benchmarks have validated the excellent performance of LiteTrack compared with other SOTA trackers. Edge device deployment are tested with promising performance, demonstrating the superior effectiveness of LiteTrack on robotics applicability. ## II Related Works ### _Visual Tracking with Transformers._ Visual tracking has seen the rise of Siamese-based methods [24, 25, 26, 27, 28, 29, 30, 12] that typically employ dual-backbone networks with shared parameters. They have been instrumental in the field due to their efficiency in feature extraction of the template and search region images. Further advancements introduced transformers [4] into the tracking community [31, 32, 33, 34, 5, 7], leveraging them for feature interaction training from scratch. 
The emergence of the one-stream framework [16, 17, 35, 9] showcased improved performance by integrating feature extraction and fusion within the backbone network, enjoying the powerful mask-image-modeling (MIM) pretraining method [19, 20, 36]. Despite their effectiveness, these methods, tailored for powerful GPUs, often farler in speed on edge devices. In response, our research incorporates the last layer feature of the template directly into the search region's feature extraction, provide a simliar cache-in-testing ability like two-stage tracker while also enjoying the powerful pretraining. ### _Lightweight Trackers._ Efficiency in tracking is crucial for practical robotics applications, especially on edge devices. Early methods such as ECO [37] and ATOM [38] focused on real-time operation but didn't achieve the accuracy levels of newer trackers. Recent advancements [39, 40, 41] have employed lightweight-designed backbones for efficient real-time tracking. However, these solutions still show a performance gap when compared to SOTA heavyweight trackers [10, 6, 9]. There have been efforts to refine these advanced trackers: OSTrack [10] considered pruning non-essential features unrelated to the foreground, while SimTrack [16] suggested removing the last four layers to cut computational costs. Despite these modifications, there remains a lack of deep exploration into true real-time lightweight tracking architectures. Our proposed LiteTrack fills this gap, combining efficiency and performance for effective tracking on edge devices. ## III Proposed Method This section presents the LiteTrack method in detail. First, we briefly overview our LiteTrack framework. Then, we depict the asynchronous featrue extraction process, layer pruning of our model, and the head network plus with the training obejctive. Fig. 3: Visualization of attention map (average attention value over all template features attending to the search features) of 2nd, 6th and 8th layer in the twelve-layer encoder of JNTrack [15]. The model focuses nearly precisely on the target even in the early stage of the encoder. Fig. 2: Comparison of the popular architectures for visual tracking. Our method (c) is able to cache the template features like two-stage (b) in testing and also enjoy the powerful pretrain technique like one-stage method (a). ### _Overview_ As shown in Fig. 4, LiteTrack is a combination of one-stage and two-stage tracking framework consisting of two components: the lightweight transformer encoder and the head network. The template of target to be tracked is fed into the lightweight transformer encoder first for feature extraction individually. Then the image of search region is also fed into the same encoder but only the first n layers. We call these first n layers as _Feature Extraction Stage_ (FE). Next, the extracted template features of last layer together with the intermediate search features after the featrure extraction stage, are fed as a concatenated sequence into the remaining encoder layers. We call these final layers as _Asynchronous Interaction Stage_ (AI). Finally, only the part belongs to the search of the final sequence are selected and flatten to 2D feature map, and fed into the head network for the tracking result. ### _Asynchronous Feature Extraction_ The so-called _asynchronous_ means that we extract the template feature first and then the search feature. The feature extraction is done by the transformer encoder, in which each layer mainly consists of multi-head attention. 
Specifically, for a template image \(\mathbf{Z}\in\mathbb{R}^{3\times H_{z}\times W_{z}}\), the patch embedding layer transform the image into sequence of tokens \(\mathbf{Z_{p}}\in\mathbb{R}^{C\times\frac{H_{z}}{16}\times\frac{W_{p}}{16}}\). In each layer of the transofmer encoder, the main operation is multi-head self attention: \[\mathrm{Attn_{z}=}\mathrm{softmax}(\frac{\mathbf{Q}_{z}\mathbf{K}_{z}^{\top} }{\sqrt{d_{k}}})\mathbf{V}_{z}. \tag{1}\] The featrue extraction of template is performed by a serially stacked encoder layers. For a search image \(\mathbf{X}\in\mathbb{R}^{3\times H_{x}\times W_{x}}\), the same patch embedding layer also transform the image into sequence of tokens \(\mathbf{X_{p}}\in\mathbb{R}^{C\times\frac{H_{z}}{16}\times\frac{W_{p}}{16}}\). In the _feature extraction stage_, the search tokens only attend to itself in the multi-head attention operation: \[\mathrm{Attn_{x}=}\mathrm{softmax}(\frac{\mathbf{Q}_{x}\mathbf{K}_{x}^{\top} }{\sqrt{d_{k}}})\mathbf{V}_{x}. \tag{2}\] In the _asynchronous interaction stage_, the tokens of extracted template features of last layer concatenated together with the intermediate search features after the _feature extraction stage_ are concatenated, as the input of the encoder layer. Inspired by MixFormer [9], the attention in the layers within the interaction stage is a little different from the standard self-attention: we generate queries \(\mathbf{Q}\) only by the search features. Thus the attention process can be written as: \[\mathrm{Attn_{xx}=}\mathrm{softmax}(\frac{\mathbf{Q}\mathbf{K}^{ \top}}{\sqrt{d_{k}}})\mathbf{V} \tag{3}\] \[=\mathrm{softmax}(\frac{[\mathbf{Q}_{x}][\mathbf{K}_{x}^{\top}; \mathbf{K}_{z}^{\top}]}{\sqrt{d_{k}}})[V_{x};V_{z}].\] Though the attention computation are different between two stages, the parameters of the networks are still in the same structure. Therefore the template branch and search branch of the network can share the weight. _Analysis:_ Our method works depending on one critical development of recent trackers: the application of homogeneous structure backbone such as the ViT [18] encoder. The chanel of the features, or the dimension of the tokens in context of the transformer, remains unchange during the layers of encoder, therefore the template features can interact with the search features from any intermediate encoder layer. This introduce our first model scaling principle: adjusting the layers for the feature extraction stage and the asynchronous stage for the accuracy-speed trade-off as discussed in Sec. IV-C. During testing, synchronous and symmetric feature extraction methods, such as OSTrack [10] shown in Fig.5(a), often Fig. 4: Overview of the proposed LiteTrack-B6 tracker, consist of 3 layers in feature extraction (FE) stage and 3 layers in asynchronous interaction (AI) stage. For simplicity, we omit the position encoding, skip connection and MLP in the figure. Two branchs of network for template and search region share the same weights. result in redundant computations for the template. Given that the template, typically the initial frame of a video sequence, remains unchanged, its feature also remains constant. By caching these template features, we eliminate unnecessary computations during testing. In contrast to MixFormer, which caches every layer of template features as depicted in Fig.5(b), our method conserves memory by only storing the last layer of template features, as shown in Fig. 5(c). 
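A minimal NumPy sketch of the two attention patterns in Eqs. (2) and (3), omitting the learned projections, multi-head split, MLP, and residual connections: in the feature-extraction stage the search tokens attend only to themselves, while in the asynchronous-interaction stage the queries still come only from the search tokens and the keys and values are the concatenation of the search tokens with the cached last-layer template tokens. Token counts and dimensions are illustrative.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Single-head scaled dot-product attention (no learned projections)."""
    d_k = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d_k)) @ v

rng = np.random.default_rng(0)
d = 32                          # token dimension (illustrative)
z = rng.normal(size=(64, d))    # cached last-layer template tokens Z
x = rng.normal(size=(256, d))   # intermediate search tokens X

# Feature-extraction (FE) stage: search tokens attend only to themselves (Eq. 2).
attn_x = attention(x, x, x)

# Asynchronous-interaction (AI) stage: queries come from the search tokens only,
# keys/values are the concatenation of search and cached template tokens (Eq. 3),
# so the template tokens are never updated and can stay cached across frames.
kv = np.concatenate([x, z], axis=0)
attn_xz = attention(x, kv, kv)

print(attn_x.shape, attn_xz.shape)   # (256, 32) (256, 32)
```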
### _Layer Pruning_ In the pursuit of enhancing object tracking performance, deep neural networks have grown increasingly complex, often at the expense of computational efficiency. Layer pruning offers an avenue to mitigate this by systematically reducing the number of layers in the network. Starting with a 12-layer ViT encoder, we adopted a top-down pruning strategy, progressively eliminating layers and assessing performance against a baseline. As illustrated in Fig.6, the performance dropped with the layers pruned while the speed rise up significantly. However, when paired with asynchronous feature extraction, the decreasing of performance by the layer pruning is moderated. For example, our 9-layer variant combined with asynchronous as outperformed its 12-layer counterpart as shown in Fig.6. ### _Head and Training Objective_ We employ the center head [10] for prediction, which consists of three convolutional branches for center classification, offset regression and size regression, respectively. The center classification branch outputs a centerness score map, where each score represents the confidence of the target center locating at the corresponding position. The prediction of offset regression branch is for the discretization error of the center. The size regression branch predicts the height and width of the target. The position with the highest confidence in the center score map is selected as the target position and the corresponding regressed coordinates are used to compute a bounding box as the final prediction. We apply the weighted focal loss [42] for classification. For localization, we combine \(\ell_{1}\) loss and the generalized GIoU loss [43] as the training objective. The overall loss function can be formulated as \[\mathcal{L}=\mathcal{L}_{\mathrm{focal}}+\lambda_{G}\mathcal{L}_{\mathrm{GIoU} }+\lambda_{l}\mathcal{L}_{l}, \tag{4}\] where \(\lambda_{G}=2\) and \(\lambda_{l}=5\) are trade-off weights following [10] to balance optimization. ## IV Experiments 9 hours for GOT-10k and 24 hours for the other benchmarks on one RTX 3090 GPU, and the lighter model trains faster. _Testing._ During inference, the template is initialized in the first frame of a video sequence. For each subsequent frame, the search region is cropped based on the target's bounding box of the previous frame. We adopt Hanning window penalty to utilize positional prior like scale change and motion smoothness in tracking, following the common practice [10, 33]. The output scores are simply element-wise multiplied by the Hanning window with the same size, and we choose the box with the highest multiplied score as the target box. ### _State-of-the-art Comparisons_ LiteTrack is benchmarked against state-of-the-art trackers, both real-time and non-real-time, across six tracking datasets. We evaluated the speed of these trackers on two distinct platforms: an Nvidia GeForce RTX 2080Ti GPU (with an Intel i5-11400F CPU) and an Nvidia Jetson Orin NX 16GB edge device. For these tests, we utilized PyTorch 1.12.0 on the former and PyTorch 2.0.0 @ JetPack 5.1 on the latter. Trackers are categorized into real-time and non-real-time based on their PyTorch speed on the Orin NX device, following the 20 _fps_ real-time setting of VOT [56]. Detailed comparative results are showcased in Tables II and III. We also report our tracker's speed accelerated with ONNX fp16 in Tab. I. 
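Before turning to the per-dataset results, the prediction decoding described above (centerness score map, offset and size regression, Hanning window penalty) can be sketched as follows. The channel ordering of the offset and size maps and the normalisation convention are assumptions made for illustration, not taken from the released code.

```python
import numpy as np

def decode_box(score_map, offset_map, size_map, use_hanning=True):
    """Pick the target box from the center-head outputs: multiply the centerness
    scores element-wise by a Hanning window of the same size, take the argmax,
    then read the offset and size predictions at that location.
    Assumed shapes: score_map (H, W); offset_map and size_map (2, H, W), with
    channels (dy, dx) and (h, w); all coordinates normalised to the search grid."""
    h, w = score_map.shape
    if use_hanning:
        win = np.outer(np.hanning(h), np.hanning(w))
        score_map = score_map * win
    cy, cx = np.unravel_index(np.argmax(score_map), score_map.shape)
    dy, dx = offset_map[:, cy, cx]        # discretisation-error correction
    bh, bw = size_map[:, cy, cx]          # predicted height and width
    cx_f, cy_f = (cx + dx) / w, (cy + dy) / h
    return np.array([cx_f - bw / 2, cy_f - bh / 2, bw, bh])  # (x, y, w, h), normalised

# Toy inputs just to exercise the function.
rng = np.random.default_rng(0)
H = W = 16
box = decode_box(rng.random((H, W)),
                 rng.random((2, H, W)) * 0.5,
                 rng.random((2, H, W)) * 0.3)
print(box)
```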
_GOT-10k._ GOT-10k [23] is a large-scale and challenging dataset that contains 10k training sequences and 180 test sequences which are zero-overlapping in classes. The official one-shot protocol reuqires the evaluated tracker training without extra data, which encourages the model designed for challenging scenes like unseen objects. We report the Average Overlap (AO), Success Rate over a overlapping rate of 50% (SR\({}_{0.5}\)) and the same one over 75% (SR\({}_{0.75}\)) obtained by submitting the result to the official evaluation server. As shown in Table II, LiteTrack-B9 achieves the best real-time results of 72.2% AO score, which is also competitive to the best non-real-time tracker ARTrack-256 [14] (73.5% AO score). Our LiteTrack-B4 surpassing all the real-time trackers with AO score of 65.2%, even though our trackers are trained without extra data. _TrackingNet._ TrackingNet [44] is a large-scale dataset containing a variety of situations in natural scenes and multiple categories, and its test set includes 511 video sequences. We report the Area Under Curve (AUC), Normalized Precision (P\({}_{Norm}\)) and Precision (P) obtained by submitting the tracking result to the official evaluation server. As reported in Table II, LiteTrack series achieve competitive results \begin{table} \begin{tabular}{l|c|c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Method} & \multirow{2}{*}{Source} & \multicolumn{3}{c}{TrackingNet [44]} & \multicolumn{3}{c}{LaSOT [45]} & \multicolumn{3}{c}{GOT-10k* [23]} & \multicolumn{3}{c}{Speed (_fps_)} \\ \cline{3-13} & & AUC & P\({}_{Norm}\) & P & AUC & P\({}_{Norm}\) & P & AO & SR\({}_{0.5}\) & SR\({}_{0.75}\) & 2080Ti & OrinNX \\ \hline \multirow{8}{*}{LiteTrack-B9} & \multirow{2}{*}{Ours} & LiteTrack-B9} & **82.4** & **87.3** & **80.4** & **67.0** & **77.0** & **72.7** & **82.3** & **69.3** & 171 & 21 \\ & LiteTrack-B8 & Ours & 81.4 & 86.4 & 79.4 & 66.4 & 76.4 & 71.4 & 70.4 & 80.1 & 66.4 & 190 & 25 \\ & LiteTrack-B6 & Ours & 80.8 & 85.7 & 78.2 & 64.6 & 73.9 & 68.9 & 68.7 & 78.2 & 64.2 & 237 & 31 \\ & LiteTrack-B4 & Ours & 79.9 & 84.9 & 76.6 & 62.5 & 72.1 & 65.7 & 65.2 & 74.7 & 57.7 & 315 & 44 \\ & HiT-Base1 [41] & ICCV’23 & 80.0 & 84.4 & 77.3 & 64.6 & 73.3 & 68.1 & 64.0 & 72.1 & 58.1 & 175 & - \\ & E.T.Track [46] & WACV’23 & 75.0 & 80.3 & 70.6 & 59.1 & - & - & - & - & 67 & 21 \\ & E.T.Track-S [46] & ECCV’22 & - & - & - & 53.5 & - & 54.5 & 61.9 & 72.2 & - & 182 & 50 \\ & HCAT [47] & ECCV’22 & 76. 
& 82.6 & 72.9 & 59.3 & 68.7 & 61.0 & 65.1 & 76.5 & 56.7 & 235 & 34 \\ & LightTrack [39] & CVPR’21 & 72.5 & 77.8 & 69.5 & 53.8 & - & 53.7 & 61.1 & 71.0 & - & 107 & 38 \\ & HiT [48], & ICCV’21 & 66.7 & 73.8 & 60.9 & 45.1 & 52.7 & 42.1 & - & - & - & 230 & 50 \\ & ECO [37] & CVPR’17 & 55.4 & 61.8 & 49.2 & 32.4 & 33.8 & 30.1 & 39.5 & 40.7 & 17.0 & 113 & 22 \\ \hline \multirow{8}{*}{LiteTrack-256 [14]} & \multirow{2}{*}{CVPR’23} & \multirow{2}{*}{_84.2_} & _88.7_ & 83.5 & _70.4_ & _79.5_ & _76.6_ & _73.5_ & 82.2 & _70.9_ & 37 & 6 \\ & GRM-256 [35] & CVPR’23 & 84.0 & 88.7 & 83.3 & 69.9 & 79.3 & 75.8 & 73.4 & 82.9 & 70.4 & 79 & 10 \\ & OSTrack-256 [10] & ECCV’22 & 83.1 & 87.8 & 82.0 & 69.1 & 78.7 & 75.2 & 71.0 & 80.4 & 68.2 & 140 & 18 \\ & MiT-Gorner [9] & CVPR’22 & 83.1 & 88.1 & 81.6 & 69.2 & 78.7 & 74.7 & 70.7 & 80.0 & 67.8 & 48 & 12 \\ & Sim-B/16 [16] & ECCV’22 & 82.3 & 86.5 & - & 69.3 & 78.5 & - & 68.6 & 78.9 & 62.4 & 131 & 15 \\ & STARK-ST50 [6] & ICCV’21 & 81.3 & 86.1 & - & 66.6 & - & - & 68.0 & 77.7 & 62.3 & 61 & 13 \\ & TransT [5] & CVPR’21 & 81.4 & 86.7 & 80.3 & 64.9 & 73.8 & 69.0 & 67.1 & 76.8 & 60.9 & 84 & 13 \\ & TriDiffor [7] & CVPR’21 & 78.4 & 83.3 & 73.1 & 63.9 & - & 61.4 & 67.1 & 77.7 & 58.3 & 36 & 6 \\ & PDiMP [50] & CVPR’20 & 75.8 & 81.6 & 70.4 & 59.8 & 68.8 & 60.8 & 63.4 & 73.8 & 54.3 & 47 & 12 \\ & DiMP [13] & ICCV’19 & 74.0 & 80.1 & 68.7 & 56.9 & 65.0 & 56.7 & 61.1 & 71.7 & 49.2 & 100 & 16 \\ & SimRPN++ [26] & CVPR’19 & 73.3 & 80.0 & 69.4 & 49.6 & 56.9 & 49.1 & 51.7 & 61.6 & 32.5 & 83 & 16 \\ & ATOM [38] & CVPR’19 & 70.3 & 77.1 & 64.8 & 51.5 & 57.6 & 50.5 & 55.6 & 63.4 & 40.2 & 175 & 15 \\ \hline \hline \end{tabular} \end{table} TABLE II: State-of-the-art comparison on TrackingNet [44], LaSOT [45], and GOT-10k [23] benchmarks. The best three real-time results are shown in **red**, blue and green fonts, and the best non-real-time results are shown in **underline** font. * denotes results on GOT-10k obtained following the official one-shot protocol, with gray font indicating training using extra data. \begin{table} \begin{tabular}{l|c|c c} \hline \hline \multirow{2}{*}{Method} & NFS [51] & UAV123 [52] & VOT’21 [53] \\ & AUC & AUC & EAO \\ \hline LiteTrack-B9 & **65.4** & **67.7** & **0.269** \\ LiteTrack-B8 & 64.6 & 67.1 & 0.261 \\ LiteTrack-B6 & 64.4 & 66.2 & 0.254 \\ LiteTrack-B4 & 63.4 & 66.4 & 0.251 \\ HiT-Base [41] & 63.6 & 65.6 & 0.252 \\ HCAT [47] & 63.5 & 62.7 & - \\ FEAR compared with the previous real-time trackers. LiteTrack-B9 gets the best AUC of 82.4%, surpassing the previous best real-time tracker HiT-Base [41] by 2.4%. Compared to non-real-time tracker ARTrack [14], LiteTrack-B9 achieves comparable performance to it in AUC (82.4 \(vs.\) 84.2) while being \(4.5\times\) faster on the GPU and \(6\times\) faster on the Jetson edge platform. _LaSOT._ LaSOT [45] is a large-scale, long-term dataset containing 1400 video sequences, with 1120 training videos and 280 test videos. We report the same matrices as in TrackingNet evaluated by PyTracking2 tools. The results on LaSOT are shown in Table II. LiteTrack-B9 achieves the best real-time results of 67.0%, 77.0%, and 72.7% in AUC, P\({}_{Norm}\), and P, respectively. LiteTrack-B8 and LiteTrack-B6 achieves the second-best and the third-best AUC score. Compared with the recent efficient tracker HiT-Base [41], LiteTrack-B9 outperform it by 2.4% in AUC. Footnote 2: [https://github.com/visionml/ptytracking](https://github.com/visionml/ptytracking) _NFS, UAV123 and VOT2021_. 
On the NFS dataset [51], known for its fast-moving objects spanning 100 video sequences, our LiteTrack variants B9, B8, and B6 emerge as the top three in real-time performance as highlighted in Table III. Meanwhile, on the UAV123 dataset [52], which features 123 video clips from low-altitude UAVs, even our fastest LiteTrack-B4 takes the lead among real-time trackers with an AUC score of 66.4%, surpassing competitors such as HiT [41] and HCAT [47] by margins of 0.8% and 3.7%, respectively. Similarly, our VOT-2021 real-time experiments on the VOT2021 benchmark [53] witnessed LiteTrack-B9 achieving the highest EAO score of 26.9% among real-time trackers, as tabulated in Table III. ### _Ablation Study and Visualization_ _Component-wise Analysis._ The significance of our proposed methods is underscored through a comparative study built upon OSTrack [10]. For setting a solid baseline, we enhanced OSTrack by substituting its MAE [19] pretrained weights with those of CAE [20], the outcome of which is enumerated in Tab. IV, Row 2. Direct layer pruning, as seen in Row 3, led to a marked decline in performance. However, when integrated with our novel asynchronous feature extraction (Row 4), not only was the deficit recovered, but the model also achieved superior accuracy and efficiency, surpassing even the strong baseline. _Layer Configuration Analysis._ We explored various configurations concerning the ratio of feature extraction (FE) layers to asynchronous interaction (AI) layers, as depicted in Tab. V. For configurations with 8 total layers, peak performance was achieved with a majority of the layers dedicated to FE. The 6-layer configuration showed comparable results, especially with an even FE-to-AI ratio. Notably, in the 4-layer configurations, a balanced 2:2 FE-to-AI setup still produced respectable results. The data highlights the model's adaptability across different layer configurations and offers insights into achieving an optimal balance between FE and AI layers. _Qualitative Results._ To better present the superiority of LiteTrack, we highlight representative scenes in Fig. 7. In a challenging UAV tracking scenario under a noisy and jittery UAV camera feed, LiteTrack consistently maintains its track, outperforming other trackers. Similarly, when tracking a moving car from a UAV's perspective, LiteTrack demonstrates pinpoint precision, ensuring more accurate alignment with the ground truth than competing methods. These real-world tests underscore LiteTrack's proficiency in handling diverse tracking challenges. ## V Conclusions In this work, we've presented LiteTrack, a pioneering approach to object tracking tailored for robotics applications and edge devices. By combining layer pruning with asynchronous feature extraction, we've achieved significant improvements in both accuracy and execution speed across diverse datasets. Our results underscore LiteTrack's potential, as it not only outperforms leading real-time trackers but also addresses the constraints of computational resources often found in robotics and edge deployments. With its efficient design, LiteTrack promises to be a valuable basline for real-time robotics applications. 
\begin{table} \begin{tabular}{c|c c|c c c} \hline \hline \# Total & \# FE & \# AI & \multirow{2}{*}{AO} & \multirow{2}{*}{SR\({}_{0.5}\)} & \multirow{2}{*}{_fps_} \\ Layers & Layers & & Layers & & \\ \hline \multirow{3}{*}{8} & 6 & 2 & 70.3 & **80.4** & 190 \\ & 5 & 3 & **70.4** & 80.1 & 185 \\ & 0 & 8 & 68.3 & 77.9 & 173 \\ \hline \multirow{3}{*}{6} & 4 & 2 & 68.0 & 77.5 & 241 \\ & 3 & 3 & **68.7** & **78.2** & 237 \\ \hline \multirow{3}{*}{4} & 3 & 1 & 64.6 & 73.7 & 318 \\ & 2 & 2 & **65.2** & **75.5** & 315 \\ \hline \hline \end{tabular} \end{table} TABLE V: Performance comparison based on varying ratios of feature extraction layers to asynchronous interaction layers. We use gray color to denote our final configuration. Fig. 7: Prediction comparison from UAV123 [52]. We use green lines to demonstrate the Groud Truth bounding box of the target. Blue boxes represent our LiteTrack’s predictions, while yellow and red boxes denote the predictions of trackers HCAT [47] and E.T.Track [46] respectively.
2308.00144
Logical Synchrony and the bittide Mechanism
We introduce logical synchrony, a framework that allows distributed computing to be coordinated as tightly as in synchronous systems without the distribution of a global clock or any reference to universal time. We develop a model of events called a logical synchrony network, in which nodes correspond to processors and every node has an associated local clock which generates the events. We construct a measure of logical latency and develop its properties. A further model, called a multiclock network, is then analyzed and shown to be a refinement of the logical synchrony network. We present the bittide mechanism as an instantiation of multiclock networks, and discuss the clock control mechanism that ensures that buffers do not overflow or underflow. Finally we give conditions under which a logical synchrony network has an equivalent synchronous realization.
Sanjay Lall, Calin Cascaval, Martin Izzard, Tammo Spalink
2023-07-31T20:25:30Z
http://arxiv.org/abs/2308.00144v3
# Logical Synchrony and the bittide Mechanism ###### Abstract We introduce logical synchrony, a framework that allows distributed computing to be coordinated as tightly as in synchronous systems without the distribution of a global clock or any reference to universal time. We develop a model of events called a logical synchrony network, in which nodes correspond to processors and every node has an associated local clock which generates the events. We construct a measure of logical latency and develop its properties. A further model, called a multiclock network, is then analyzed and shown to be a refinement of the logical synchrony network. We present the bittide mechanism as an instantiation of multiclock networks, and discuss the clock control mechanism that ensures that buffers do not overflow or underflow. Finally we give conditions under which a logical synchrony network has an equivalent synchronous realization. ## 1 Introduction In this paper we introduce _logical synchrony_, a property where machines share a common notion of time sufficient to reason about causality but without the need to share a system-wide clock. We discuss what this notion of time is and how it corresponds to existing models. We also discuss the relationship between logical synchrony and the more constrained purer form that has been the focus of much prior work. We finally relate the bittide mechanism which allows efficient implementation of logically synchrony on modern networks and thereby allows for cycle-accurate coordination across nodes. Synchronous execution models have been used successfully in realtime systems [1, 2, 3] to reason about correctness, in particular meeting deadlines. Often, synchronous abstractions are decoupled from implementation and are used to validate system functional behavior. When mapping synchronous abstractions to asynchronous non-deterministic hardware, work has been done to automate code generation that matches the functional semantics, hiding the non-deterministic behavior of the hardware with explicit synchronization, for example [4]. Logical Execution Time (LET) was introduced by Henzinger and Kirsch [5] to support the design of reactive, cyber-physical systems. More recently, Lingua Franca [6, 7] supports concurrent and distributed programming using time-stamped messages. Lingua Franca exposes to programmers the notion of _reactors_ that are triggered in logical time, allowing deterministic reasoning about four common design patterns in distributed systems: alignment, precedence, simultaneity, and consistency. We argue that the causality reasoning in the logical synchrony framework subsumes such design patterns - they are all effectively enabling reasoning about ordering of events in a system that exchanges messages, and as we will show in the paper, this is exactly the class of applications for which logical synchrony determines precisely the causality relationships. Alternatively, synchronous execution can be implemented using a single global clock. For small real-time systems, cyber-physical systems, and control systems, a global clock can be distributed from a single oscillator. Scaling such systems is difficult because large clock distribution networks introduce delays which must be corrected. For the majority of systems using wall-clock time as their global clock, synchronization implies exchanging timestamps [8, 9]. 
Techniques such as TrueTime [10] and Sundial [11] attempt to reduce the latency uncertainty, and thus the time-uncertainty bounds, from milliseconds in TrueTime to nanoseconds in Sundial. To achieve desired levels of performance using existing network protocols requires expensive time references such as dedicated atomic clocks and networking hardware enhancements to reduce protocol overhead. Time uncertainty is exposed to programmers through an uncertainty interval which guarantees that current time is within interval bounds for all nodes in the system, such that every node is guaranteed to have passed current time when the bound elapses. To provide an example use case, this method guarantees concurrency control correctness in (lock-free) database transactions by ensuring that all distributed system nodes observe the same order of events. Logical synchrony, formalized in Section 2, abstracts the notion of shared time and allows us to avoid a global reference clock or wall-clock. Time is defined only by local clocks decoupled from physical time. The idea is that events at the same node are ordered by local time, and events at different nodes are ordered by causality. As we will show, logical synchrony requires no system-wide global clock and no explicit synchronization (timestamp exchanges or similar), which thereby allows for potentially infinitely scalable systems. Reasoning about ordering of events in logically synchronous systems follows the partial order semantics of Lamport [12] and thus provides equivalence with any synchronous execution that generates identical event graphs. To establish how logical synchrony can be realized in practice, we first define what logical synchrony means within an abstract model of distributed systems with multiple clocks. We then explain how bittide [13, 14, 15] is a mechanism to efficiently implement logical synchrony with real hardware and thereby bring desirable synchronous execution properties to distributed applications efficiently at scale. ### Mathematical preliminaries and notation An _undirected graph_\(\mathcal{G}\) is pair \((\mathcal{V},\mathcal{E})\) where \(\mathcal{V}\) is a set and \(\mathcal{E}\) is a subset of the set of 2-element subsets of \(\mathcal{V}\). A _directed graph_\(\mathcal{G}\) is pair \((\mathcal{V},\mathcal{E})\) where \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) and \((v,v)\not\in\mathcal{E}\) for all \(v\in\mathcal{V}\). An edge \(e\in\mathcal{E}\) in a directed graph may be denoted \((u,v)\) or \(u\to v\). A directed graph may contain a 2-cycle, that is a pair of edges \(u\to v\) and \(v\to u\). An _oriented graph_ is a directed graph in which there are no 2-cycles. Suppose \(G=(\mathcal{V},\mathcal{E})\) is a directed graph, and number the vertices and edges so that \(\mathcal{V}=\{1,\ldots,n\}\) and \(\mathcal{E}=\{1,\ldots,m\}\). Then the _incidence matrix_\(B\in\mathbb{R}^{n\times m}\) is \[B_{ij}=\begin{cases}1&\text{if edge $j$ starts at node $i$}\\ -1&\text{if edge $j$ ends at node $i$}\\ 0&\text{otherwise}\end{cases}\] for \(i=1,\ldots,n\) and \(j=1,\ldots,m\). A _walk_ in a directed graph \(G\) is a non-empty alternating sequence \(v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) in which \(v_{i}\in\mathcal{V}\), \(s_{i}\in\mathcal{E}\), and either \(s_{i}=v_{i}\to v_{i+1}\) or \(s_{i}=v_{i+1}\to v_{i}\). In the former case we say \(s_{i}\) has _forward_ or \(+1\) orientation, otherwise we say it has _backward_ or \(-1\) orientation. A _path_ is a walk in which all vertices are distinct. 
A _cycle_ is a walk in which vertices \(v_{0},\ldots,v_{k-1}\) are distinct, all edges are distinct, and \(v_{0}=v_{k}\). Walks, paths, and cycles are called _directed_ if all edges are in the forward orientation. In a directed graph \(G\), given a walk \[W=(v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k})\] the corresponding _incidence vector_\(x\in\mathbb{R}^{m}\) is such that \(x_{i}=1\) if there exists \(j\) such that \(i=s_{j}\) and \(s_{j}\) has forward orientation, and \(x_{i}=-1\) if there exists \(j\) such that \(i=s_{j}\) and \(s_{j}\) has reverse orientation, and \(x_{i}=0\) otherwise. For a directed graph with 2-cycles, there is an edge \(u\to v\) and \(v\to u\), and we assign one of these directions as primary and the other as secondary. This is simply a choice of sign convention. From a directed graph we construct an associated oriented graph by discarding all secondary edges. From an oriented graph we construct an associated undirected graph by discarding all orientations. The concepts of spanning tree and connectedness when applied to a directed graph always refer to the associated undirected graph. The following two results are well-known. **Theorem 1**.: _Suppose \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is a directed graph with incidence matrix \(B\), and suppose edges \(1,\ldots,n-1\) form a spanning tree. Partition \(B\) according to_ \[B=\begin{bmatrix}B_{11}&B_{12}\\ -\mathbf{1}^{\mathsf{T}}B_{11}&-\mathbf{1}^{\mathsf{T}}B_{12}\end{bmatrix}\] _then \(B_{11}\) is unimodular. Further_ \[B=\begin{bmatrix}B_{11}&0\\ -\mathbf{1}^{\mathsf{T}}B_{11}&1\end{bmatrix}\begin{bmatrix}I&0\\ 0&0\end{bmatrix}\begin{bmatrix}I&N\\ 0&I\end{bmatrix}\] _where \(N=B_{11}^{-1}B_{12}\)._ Proof.: _See for example Theorem 2.10 of [16]._ For convenience, denote by \(Z\) the \(m\times(m-n+1)\) matrix \[Z=\begin{bmatrix}-N\\ I\end{bmatrix}\] Then we have the following important property. **Theorem 2**.: _Every column of \(Z\) is the incidence vector of a cycle in \(\mathcal{G}\)._ Proof.: _See, for example, Chapter 5 of [16]._ Theorem 1 implies that the columns of \(Z\) are a basis for the null space of \(B\), since \(BZ=0\) and \(\operatorname{null}(Z)=\{0\}\). The columns of \(Z\) are called the _fundamental cycles_ of the graph. Note that each of the fundamental cycles is associated with exactly one of the non-tree edges of the graph. ## 2 Logical synchrony networks We start with a formal definition of a logical synchrony network as a directed graph with edge weights, as follows. **Definition 1**.: _A **logical synchrony network** is a directed graph \((\mathcal{V},\mathcal{E})\) together with a set of edge weights \(\lambda:\mathcal{E}\rightarrow\mathbb{R}\)._ In this model, each node corresponds to a processor, and an edge between nodes \(i\to j\) indicates that node \(i\) can send data along a physical link to node \(j\). Sent data is divided into tokens which we refer to as _frames_. Local clocks.Every node has an infinite sequence of _events_ associated with it, which can be thought of as compute steps. The events at node \(i\) are denoted \((i,\tau)\), where \(\tau\) is referred to as a _localtick_ and thereby implicitly defines a local clock. We define the set of all events \[\mathcal{V}_{\text{ext}}=\{(i,\tau)\mid i\in\mathcal{V},\tau\in\mathbb{Z}\}\] Events at one node are aligned to events at other nodes by the transmission of frames. 
At localtick \(\tau\) and node \(i\), a frame is sent from node \(i\) to node \(j\), and it arrives at node \(j\) at localtick \(\tau+\lambda_{i\cdot j}\). The constant \(\lambda_{i\cdot j}\) is called the _logical latency_. We define the following binary relation. **Definition 2**.: _Event \((i,\tau)\) is said to **directly send to** the event \((j,\rho)\) if \((i,j)\in\mathcal{E}\) and \(\rho=\tau+\lambda_{i\cdot j}\), or \(i=j\) and \(\rho=\tau+1\). We use the notation_ \[(i,\tau)\rightarrow(j,\rho)\] _to mean \((i,\tau)\) directly sends to \((j,\rho)\), and define the set_ \[\mathcal{E}_{\text{ext}}=\{\left((i,\tau),(j,\rho)\right)\mid(i,\tau) \rightarrow(j,\rho)\}\] _The graph \(\mathcal{G}_{\text{ext}}=(\mathcal{V}_{\text{ext}},\mathcal{E}_{\text{ext}})\) is called the **extended graph** of the logical synchrony network._ This relation may be viewed as an infinite directed graph with vertex set \(\mathcal{V}_{\text{ext}}\) and directed edges \((i,\tau)\rightarrow(j,\rho)\). In this graph, those edges \((i,\tau)\rightarrow(j,\rho)\) for which \(i=j\) are called _computational edges_. An edge that is not a computational edge is called a _communication edge_. Figure 1 illustrates a logical synchrony network and its corresponding extended graph. The localticks define a separate and ideal notion of local duration at each node by counting events (_i.e._, frame transmissions or receptions.) We can speak of the event \((i,\tau)\) as occurring at time \(\tau\) localticks on node \(i\). We say that event \((i,\tau+a)\) happens \(a\) localticks after event \((i,\tau)\), for any \(a\in\mathbb{Z}\). We cannot in general compare clock values at two different nodes. Execution.This model captures the local evolution of time at each node \(i\in\mathcal{V}\), and the transmission of frames between them. Although we do not investigate execution models in this paper, it is possible to define many different execution semantics. One simple choice is the functional model, where frames carry data, and associated with each event \((i,\tau)\in\mathcal{V}_{\text{ext}}\) in the extended graph we have a function, which maps data from incoming edges to data on outgoing edges. Another possibility is to have a more procedural model, where events in \(\mathcal{V}_{\text{ext}}\) correspond to the clock ticks of a processor in the corresponding \(\mathcal{V}\). For the purposes of this paper it is not necessary to specify how many bits each frame contains but we assume all frames on a given link are equally sized. The abstract models considered in this paper consist of sequences of events which extend infinitely far into both the future and the past. It is possible to extend this model to include system startup, for example by introducing a minimum node within the extended graph, or by modifying the execution model. We do not address startup within this paper. Frames and logical latency.If \(A\) denotes a particular frame sent \(i\to j\), then we will make use of the notation receive(\(A\)) to refer to the localtick at node \(j\) when \(A\) arrives at \(j\). Similarly send(\(A\)) refers to the localtick at node \(i\) when \(A\) was sent. This notation leaves implicit the source and destination of frame \(A\), in that \(i,j\) are not included as arguments of the send and receive functions. We do not as yet assume any particular mechanism for transmission of frames, but we assume that frames are received in the order that they are sent, without any loss. 
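A small sketch of Definition 2 in code: given the logical latencies, the events directly reachable from \((i,\tau)\) in the extended graph are the next local event \((i,\tau+1)\) and one event \((j,\tau+\lambda_{i\text{-}j})\) per outgoing edge. The example graph and latency values are made up for illustration.

```python
# Enumerate the "directly sends to" successors of an event (Definition 2).
from typing import Dict, List, Tuple

Edge = Tuple[str, str]
Event = Tuple[str, int]

def direct_sends(event: Event, lam: Dict[Edge, int]) -> List[Event]:
    """Events (j, rho) such that (i, tau) -> (j, rho): the next local event
    (i, tau + 1), plus (j, tau + lambda_{i->j}) for every outgoing edge i -> j."""
    i, tau = event
    succ = [(i, tau + 1)]                                               # computational edge
    succ += [(j, tau + l) for (src, j), l in lam.items() if src == i]   # communication edges
    return succ

# Example 3-node network; a logical latency may be zero or even negative.
lam = {("A", "B"): 2, ("B", "C"): -1, ("C", "A"): 3}
print(direct_sends(("A", 5), lam))   # [('A', 6), ('B', 7)]
```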
Note that the logical latency has no connection to _physical latency_. If we were to measure the send and receive times with respect to a global notion of time, we would know that, for example, the receive time must be greater than the send time. In the framework presented here, that is not the case; the localticks are strictly local, and as a result there is no such requirement on their numerical value; the logical latency \(\lambda_{i\cdot j}\) may be negative. This is, of course, a statement about the clocks, not about causality. In words, the logical latency is the time of arrival _in the receiver's clock_ minus the time of departure _in the sender's clock_. There are several observations worth making about logical latency. * Logical latency is _constant_. For any two nodes \(i,j\), every frame sent \(i\to j\) has the same logical latency. It is a property of the edge \(i\to j\) in \(\mathcal{E}\). * Despite the name, logical latency is not a measure of length of time or duration. It is not the case that if \(\lambda_{i\cdot j}\) is greater than \(\lambda_{p\cdot q}\) then it takes longer for frames to move from \(i\) to \(j\) than it does for frames to move from \(p\) to \(q\). (In fact, we do not have a way within this framework to compare two such quantities.) Figure 1: A logical synchrony network (edges labeled with \(\lambda\)) and corresponding extended graph. * The logical latency can be negative. Logical latencies and paths.Logical latencies add along a path. Suppose node \(i\) sends a frame \(B\) along edge \(i\to j\) to node \(j\), and then node \(j\) forwards it \(j\to k\). Then we have \[\operatorname{receive}(B)=\operatorname{send}(B)+\lambda_{i\text{-}j}+\lambda_ {j\text{-}k}\] This means that we can speak of the logical latency of the path \(i\to j\to k\) as being \(\lambda_{i\text{-}j}+\lambda_{j\text{-}k}\), and more generally we can define the logical latency of a directed path \(\mathcal{P}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) from node \(v_{0}\) to node \(v_{k}\) in \(\mathcal{G}\). The logical latency is path dependent; two paths with the same endpoints may have different logical latencies. We have \[\lambda_{\mathcal{P}}=\sum_{i=0}^{k-1}\lambda_{s_{i}}\] This makes sense, which is potentially surprising because we are measuring arrival and departure times with different clocks. Since frames are being relayed, there may be additional delay at intermediate nodes (_i.e._, additional compute steps) which would need to be included when determining the destination event. Logical latencies are defined such that they do not included this additional delay. ### Ordering of events A fundamental question regarding causality arises in the study of distributed systems. Given two events, we would like to determine which happened first. In a nonrelativistic physical setting, such a question is well-defined. In a relativistic setting, there are events which are separated in space for which the relative order is undetermined -- the order depends on the observer. Something similar happens in distributed systems, as was pointed out by Lamport [12]. Given two events, instead of asking which event happened first, a more useful question is to ask which event, if any, _must have_ happened first. The framework for distributed clocks developed by Lamport [12] established that there is a partial ordering on events determined by one event's ability to influence another by the sending of messages. 
In that paper the author defines a global notion of time consistent with said partial order. Subsequent work [17, 18] defines _vector clocks_ which assign a vector-valued time to events for which the partial ordering is equivalent to that defined by message-passing. We would like to construct the corresponding notion of causality in a logical synchrony network. We define below the \(\sqsubset\) relation, which can be used to define a partial order on \(\mathcal{G}_{\text{ext}}\) provided we can ensure that it is acyclic. To do this, we consider round-trip times. Round trip times.Logical latencies are not physical latencies, despite the additive property. However, there is one special case where logical latency is readily interpreted in such physical terms, specifically the time for a frame \(A\) to traverse a cycle in the graph, the cycle round-trip time. Suppose \(\mathcal{C}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) is a directed cycle, then \[\lambda_{\mathcal{C}}=\operatorname{receive}(A)-\operatorname{send}(A)\] is the round-trip time measured in localticks. Two different cycles from a single node \(i\) may have different round-trip times, and these are comparable durations since they are both measured in localticks at that node. We have \[\lambda_{\mathcal{C}}=\sum_{i=0}^{k-1}\lambda_{s_{i}}\] We make the following definition. **Definition 3**.: _A logical synchrony network is said to have **positive round-trip times** if, for every directed cycle \(\mathcal{C}\) in the graph \(\mathcal{G}\) we have \(\lambda_{\mathcal{C}}>0\)._ We then have the following result, which says that if the round-trip times around every directed cycle in the logical synchrony network are positive, then the extended graph is acyclic. **Theorem 3**.: _If a logical synchrony network has positive round-trip times then its extended graph is acyclic._ Proof.: _Suppose for a contradiction that the extended graph is cyclic. Then there exists a directed cycle \(\mathcal{C}_{1}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) where each \(v_{j}\in\mathcal{V}_{\text{ext}}\) is a pair \(v_{j}=(i_{j},\tau_{j})\). Since the start and end node is the same, we have_ \[\begin{split} 0&=\sum_{j=1}^{k-1}(\tau_{j+1}-\tau_{j})\\ &=\sum_{j\in C_{\text{comp}}}(\tau_{j+1}-\tau_{j})+\sum_{j\notin C _{\text{comp}}}(\tau_{j+1}-\tau_{j})\end{split} \tag{1}\] _where \(C_{\text{comp}}\) is the set of indices \(j\) such that \((v_{j},v_{j+1})\) is a computational edge. Each of the computational edges has \(\tau_{j+1}-\tau_{j}=1\). If all of the edges in the graph are computational then the right-hand side is positive. If there are some communication edges, then the second of the two terms on the right-hand side is positive due to the assumption that the logical synchrony graph has positive round-trip times, and again the right-hand-side is positive. This contradicts the claim that the sum is zero._ This acyclic property is necessary for an execution model based on function composition to be well-defined. It also allows us to define a temporal partial ordering between events in \(\mathcal{G}_{\text{ext}}\). Since a logical synchrony network with positive round-trip times has an extended graph which is acyclic, the reachability relation on the extended graph defines a partial order. Specifically, we write \[(i,\tau)\sqsubset(j,\rho)\] if there is a directed path from \((i,\tau)\) to \((j,\rho)\) in the extended graph. Here, the notation is meant to be similar to \(<\), indicating _comes before_. 
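As an illustration of Definition 3 and the hypothesis of Theorem 3, the following brute-force check (adequate only for tiny graphs) verifies that every simple directed cycle of a logical synchrony network has positive round-trip logical latency; the example latencies are illustrative, not taken from the paper's figures.

```python
# Check that all simple directed cycles have positive round-trip logical latency.
from itertools import permutations
from typing import Dict, Tuple

def positive_round_trips(nodes, lam: Dict[Tuple[str, str], int]) -> bool:
    """Brute-force enumeration of simple directed cycles (fine for tiny graphs):
    return False if any cycle has round-trip latency <= 0."""
    for r in range(2, len(nodes) + 1):
        for cyc in permutations(nodes, r):
            edges = list(zip(cyc, cyc[1:] + (cyc[0],)))
            if all(e in lam for e in edges) and sum(lam[e] for e in edges) <= 0:
                return False
    return True

lam = {("A", "B"): 2, ("B", "A"): 3, ("B", "C"): -1, ("C", "B"): 4}
print(positive_round_trips(["A", "B", "C"], lam))   # True: 2+3 > 0 and -1+4 > 0
```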
Under these conditions, a logical synchrony network is a distributed system in the sense of Lamport [12], with logical latencies providing strict inter-event timings at any node \(i\in\mathcal{V}\). The partial ordering on the induced logical synchrony network has exactly the property that, if \(u\sqsubset v\), then \(u\) must have happened before \(v\). ## III Equivalence of LSNs Two logical synchrony networks may have different logical latencies, but be nonetheless equivalent for the purpose of executing processes. An example is given by the graphs in Figure 2. This arises because we can relabel the events. Specifically, given a logical synchrony network with events \(\mathcal{V}_{\mathrm{ext}}\), we define a new logical synchrony network. Given \(c_{1},\ldots,c_{n}\in\mathbb{Z}\), we relabel event \((i,\tau)\) as \((i,\tau+c_{i})\). This is a relabeling of the vertices of the graph \(\mathcal{G}_{\mathrm{ext}}\). In \(\mathcal{G}_{\mathrm{ext}}\) we have edges \[(i,\tau)\rightarrow(j,\tau+\lambda_{i\text{-}j})\] for every \(i\neq j\in\mathcal{V}\) and \(\tau\in\mathbb{Z}\). Under the relabeling, these are mapped to \[(i,\tau+c_{i})\rightarrow(j,\tau+\lambda_{i\text{-}j}+c_{j})\] and since there is such an edge for all \(\tau\in\mathbb{Z}\) the edge set of the relabeled extended graph is \[\hat{\mathcal{E}}_{\mathrm{ext}}=\left\{\left((i,\tau),(j,\tau+\lambda_{i \text{-}j}+c_{j}-c_{i})\right)\mid i,j\in\mathcal{V},\tau\in\mathbb{Z}\right\}\] This is the extended graph for a logical synchrony network with logical latencies \[\hat{\lambda}_{i\text{-}j}=\lambda_{i\text{-}j}+c_{j}-c_{i}\] This leads us to the following definition of equivalence. **Definition 4**.: _Suppose we have two logical synchrony networks on a directed graph \((\mathcal{V},\mathcal{E})\), with edge weights \(\lambda\) and \(\hat{\lambda}\). We say these LSNs are **equivalent** if there exists \(c_{1},\ldots,c_{n}\in\mathbb{Z}\) such that, for all \(i,j\in\mathcal{V}\),_ \[\hat{\lambda}_{i\text{-}j}=\lambda_{i\text{-}j}+c_{j}-c_{i} \tag{2}\] We can write this equation as \[\lambda-\hat{\lambda}=B^{\mathsf{T}}c\] where \(B\) is the incidence matrix of \(\mathcal{G}\). Relabeling the clocks results in a relabeling of the corresponding extended graph. Since this only changes the labels of the nodes, not how the nodes are interconnected, any code which is executable on one graph may also be executed on the other (but any references to particular localticks will need to be changed.) Physically measurable properties such as round-trip times cannot change under such a simple relabeling. We have **Proposition 1**.: _If two LSNs are equivalent, they will have the same round trip times on every directed cycle._ Proof.: _The round-trip times for a directed cycle \(\mathcal{C}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) in \(\mathcal{G}\) satisfy_ \[\sum_{j=0}^{k-1}\lambda_{s_{j}}=\sum_{j=0}^{k-1}\hat{\lambda}_{s_{j}}\] _which follows from equation (2)._ The converse is not generally true, as the following example shows. **Example 1**.: _Consider the logical synchrony networks shown in Figure 3. Both networks have the same underlying graph, which has no directed cycles, and so the round trip times on every directed cycle are trivially equal on both networks. If we order the edges \(((1\to 2),(2\to 3),(1\to 3))\) then we have incidence matrix_ \[B=\begin{bmatrix}1&0&1\\ -1&1&0\\ 0&-1&-1\end{bmatrix}\] _which has \(\mathrm{rank}(B)=2\). 
In the left-hand network of Figure 3 the logical latencies are \(\lambda_{1}=2\), \(\lambda_{2}=3\) and \(\lambda_{3}=4\), and in the right-hand network they are \(\hat{\lambda}_{1}=2\)\(\hat{\lambda}_{2}=3\) and \(\hat{\lambda}_{3}=3\). Therefore_ \[\lambda-\hat{\lambda}=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix} \tag{3}\] _and there is no vector \(c\) such that \(\lambda-\hat{\lambda}=B^{\mathsf{T}}c\)._ Figure 3: Two non-equivalent logical synchrony graphs with no directed cycles (edges labeled with \(\lambda\)) Figure 2: Two equivalent logical synchrony graphs (edges labeled with \(\lambda\)). Relabeling the clocks using \(c=(1,2,3)\) maps the left-hand graph to the right-hand one. If the round trip times are equal around every cycle, accounting for signs and orientations, then the two logical synchrony networks are equivalent. To show this, we need a preliminary result. **Lemma 1**.: _Let the graph be connected. Suppose \(y\in\mathbb{Z}^{m}\), and for every cycle \(\mathcal{C}\) we have \(y^{\mathsf{T}}x=0\) for the corresponding incidence vector \(x\). Then \(y=B^{\mathsf{T}}c\) for some \(c\in\mathbb{Z}^{n}\)._ Proof.: _Pick a spanning tree, and partition \(B\) according to the spanning tree. Let \(N=B_{11}^{-1}B_{12}\). Partition \(y\) according to_ \[y=\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}\] _where \(y_{1}\in\mathbb{Z}^{n-1}\). We choose_ \[c=\begin{bmatrix}B_{11}^{-\mathsf{T}}y_{1}\\ 0\end{bmatrix}\] _and note that since \(B_{11}\) is unimodular \(c\) must be integral. Then Theorem 1 implies_ \[B^{\mathsf{T}}c =\begin{bmatrix}I&0\\ N^{\mathsf{T}}&I\end{bmatrix}\begin{bmatrix}I&0\\ 0&0\end{bmatrix}\begin{bmatrix}B_{11}^{\mathsf{T}}&-B_{11}^{\mathsf{T}} \mathbf{1}\\ 0&1\end{bmatrix}\begin{bmatrix}B_{11}^{-\mathsf{T}}y_{1}\\ 0\end{bmatrix}\] \[=\begin{bmatrix}I&0\\ N^{\mathsf{T}}&I\end{bmatrix}\begin{bmatrix}y_{1}\\ 0\end{bmatrix}\] \[=\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}\] _as desired, where in the last line we use Theorem 2 to show that_ \[y^{\mathsf{T}}\begin{bmatrix}-N\\ I\end{bmatrix}=0\] _since \(y\) is orthogonal to the incidence vectors of the fundamental cycles._ We now state and prove a variant of Proposition 1 which is both necessary and sufficient. **Theorem 4**.: _Suppose we have two logical synchrony networks on a connected directed graph \((\mathcal{V},\mathcal{E})\), with edge weights \(\lambda\) and \(\hat{\lambda}\). These networks are equivalent if and only if they have the same signed round trip times on every cycle in \(\mathcal{G}\). That is, for every cycle \(\mathcal{C}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) we have_ \[\sum_{j=0}^{k-1}\lambda_{s_{j}}o_{j}=\sum_{j=0}^{k-1}\hat{\lambda}_{s_{j}}o_{j} \tag{4}\] _where \(o_{j}\) is the orientation of edge \(s_{j}\) on the cycle \(\mathcal{C}\)._ Proof.: _Equation (4) means that for every cycle \(C\) with incidence vector \(x\) we have_ \[(\lambda-\hat{\lambda})^{\mathsf{T}}x=0\] _Then Lemma 1 implies that \(\lambda-\hat{\lambda}=B^{\mathsf{T}}c\) for some integer vector \(c\), and hence \(\lambda\) and \(\hat{\lambda}\) are equivalent._ What this means, in particular, is that in Example 1 the graph does not have a directed cycle but it does have a cycle, where edges \(1\to 2\) and \(2\to 3\) are oriented in the forward direction, and edge \(1\to 3\) is oriented in the backward direction. 
Then \(\lambda\) and \(\hat{\lambda}\) are equivalent if and only if \[\lambda_{1}+\lambda_{2}-\lambda_{3}=\hat{\lambda}_{1}+\hat{\lambda}_{2}-\hat{ \lambda}_{3}\] Since this does not hold for \(\lambda\) and \(\hat{\lambda}\) in that example, those two networks are not equivalent. One cannot verify equivalence by checking pairs of nodes. That is, it is not sufficient to simply check the length-2 round trip times, as the following example shows. **Example 2**.: _Suppose \(\mathcal{G}\) is the complete graph with 3 nodes. For the two logical synchrony networks, shown in Figure 4, the length-2 round trip times are_ \[\lambda_{1\text{-}2\text{-}1} =5\] \[\lambda_{2\text{-}3\text{-}2} =4\] \[\lambda_{1\text{-}3\text{-}1} =2\] _and they are the same for \(\hat{\lambda}\). However, these networks are not equivalent. There is no way to relabel so that the logical latencies are the same. This is because the length-3 round trip times are \(\lambda_{1\text{-}2\text{-}3\text{-}1}=6\) and \(\hat{\lambda}_{1\text{-}2\text{-}3\text{-}1}=4\)._ Invariants.As shown by the above results, round-trip times around directed cycles are invariant under relabeling. Cycles which are not directed also result in invariants which may be physically measured and interpreted. We give some examples below. **Example 3**.: _Figure 5 shows a triangle graph in which node 1 sends frame \(A\) to node 3, and simultaneously sends frame \(B\) to node 3 via node 2. Then \(\operatorname{receive}(B)-\operatorname{receive}(A)\) is measured in localticks at node 3, and it is invariant under relabeling._ **Example 4**.: _Figure 6 shows a square graph. Here node 1 sends frame \(A\) to node 2 and simultaneously sends frame \(B\) to node 4. Node 3 sends frame \(C\) to node 2 and simultaneously sends frame \(D\) to node 4. Note that the Figure 4: Logical synchrony networks for Example 2 transmissions of node 1 and node 3 are not synchronized with each other. Then the quantity_ \[(\mathrm{receive}(A)-\mathrm{receive}(C))-(\mathrm{receive}(B)-\mathrm{ receive}(D))\] _is invariant under clock relabelings._ Equivalent networks can have different logical latencies, but must have the same round-trip times. The question of how much freedom this leaves is interesting, and has an important consequence which we discuss below. We first show that one can set the logical latencies arbitrarily on any spanning tree. **Theorem 5**.: _Suppose \(\mathcal{G},\lambda\) is a logical synchrony network, where \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Suppose \(\mathcal{T}\subset\mathcal{E}\) is a spanning tree. Then for any \(\gamma:\mathcal{T}\rightarrow\mathbb{Z}\) there exists \(c\in\mathbb{Z}^{n}\) such that_ \[\gamma_{i\cdot j}=\lambda_{i\cdot j}+c_{j}-c_{i}\text{ for all }i\to j\in \mathcal{T}\] Proof.: _We would like to show that there exists \(c\in\mathbb{Z}^{n}\) such that_ \[\begin{bmatrix}I&0\end{bmatrix}(\lambda-\gamma)=\begin{bmatrix}I&0\end{bmatrix} B^{\mathsf{T}}c\] _Let \(y_{1}\) be the left-hand side, then using Theorem 1, this is equivalent to_ \[y_{1}=\begin{bmatrix}B_{11}^{\mathsf{T}}&-B_{11}\mathbf{1}\end{bmatrix}c\] _and hence we may choose_ \[c=\begin{bmatrix}B_{11}^{-\mathsf{T}}y_{1}\\ 0\end{bmatrix}\] _which is integral since \(B_{11}\) is unimodular._ We can use this result in the following way. There is no requirement within this framework that logical latencies be nonnegative. However, it turns out that any logical synchrony network which has nonnegative round-trip times is equivalent to one with nonnegative logical latencies. 
We state and prove this result below. This result will be useful when we discuss multiclock networks in the subsequent section. **Theorem 6**.: _Suppose \(\mathcal{G},\lambda\) is a logical synchrony network with \(\mathcal{G}\) strongly connected, and for every directed cycle \(\mathcal{C}\) the round-trip logical latency \(\lambda_{\mathcal{C}}\) is nonnegative. Then there exists an equivalent LSN with edge weights \(\hat{\lambda}\) which are nonnegative._ Proof.: _Pick a node \(r\). Since the graph has no negative cycles, there exists a spanning tree \(\mathcal{T}\), rooted at \(r\), with edges directed away from the root, each of whose paths is a shortest path [19]. Use Theorem 5 to construct \(c\) such that_ \[\lambda_{i\cdot j}+c_{j}-c_{i}=0\text{ for all }i\to j\in\mathcal{T}\] _As a result, we have \(\lambda_{i\cdot j}=c_{i}-c_{j}\) for all edges \(i\to j\) in the tree \(\mathcal{T}\). Denote by \(t_{i\cdot k}\) the length of the path from \(i\) to \(k\) in the tree. Then we have \(t_{i\cdot k}=c_{i}-c_{k}\)._ _Since this is a shortest path tree, we have for any edge \(i\to j\)_ \[t_{r\cdot i}+\lambda_{i\cdot j}\geq t_{r\cdot j}\] _because the path in the tree from \(r\) to \(j\) must be no longer than the path via node \(i\). Therefore_ \[c_{r}-c_{i}+\lambda_{i\cdot j}\geq c_{r}-c_{j}\] _Setting \(\hat{\lambda}_{i\cdot j}=\lambda_{i\cdot j}+c_{j}-c_{i}\) for all edges we find \(\hat{\lambda}_{i\cdot j}\geq 0\) as desired._ This result says that, if we have a shortest path tree, we can relabel the clocks so that the logical latency is zero on all edges of that tree, and with that new labeling the logical latency will be nonnegative on every tree edge. An example is given in Figure 7. Note also that an edge having zero logical latency does not imply that communication between the endpoints is instantaneous; only that the numerical value of the time at which the frame is received is equal to the numerical value of the time at which it was sent. ## IV Multiclock networks In this section we formulate the relationship between events on a network in terms of physical clocks, leading to a mathematical definition called the _multiclock network_. We show that multiclock networks are special types of logical synchrony networks. We will use \(t\) to denote an idealized notion of time, called _wall-clock time_, or _ideal time_[20]. Time on the network is _multiform_[1], in the sense that the nodes on the network each maintain their own sense of time. At each node, there is a real-valued clock, denoted by \(\theta_{i}\). Its units are the _localticks_. We refer to the value \(\theta_{i}\) as Figure 5: Triangle invariant Figure 6: Diamond invariant the _local time_ or _phase_ at node \(i\). Local time has no quantitative relationship to physical or wall-clock time. In particular, we do not view \(\theta_{i}\) as an approximation to wall-clock time and consequently clocks at two distinct nodes are inherently unrelated. At a node \(i\), a processor can read the value \(\theta_{i}\), its own clock, but cannot access the value \(\theta_{j}\) at any other node \(j\neq i\). We mathematically model \(\theta_{i}\) as a function of physical time \(t\), so that \(\theta_{i}:\mathbb{R}\rightarrow\mathbb{R}\), without implying anything about its construction; it simply means that if at physical time \(t\) a hypothetical outside observer were to read clock \(i\), it would read value \(\theta_{i}(t)\). 
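As an aside before the clock model is developed further: the relabeling constructed in the proof of Theorem 6 is precisely a shortest-path potential, the same reweighting device used in Johnson's all-pairs shortest-path algorithm. A minimal sketch follows; the 3-node example weights are hypothetical and the function name is mine, so this is an illustration of the construction rather than part of the paper's development.

```python
def nonnegative_relabeling(n, edges, lam, root=0):
    """Sketch of the construction in Theorem 6: given logical latencies `lam`
    on a strongly connected graph whose directed cycles all have nonnegative
    round-trip latency, return (c, lam_hat) with
    lam_hat[i->j] = lam[i->j] + c[j] - c[i] >= 0 on every edge.

    Bellman-Ford shortest paths from `root`: edge weights may be negative,
    but by assumption there are no negative cycles.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[root] = 0
    for _ in range(n - 1):                      # standard relaxation passes
        for (i, j), w in zip(edges, lam):
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    c = [-dist[i] for i in range(n)]            # c_i = -t_{r->i}, so c_root = 0
    lam_hat = [w + c[j] - c[i] for (i, j), w in zip(edges, lam)]
    assert all(w >= 0 for w in lam_hat)         # guaranteed by the triangle inequality
    return c, lam_hat

# Hypothetical 3-node cycle with one negative latency but nonnegative round trip.
edges = [(0, 1), (1, 2), (2, 0)]
print(nonnegative_relabeling(3, edges, [-1, 2, 1]))   # ([0, 1, -1], [0, 0, 2])
```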
What is required is that \(\theta_{i}\) is continuous and increasing, so that \(\theta_{i}(s)<\theta_{i}(t)\) if \(s<t\). We emphasize again that this does not imply that any processes running on the system can access wall-clock time \(t\). The quantity \(\theta_{i}\) is not related to physical time. At times \(t\) where \(\theta_{i}\) is differentiable, we define the frequency \(\omega_{i}\) of the clock \(\theta_{i}\) by \[\omega_{i}(t)=\frac{d\theta_{i}(t)}{dt}\] At a node \(i\), a clock generates an infinite sequence of events, also referred to as _localticks_, which happen whenever \(\theta_{i}\) is an integer. Clocks are not required to be periodic, and this definition of frequency is applicable in the general aperiodic case. Clocks at different nodes may have very different frequencies. If the frequency at node \(i\) is large, then events at that node occur more often. We model the process of frame transmission from node \(i\) to node \(j\) as a FIFO, but real-world implementations are likely to consist of uninterrupted physical communication streams feeding into memory buffers. Every node can access the output (or head) of the FIFO corresponding to each of its incoming links, and the input (or tail) of the FIFO corresponding to each of its outbound links. We will discuss below the requirement that FIFOs neither overflow nor underflow. Logical synchrony in multiclock networks.With every localtick, node \(i\) inserts a frame at the tail of each of its outgoing link FIFOs and removes a frame from the head of each of its incoming link FIFOs. This lock-step alignment of input and output is the fundamental synchronization mechanism that imposes logical synchrony upon the network. At each node, with every localtick, one frame is removed from each incoming FIFO and one frame is sent on each outgoing FIFO. Formal definition of multiclock network.We now turn to a mathematical model that will enable us to analyze the behavior of this system. **Definition 5**: _A **multiclock network** is a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) together with continuous increasing functions \(\theta_{i}:\mathbb{R}\rightarrow\mathbb{R}\) for each \(i\in\mathcal{V}\), and edge weights \(\lambda:\mathcal{E}\rightarrow\mathbb{Z}\)._ This definition contains the entire evolution of the clock phases \(\theta_{i}\), and the link properties \(\lambda_{i\cdot j}\). We will discuss the physical meaning of \(\lambda_{i\cdot j}\) below. Unlike the logical synchrony network, where events are abstract and have no physical time associated with them, in a multiclock network the global timing of all events is defined by the clocks \(\theta\). We will show that a multiclock network is a special case of a logical synchrony network, and the constants \(\lambda\) are the associated logical latencies. To do this, we model the behavior of the FIFOs connecting the nodes. Fifo model.If \(i\to j\) in the graph \(\mathcal{G}\), then there is a FIFO connecting node \(i\) to node \(j\). With every localtick at node \(i\), a frame is added to this FIFO, and with every localtick at node \(j\), a frame is removed from the FIFO. We number the frames in each FIFO by \(k\in\mathbb{Z}\), according to the localtick at the sender, and the frames in the FIFO are those with \(k\) satisfying \[\alpha_{i\cdot j}(t)\leq k\leq\beta_{i\cdot j}(t)\] where \(\alpha\) and \(\beta\) specify which frames are currently in the FIFO at time \(t\). The FIFO model is as follows. 
\[\beta_{i\cdot j}(t) =\left\lfloor\theta_{i}(t)\right\rfloor \tag{5}\] \[\alpha_{i\cdot j}(t) =\left\lfloor\theta_{j}(t)\right\rfloor-\lambda_{i\cdot j}+1 \tag{6}\] Figure 7: Relabeling so that logical latencies are nonnegative. The upper graph shows edges labeled with \(\lambda\). The root node is in the lower left, and the shortest-path spanning tree is shown in red. The lower graph shows an equivalent LSN, with nodes \(i\) labeled with \(c_{i}\), and the corresponding logical latencies \(\hat{\lambda}_{i\cdot j}=\lambda_{i\cdot j}+c_{j}-c_{i}\). All logical latencies in this graph are nonnegative. Equation (5) means that frames are added with each localclick at the sender, and numbered according to the sender's clock. Equation (6) means that frames are removed with each localtick at the receiver. The constant \(\lambda\) is to account for the offset between the frame numbers in the FIFO and the clock labels at the receiver. (We add 1 for convenience.) This offset must be constant, since one frame is removed for each receiver localtick. This constant is specified by the multiclock network model in Definition 5. This model precisely specifies the location of every frame on the network at all times \(t\). In particular, this determines the FIFO occupancy at startup. For any time \(t_{0}\), the specification of \(\lambda\) is equivalent to specifying the occupancy of the FIFOs at time \(t_{0}\). This allows us to have a well-defined FIFO occupancy without requiring an explicit model of startup. Logical latencyLogical latency is the fundamental quantity which characterizes the discrete behavior of a network, and allows us to ignore the details of the clocks \(\theta_{i}\). The idea is that we can understand the logical structure of the network, such as the events, the execution model, and causality, without needing to know specific wall-clock times at which these things occur. We now show that the quantity \(\lambda_{i\text{-}j}\) corresponds to the logical latency. Suppose a frame is sent from node \(i\) at localtick \(k\in\mathbb{Z}\), and wall-clock time \(t^{k}_{\text{send}}\). Then \(\theta_{i}(t^{k}_{\text{send}})=k\). Let the time which it is received at node \(j\) be denoted by \(t^{k}_{\text{rec}}\). Both \(t^{k}_{\text{send}}\) and \(t^{k}_{\text{rec}}\) are wall-clock times, and apart from the causality constraint that the frame must be received after it is sent, there is no constraint on the difference between these times; that is, the _physical latency_\(t^{k}_{\text{rec}}-t^{k}_{\text{send}}\) may be large or small. In general, physical latency will be affected by both the number of frames in the FIFO \(i\to j\) as well as the time required for a frame to be physically transmitted. We do not presuppose requirements on the physical latency. **Lemma 2**.: _Suppose frame \(k\) is sent from node \(i\) to node \(j\). Then \(t^{k}_{\text{send}}\) and \(t^{k}_{\text{rec}}\) satisfy_ \[\theta_{i}(t^{k}_{\text{send}}) =k \tag{7}\] \[\theta_{j}(t^{k}_{\text{rec}}) =k+\lambda_{i\text{-}j} \tag{8}\] _and hence the logical latency is given by_ \[\lambda_{i\text{-}j}=\theta_{j}(t^{k}_{\text{rec}})-\theta_{i}(t^{k}_{\text{ send}}) \tag{9}\] Proof.: _Since frames in the FIFO \(i\to j\) are numbered according to the sender's clock, we have_ \[t^{k}_{\text{send}}=\inf\{t\mid\beta_{i\text{-}j}(t)=k\}\] _that is, \(t^{k}_{\text{send}}\) is the earliest time at which frame \(k\) is in the FIFO from \(i\) to \(j\). Since the floor function is right continuous, this gives equation (7). 
Similarly, we have_ \[t^{k}_{\text{rec}}=\inf\{t\mid\alpha_{i\text{-}j}(t)=k+1\}\] _and this implies equation (8), and the logical latency follows._ Unlike the physical latency \(t_{\text{rec}}-t_{\text{send}}\), the logical latency \(\theta_{j}(t^{k}_{\text{rec}})-\theta_{i}(t^{k}_{\text{send}})\) does not change over time. Note also that the logical latency is an integer. Since the logical latency is constant, we can conclude that every multiclock network is a logical synchrony network; more precisely, the logical latencies defined by the multiclock network satisfy the same properties as those of a logical synchrony network. ### Realizability We now turn to an analysis of the occupancy of the FIFOs in more detail. A frame is considered _in-transit_ from \(i\to j\) at time \(t\) if it has been sent by node \(i\) but not yet received by node \(j\); that is, if it is in the FIFO from \(i\) to \(j\). Define \(\nu_{i\text{-}j}(t)\) to be the number of frames in transit \(i\to j\). Then we have \[\nu_{i\text{-}j}(t) =\beta_{i\text{-}j}(t)-\alpha_{i\text{-}j}(t)+1\] \[=\lfloor\theta_{i}(t)\rfloor-\lfloor\theta_{j}(t)\rfloor+\lambda_ {i\text{-}j} \tag{10}\] and this holds for all \(t\). Here we can see that the constant \(\lambda_{i\text{-}j}\) is a property of the link \(i\to j\), which determines the relationship between the clock phases at each end of the link and the number of frames in transit. So far in this model, there is nothing that prevents the FIFO occupancy on an edge \(i\to j\) from becoming negative. If the clock at node \(\theta_{j}\) has a higher frequency than the clock at \(\theta_{i}\), and if that frequency difference is maintained for long enough, then the FIFO \(i\to j\) will be rapidly emptied. In this case, \(\theta_{j}\) will become much larger than \(\theta_{i}\), and from (10) we have that \(\nu_{i\text{-}j}\) will become negative. Similarly, the FIFO will overflow if the frequencies become imbalanced in the other direction. In [15] a technique using a dynamically switching control algorithm is presented that allows prevention of such behaviors. We make the following definition. **Definition 6**.: _A multiclock network is called **realizable** if there exists \(\nu_{max}\in\mathbb{R}\) such that for all edges \(i\to j\)_ \[0\leq\nu_{i\text{-}j}(t)\leq\nu_{max}\quad\text{for all $t\in\mathbb{R}$} \tag{11}\] Note that this requirement must hold for all positive and negative time \(t\). The terminology here is chosen to be suggestive, in that we would like a condition which implies that we can physically implement a multiclock network. A physically necessary condition is that the FIFO occupancies are bounded and cannot be negative. Cycles and conservation of framesCycles within a multiclock network have several important properties. The first is _conservation of frames_, as follows. Theorem 7.: _Suppose \(\mathcal{C}=v_{0},s_{0},v_{1},s_{1},\ldots,s_{k-1},v_{k}\) is a directed cycle in a multiclock network. Then_ \[\sum_{i=0}^{k-1}\nu_{s_{i}}(t)=\lambda_{\mathcal{C}}\] _In particular, the number of frames in transit around the cycle is constant, and is the sum of the logical latencies on the cycle._ Proof.: _The proof follows immediately from (10)._ An immediate corollary of this is that, in a physical network, if every edge of \(\mathcal{G}\) is on a cycle, then the number of frames in the network is finite and the upper bound condition for realizability is satisfied. This is the case, for example, in a strongly connected graph. 
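The occupancy formula (10) and the conservation law of Theorem 7 can be exercised numerically. In the sketch below the clock rates, phases and logical latencies are invented purely for illustration; it evaluates \(\nu_{i\text{-}j}(t)\) around a directed 3-cycle at a few wall-clock times and confirms that the cycle sum stays fixed at \(\lambda_{\mathcal{C}}\) while the individual occupancies drift.

```python
import math

# Hypothetical clocks: theta_i(t) = omega_i * t + phase_i. Any continuous,
# increasing functions would do; these are chosen only for illustration.
theta = [lambda t: 1.00 * t + 0.3,
         lambda t: 0.97 * t + 1.1,
         lambda t: 1.02 * t - 0.4]

edges = [(0, 1), (1, 2), (2, 0)]          # a directed 3-cycle
lam = {(0, 1): 3, (1, 2): 2, (2, 0): 4}   # logical latencies (assumed values)

def occupancy(i, j, t):
    """Frames in transit on edge i->j at wall-clock time t, Eq. (10)."""
    return math.floor(theta[i](t)) - math.floor(theta[j](t)) + lam[(i, j)]

for t in [0.0, 5.7, 42.0]:
    nu = [occupancy(i, j, t) for (i, j) in edges]
    # Theorem 7: the cycle sum is constant and equals the round-trip latency.
    print(t, nu, sum(nu), sum(lam.values()))
```

(With sufficiently mismatched clock frequencies an individual occupancy in this model can become negative, which is exactly what the realizability condition of Definition 6 excludes.)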
Note that this holds because, in a physical network, the FIFO occupancy cannot be negative. It is not the case that the FIFO model used here implies that \(\nu\) is upper bounded, since in the model some FIFO lengths may become large and negative while others become large and positive. This theorem is particularly evocative in the simple and common case where we have two nodes \(i\), \(j\) connected by links in both directions. In this case, whenever \(i\) receives a frame, it removes it from it's incoming FIFO from \(j\), and adds a new frame to the outgoing FIFO to \(j\). Thus the sum of the occupancies of the two FIFOs is constant. The following result relates round trip times to realizability. Theorem 8.: _Suppose \(\mathcal{C}\) is a cycle in a realizable multiclock network. Then \(\lambda_{\mathcal{C}}\geq 0\)._ Proof.: _This follows immediately from Theorem 7 and Definition 6._ That is, a realizable multiclock network has the important physical property that all round-trip times are nonnegative. The monotonic property of \(\theta\) implies that this holds in both localticks and wall-clock time. No matter what path a frame takes around the network, it cannot arrive back at its starting point before it was sent. However, it is possible, within the class of realizable networks defined so far, for this sum to be equal to zero. In this case one would have a frame arrive at the time it is sent. This would require some pathological conditions on the clocks. ### Equivalent synchronous systems We now consider the class of perfectly synchronous systems, where all of the nodes of the graph share a single clock. The links between the nodes are FIFOs as before, and as a result of the synchronous assumption their occupancies are constant. This is a particular instance of the multiclock network where all clocks \(\theta_{i}\) are equal. Such a system has an extended graph, and it has logical latencies which do not change with time, and are equal to the occupancies of the FIFOs, according to (10). Because the system is synchronous, the FIFOs behave like a chain of delay buffers. The corresponding execution model, defined by the extended graph, is identical to that of a logical synchrony network with the same logical latencies. Said another way, a logical synchrony network is equivalent to a perfectly synchronous network of processors connected by delay buffers with occupancies given by the logical latencies. This suggests the following question; what happens if we have a logical synchrony network where one or more of the edges has a negative logical latency? Using Theorem 6, we know that if a network has nonnegative round-trip times, one can relabel the clocks so that all logical latencies are nonnegative. Hence any physically constructible multiclock network is equivalent to a perfectly synchronous network. ## V The bittide mechanism We now turn to physical implementation of logically synchronous systems. A hardware implementation can be found at [21]. In Section IV we have already discussed one of the key components of this, specifically that with each localtick, a node removes one frame from the head of every incoming FIFO, and sends one frame on every outgoing FIFO. However, this is not enough for implementation, since we must ensure that the occupancies of the FIFOs neither underflow nor overflow. In the bittide model, the FIFO connecting node \(i\) to node \(j\) is composed of two parts, connected sequentially. 
The first part is a communication link, which has a latency \(l_{i\text{-}j}\), the number of _wall-clock_ seconds it takes to send a frame across the link. The second part is called the _elastic buffer_. It is a FIFO which is located at the destination node \(j\). Node \(i\) sends frames, via the communication link to node \(j\), where they are inserted at the tail end of the elastic buffer. We assume that the communication link cannot reorder frames, and so together the communication link and the elastic buffer behave as a single FIFO. Each node has an elastic buffer for each of its incoming links. With each clock localtick, it does two things; first, it removes a frame from the head of each of the elastic buffers and passes that frame to the processor core; second, the core sends one frame on each outgoing communication link. The purpose of this structure is as follows. At each node, the system can observe the occupancy of all of the elastic buffers. These occupancies provide information regarding the relative clock frequencies of the node compared to its incoming neighbors. Specifically, if we have an edge \(i\to j\), and node \(i\) has a lower clock frequency that node \(j\), then the corresponding elastic buffer at node \(j\) will start to drain. Conversely, if node \(i\) has a higher clock frequency, the elastic buffer will start to fill. Node \(j\) can therefore use the occupancy of the elastic buffers to adjust its own clock frequency. If, on average, it's buffers are falling below half-full, the node can reduce its clock frequency, and conversely. This mechanism was originally proposed in [22]. The exact details for how it is implemented (such as how much to increase or decrease the frequency) were further developed in [13, 14, 15]. These papers show that, provided the frequency corrections are chosen appropriately, this mechanism will ensure that elastic buffers never underflow or overflow. A functional simulation of bittide is available at [23], and a simulation of the clock synchronization dynamics is at [24]. ## VI Related work The seminal work of Lamport [12] presents a formal framework for clocks in distributed systems, which in particular defined an ordering on a directed graph corresponding to temporal relationships between events, and a global scalar clock which was consistent with that ordering. Subsequent work [17, 18] developed the notion of vector clocks, where each node in a network maintains a vector notion of time which captures exactly the ordering defined by the graph. The synchronization mechanism of bittide was first proposed in [22]. Subsequent works include [13], which developed a mathematical model of the synchronization layer, and [14], which analyzed its performance properties. Ever since the first distributed systems, synchronous execution has been a gold standard for formal reasoning, provable correctness properties, and ability to express efficient algorithms [25, 26, 27, 28]. As a consequence, the domain of synchronous execution has been studied extensively, in particular in the context of cyber-physical systems. Cyber-physical systems interact with physical processes, and Lee [29] argues that integrating the notion of time in system architecture, programming languages and software components leads to the development of predictable and repeatable systems. Reasoning about distributed systems has lead to the definition of both execution models and parallel programming models. 
Kahn Process Networks [30] is one of the most general; while it does not involve time or synchronization explicitly, processes in a Kahn process network communicate through blocking FIFOs, and thus synchronize implicitly through the communication queues. An important distinction between bittide and the Kahn Process Networks is that the former does not make use of blocking. Synchrony, and its most common representation as a global time reference, led to the definition of multiple models of computation. For example, Synchronous Dataflow [31] enables static scheduling of tasks to resources; Timed Concurrent Sequential Processes (Timed CSP) [32] develop a model of real-time execution in concurrent systems; Globally Asynchronous, Locally Synchronous (GALS) communication models [33] address the issue of mapping a synchronous specification to existing systems which are asynchronous. Henzinger et al. [34] introduce the concept of _logical execution_ and Kopetz et al. [35] introduce Time-Triggered Architectures (TTAs) as a system architecture where time is a first-order quantity and they take advantage of the global time reference to exploit some of the desirable properties of synchronous execution: precisely defined interfaces, simpler communication and agreement protocols, and timeliness guarantees. Synchronous programming models led to synchronous programming languages, e.g., Esterel [36], Lustre [37], Signal [38], and the development of tools to formally analyze their execution correctness as well as compilers to generate correct synchronizing code for embedded [2] or multicore platforms [4]. This created a virtuous cycle - as researchers understood better properties and embedded them into languages and tools, they drove the adoption of synchronous execution and formal tools for a number of industrial control applications, avionics, and critical system components. ## VII Conclusions This paper has presented logical synchrony, a model where processes on distributed network cores behave as if they were synchronized, even if the clocks on the individual cores are imperfectly synchronized. A logical synchrony network is an abstraction which characterizes the causality relationship between events, and the logical latencies of the network have the striking property that they specify the causality relationships exactly. When we consider implementations of a logical synchrony network, that leads to defining local clocks in a multiclock network. In this setting, the logical latency combines the FIFO occupancies causality relationship with the offsets between neighboring clocks, and this combination is enough to determine the causality relationships. This offers a model where the logical latencies are sufficient to allow static scheduling of both communications and computation. The bittide mechanism gives a simple method for implementing this scheme. The result is a mechanism for distributed computation in which scheduling requires knowledge only of the graph topology and the logical latencies, and which has very low overhead. The main advantage of the bittide approach is that it enables _synchrony_ and not wall-clock time as the first order abstraction. The logical synchrony framework presented in this paper and the bittide mechanisms bring the guarantees available in synchronous execution to distributed systems without the need of a global time reference. This model has a natural utility for those applications with analyzable and predictable behavior. 
We expect that future work on abstractions and programming models that utilize logical synchrony will enable larger classes of applications. Examples may include probabilistically statically scheduled applications, where an application behavior is predictable with high probability, or slow changing applications, where the behavior is evolving from state to state, each of them predictable, but with enough latency that the system can adapt and reconfigure. ## VIII Acknowledgments The ideas for this paper came about through much collaboration. In particular, we would like to thank Nathan Allen, Pouya Dormiani, Chase Hensel, Logan Kenwright, Robert O'Callahan, Chris Pearce, Dumitru Potop-Butucaru, and Partha Roop for many stimulating discussions about this work. Robert had the idea for the proof of Theorem 6.
2309.15106
Improved constraints for axion-like particles from 3-photon events at $e^+e^-$ colliders
Axions and axion-like particles (ALPs) are among the most widely discussed extensions of the Standard Model when it comes to the strong CP problem and dark matter candidates. Current experiments focus on indirect searches for invisible pseudoscalars over a wide parameter range. In this paper we investigate limits on the ALP mass and its couplings to photons and leptons from 3-photon annihilation at $e^+e^-$ colliders. We provide detailed calculations and apply them to the particular kinematics of the Belle II experiment, covering the ALP mass range from a few hundred MeV to around 10 GeV. Our results, which improve upon previous analyses by also including the ALP coupling to electrons, show that such future analyses will make it possible to significantly extend the ALP search range and to impose much more stringent restrictions on these couplings.
Aleksandr Pustyntsev, Marc Vanderhaeghen
2023-09-26T17:54:38Z
http://arxiv.org/abs/2309.15106v2
# Improved constraints for axion-like particles from 3-photon events at \(e^{+}e^{-}\) colliders ###### Abstract Axions and axion-like particles (ALPs) are one of the most widely discussed extensions of the Standard Model when it comes to the strong CP problem and dark matter candidates. Current experiments are focused on the indirect searches of invisible pseudoscalars in a wide parameter range. In this paper we investigate limits on ALP mass, and its couplings to photons and leptons from 3-photon annihilation at \(e^{+}e^{-}\) colliders. We provide detailed calculations and apply them to the particular kinematics of the Belle II experiment, covering the ALP mass range from few hundred MeV to around 10 GeV. Our results, which improve upon previous analyses by also including the ALP coupling to electrons, show that such future analyses will allow to significantly extend the ALP search range and impose much more stringent restrictions on their couplings. ## I Introduction Initially proposed in 1977, the Peccei-Quinn theory so far is considered to be the most compelling strong CP problem resolution [1; 2]. In this model a CP-violating phase is dynamically driven to zero, giving rise to a new pseudoscalar particle called axion [3; 4]. During the last four decades numerous attempts have been made to find a signal of this particle, including both lab searches and astronomical observations [5; 6]. Current constraints show that the QCD axion (in case it exists) must be very weakly interacting and thus is called "invisible", which forces one to concentrate on the possible indirect detection of this particle [7; 8]. A key property of the QCD axion is the linear proportionality between its couplings to the Standard Model particles and the axion mass [9]. The exact relation being model-dependent and usually refers to KSVZ [10; 11] or DFSZ mechanisms [12; 13]. With the current limits taken into account, both scenarios result in a very small axion mass, \(m_{a}\lesssim 10^{-3}\) eV [14]. However, recent studies report the possibility of MeV mass range for the QCD axion not yet excluded by experiments [15]. Given the significance of the problem it is important to assure that there are no loopholes left at this scale and to reinvestigate the parameter space in the MeV to GeV range. In addition to the QCD axion mechanism, various Standard Model extensions with axion-like particles were proposed [16; 17; 18]. The main difference to the original model is that ALPs are not restricted to a linear mass-coupling relation. During the past few years there was an increasing interest in the MeV to GeV range [19; 20; 21; 22]. Furthermore, ALPs are considered to be promising dark matter candidates as being both very long-living and weakly-interacting with the mass unconstrained by their interactions with other particles [23; 24]. In this work we investigate the mass and coupling constraints of ALPs in the MeV to GeV range from 3-photon events in \(e^{+}e^{-}\) annihilation. We focus on the kinematical setting of the Belle II experiment. Section 2 provides a general overview of the given formalism with the discussion of couplings, matrix elements and cross sections. Section 3 illustrates the main results and provides predictions for Belle II kinematics and constraints which follow from the calculated processes. Section 4 summarizes our work. ## II ALP formalism In this work we assume that ALPs in the MeV to GeV mass range couple predominantly to photons and electrons, i.e. decay only to visible states. 
The following section provides a short review of the relevant ALP interactions. The parameter space includes three variables - the ALP mass \(m_{a}\) and its couplings to photons and electrons, which are denoted by \(g_{a\gamma\gamma}\) and \(g_{aee}\), respectively. We detail the calculations of ALP contributions to 2- and 3-photon annihilation of \(e^{+}e^{-}\) pairs. ### Interaction with photons In this section we analyze the interaction of ALPs with the electromagnetic field. The corresponding effective Lagrangian has the form \[\mathcal{L}_{a\gamma\gamma}=-\frac{g_{a\gamma\gamma}}{4}aF^{\mu\nu}\tilde{F}_{ \mu\nu}, \tag{1}\] \(a\) stands for the pseudoscalar ALP field, \(F^{\mu\nu}\) is the electromagnetic field tensor with the corresponding dual pseudostensor \(\tilde{F}_{\mu\nu}=\frac{1}{2}\varepsilon_{\mu\nu\lambda\sigma}F^{\lambda \sigma}\), \(g_{a\gamma\gamma}\) is the coupling constant of the dimension \(\text{GeV}^{-1}\). The matrix element for the \(a\to 2\gamma\) decay shown on Fig. 1 is given by \[\begin{split} M_{a\rightarrow\gamma\gamma}\left(k_{1},k_{2}\right)& =-ig_{a\gamma\gamma}k_{1,\kappa}k_{2,\beta}\varepsilon^{\kappa \beta\mu\nu}\\ &\times\epsilon_{\mu}^{\ast}\left(k_{1},\lambda_{1}\right) \epsilon_{\nu}^{\ast}\left(k_{2},\lambda_{2}\right),\end{split} \tag{2}\] where \(\epsilon_{\mu}\left(k_{1},\lambda_{1}\right)\) and \(\epsilon_{\mu}\left(k_{2},\lambda_{2}\right)\) are the polarization vectors of the photons with 4-momenta \(k_{1}\), \(k_{2}\) and helicities \(\lambda_{1}\), \(\lambda_{2}\), respectively. Summing over the final helicities, we obtain \[\sum_{f}\left|M_{a\rightarrow\gamma\gamma}\left(k_{1},k_{2}\right)\right|^{2}=2g _{a\gamma\gamma}^{2}\left(k_{1}k_{2}\right)^{2}. \tag{3}\] The corresponding decay width is then obtained as \[\Gamma_{a\gamma\gamma}=\frac{g_{a\gamma\gamma}^{2}m_{a}^{3}}{64\pi}. \tag{4}\] ### Interaction with leptons We next discuss the ALP-fermion coupling and the corresponding decay rate. The generic interaction of ALPs with fermions is of the form \[\mathcal{L}_{aff}=-\frac{g_{aff}}{2m_{f}}\partial_{\mu}a\bar{f}\gamma^{5} \gamma^{\mu}f, \tag{5}\] where \(f\) stands for the fermion field, \(m_{f}\) denotes its mass and \(g_{aff}\) is the dimensionless coupling constant. From \(\mathcal{L}_{aff}\) it is clear that lepton universality requires the large enhancement of ALP coupling to muon, namely \[g_{a\mu\mu}\approx\frac{m_{\mu}}{m_{e}}g_{aee}. \tag{6}\] In this paper we follow Alves and Wiener work Alves and Wiener (2010) and consider ALPs coupled only to electrons in order to avoid effects induced by this enhanced coupling, such as \(\left(g-2\right)_{\mu}\) corrections on the muon anomalous magnetic moment. At tree level \(\mathcal{L}_{aff}\) can be equivalently reduced to a pseudoscalar coupling \[\mathcal{L}_{aff}=-ig_{aff}\bar{f}\gamma^{5}f. \tag{7}\] We are interested in the \(a\to e^{+}e^{-}\) decay shown on Fig. 2, which has the amplitude \[M_{a\rightarrow e^{+}e^{-}}=g_{aee}\bar{u}\left(p_{-},s_{-}\right)\gamma^{5}v \left(p_{+},s_{+}\right), \tag{8}\] where \(u\left(p_{-},s_{-}\right)\) and \(v\left(p_{+},s_{+}\right)\) are the bispinors describing electron and positron with momenta \(p_{\pm}\) and helicities \(s_{\pm}\), respectively. In the domain of interest we can assume \(m_{e}\ll m_{a}\). After summing over the final helicities, the squared amplitude for this process is given by \[\sum_{f}\left|M_{a\to e^{+}e^{-}}\right|^{2}=2g_{aee}^{2}m_{a}^{2}. 
\tag{9}\] The corresponding decay width then has the form \[\Gamma_{aee}=\frac{g_{aee}^{2}m_{a}}{8\pi}. \tag{10}\] In the absence of interaction with other fields, the total ALP decay width is assumed to consist of two contributions \[\Gamma_{a}=\Gamma_{aee}+\Gamma_{a\gamma\gamma}. \tag{11}\] ### ALP production at \(e^{+}e^{-}\) colliders An ALP contributes to the 2-photon annihilation of \(e^{+}e^{-}\) through the diagram shown on Fig. 3. The matrix element is \[\begin{split} M_{e^{+}e^{-}\rightarrow\gamma\gamma}& =ig_{aee}\frac{\bar{v}\left(p_{+},s_{+}\right)\gamma^{5}u\left(p_{ -},s_{-}\right)}{s-m_{a}^{2}+im_{a}\Gamma_{a}}\\ &\times M_{a\rightarrow\gamma\gamma}\left(k_{1},k_{2}\right), \end{split} \tag{12}\] with \(M_{a\rightarrow\gamma\gamma}\left(k_{1},k_{2}\right)\) given in Eq. (2). As a function of \(m_{a}\), this cross section is significantly different from zero only in a small region around \(m_{a}^{2}=s=4E^{2}\), where \(E\) denotes the initial electron (positron) energy in the center-of-momentum frame. Thus, for a fixed collider energy, the \(e^{+}e^{-}\rightarrow\gamma\gamma\) process is not providing constraints on ALP parameters in a broad \(m_{a}\), \(g_{a\gamma\gamma}\), \(g_{aee}\) parameter space in \(e^{+}e^{-}\) annihilation. Therefore, in the following we investigate 3-photon final states. Figure 1: ALP decay to two photons in the lowest order. Figure 2: ALP decay to the lepton-antilepton pair in the lowest order. Fig. 4 shows the contribution of ALP-photon coupling resulting in 3-photon events. The corresponding amplitudes are given by \[M_{e^{+}e^{-}\rightarrow\gamma\gamma\gamma}(ALP_{1})=i\frac{H_{e^{+}e ^{-}\rightarrow\gamma^{*}\to a\gamma}\left(k_{1}\right)}{K_{23}^{2}-m_{a}^{2}+ im_{a}\Gamma_{a}} \tag{13}\] \[\times M_{a\rightarrow\gamma\gamma}\left(k_{2},k_{3}\right)+ \text{crossed terms},\] where \(H_{e^{+}e^{-}\rightarrow\gamma^{*}\to a\gamma}\) stands for \(e^{+}e^{-}\to a\gamma_{i}\) amplitude \[H_{e^{+}e^{-}\rightarrow\gamma^{*}\to a\gamma}\left(k_{i}\right) =-ieg_{a\gamma\gamma}\,\varepsilon_{\alpha\beta\mu\gamma}q^{\alpha }k_{i}^{\beta}\epsilon^{\gamma}\left(k_{i},\lambda_{i}\right) \tag{14}\] \[\times\frac{\bar{v}\left(p_{+},s_{+}\right)\gamma^{\mu}u\left(p_{ -},s_{-}\right)}{s},\] with \(e\) being the positron charge and the internal photon 4-momentum \(q=p_{+}+p_{-}\). We denote the ALP 4-momenta as \(K_{23}=k_{2}+k_{3}\). It is generally assumed that ALPs are long-lived particles, i.e. their decay width \(\Gamma_{a}\) is a small quantity, typically much smaller than the experimental resolution of the invariant mass of the two-photon system in which the ALP decays. Thus the integration over the phase space gives the main contribution only in a very small range of variables where the invariant mass of the photon pair produced by the ALP is close to \(m_{a}^{2}\). In such kinematics the interference terms become unobservable and can be omitted. After the integration over the phase space the total cross section can be represented as a cross section obtained from only Feynman diagrams shown in Fig. 4 multiplied by a factor of three to account for the 3 channels. Thus we obtain \[\begin{split}&\sum_{i}\sum_{f}\left|H_{e^{+}e^{-}\rightarrow\gamma \gamma^{*}\to a\gamma}\left(k_{1}\right)\right|^{2}\\ &=\frac{2e^{2}g_{a\gamma\gamma}^{2}}{s^{2}}\left[\left(k_{1}p_{+} \right)^{2}+\left(k_{1}p_{-}\right)^{2}\right]\left(p_{-}p_{+}\right),\end{split} \tag{15}\] where \(\sum_{i}\sum_{f}\) denotes the average over initial helicities states and the sum over final. 
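Since the total width in Eq. (11) consists of just the two channels of Eqs. (4) and (10), the two-photon branching fraction is a one-line computation. The following minimal sketch (natural units, everything in GeV; the function names are mine) reproduces the benchmark branching fractions quoted for \(g_{a\gamma\gamma}=10^{-4}\,\mathrm{GeV}^{-1}\) and \(g_{aee}=10^{-4}\) in the results section.

```python
import math

def gamma_a_gg(m_a, g_agg):
    """ALP -> two photons width, Eq. (4): g^2 m^3 / (64 pi). Units: GeV."""
    return g_agg**2 * m_a**3 / (64 * math.pi)

def gamma_a_ee(m_a, g_aee):
    """ALP -> e+ e- width in the m_e << m_a limit, Eq. (10): g^2 m / (8 pi)."""
    return g_aee**2 * m_a / (8 * math.pi)

def branching_gg(m_a, g_agg, g_aee):
    """Two-photon branching fraction with only the two visible channels, Eq. (11)."""
    ggg = gamma_a_gg(m_a, g_agg)
    gee = gamma_a_ee(m_a, g_aee)
    return ggg / (ggg + gee)

# Benchmark couplings used in the results section.
g_agg, g_aee = 1e-4, 1e-4          # GeV^-1 and dimensionless
for m_a in (0.3, 3.0):             # GeV
    print(m_a, branching_gg(m_a, g_agg, g_aee))   # ~0.011 and ~0.53
```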
We next discuss the 3-photon production in \(e^{+}e^{-}\) annihilation which results from the ALP-electron coupling contribution, shown in Fig. 5. The corresponding amplitudes can be expressed as \[\begin{split}& M_{e^{+}e^{-}\rightarrow\gamma\gamma\gamma}\left( ALP_{2}\right)\\ &=i\frac{H_{e^{+}e^{-}\rightarrow\gamma\gamma,1}\left(k_{1}\right) +H_{e^{+}e^{-}\rightarrow a\gamma,2}\left(k_{1}\right)}{K_{23}^{2}-m_{a}^{2}+ im_{a}\Gamma_{a}}\\ &\times M_{a\rightarrow\gamma\gamma}\left(k_{2},k_{3}\right)+ \text{crossed terms},\end{split} \tag{16}\] where \(H_{e^{+}e^{-}\rightarrow a\gamma,i}\) denote amplitudes for the corresponding \(2\to 2\) process \[\begin{split} H_{e^{+}e^{-}\rightarrow a\gamma,1}\left(k_{i} \right)&=eg_{aee}\,\epsilon_{\eta}^{*}\left(k_{i},\lambda_{i} \right)\\ &\times\bar{v}\left(p_{+},s_{+}\right)\gamma^{\underline{i}}\frac{ \bar{t}_{i}}{\bar{t}_{i}^{2}}\gamma^{5}u\left(p_{-},s_{-}\right),\\ H_{e^{+}e^{-}\to a\gamma,2}\left(k_{i}\right)& =eg_{aee}\,\epsilon_{\lambda}^{*}\left(k_{i},\lambda_{i}\right)\\ &\times\bar{v}\left(p_{+},s_{+}\right)\gamma^{5}\frac{\bar{f}_{i} }{\bar{f}_{i}^{2}}\gamma^{\lambda}u\left(p_{-},s_{-}\right),\end{split} \tag{17}\] with the internal electron momenta \[l_{i}=k_{i}-p_{+},\quad f_{i}=p_{-}-k_{i}. \tag{19}\] It is worth noticing that there is no interference between this set of diagrams and the diagrams shown in Fig. 4. Using the same arguments as before, we conclude that for the cross section calculation, we only need to evaluate the two topologies shown in Fig. 5, as \[\begin{split}&\sum_{i}\sum_{f}\left|H_{e^{+}e^{-}\to a\gamma,1} \left(k_{1}\right)+H_{e^{+}e^{-}\to a\gamma,2}\left(k_{1}\right)\right|^{2} \\ &=e^{2}g_{aee}^{2}\left(\frac{p_{-}k_{1}}{p_{+}k_{1}}+\frac{p_{+}k_ {1}}{p_{-}k_{1}}+2\frac{\left(p_{+}K_{23}\right)\left(p_{-}K_{23}\right)}{\left( p_{-}k_{1}\right)\left(p_{+}k_{1}\right)}\right).\end{split} \tag{20}\] ### Cross section and observables The cross section of \(e^{+}e^{-}\rightarrow\gamma\gamma\gamma\) process is given by the expression \[\begin{split}&\sum_{i}\sum_{f}\left|H_{e^{+}e^{-}\to a\gamma,1} \left(k_{1}\right)+H_{e^{+}e^{-}\to a\gamma,2}\left(k_{1}\right)\right|^{2}\\ &=e^{2}g_{aee}^{2}\left(\frac{p_{-}k_{1}}{p_{+}k_{1}}+\frac{p_{+}k_ {1}}{p_{-}k_{1}}+2\frac{\left(p_{+}K_{23}\right)\left(p_{-}K_{23}\right)}{ \left(p_{-}k_{1}\right)\left(p_{+}k_{1}\right)}\right).\end{split} \tag{21}\] Figure 4: \(e^{+}e^{-}\) annihilation into three photons involving the \(g_{a\gamma\gamma}\) coupling. Graphs obtained from these by crossing are not shown, but are evaluated too. Figure 3: \(e^{+}e^{-}\) annihilation into two photons through an intermediate ALP. \[\sigma = \frac{1}{3!}\int d_{LIPS}\left(2\pi\right)^{4}\delta^{\left[4\right]} \left(p_{-}+p_{+}-k_{1}-k_{2}-k_{3}\right) \tag{21}\] \[\times \frac{1}{2s}\underset{i}{\sum}\sum_{f}\left|M_{e^{+}e^{-}\to\gamma \gamma\gamma}\right|^{2},\] where \(d_{LIPS}\) stands for the Lorentz-invariant phase space of the three final photons \[d_{LIPS}=\frac{d^{3}k_{1}}{2\omega_{1}\left(2\pi\right)^{3}}\frac{d^{3}k_{2}}{ 2\omega_{2}\left(2\pi\right)^{3}}\frac{d^{3}k_{3}}{2\omega_{3}\left(2\pi \right)^{3}}. 
\tag{22}\] After the integration with the delta function, the phase space can be expressed as \[\begin{split}& d_{LIPS}\left(2\pi\right)^{4}\delta^{\left[4\right]} \left(p_{-}+p_{+}-k_{1}-k_{2}-k_{3}\right)\\ &=\frac{1}{2^{8}\pi^{5}}\frac{\omega_{1}\omega_{2}}{2E+\omega_{ 1}\left(\cos\theta_{12}-1\right)}d\omega_{1}d\Omega_{1}d\Omega_{2},\end{split} \tag{23}\] with \(\theta_{12}\) denoting the angle between \(\mathbf{k}_{1}\) and \(\mathbf{k}_{2}\) momenta. The remaining phase space is parameterized as \[d\Omega_{1}d\Omega_{2}=2\pi d\phi\,d\cos\theta_{1-}d\cos\theta_{2-}, \tag{24}\] where \(\theta_{i-}\) is the angle between \(\mathbf{p}_{-}\) and \(\mathbf{k}_{i}\), which leads to \[\cos\theta_{12}=\sin\theta_{1-}\sin\theta_{2-}\cos\phi+\cos\theta_{1-}\cos \theta_{2-}. \tag{25}\] Furthermore, in the center-of-momentum frame it holds \[\begin{cases}\omega_{1}+\omega_{2}+\omega_{3}=2E,\\ \mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}=0,\end{cases} \tag{26}\] allowing to express \(\omega_{2}\) as \[\omega_{2}=\frac{2E\left(E-\omega_{1}\right)}{2E+\omega_{1}\left(\cos\theta_ {12}-1\right)}. \tag{27}\] For the ALP-associated process, the photon which is opposite to the ALP in center-of-momentum frame is denoted by \(k_{1}\). In this case, we can remove the integration over \(d\omega_{1}\) using the definition of the delta function \[\begin{split}&\frac{1}{\left(K_{23}^{2}-m_{a}^{2}\right)^{2}+ \left(m_{a}\Gamma_{a}\right)^{2}}\to\frac{\pi}{m_{a}\Gamma_{a}}\delta\left(K_ {23}^{2}-m_{a}^{2}\right)\\ &=\frac{\pi}{m_{a}\Gamma_{a}}\frac{1}{4E}\,\delta\left(\omega_{1} -\frac{4E^{2}-m_{a}^{2}}{4E}\right).\end{split} \tag{28}\] Due to the resonant behavior of the amplitude, one photon is always emitted with a fixed energy \[\omega=\frac{4E^{2}-m_{a}^{2}}{4E}. \tag{29}\] We note that, as the branching fraction, represented by the ratio \(\Gamma_{a\gamma\gamma}/\Gamma_{a}\), is always less than 1, the total cross-section of the process under investigation with an intermediate ALP may actually become smaller with a non-zero value of \(g_{aee}\), compared to when this quantity is equal to zero. After the integration over the full phase space, the cross section of the \(2\to 3\) process with the intermediate ALP can be written in the compact form \[\sigma_{e^{+}e^{-}\to\gamma\gamma\gamma}=\sigma_{e^{+}e^{-}\to a\gamma}\times \frac{\Gamma_{a\gamma\gamma}}{\Gamma_{a}}. \tag{30}\] If \(g_{aee}=0\), the ALP decays directly to photons and this formula can be simplified further (notably, it is independent of \(s\) if \(m_{a}^{2}\ll s\)) as \[\sigma_{e^{+}e^{-}\to a\gamma}=\frac{\alpha g_{a\gamma\gamma}^{2}}{24}\left(1- \frac{m_{a}^{2}}{s}\right)^{3}, \tag{31}\] where \(\alpha\equiv e^{2}/4\pi\). For a realistic detector, one has to integrate the cross section formula over the phase space, restricted by the experimental setup, as discussed below. Figure 5: \(e^{-}e^{+}\) annihilation into three photons involving the \(g_{aee}\) coupling. Graphs which are obtained by crossing are not shown, but are evaluated too. ## III Results and discussion The ALP signal detection strategy can be based on searches for a narrow peak in the squared mass distribution \(m_{\gamma\gamma}\) of photon pairs, or a narrow peak in the photon energies distributions, due to the fact that the photon which accompanies the ALP is always monoenergetic in this process. If no significant ALP signal is observed, it is possible to constrain ALP parameters in the corresponding mass range. 
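For orientation, Eqs. (29)-(31), combined with the branching fraction entering Eq. (30), give the resonant signal rate in closed form. The sketch below evaluates it for Belle II kinematics; the function names are mine and the conversion \(1\,\mathrm{GeV}^{-2}=3.894\times 10^{8}\,\mathrm{pb}\) is the standard one, neither being taken from the paper.

```python
import math

ALPHA = 1 / 137.036                # fine-structure constant
GEV2_TO_PB = 3.894e8               # 1 GeV^-2 = 0.3894 mb = 3.894e8 pb

def branching_gg(m_a, g_agg, g_aee):
    """BR(a -> 2 gamma) from Eqs. (4), (10), (11), written in closed form."""
    return 1.0 / (1.0 + 8 * g_aee**2 / (g_agg**2 * m_a**2))

def sigma_a_gamma(m_a, g_agg, s):
    """e+ e- -> a gamma cross section, Eq. (31), in GeV^-2."""
    return ALPHA * g_agg**2 / 24 * (1 - m_a**2 / s) ** 3

def sigma_signal(m_a, g_agg, g_aee, s):
    """Resonant 3-photon cross section, Eq. (30): production times BR(a -> 2 gamma)."""
    return sigma_a_gamma(m_a, g_agg, s) * branching_gg(m_a, g_agg, g_aee)

def omega_recoil(m_a, E):
    """Energy of the monoenergetic recoil photon in the CM frame, Eq. (29)."""
    return (4 * E**2 - m_a**2) / (4 * E)

E = 5.29                           # CM beam energy at Belle II, GeV
s = 4 * E**2
for m_a in (0.3, 3.0):
    print(m_a,
          omega_recoil(m_a, E),                              # GeV
          sigma_signal(m_a, 1e-4, 1e-4, s) * GEV2_TO_PB)     # pb
```

The recoil-photon energy printed alongside the cross section makes explicit how close the monoenergetic line sits to the beam energy for light ALPs.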
In this section we illustrate our results with exclusion plots for the kinematics of an \(e^{+}e^{-}\) collider. For this purpose, we first split the total \(e^{+}e^{-}\to\gamma\gamma\gamma\) cross section into three terms \[\sigma_{ALP+B}=\sigma_{ALP}+\sigma_{B}+\sigma_{int}, \tag{32}\] with \(\sigma_{ALP}\) referring to the ALP-associated process (shown in Fig. 4 and 5), while \(\sigma_{B}\) is the background. The interference term \(\sigma_{int}\) does not contribute since the ALP decay width \(\Gamma_{a}\) is assumed to be much smaller than the experimental resolution of the invariant mass of the final photon pair. The dominant part of the background originates from QED 3-photon annihilation [19], i.e. \(\sigma_{B}=\sigma_{QED}\). The aimed sensitivity is then expressed by the formula \[\frac{\sigma_{ALP}}{\sigma_{QED}}=\frac{N}{\sqrt{L\cdot\sigma_{QED}}}, \tag{33}\] where \(L\) denotes the integrated luminosity and \(N\) is the number of standard deviations that determines whether or not a fluctuation is considered as a signal. We conventionally set \(N=2\), which refers to 95% confidence level. In our study we neglect the potential hadronic background from \(\pi^{0}\), \(\eta\) and \(\eta^{\prime}\) mesons. In a complete analysis, however, those contributions must be included. Therefore, the parameter space for \(m_{a}\) in the vicinity of the \(\pi^{0}\), \(\eta\) and \(\eta^{\prime}\) masses can be expected to be modified. The \(e^{+}e^{-}\to a\gamma\to\gamma\gamma\gamma\) cross section is a function of three variables. For purposes of illustration, we use the available independent constraints for \(g_{ace}\) to show two-dimensional projections of \(g_{a\gamma\gamma}\) as a function of \(m_{a}\). Experimental searches in the MeV to GeV region are mostly focused on ALP-muon interaction [22] and therefore not able to constrain \(g_{aee}\). However, it is possible to convert constraints on visibly decaying dark photons to limits on the ALP-electron mixing [15]. Indeed, the processes of \(X\to e^{+}e^{-}\) and \(a\to e^{+}e^{-}\) achieve comparable signal strengths in case of \(g_{Xee}\sim g_{aee}\). This relation, of course, is only approximate, since the two processes have different angular distributions, but using it one can estimate \(g_{aee}\lesssim 10^{-4}\)[22]. In the following, we discuss the reach on \(m_{a}\), \(g_{aee}\) and \(g_{a\gamma\gamma}\) which can be obtained from \(e^{+}e^{-}\to\gamma\gamma\gamma\) data that are already available from the Belle II experiment or are expected from future running. ### Belle II kinematics To obtain the exclusion plots for Belle II kinematics, we start by discussing the detector acceptance. Belle II is an asymmetric collider, for which electron and positron have energies of \(7\,\mathrm{GeV}\) and \(4\,\mathrm{GeV}\), respectively. This requires a boost with a relative velocity \(\beta\approx 0.27\) to the center-of-momentum frame, where particles have energies of \(E=5.29\,\mathrm{GeV}\). The angular coverage of the electromagnetic calorimeter in the lab frame is \(12.4^{\circ}<\theta<155.1^{\circ}\). The angular region \(37.3^{\circ}<\theta<123.7^{\circ}\) provides the best energy resolution, avoiding regions close to detector gaps, and offers the lowest beam background levels [25]. Following the work of [19], we set the photon energy selection threshold of \(0.25\,\mathrm{GeV}\) in the center-of-momentum frame. 
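Given these selections, Eq. (33) can be inverted for the coupling reach once the accepted QED background rate inside the cuts is known. The sketch below does this by bisection; the value of \(\sigma_{QED}\) is a placeholder (in practice it must come from integrating Eq. (34) over the accepted phase space), the luminosities are the 445 pb\({}^{-1}\) 2018 sample and the 50 ab\({}^{-1}\) projection discussed below, and the function names are mine.

```python
import math

ALPHA = 1 / 137.036
PB_PER_GEV2 = 3.894e8              # 1 GeV^-2 in pb

def sigma_alp_pb(g_agg, m_a, g_aee, s):
    """Signal cross section of Eq. (30) with Eq. (31), in pb."""
    br = 1.0 / (1.0 + 8 * g_aee**2 / (g_agg**2 * m_a**2))
    return ALPHA * g_agg**2 / 24 * (1 - m_a**2 / s) ** 3 * br * PB_PER_GEV2

def g_agg_reach(m_a, g_aee, s, sigma_qed_pb, lumi_pb, n_sigma=2.0):
    """Smallest g_agg giving sigma_ALP = N * sqrt(sigma_QED / L), Eq. (33),
    found by geometric bisection (sigma_ALP is monotone in g_agg)."""
    target = n_sigma * math.sqrt(sigma_qed_pb / lumi_pb)      # pb
    lo, hi = 1e-8, 1.0                                        # GeV^-1 bracket
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if sigma_alp_pb(mid, m_a, g_aee, s) < target:
            lo = mid
        else:
            hi = mid
    return hi

s = 4 * 5.29**2                    # GeV^2, Belle II CM energy squared
SIGMA_QED_PB = 10.0                # placeholder for the accepted QED 3-gamma rate
for lumi in (445.0, 5e7):          # pb^-1: 2018 data set and the 50 ab^-1 projection
    print(lumi, [g_agg_reach(m, 1e-4, s, SIGMA_QED_PB, lumi) for m in (0.3, 3.0, 9.0)])
```

For \(g_{aee}=0\) the inversion is analytic and gives \(g_{a\gamma\gamma}\propto(\sigma_{QED}/L)^{1/4}\), so the reach depends only weakly on the assumed background.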
Our analysis requires all three photons to be in this acceptance range and, unless otherwise specified, these experimental cuts are used for all the plots shown below. The angular distributions for the ALP process are presented in Fig. 6 for two different values of \(m_{a}\) and two different values of \(g_{aee}\). For a given \(g_{aee}\), there is more than an order of magnitude difference between \(m_{a}=0.3\,\mathrm{GeV}\) and \(m_{a}=3\,\mathrm{GeV}\) curves due to the fact that for a relatively light ALP the decay width is dominated by the \(a\to e^{+}e^{-}\) channel, see Eqs. (4) and (10). For the particular case of \(g_{a\gamma\gamma}=10^{-4}\,\mathrm{GeV}^{-1}\) and \(g_{aee}=10^{-4}\), one obtains \[\frac{\Gamma_{a\gamma\gamma}}{\Gamma_{a}} \approx 0.01,\quad\text{for}\quad m_{a}=0.3\,\mathrm{GeV},\] \[\frac{\Gamma_{a\gamma\gamma}}{\Gamma_{a}} \approx 0.53,\quad\text{for}\quad m_{a}=3\,\mathrm{GeV}.\] ### QED background We next discuss the QED background process. The cross section of leading order QED \(e^{+}e^{-}\) annihilation in 3 photons in the massless electron limit is given by [26] \[\begin{split}&\sum_{i}\sum_{f}\left|M_{e^{+}e^{-}\to\gamma\gamma \gamma\,(QED)}\right|^{2}=s\left(4\pi\alpha\right)^{3}\\ &\quad\times\frac{\sum_{i=1}^{3}\left(p_{+}k_{i}\right)\left(p_{- }k_{i}\right)\left[\left(p_{+}k_{i}\right)^{2}+\left(p_{-}k_{i}\right)^{2} \right]}{\prod_{i=1}^{3}\left(p_{+}k_{i}\right)\left(p_{-}k_{i}\right)}.\end{split} \tag{34}\] For the total cross section an additional factor \(1/3!\) must be added due to the 3 identical bosons in the final state. Fig. 7 shows the corresponding QED background angular and energy distributions. In contrast to the ALP-related process (see Fig. 6), which exhibits a rather uniform angular distribution, the QED three-photon annihilation is characterized by an enhanced angular distribution in both the forward and backward directions. The presence of a distinct peak in the photon energy distribution would serve as an indication of ALP creation. ### Belle II results from 2018 data set In the 2018 data run Belle II achieved an integrated luminosity of \(445\,\mathrm{pb}^{-1}\)[25], which was used for the ALP searches in a simplified way by converting the cross section limit to the coupling using Eq. (31). The latter formula does not take into consideration the fact that all three photons in the ALP-associated process must be detected in the acceptance range of the electromagnetic calorimeter. We require three resolved photons with energies higher than \(0.65\,\mathrm{GeV}\) in the center-of-momentum frame as a crite Figure 7: QED background distributions for the softest, middle and hardest photons in the \(e^{+}e^{-}\to\gamma\gamma\gamma\) process in Belle II kinematics. ria for this event selection. These requirements are slightly different from those used in the Belle II report [25], where the selection of photons with energies above \(0.65\,\mathrm{GeV}\) (for \(m_{a}>4\,\mathrm{GeV}\)) and \(1\,\mathrm{GeV}\) (for \(m_{a}\leq 4\,\mathrm{GeV}\)) in the lab frame was performed. The difference is negligible since \(g_{a\gamma\gamma}\) is sensitive to \(\sigma_{QED}^{-1/4}\). Our result based on Eq. (31) is shown on Fig. 8 (left panel) by the black curve. It shows a good agreement with the analysis of [25] in the higher ALP mass region. In the lower mass region some deviations are seen. This is expected because in the case of a light ALP the invariant mass of a photon pair also becomes low, i.e. 
two photons travel in a very narrow cone with each other, oppositely to the third photon. This produces very asymmetric kinematics and the QED background becomes suppressed. In our analysis we do not take this into consideration, but more detailed investigation can be performed in future work. ### Belle II projection from upcoming data collection Belle II is expected to reach an integrated luminosity of \(50\,\mathrm{ab}^{-1}\) after around 10 years of running. The resulting constraints which such future data would yield were investigated in [19] for the case where ALPs are only coupled to photons (i.e. for \(g_{aee}=0\)). Fig. 8 right panel shows the projected sensitivity in two scenarios: ALPs coupled only to photons and ALPs coupled to photons and electrons with different \(g_{aee}\) coupling strength. Our results for the scenario \(g_{aee}=0\) are in reasonably good agreement with the exclusion limits deduced in [19] in the high \(m_{a}\) region. For lower values of \(m_{a}\) one expects a similar deviation as for the 2018 Belle II data discussed above. Fig. 8 also shows that the inclusion of a non-zero interaction of ALPs with electrons significantly affects the final result, especially in ALP mass range \(m_{a}\lesssim 2\,\mathrm{GeV}\). The assumption \(g_{aee}=0\) generally leads to an overestimated \(g_{a\gamma\gamma}\) limit, which may be incorrect if the ALP has other decay channels besides the 2-photon mode. In such case, more detailed models with additional parameters are required to constrain invisible particles in a more rigorous way. ## IV Conclusion In this paper we discussed ALPs coupled to electrons and photons in a minimal way. The contributions of ALP states to 2- and 3-photon \(e^{+}e^{-}\) annihilation events were calculated. In this way, we obtained new constraints for possible ALPs in the MeV to GeV mass range, which can be tested at \(e^{+}e^{-}\) colliders. Results were shown for Belle II kinematics both from existing data and from forthcoming data with projected integrated luminosity of \(50\,\mathrm{ab}^{-1}\). Our results indicate that the \(g_{a\gamma\gamma}\) limits can be vastly affected in the presence of an additional decay mode, especially in the lower \(m_{a}\) region. Using current best limits for \(g_{aee}\), it is possible to improve the \(g_{a\gamma\gamma}\) limits by at least an order of magnitude, which allows to significantly narrow down the search area for potential ALPs and to test the possible solution of the strong CP problem in the MeV to GeV mass range. This result can be improved further if a better way to constrain \(g_{aee}\) independently is Figure 8: Left panel: Belle II constraints for \(g_{a\gamma\gamma}\) based on the 2018 data set, with the analytical result shown by the black dashed curve. Right panel: projected results on the \((m_{a},g_{a\gamma\gamma})\) reach for the future data collection at Belle II corresponding to \(50\,\mathrm{ab}^{-1}\) of integrated luminosity. available. There are many possible ways to further extend this work. First of all, a more precise background modeling and experimental analysis must be performed in order to refine the exclusion plots, especially around the \(\pi^{0}\), \(\eta\) and \(\eta^{\prime}\) masses. The comparison with the Belle II 2018 data analysis shows that with more detailed background analysis it is possible to even further improve \(g_{a\gamma\gamma}\) constraints in the \(m_{a}^{2}\ll s\) case. 
Furthermore, the ALP coupling to photons can be replaced with a more general model of ALPs interacting with the electroweak sector of the Standard Model as discussed e.g. in [19]. This implies the interaction of ALPs with Z-bosons, which was not taken into consideration here. In the presence of such coupling, the exclusion plots will be modified accordingly. Additionally, in this paper we assumed that ALPs interact only with the electrons and photons. Hidden decay channels were not considered. However, the contribution of light dark matter particles of sub-GeV masses may change the obtained constraints if the pair production threshold is surpassed. At the same time, the inclusion of such particles makes it more complicated to set any constraints, as new free parameters appear. Finally, we restricted ourselves with ALPs which are not coupled to muons (and taus). However, lepton universality leads to an increase in the coupling constant \(g_{a\mu\mu}\) by around two orders of magnitude compared to \(g_{aee}\). It may notably affect the results if \(m_{a}\) is larger than \(2m_{\mu}\). The parameter space in such case will get additional restrictions from the requirement of compatibility with current \((g-2)_{\mu}\) data [27; 28]. ###### Acknowledgements. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), in part through the Research Unit [Photon-photon interactions in the Standard Model and beyond, Projektnummer 458854507 - FOR 5327], and in part through the Cluster of Excellence [Precision Physics, Fundamental Interactions, and Structure of Matter] (PRISMA\({}^{+}\) EXC 2118/1) within the German Excellence Strategy (Project ID 39083149).
2303.17986
A new boson expansion theory utilizing a norm operator
We propose a new boson expansion method using a norm operator. The small parameter expansion, in which the boson approximation becomes the zeroth-order approximation, requires that the double commutation relations between phonon operators not be closed among the phonon excitation modes adopted as boson excitations. This results in an infinite expansion regardless of whether the boson expansion is of Hermitian or non-Hermitian type. The small parameter expansion does not hold when the commutation relations are closed. The norm operator is expressed as a function of the number operator in the physical subspace, which enables us to obtain an essentially finite boson expansion regardless of the Hermitian or non-Hermitian type. We also point out problems with the conventional boson expansion methods. The normal-ordered linked-cluster expansion theory has failed to refute Marshalek's claim that KT-1 and KT-2 are chimerical boson expansions. The Dyson boson expansion theory has no exceptional superiority over other types. Previous studies using boson expansion methods should be re-examined.
Kimikazu Taniguchi
2023-03-31T11:49:26Z
http://arxiv.org/abs/2303.17986v2
# A new boson expansion theory utilizing a norm operator ###### Abstract We propose a new boson expansion method using a norm operator. The small parameter expansion, in which the boson approximation becomes the zeroth-order approximation, requires the double commutation relations between phonon operators that are not closed between the phonon excitation modes adopted as boson excitations. This results in an infinite expansion regardless of whether the type of the boson expansion is Hermitian or non-Hermitian. The small parameter expansion does not hold when the commutation relations are closed. The norm operator is expressed as a function of the number operator in the physical subspace, which enables us to obtain substantially a finite boson expansion regardless of the Hermitian or non-Hermitian type. We also point out the problems of the conventional boson expansion methods. The normal-ordered linked-cluster expansion theory has failed to refute Marshalek's claim that KT-1 and KT-2 are of chimerical boson expansion. The Dyson boson expansion theory does not have exceptional superiority over other types. ## 1 Introduction Microscopic elucidation of the large-amplitude collective motion of atomic nuclei remains one of the important and challenging tasks, and its achievement requires developing a method to overcome small-amplitude-oscillation approximations like the Tamm-Dancoff approximation and the random phase approximation. The boson expansion theory is one of the methods going beyond the small-amplitude-oscillation approximations [1]. The boson expansion theory was initially formulated by replacing fermion quasi-particle pair operators with the boson polynomials that reproduce their commutation relations [2]. Later, referring to the preceding work [3], the boson expansion theory was given as a mapping theory by utilizing a one-to-one correspondence between the basis vectors in fermion space and the completely antisymmetric state vectors in boson space [4]. The Holstein-Primakoff and the Dyson boson expansions have also been formulated in the same way [5]. These formulations target all pair-operator excitations in fermion space. In practical use, however, not all excitation modes have been adopted, but only the collective and, when needed, some non-collective excitation modes of Tamm-Dancoff type phonons. Initially, there were two methods: one is to construct the mapping operator from the phonons with only the crucial excitation modes [6; 7], and the other is to pick up only these phonons and seek boson expansions that reproduce their commutation relations [8]. The boson expansion methods formulated in these ways were used to elucidate large-amplitude collective motions such as the shape transitions of nuclei in the transitional region [8; 9; 10]. The boson expansion method called KT-2 [8], formulated in the latter way, was, however, claimed to result in incorrect boson expansions [11; 12] and was reformulated into the so-called KT-3 [13] according to the former method. The Dyson boson expansion theory (DBET), a finite expansion of the non-Hermitian type, has also been formulated by the former method [14]. Although the formulations seem established and certain results have been achieved, problems remain with the boson expansion methods. One concerns the approximate treatment of the algebra of the Tamm-Dancoff type phonons. The double commutators among the Tamm-Dancoff type phonons generally do not close within partially selected excitation modes.
Until now, without exception, boson expansion methods that restrict the phonon excitation modes have used approximations that neglect the modes not selected as boson excitation modes. The normal-ordered linked-cluster expansion theory (NOLCEXPT) [13; 15] neglects these modes in the inverse of the norm matrices of the multi-phonon state vectors to obtain its boson expansions and finally abandons all the still-remaining modes. DBET truncates the unselected phonon operators by adopting the so-called phonon-truncation approximation [9], which is also called the _closed-algebra approximation_ [14]. Each of the approximations above is essential for NOLCEXPT and DBET. NOLCEXPT adopts it to obtain the same expansions as KT-2, and DBET to obtain finite expansions. These approximations all have the effect of closing the double commutators among the selected phonon operators. It is claimed that the validity of this approximation has been verified for specific nuclei, and it is also shown that the norm of the multi-phonon state vector obtained under this approximation rapidly approaches 0 as the number of phonon excitations increases, which brings about rapid convergence of the boson expansions [7; 16]. Such behavior of the norm is due to the effect of the Pauli exclusion principle [16]. Its rapid decrease means that the effect is strong. On the other hand, NOLCEXPT claims that its boson expansion is a small parameter expansion with good convergence. Therefore, in the fermion subspace spanned by the multi-phonon state vectors with selected excitation modes, the effect of the Pauli exclusion principle should be weak. If this is correct, then the norms of the multi-phonon state vectors would not approach zero rapidly as the number of phonon excitations increases. We should investigate the cause of these contradictory conclusions. The other problem concerns the phonon excitation number. Until now, for the multi-phonon state vectors used as the basis vectors of the fermion subspace to be mapped, the sorts of excitation modes have been limited, while the number of phonon excitations has not [13; 14; 15]. Without restricting the phonon excitation number, the eigenvalues of the norm matrices of the multi-phonon state vectors become zero when the number of excitations becomes large enough, even when the sorts of excitation modes are restricted. Nevertheless, NOLCEXPT is formulated assuming that zero eigenvalues do not appear regardless of the number of phonon excitations [13; 15]. There is, however, no clear explanation for the validity of this assumption. We have proposed a boson-fermion expansion theory (BFEXP) [17; 18] as an alternative to the boson expansion theory. The boson expansion theory treats all the adopted phonon excitation modes as bosons, while BFEXP, in the zeroth-order approximation, represents only the phonons with collective excitation modes as bosons and leaves those with non-collective modes as the original phonons. We can derive boson expansions from this method by extending the boson part to the necessary non-collective modes and suppressing the fermion excitations. Since the formulation of BFEXP does not use any approximation for the commutation relations among the phonon operators, it would be worthwhile to formulate a new boson expansion method without such an approximation and to compare its boson expansions with those derived from BFEXP.
In this article, we propose a new boson expansion theory, naming it the norm operator method (NOM), which enables us to handle both Hermitian and non-Hermitian types, the case with or without limiting the phonon excitation modes and the number of excitations, and the contribution of the phonon excitation modes that are neglected in the conventional boson expansion methods. In section 2, we deal with the Tamm-Dancoff type phonons, the multi-phonon state vectors, and the ideal boson state vectors. In section 3, we give a mapping utilizing a norm operator. As specific examples, we deal with the case of mapping all modes of phonon excitations with and without the restriction of the phonon excitation number and the restricted case where the maximum number of phonon excitations is one. Section 4 deals with the boson expansions. First, we confirm the conditions for using the ideal boson state vectors and then give the formulae used in the boson expansions. Next, we provide the conditions under which boson expansions become a small parameter expansion, offer an order estimation method for the expansion terms, perform the boson expansions, show that all types of mapping of the small parameter expansion give infinite boson expansions, and provide the boson expansions of the phonon operators and the scattering operators up to terms that have not been obtained so far. We also deal with non-small-parameter boson expansions, where we obtain DBET and boson expansions that are finite and Hermitian. Finally, we point out and stress the essential role of the norm operator in the boson expansion method. In section 5, we take up the conventional methods and point out their problems. Section 6 is a summary. ## 2 Fermion space and boson space ### Tamm-Dancoff type phonon operators, scattering operators, and their commutation relations We introduce pair operators, \[X_{\mu}^{\dagger}=\sum_{\alpha<\beta}\psi_{\mu}(\alpha\beta)a_{\alpha}^{\dagger}a_{\beta}^{\dagger}, \tag{1a}\] \[X_{\mu}=\sum_{\alpha<\beta}\psi_{\mu}(\alpha\beta)a_{\beta}a_{\alpha},\] (1b) \[B_{q}=\sum_{\alpha\beta}\varphi_{q}(\alpha\beta)a_{\beta}^{\dagger}a_{\alpha},\] (2a) \[B_{\bar{q}}=B_{q}^{\dagger}. \tag{2b}\] Here, \(a_{\alpha}^{\dagger}\) and \(a_{\alpha}\) are quasi-particle creation and annihilation operators of a single-particle state \(\alpha\). The coefficients satisfy the following relations: \[\psi_{\mu}(\beta\alpha)=-\psi_{\mu}(\alpha\beta), \tag{3a}\] \[\sum_{\alpha<\beta}\psi_{\mu}(\alpha\beta)\psi_{\mu^{\prime}}(\alpha\beta)=\delta_{\mu,\mu^{\prime}},\] (3b) \[\sum_{\mu}\psi_{\mu}(\alpha\beta)\psi_{\mu}(\alpha^{\prime}\beta^{\prime})=\delta_{\alpha,\alpha^{\prime}}\delta_{\beta,\beta^{\prime}}-\delta_{\alpha,\beta^{\prime}}\delta_{\beta,\alpha^{\prime}}, \tag{3c}\] \[\varphi_{\bar{q}}(\alpha\beta)=\varphi_{q}(\beta\alpha), \tag{4a}\] \[\sum_{\alpha\beta}\varphi_{q}(\alpha\beta)\varphi_{q^{\prime}}(\alpha\beta)=\delta_{q,q^{\prime}},\] (4b) \[\sum_{q}\varphi_{q}(\alpha\beta)\varphi_{q}(\alpha^{\prime}\beta^{\prime})=\delta_{\alpha,\alpha^{\prime}}\delta_{\beta,\beta^{\prime}}. \tag{4c}\] These are the most common orthogonal transformations of the quasi-particle pairs \(a_{\alpha}^{\dagger}a_{\beta}^{\dagger}\), \(a_{\beta}a_{\alpha}\) and \(a_{\beta}^{\dagger}a_{\alpha}\). These are used to couple the angular momenta of quasi-particles to those of the quasi-particle pairs.
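The orthogonality and completeness relations above are easy to check numerically for a toy set of amplitudes. The following sketch is not part of the paper: the single-particle dimension `omega`, the random orthogonal matrix, and all variable names are illustrative assumptions. It builds a complete set \(\psi_{\mu}(\alpha\beta)\) from an arbitrary orthogonal matrix over the pairs \(\alpha<\beta\) and verifies Eqs. (3a)-(3c); the analogous check of Eqs. (4b)-(4c) for \(\varphi_{q}(\alpha\beta)\) works the same way.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 4                                   # toy number of single-particle states
pairs = [(a, b) for a in range(omega) for b in range(a + 1, omega)]
n_modes = len(pairs)                        # omega*(omega-1)/2 phonon modes

# Any orthogonal matrix over the pair index defines a complete set psi_mu(alpha beta)
Q, _ = np.linalg.qr(rng.normal(size=(n_modes, n_modes)))

psi = np.zeros((n_modes, omega, omega))     # psi[mu, alpha, beta]
for mu in range(n_modes):
    for k, (a, b) in enumerate(pairs):
        psi[mu, a, b] = Q[mu, k]
        psi[mu, b, a] = -Q[mu, k]           # antisymmetry, Eq. (3a)

# Orthonormality, Eq. (3b): the sum over alpha<beta is half the unrestricted sum
assert np.allclose(np.einsum('mab,nab->mn', psi, psi) / 2.0, np.eye(n_modes))

# Completeness, Eq. (3c)
delta = np.eye(omega)
rhs = np.einsum('ac,bd->abcd', delta, delta) - np.einsum('ad,bc->abcd', delta, delta)
assert np.allclose(np.einsum('mab,mcd->abcd', psi, psi), rhs)
print("Eqs. (3a)-(3c) hold for the toy amplitude set")
```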
Some of \(X_{\mu}\) and \(X_{\mu}^{\dagger}\) are composed by the further superposition of such pair operators to reflect the dynamics into the selected phonons. Tamm-Dancoff approximation or a similar approximation is usually applied to them for identifying collective excitation modes and non-collective ones. Hereafter, \(X_{\mu}^{\dagger}\) and \(X_{\mu}\) are called phonon creation and annihilation operators, and \(B_{q}\) is called a scattering operator. The phonon and scattering operators satisfy the following commutation relations: \[[X_{\mu},X_{\mu^{\prime}}^{\dagger}]=\delta_{\mu,\mu^{\prime}}-\sum_{q}\Gamma_ {q}^{\mu\mu^{\prime}}B_{q}, \tag{5a}\] \[[B_{q},X_{\mu}^{\dagger}]=\sum_{\mu^{\prime}}\Gamma_{q}^{\mu\mu^{\prime}}X_{ \mu^{\prime}}^{\dagger},\] (5b) \[[X_{\mu},B_{q}]=\sum_{\mu^{\prime}}\Gamma_{q}^{\mu^{\prime}\mu}X_{\mu^{\prime}}, \tag{5c}\] where the definition of \(\Gamma_{q}^{\mu\mu^{\prime}}\) is as follows: \[\Gamma_{q}^{\mu\mu^{\prime}}=\sum_{\alpha\beta}\varphi_{q}(\alpha\beta)\Gamma_ {\alpha\beta}^{\mu\mu^{\prime}},\quad\Gamma_{\alpha\beta}^{\mu\mu^{\prime}}= \sum_{\gamma}\psi_{\mu}(\alpha\gamma)\psi_{\mu^{\prime}}(\beta\gamma). \tag{6}\] The following relation holds: \[\Gamma_{\bar{q}}^{\mu_{1}\mu_{2}}=\Gamma_{q}^{\mu_{2}\mu_{1}}. \tag{7}\] From Eqs. (5a) and (5b), we obtain \[[[X_{\mu_{1}},X^{\dagger}_{\mu_{2}}],X^{\dagger}_{\mu_{3}}]=-\sum_{\mu^{\prime}}Y( \mu_{1},\mu_{2},\mu_{3},\mu^{\prime})X^{\dagger}_{\mu^{\prime}}, \tag{8}\] where the definition of \(Y(\mu_{1}\mu_{2}\mu_{3}\mu_{4})\) is \[Y(\mu_{1}\mu_{2}\mu_{3}\mu_{4})=\sum_{q}\Gamma^{\mu_{1}\mu_{2}}_{q}\Gamma^{\mu _{3}\mu_{4}}_{q}=\sum_{\alpha\beta}\Gamma^{\mu_{1}\mu_{2}}_{\alpha\beta}\Gamma ^{\mu_{3}\mu_{4}}_{\alpha\beta}. \tag{9}\] The following relation holds: \[Y(\mu_{1}\mu^{\prime}_{1}\mu^{\prime}_{2}\mu_{2}) = Y(\mu_{2}\mu^{\prime}_{1}\mu^{\prime}_{2}\mu_{1}) \tag{10}\] \[= Y(\mu_{1}\mu^{\prime}_{2}\mu^{\prime}_{1}\mu_{2})\] \[= Y(\mu^{\prime}_{1}\mu_{1}\mu_{2}\mu^{\prime}_{2}).\] ### Multi-phonon and multi-boson state vectors We divide the phonon excitation modes \(\{\mu\}\) into two groups, \(\{t\}\) and \(\{\overline{t}\}\),and prepare the multi-phonon state vectors, \[|N;t\rangle\rangle=|t_{1},t_{2},\cdots,t_{N}\rangle\rangle=X^{\dagger}_{t_{1}} X^{\dagger}_{t_{2}}\cdots X^{\dagger}_{t_{N}}|0\rangle\quad(0\leq N\leq N_{max}). \tag{11}\] \(\{t\}\) usually consists of collective modes and some non-collective modes if necessary, selected by the small amplitude approximation. We treat not only these cases but also the case where all modes are adopted, that is, \(\{t\}=\{\mu\}\). Next we introduce boson creation and annihilation operators, \(b^{\dagger}_{t}\) and \(b_{t^{\prime}}\), having the same indices as those of the multi-phonons, \(X^{\dagger}_{t}\) and \(X_{t^{\prime}}\): \[[b_{t},b^{\dagger}_{t^{\prime}}]=\delta_{t,t^{\prime}}. \tag{12}\] The multi-boson states, \[|N;t\rangle)=|t_{1},t_{2},\cdots,t_{N}\rangle)=b^{\dagger}_{t_{1}}b^{\dagger} _{t_{2}}\cdots b^{\dagger}_{t_{N}}|0\rangle, \tag{13}\] are orthogonal to one another, and are normalized by their norms, \[{\cal N}_{B}(N;t)=((N:t|N;t)), \tag{14}\] such as \[|N;t\rangle=|t_{1},t_{2},\cdots,t_{N}\rangle={\cal N}_{B}(N;t)^{-1/2}|N;t \rangle). \tag{15}\] They are so-called ideal boson state vectors. Boson mapping This section deals with boson mapping. 
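Before the mapping is constructed, the commutation relations (5a) and (5b), on which the construction below rests, can be verified as exact operator identities in a small toy Fock space. The sketch below is illustrative only and not from the paper: the Jordan-Wigner matrix construction, the value of `omega`, the random orthogonal matrix, and the elementary choice \(\varphi_{q}(\alpha\beta)=\delta_{\alpha a}\delta_{\beta b}\) with \(q=(a,b)\) (so that \(B_{q}=a_{b}^{\dagger}a_{a}\) and \(\Gamma_{q}^{\mu\mu^{\prime}}=\Gamma_{ab}^{\mu\mu^{\prime}}\)) are assumptions of the example. The checks pass exactly because all pair modes are kept; this completeness is precisely what a partial mode selection gives up.

```python
import numpy as np
from itertools import combinations

omega = 4                                              # toy single-particle space
def a_dagger(i):
    """Matrix of a_i^dagger on the 2**omega-dimensional Fock space (Jordan-Wigner)."""
    create = np.array([[0.0, 0.0], [1.0, 0.0]])
    sign = np.diag([1.0, -1.0])
    op = np.array([[1.0]])
    for j in range(omega):
        op = np.kron(op, sign if j < i else (create if j == i else np.eye(2)))
    return op

a_dag = [a_dagger(i) for i in range(omega)]
a = [m.T for m in a_dag]                               # real matrices: a_i = (a_i^dagger)^T

# Complete set of phonon modes, built as in the previous sketch
pairs = list(combinations(range(omega), 2))
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(len(pairs), len(pairs))))
psi = np.zeros((len(pairs), omega, omega))
for mu, row in enumerate(Q):
    for k, (al, be) in enumerate(pairs):
        psi[mu, al, be], psi[mu, be, al] = row[k], -row[k]

X_dag = [sum(psi[mu, al, be] * a_dag[al] @ a_dag[be] for al, be in pairs)
         for mu in range(len(pairs))]
X = [m.T for m in X_dag]

Gamma = np.einsum('mag,nbg->mnab', psi, psi)           # Gamma^{mu mu'}_{alpha beta}, Eq. (6)
eye = np.eye(2 ** omega)

# Eq. (5a): [X_mu, X^+_mu'] = delta_{mu mu'} - sum_{alpha beta} Gamma^{mu mu'}_{ab} a^+_b a_a
for mu in range(len(pairs)):
    for nu in range(len(pairs)):
        lhs = X[mu] @ X_dag[nu] - X_dag[nu] @ X[mu]
        rhs = (mu == nu) * eye - sum(Gamma[mu, nu, al, be] * a_dag[be] @ a[al]
                                     for al in range(omega) for be in range(omega))
        assert np.allclose(lhs, rhs)

# Eq. (5b) with B_q = a^+_b a_a: [B_q, X^+_mu] = sum_{mu'} Gamma^{mu mu'}_{ab} X^+_mu'
for (al, be) in [(0, 1), (1, 3), (2, 2)]:
    B = a_dag[be] @ a[al]
    for mu in range(len(pairs)):
        lhs = B @ X_dag[mu] - X_dag[mu] @ B
        rhs = sum(Gamma[mu, nu, al, be] * X_dag[nu] for nu in range(len(pairs)))
        assert np.allclose(lhs, rhs)
print("Eqs. (5a) and (5b) verified as operator identities")
```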
We introduce a norm operator and construct a mapping operator that can handle both Hermitian and non-Hermitian types, both with and without limiting the types and number of phonon excitation modes. The norm operator is defined as \[\hat{Z}=\sum_{N=0}^{N_{max}}\hat{Z}(N), \tag{16a}\] \[\hat{Z}(N) = \sum_{tt^{\prime}}|N,t\rangle\langle N;t|N;t^{\prime}\rangle(N;t^ {\prime}|\] \[= \sum_{t_{1}\leq\cdots\leq t_{N}}\sum_{t^{\prime}_{1}\leq\cdots \leq t^{\prime}_{N}}|t_{1}\cdots t_{N}\rangle\langle t_{1}\cdots t_{N}|t^{ \prime}_{1}\cdots t_{N}\rangle(t^{\prime}_{1}\cdots t^{\prime}_{N}|, \tag{16b}\] where \[|N;t\rangle=\mathcal{N}_{B}(N;t)^{-1/2}|N;t\rangle\rangle. \tag{17}\] This norm operator is a modified one of the previously introduced [13] by adding the restriction \(N_{max}\), which allows us to constrain the number of phonon excitations and corresponding boson excitations. \(\hat{Z}(N)\) satisfies the eigenequation, \[\hat{Z}(N)|N;a)=z_{a}(N)|N;a), \tag{18}\] where \(|N;a)\) is a normalized eigenvector and \(z_{a}(N)\) is an eigenvalue. The eigenvalues \(z_{a}(N)\) become positive or zero and \(a_{0}\) represents \(z_{a_{0}}(N)=0\). Using these, we obtain the spectral decomposition of \(\hat{Z}(N)\) as \[\hat{Z}(N)=\sum_{a\neq a_{0}}|N;a)z_{a}(N)(N;a|. \tag{19}\] Functions of \(\hat{Z}(N)\) are defined by \[f(\hat{Z}(N))=\sum_{a\neq a_{0}}|N;a)f(z_{a}(N))(N;a|, \tag{20}\] and we obtain \[f(\hat{Z})=\sum_{N=0}^{N_{max}}f(\hat{Z}(N)). \tag{21}\] Introducing \({u_{a}^{t}(N)=(N;t|N;a)}\), then Eq. (18) becomes the eigenequation of the multi-phonon norm matrix, \[\sum_{t^{\prime}}\langle N;t|N;t^{\prime}\rangle u_{a}^{t^{\prime}}(N)=z_{a}(N )u_{a}^{t}(N), \tag{22}\] The eigenvectors are orthonormalized as \[\sum_{t}u^{t}_{a}(N)u^{t}_{a^{\prime}}(N)=\delta_{a,a^{\prime}},\] (23a) and satisfy the completeness relations \[\sum_{a}u^{t}_{a}(N)u^{t^{\prime}}_{a}(N)=\delta_{t,t^{\prime}}. \tag{23b}\] Using the norm operator \(\hat{Z}\), we introduce the mapping operator \(U_{\xi}\) as \[U_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}\widetilde{U}, \tag{24}\] where \(\widetilde{U}\) is a mapping operator whose definition is as follows: \[\widetilde{U}=\sum_{N=0}^{N_{max}}\widetilde{U}(N), \tag{25a}\] \[\widetilde{U}(N) = \sum_{t}|N;t\rangle\langle N;t|\] (25b) \[= \sum_{t_{1}\leq t_{2}\leq\cdots\leq t_{N}}|t_{1}t_{2}\cdots t_{N} \rangle\langle t_{1}t_{2}\cdots t_{N}|,\] which satisfies the following relations, \[\widetilde{U}\widetilde{U}^{\dagger}=\hat{Z}, \tag{26a}\] \[\widetilde{U}(N)\widetilde{U}(N)^{\dagger}=\hat{Z}(N). \tag{26b}\] \(\widetilde{U}(N)\) is also expressed as \[\widetilde{U}(N)=\sum_{a\neq a_{0}}z_{a}^{\frac{1}{2}}|N;a\rangle\langle N;a|, \tag{27}\] where \[|N;a\rangle=z_{a}^{-\frac{1}{2}}(N)\sum_{t}u^{t}_{a}(N)|N;t\rangle\qquad(a\neq a _{0}). \tag{28}\] \(|N;a\rangle\) become orthonormalized basis vectors of the fermion subspace spanned by \(|N;t\rangle\). Using \(|N:a\rangle\) and \(|N:a\rangle\), the mapping operator is expressed as \[U_{\xi}=\sum_{N=0}^{N_{max}}U_{\xi}(N);\quad U_{\xi}(N)=\sum_{a\neq a_{0}}z_{ a}(N)^{\xi}|N;a\rangle\langle N;a|. \tag{29}\] The following relations are satisfied: \[U^{\dagger}_{-\xi}U_{\xi}=\hat{T}_{F},\qquad U_{\xi}U^{\dagger}_{-\xi}=\hat{T }_{B}, \tag{30}\] where \[\hat{T}_{F}=\sum_{N=0}^{N_{max}}\hat{T}_{F}(N);\qquad\hat{T}_{F}(N)=\sum_{a\neq a _{0}}|N;a\rangle\langle N;a|, \tag{31}\] \[\hat{T}_{B}=\sum_{N=0}^{N_{max}}\hat{T}_{B}(N);\quad\hat{T}_{B}(N)=\sum_{a\neq a _{0}}|N;a)(N;a|. 
\tag{32}\] In addition, we define the following operators, \[\breve{1}_{B}=\sum_{N=0}^{N_{max}}\hat{1}_{B}(N);\qquad\hat{1}_{B}(N)=\sum_{t }|N;t)(N;t|. \tag{33}\] If \(\hat{Z}(N)\) has even one zero eigenvalue, then \(\hat{T}_{B}(N)\neq\hat{1}_{B}(N)\) and hence \(\hat{T}_{B}\neq\breve{1}_{B}\). Otherwise, they match one another. The state vectors and operators of fermion space are mapped onto those of boson subspace as \[|\psi^{\prime})_{\xi}=U_{\xi}|\psi^{\prime}),\qquad_{-\xi}(\psi|=\langle\psi| U_{-\xi}^{\dagger}, \tag{34a}\] \[(O_{F})_{\xi}=U_{\xi}O_{F}U_{-\xi}^{\dagger}. \tag{34b}\] These satisfy the following relations: \[|\psi^{\prime})_{\xi}=\left\{{}_{\xi}(\psi^{\prime}|\right\}^{\dagger},\qquad _{-\xi}(\psi|=\left\{{}|\psi{}\rangle_{-\xi}\right\}^{\dagger}, \tag{35a}\] \[(O_{F})_{-\xi}=\left\{(O_{F}^{\dagger})_{\xi}\right\}^{\dagger}. \tag{35b}\] The mapping is of the Hermitian type when \(\xi=0\) and, in other cases, of the non-Hermitian type. A one-to-one correspondence exists between the fermion subspace projected by \(\hat{T}_{F}\) and the boson subspace by \(\hat{T}_{B}\). For the state vectors, \(|\psi\rangle\) and \(|\psi^{\prime}\rangle\), which belong to the fermion subspace projected by \(\hat{T}_{F}\), \[\langle\psi|O_{F}|\psi^{\prime}\rangle = \langle\psi|\hat{T}_{F}O_{F}\hat{T}_{F}|\psi^{\prime}\rangle \tag{36}\] \[= \langle\psi|U_{-\xi}^{\dagger}U_{\xi}\hat{O}_{F}U_{-\xi}^{\dagger }U|\psi^{\prime}\rangle\] \[= {}_{-\xi}(\psi|(O_{F})_{\xi}|\psi^{\prime})_{\xi},\] that is, the matrix element of the fermion subspace becomes equal to that of the corresponding boson subspace. The boson subspace corresponding to the fermion subspace projected by \(\hat{T}_{F}\) is called the physical subspace, and the boson state vectors belonging to that space are called the physical state vector. The projection operator of the physical subspace is \(\hat{T}_{B}\) The relation \[{}_{\xi}(\psi|(O_{F})_{-\xi}|\psi^{\prime})_{-\xi}={}_{-\xi}(\psi|(O_{F})_{\xi}| \psi^{\prime})_{\xi} \tag{37}\] holds, therefore it is sufficient to treat the case \(\xi\geq 0\). The mapping of the product of the fermion operators does not generally result in the product of the mapped fermion operators. That is \[(O_{F}O^{\prime}_{F})_{\xi}\neq(O_{F})_{\xi}(O^{\prime}_{F})_{\xi}, \tag{38}\] and therefore, the commutation relations of the fermion operators are mapped as \[([O_{F},O^{\prime}_{F}])_{\xi}=(O_{F}O^{\prime}_{F})_{\xi}-(O^{\prime}_{F}O_{F })_{\xi}\neq[(O_{F})_{\xi},(O^{\prime}_{F})_{\xi}], \tag{39}\] while under the approximation \(O_{F}O^{\prime}_{F}\approx O_{F}\hat{T}_{F}O^{\prime}_{F}\), \[(O_{F}O^{\prime}_{F})_{\xi}\approx(O_{F})_{\xi}(O^{\prime}_{F})_{\xi}, \tag{40}\] and \[([O_{F},O^{\prime}_{F}])_{\xi}\approx[(O_{F})_{\xi},(O^{\prime}_{F})_{\xi}] \tag{41}\] hold. The conventional practical boson expansion methods use this approximation. If this approximation holds, it is sufficient to map the phonon and scattering operators, otherwise, it becomes necessary to obtain the mapping of the product of these fermion operators. We denote the mapping of \(\widetilde{U}\) as \[\widetilde{|\psi\rangle}=\widetilde{U}|\psi\rangle,\qquad\widetilde{(\psi|}= \langle\psi|\widetilde{U}^{\dagger}, \tag{42a}\] \[\widetilde{O_{F}}=\widetilde{U}O_{F}\widetilde{U}^{\dagger}. \tag{42b}\] The mapping of Eqs. 
(34) is expressed as \[|\psi^{\prime}\rangle_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}\widetilde{|\psi^{\prime }\rangle},\qquad{}_{-\xi}(\psi|=\widetilde{(\psi|}\hat{Z}^{-\xi-\frac{1}{2}}, \tag{43a}\] \[(O_{F})_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}\widetilde{O_{F}}\hat{Z}^{-\xi-\frac{1 }{2}}, \tag{43b}\] which makes it clear that the different treatment of the norm operator in the mapping operator produces another type of mapping. The mapping of Eqs. (34) is also expressed as \[|\psi^{\prime}\rangle_{\xi}=\hat{Z}^{\xi}|\psi^{\prime}\rangle_{0},\qquad{}_{ -\xi}(\psi|={}_{0}(\psi|\hat{Z}^{-\xi}, \tag{44a}\] \[(O_{F})_{\xi}=\hat{Z}^{\xi}(O_{F})_{0}\hat{Z}^{-\xi}, \tag{44b}\] The mapping of \(\xi=0\) being of the Hermitian type and that of \(\xi\neq 0\) being of the non-Hermitian type transform one another by the similarity transformation operator that becomes the power of the norm operator \(\hat{Z}\). ### The case where all the phonon excitation modes are adopted as the boson excitation modes Hereafter, we attach \((A)\) such as \(\hat{Z}^{(A)}\) in the case that we introduce boson operators corresponding to all phonon excitation modes for no confusion. We start with the following, \[\sum_{\mu_{1}\leq\cdots\leq\mu_{N}}|\mu_{1},\cdots,\mu_{N}\rangle \langle\mu_{1},\cdots,\mu_{N}|=\frac{1}{N!}\sum_{\mu_{1}\cdots,\mu_{N}}|\mu_{1},\cdots,\mu_{N}\rangle\rangle\langle\langle\mu_{1},\cdots,\mu_{N}| \tag{45}\] \[=\frac{1}{2^{N}N!}\sum_{\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_ {N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}\rangle\langle\alpha_{1}\beta _{1}\cdots\alpha_{N}\beta_{N}|\] \[=\frac{(2N)!}{2^{N}N!}\sum_{\alpha_{1}\beta_{1}\leq\cdots\leq \alpha_{N}\beta_{N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}\rangle\langle \alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}|,\] where \[|\alpha_{1}\beta_{1}\cdot\alpha_{N}\beta_{N}\rangle=a^{\dagger}_{\alpha_{1}}a^ {\dagger}_{\beta_{1}}\cdots a^{\dagger}_{\alpha_{N}}a^{\dagger}_{\beta_{N}}|0\rangle, \tag{46}\] and we use that the function \(f(t_{1},\cdots,t_{N}),\) which is completely symmetric for the argument, satisfies the following [19]: \[\sum_{t_{1}\leq\cdots\leq t_{N}}f(t_{1},\cdots,t_{N})=\sum_{t_{1},\cdots,t_{N }}\frac{{\cal N}_{B}(t_{1},\cdots,t_{N})}{N!}f(t_{1},\cdots,t_{N}). \tag{47}\] From above, we obtain the following relation, \[\sum_{\mu_{1}\leq\cdots\leq\mu_{N}}|\mu_{1},\cdots,\mu_{N}\rangle\langle\mu_{1 },\cdots,\mu_{N}|=(2N-1)!!\hat{1}^{(A)}_{F}(N), \tag{48}\] where \[\hat{1}^{(A)}_{F}(N)=\sum_{\alpha_{1}\beta_{1}\leq\cdots\leq\alpha_{N}\beta_ {N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}\rangle\langle\alpha_{1} \beta_{1}\cdots\alpha_{N}\beta_{N}|;\quad(2N-1)!!=\frac{(2N)!}{2^{N}N!}. \tag{49}\] Let \({\bf Z}^{(A)}(N)\) be a matrix composed of the matrix element \(\langle\mu^{\prime}_{1},\cdots\mu^{\prime}_{N}|\mu_{1}\cdots\mu_{N}\rangle,\) we obtain, from this relation, \[{\bf Z}^{(A)}(N)^{2}=(2N-1)!!{\bf Z}^{(A)}(N), \tag{50}\] which indicates that the eigenvalues of this matrix are \((2N-1)!!\) or \(0.\) Zero eigenvalues appear even at \(N=2\)[13], and so do in the case \(N\geq 2.\) From this relation we obtain \[\hat{Z}^{(A)}(N)^{2}=(2N-1)!!\hat{Z}^{(A)}(N), \tag{51}\] and \[\left(\hat{Z}^{(A)}\right)^{2}=(2\hat{N}_{B}^{(A)}-1)!!Z^{(A)};\quad\hat{N}_{B}^{ (A)}=\sum_{\mu}b_{\mu}^{\dagger}b_{\mu}. \tag{52}\] The case N=2 in Eq. 
(50) is equivalent to the following relations [8], \[\sum_{\mu\mu^{\prime}}Y(\mu_{1}^{\prime}\mu\mu^{\prime}\mu_{2}^{\prime})Y(\mu_ {1}\mu\mu^{\prime}\mu_{2})=4((\mu_{1}^{\prime}\mu_{2}^{\prime}|\mu_{1}\mu_{2}) )-2Y(\mu_{1}^{\prime}\mu_{1}\mu_{2}\mu_{2}^{\prime}). \tag{53}\] We introduce the following operators, \[b_{\alpha\beta}=\sum_{\mu}\psi_{\mu}(\alpha\beta)b_{\mu},\quad b_{\alpha\beta }^{\dagger}=\sum_{\mu}\psi_{\mu}(\alpha\beta)b_{\mu}^{\dagger}, \tag{54}\] which satisfies the commutation relations, \[[b_{\alpha^{\prime}\beta^{\prime}},b_{\alpha\beta}^{\dagger}]=\delta_{\alpha^ {\prime}\alpha}\delta_{\beta^{\prime}\beta}-\delta_{\alpha^{\prime}\beta} \delta_{\beta^{\prime}\alpha}. \tag{55}\] Using these operators, \(\widetilde{U}^{(A)}(N)=\sum_{\mu}|N;\mu\rangle\langle N;\mu|\) are rewritten as \[\widetilde{U}^{(A)}(N)=\sqrt{(2N-1)!!}\sum_{\alpha_{1}<\beta_{1}<\cdots< \alpha_{N}<\beta_{N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{M}\ \langle\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}|, \tag{56}\] where \[|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{M}=\frac{1}{\sqrt{(2N-1)!!}}{ \sum_{P}}^{\prime}(-)^{P}b_{\alpha_{1}\beta_{1}}^{\dagger}\cdots b_{\alpha_{N }\beta_{N}}^{\dagger}|0), \tag{57}\] and \({\sum_{P}}^{\prime}\) means the summation so that the states on the left side become totally antisymmetric [4]. From these, we obtain \[\hat{Z}^{(A)}(N) = \widetilde{U}^{(A)(N)}\widetilde{U}^{(A)}(N)^{\dagger} \tag{58}\] \[= (2N-1)!!\sum_{\alpha_{1}<\beta_{1}<\cdots<\alpha_{N}\beta_{N}}| \alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{MM}(\alpha_{1}\beta_{1}\cdots \alpha_{N}\beta_{N}|,\] which is the spectral decomposition of \(\hat{Z}^{(A)}(N)\) and indicates that the eigenvectors of the eigenvalue \((2N-1)!!\) are \(|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{M}\). We also obtain \[\hat{T}_{F}^{(A)} = \sum_{N=0}^{N_{max}}\hat{T}_{F}^{(A)}(N),\] \[\hat{T}_{F}^{(A)}(N) = \hat{1}_{F}(N), \tag{59}\] \[\hat{1}_{F}(N) = \sum_{\alpha_{1}<\beta_{1}<\cdots\alpha_{N}\beta_{N}}|\alpha_{1} \beta_{1}\cdots\alpha_{N}\beta_{N}\rangle\langle\alpha_{1}\beta_{1}\cdots \alpha_{N}\beta_{N}|,\] \[\hat{T}_{B}^{(A)} = \sum_{N=0}^{N_{max}}\hat{T}_{B}(N), \tag{60}\] \[\hat{T}_{B}^{(A)}(N) = \sum_{\alpha_{1}<\beta_{1}<\cdots\alpha_{N}\beta_{N}}|\alpha_{1} \beta_{1}\cdots\alpha_{N}\beta_{N})_{MM}(\alpha_{1}\beta_{1}\cdots\alpha_{N} \beta_{N}|.\] \(\hat{Z}^{(A)}\) is written as \[\hat{Z}^{(A)}=(2\hat{N}_{B}^{(A)}-1)!!\hat{T}_{B}^{(A)}. \tag{61}\] The mapping operator is given as \[U_{\xi}^{(A)} = \sum_{N=0}^{N_{max}}U_{\xi}(N), \tag{62}\] \[U_{\xi}^{(A)}(N) = \{(2N-1)!!\}^{\xi}\sum_{\alpha_{1}<\beta_{1}<\cdots<\alpha_{N}< \beta_{N}}|\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N})_{M}\ \langle\alpha_{1}\beta_{1}\cdots\alpha_{N}\beta_{N}|.\] If we set \(\xi=0\) and \(N_{max}\rightarrow\infty\), this mapping becomes the MYT mapping [4], from which we obtain the boson expansions of Holstein and Primakoff, and if we take \(\xi=\pm 1\), they become mapping operators for the Dyson boson expansions [5]. Taking \(N_{max}\rightarrow\infty\), \(U_{\xi}^{(A)}\) maps the whole fermion space that consists of even numbers of quasi-particles to the boson subspace. ### The case where the maximum phonon excitation number is 1 In the case where the maximum phonon excitation number is 1, \(|0\rangle\) and \(|\mu\rangle\) become orthonormal bases of the fermion space of even-quasi-particle excitations up to the two-quasi-particle excitations, which correspond to \(|0\rangle\) and \(|\mu\rangle\), respectively. 
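This orthonormality, and its contrast with the two-phonon case of Eqs. (50)-(51), can be illustrated directly in a small toy Fock space. The sketch below is not part of the paper (the Jordan-Wigner construction, `omega = 6`, and the random orthogonal matrix are assumptions of the example): it confirms that the one-phonon norm matrix is the unit matrix, while the norm matrix \(\hat{Z}^{(A)}(2)\) obeys \(Z^{2}=3Z\), so its eigenvalues are \(3!!=3\) or \(0\), and zero eigenvalues indeed appear already at \(N=2\) when all modes are adopted.

```python
import numpy as np
from itertools import combinations

omega = 6                                              # toy single-particle space
def a_dagger(i):
    create = np.array([[0.0, 0.0], [1.0, 0.0]])
    sign = np.diag([1.0, -1.0])
    op = np.array([[1.0]])
    for j in range(omega):
        op = np.kron(op, sign if j < i else (create if j == i else np.eye(2)))
    return op

a_dag = [a_dagger(i) for i in range(omega)]
vac = np.zeros(2 ** omega); vac[0] = 1.0               # quasi-particle vacuum |0>

pairs = list(combinations(range(omega), 2))            # all pairs alpha<beta
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(len(pairs), len(pairs))))
X_dag = [sum(Q[mu, k] * a_dag[al] @ a_dag[be] for k, (al, be) in enumerate(pairs))
         for mu in range(len(pairs))]

# One-phonon norm matrix <mu|mu'>: exactly the unit matrix, no Pauli corrections
one = [Xd @ vac for Xd in X_dag]
assert np.allclose(np.array([[u @ v for v in one] for u in one]), np.eye(len(pairs)))

# Two-phonon norm matrix of Eq. (16b): |mu1 mu2> = N_B^{-1/2} X+_mu1 X+_mu2 |0>
idx = [(m1, m2) for m1 in range(len(pairs)) for m2 in range(m1, len(pairs))]
two = [X_dag[m1] @ X_dag[m2] @ vac / np.sqrt(2.0 if m1 == m2 else 1.0)
       for m1, m2 in idx]
Z2 = np.array([[u @ v for v in two] for u in two])

# Eq. (51) at N=2: Z^(A)(2)^2 = 3!! Z^(A)(2), hence eigenvalues 3 or 0
assert np.allclose(Z2 @ Z2, 3.0 * Z2)
evals = np.linalg.eigvalsh(Z2)
print("distinct eigenvalues of Z^(A)(2):", sorted(set(np.round(evals, 6))))
```

In this toy case the rank of \(\hat{Z}^{(A)}(2)\) equals the dimension of the four-quasi-particle space, so most of its eigenvalues vanish; this is the situation in which \(\hat{T}_{B}\neq\breve{1}_{B}\), in contrast to the one-phonon case treated here.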
The mapping operator becomes as \[U_{\xi}=\widetilde{U}=|0\rangle\langle 0|+\sum_{\mu}|\mu\rangle\langle\mu|, \tag{63}\] and \(\hat{Z}=\breve{1}_{B}\). As a result, the mapping operator has no dependence on \(\xi\), and the mapping becomes of the Hermitian type. The projection operator onto the fermion subspace to be mapped is given by \[\hat{T}_{F}=|0\rangle\langle 0|+\sum_{\mu}|\mu\rangle\langle\mu|.\] (64a) The projection operator onto the physical subspace, which has a one-to-one correspondence to the fermion subspace, becomes as \[\hat{T}_{B}=\breve{1}_{B}. \tag{64b}\] This indicates that the ideal boson states are the physical state vectors, with one-to-one correspondences to the fermion state vectors. The following relations, \[X_{\mu}U^{\dagger}_{-\xi}=|0\rangle(\mu|=U^{\dagger}_{-\xi}b_{t}\,\breve{1}_{B}, \tag{65a}\] \[U_{\xi}X^{\dagger}_{\mu}=|\mu\rangle\langle 0|=\breve{1}_{B}b^{ \dagger}_{\mu}U_{\xi},\] (65b) \[B_{q}U^{\dagger}_{-\xi}=\sum_{\mu^{\prime}}\sum_{\mu}\Gamma^{\mu^{\prime}\mu}_ {q}|\mu\rangle(\mu^{\prime}|, \tag{65c}\] hold, and we obtain \[(X_{\mu})_{\xi}=|0\rangle(\mu|=\breve{1}_{B}(X_{\mu})_{B}\breve{1}_{B}=(X_{\mu })_{B}\breve{1}_{B},\quad(X_{\mu})_{B}=b_{\mu}, \tag{66a}\] \[(X^{\dagger}_{\mu})_{\xi}=|\mu\rangle(0|=\breve{1}_{B}(X^{\dagger}_{\mu})_{B }\breve{1}_{B}=\breve{1}_{B}(X^{\dagger}_{\mu})_{B},\quad(X^{\dagger}_{\mu})_ {B}=b^{\dagger}_{\mu},\] (66b) \[(B_{q})_{\xi}=\sum_{\mu^{\prime}}\sum_{\mu}\Gamma^{\mu^{\prime}\mu}_{q}|\mu \rangle(\mu^{\prime}|=\breve{1}_{B}(B_{q})_{B}\breve{1}_{B}=\breve{1}_{B}(B_{ q})_{B}=(B_{q})_{B}\breve{1}_{B},\] (66c) \[(B_{q})_{B}=\sum_{\mu\mu^{\prime}}\Gamma^{\mu^{\prime}\mu}_{q}b^{ \dagger}_{\mu}b_{\mu^{\prime}}.\] The product of the operators becomes as follows: \[(O_{F}X_{\mu})_{\xi}=(O_{F})_{\xi}(X_{\mu})_{\xi}=\breve{1}_{B}(O_{F})_{B} \breve{1}_{B}(X_{\mu})_{B}\breve{1}_{B}=\breve{1}_{B}(O_{F})_{B}(X_{\mu})_{B} \breve{1}_{B}, \tag{67a}\] \[(X^{\dagger}_{\mu}O_{F})_{\xi}=(X^{\dagger}_{\mu})_{\xi}(O_{F})_{\xi}=\breve{1} _{B}(X^{\dagger}_{\mu})_{B}\breve{1}_{B}(O_{F})_{B}\breve{1}_{B}=\breve{1}_{ B}(X^{\dagger}_{\mu})_{B}(O_{F})_{B}\breve{1}_{B}, \tag{67b}\] therefore we can obtain the mapping of the product of \(X^{\dagger}_{\mu}\),\(X_{\mu}\), and \(B_{q}\) by arranging them in normal order. The commutation relations of \((X^{\dagger}_{\mu})_{B}\), \((X_{\mu})_{B}\), and \((B_{q})_{B}\) become as follows: \[[(X_{\mu})_{B},(X^{\dagger}_{\mu^{\prime}})_{B}]=\delta_{\mu,\mu^{\prime}} \tag{68a}\] \[[(B_{q})_{B},(X^{\dagger}_{\mu})_{B}]=\sum_{\mu^{\prime}}\Gamma^{\mu\mu^{ \prime}}_{q}(X^{\dagger}_{\mu^{\prime}})_{B}.\] (68b) \[[(X_{\mu})_{B},(B_{q})_{B}]=\sum_{\mu^{\prime}}\Gamma^{\mu^{ \prime}\mu}_{q}(X_{\mu^{\prime}})_{B}, \tag{68c}\] which are equal to the results of the boson approximation. From the above, when the maximum number of phonons is 1, by arranging the phonon creation and annihilation operators and the scattering operators in normal order and replacing them with \((X^{\dagger}_{\mu})_{B}\), \((X_{\mu})_{B}\), and \((B_{q})_{B}\), respectively, then the fermion subspace is completely mapped onto the boson subspace projected by \(\breve{1}_{B}\). In this way, NOM establishes the boson approximation as the boson mapping whose maximum phonon excitation number is 1. ## 4 Boson expansions ### Formulae for the boson expansions We give here the formulae used to obtain the boson expansions of the mapped fermion operators. 
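Before turning to these formulae, the exactness of the one-phonon mapping of Eqs. (66) can be illustrated in the same kind of toy setting: in the zero- and one-phonon space, the fermion matrix elements of \(B_{q}\) coincide with those of its boson image \((B_{q})_{B}\), which are simply \(\Gamma_{q}^{\mu^{\prime}\mu}\). The sketch below is not from the paper; the Jordan-Wigner construction, `omega`, the random orthogonal matrix, and the elementary choice \(B_{q}=a_{b}^{\dagger}a_{a}\) for \(q=(a,b)\) are assumptions of the example.

```python
import numpy as np
from itertools import combinations

omega = 4
def a_dagger(i):
    create = np.array([[0.0, 0.0], [1.0, 0.0]])
    sign = np.diag([1.0, -1.0])
    op = np.array([[1.0]])
    for j in range(omega):
        op = np.kron(op, sign if j < i else (create if j == i else np.eye(2)))
    return op

a_dag = [a_dagger(i) for i in range(omega)]
a = [m.T for m in a_dag]
vac = np.zeros(2 ** omega); vac[0] = 1.0

pairs = list(combinations(range(omega), 2))
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(len(pairs), len(pairs))))
psi = np.zeros((len(pairs), omega, omega))
for mu, row in enumerate(Q):
    for k, (al, be) in enumerate(pairs):
        psi[mu, al, be], psi[mu, be, al] = row[k], -row[k]
X_dag = [sum(psi[mu, al, be] * a_dag[al] @ a_dag[be] for al, be in pairs)
         for mu in range(len(pairs))]

Gamma = np.einsum('mag,nbg->mnab', psi, psi)           # Gamma^{mu mu'}_{alpha beta}, Eq. (6)
one = [Xd @ vac for Xd in X_dag]                       # one-phonon states |mu>

# <mu|B_q|mu'> in the fermion space equals Gamma_q^{mu' mu}, the one-boson matrix
# element of (B_q)_B in Eq. (66c), for the elementary choice B_q = a^+_b a_a
for (al, be) in [(0, 1), (2, 3), (1, 1)]:
    B = a_dag[be] @ a[al]
    ferm = np.array([[u @ (B @ v) for v in one] for u in one])
    bos = Gamma[:, :, al, be].T                        # Gamma_q^{mu' mu}
    assert np.allclose(ferm, bos)
print("one-phonon matrix elements of B_q reproduced by (B_q)_B")
```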
We utilize \[\begin{array}{rcl}\widetilde{U}(N)&=&\sum_{t_{1}\leq t_{2}\leq\cdots\leq t_{N} }|t_{1}t_{2}\cdots t_{N}\rangle\langle t_{1}t_{2}\cdots t_{N}|\\ &=&\sum_{t_{1}t_{2}\cdots t_{N}}\frac{\mathcal{N}_{B}(t_{1}t_{2} \cdots t_{N})}{N!}|t_{1}t_{2}\cdots t_{N}\rangle\langle t_{1}t_{2}\cdots t_{N}| \\ &=&\sum_{t_{1}t_{2}\cdots t_{N}}\frac{1}{N!}|t_{1}t_{2} \cdots t_{N}\rangle)\langle\langle t_{1}t_{2}\cdots t_{N}|\end{array} \tag{69}\] and obtain the following series of formulae: \[\widetilde{U}(N)X_{t^{\prime}}=(X_{t^{\prime}})_{D}\widetilde{U}(N+1)\quad(N \geq 0), \tag{70a}\] \[\begin{array}{rcl}\widetilde{U}(1)X_{t}^{\dagger}&=&(X_{t}^{\dagger})_{D} \widetilde{U}(0),\\ \widetilde{U}(N+1)X_{t}^{\dagger}&=&(X_{t}^{\dagger})_{D} \widetilde{U}(N)\\ &&-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y( tt_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}\widetilde{U}(N -1)X_{\vec{t}_{1}^{\prime}}\quad(N\geq 1),\end{array}\] (70b) \[\begin{array}{rcl}\widetilde{U}(0)B_{q}&=&0,\\ \widetilde{U}(N)B_{q}&=&(B_{q})_{D}\widetilde{U}(N)+\sum_{t} \sum_{\vec{t}^{\prime}}\Gamma_{q}^{\vec{t}^{\prime}t}b_{\vec{t}}^{\dagger} \widetilde{U}(N-1)X_{\vec{t}^{\prime}}\quad(N\geq 1),\end{array}\] (70c) \[\begin{array}{rcl}\widetilde{U}(1)X_{\bar{t}}^{\dagger}&=&0,\\ \widetilde{U}(N+1)X_{\bar{t}}^{\dagger}&=&-\frac{1}{2}\sum_{t_{1}t_{2}} \sum_{t_{1}^{\prime}}Y(\bar{t}t_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{ \dagger}b_{t_{2}}^{\dagger}b_{t_{1}^{\prime}}\widetilde{U}(N)\\ &&-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y( \bar{t}t_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger} \widetilde{U}(N-1)X_{\vec{t}_{1}^{\prime}}\quad(N\geq 1),\end{array} \tag{70d}\] where \[(X_{t^{\prime}})_{D}=b_{t^{\prime}}, \tag{71a}\] \[(X_{t}^{\dagger})_{D}=b_{t}^{\dagger}-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{t_{1} ^{\prime}}Y(tt_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}b_{t_{1}^{\prime}},\] (71b) \[(B_{q})_{D}=\sum_{t}\sum_{t^{\prime}}\Gamma_{q}^{t^{\prime}t}b_{ \vec{t}}^{\dagger}b_{t^{\prime}}. \tag{71c}\] Eqs. (71) are the same as the boson expansions derived by DBET. \(\left(B_{\bar{q}}\right)_{D}^{\dagger}=(B_{q})_{D}\) holds. From these formulae, we obtain \[\widetilde{X_{t^{\prime}}}(N)=(X_{t^{\prime}})_{D}\hat{Z}(N+1)\qquad(N\geq 0), \tag{72a}\] \[\widetilde{X_{t}^{\dagger}}(0) = (X_{t}^{\dagger})_{D}\hat{Z}(0)\] \[\widetilde{X_{t}^{\dagger}}(N) = (X_{t}^{\dagger})_{D}\hat{Z}(N)-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_ {\vec{t}_{1}^{\prime}}Y(tt_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_ {t_{2}}^{\dagger}\widetilde{X_{t_{1}^{\prime}}}(N-1)\qquad(N\geq 1),\] (72b) \[\widetilde{B_{q}}(0) = 0,\] \[\widetilde{B_{q}}(N) = (B_{q})_{D}\hat{Z}(N)+\sum_{t}\sum_{\vec{t}^{\prime}}\Gamma_{q}^ {\vec{t}^{\prime}t}b_{t}^{\dagger}\widetilde{X}_{\vec{t}^{\prime}}(N-1)\qquad (N\geq 1),\] (72c) \[\widetilde{X_{\vec{t}}^{\dagger}}(0) = 0,\] \[\widetilde{X_{\vec{t}}^{\dagger}}(N) = -\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y(\vec{t }t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}b_{t_{1}^{ \prime}}\hat{Z}(N)\] \[- \frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y(\vec{t }t_{1}t_{2}\vec{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger} \widetilde{X_{t_{1}^{\prime}}}(N-1)\qquad(N\geq 1),\] where we use the following diffinitions: \(\widetilde{X_{\mu}}(N)=\widetilde{U}(N)X_{\mu}\widetilde{U}(N+1)^{\dagger}, \widetilde{B_{q}}(N)=\widetilde{U}(N)B_{q}\widetilde{U}(N)^{\dagger}\). 
\(\widetilde{X_{\mu}^{\dagger}}(N)=\left(\widetilde{X_{\mu}}(N)\right)^{\dagger}\). \(\widetilde{B_{q}^{\dagger}}(N)=\left(\widetilde{B}_{q}(N)\right)^{\dagger}\) holds. We can obtain the boson expansion of \(\hat{Z}(N)\) by using \[\hat{Z}(N)=\frac{1}{N}\sum_{t}(X_{t}^{\dagger})_{D}\hat{Z}(N-1)b_{t}-\frac{1} {2N}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}}Y(t_{1}^{\prime}t_{1}t_{2}\vec {t}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}\widetilde{X_{\vec{t}^{ \prime}}}(N-2)b_{t_{1}^{\prime}}\quad(N\geq 2), \tag{73}\] which is derived by applying \[\widetilde{U}^{\dagger}(N)=\frac{1}{N}\sum_{t}X_{t}^{\dagger}\widetilde{U}^{ \dagger}(N-1)b_{t}\qquad(N\geq 1) \tag{74}\] obtained from Eq. (69) to Eq. (26a), expressing \(\hat{Z}(N)\) as \[\hat{Z}(N)=\frac{1}{N}\sum_{t}\widetilde{X_{t}^{\dagger}}(N-1)b_{t}\quad(N \geq 1), \tag{75}\] and substituting Eqs. (72b) into this. \(\hat{Z}(N)\) up to \(N=2\) are as follows, \[\hat{Z}(0)=\hat{1}_{B}(0), \tag{76a}\] \[\hat{Z}(1)=\hat{1}_{B}(1), \tag{76b}\] \[\hat{Z}(2)=\hat{1}_{B}(2)\left(\hat{1}_{B}-\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1} ^{\prime}t_{2}^{\prime}}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{ \dagger}b_{t_{2}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}\right)\hat{1}_{ B}(2). \tag{76c}\] We use the following equation for the case \(N=2\), \[b_{t}\hat{1}_{B}(N)=\hat{1}_{B}(N-1)b_{t}. \tag{77}\] Once \(\widetilde{X_{\tilde{l}^{\prime}}}(N)\), \(\widetilde{X_{\tilde{l}^{\prime}}}^{\dagger}(N)\), and \(\hat{Z}(N)\) are obtained from Eq. (72d), the Hermitian conjugate of Eq. (72d), and Eq. (73), \(\widetilde{X_{t^{\prime}}}(N)\), \(\widetilde{X_{t}}^{\dagger}(N)\), and \(\widetilde{B_{q}}(N)\) are given by substituting these into Eqs. (72). ### On the use of ideal boson state vectors The effect of the Pauli exclusion principle is reflected generally in the boson operators and the boson state vectors by the mapping. While, if we restrict the types of phonon excitation modes and the number of phonon excitations so that zero eigenvalues do not appear in the norm matrices of the multiphonon state vectors, then \[\hat{T}_{B}=\breve{1}_{B}, \tag{78}\] holds. In this case, the ideal boson state vectors \(|N;t)\), which do not bear the effect of the Pauli exclusion principle, become the physical state vectors. As a result, all effects of the Pauli exclusion principle are fully reflected in the mapped operators. In order that the boson expansion method is practical, the phonon excitation modes and the maximum number of excitations should be chosen so that the ideal boson state vectors become the physical state vectors [13]. ### Boson expansions as a small parameter expansion In this subsection, we obtain the norm operator and the other mapped operators in the boson expansion being a small parameter expansion, where \(\Gamma_{q}^{\mu\mu^{\prime}}\) are regarded as of the order of magnitude \(O(\Gamma)\). 3.1 On the conditions for being a small parameter expansion and the evaluation of the order of magnitude of each term of expansions For realizing a small parameter expansion where the boson approximation becomes the zeroth order approximation, \(\hat{Z}\approx\breve{1}_{B}\) must hold as the zeroth order approximation. For that purpose, it is necessary to limit the type of mode and the number of phonon excitations in the mapping operator so that zero eigenvalues do not appear in the norm matrices of the multiphonon state vectors. This is the same condition for the ideal boson state vectors to become physical. 
This condition is necessary but not sufficient, however. Denoting the matrix each element of which is \(\langle t_{1}^{\prime}t_{2}^{\prime}|t_{1}t_{2}\rangle\) as \({\bf Z}(N)\), \({\bf Z}^{(A)}(N)\) is expressed as \[{\bf Z}^{(A)}(N)=\left(\begin{array}{cc}{\bf Z}(N)&{\bf W}(N)\\ {\bf W}(N)^{T}&{\bf Z}^{\prime}(N).\end{array}\right), \tag{79}\] As shown in the appendix, if \({\bf W}(2)={\bf 0}(2)\), i.e. \(Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=0\), then \({\bf W}(N)={\bf 0}(N)\) for \(N\geq 3\). Hence, in this case, we obtain \[{\bf Z}^{(A)}(N)=\left(\begin{array}{cc}{\bf Z}(N)&{\bf 0}(N)\\ {\bf 0}(N)^{T}&{\bf Z}^{\prime}(N)\end{array}\right). \tag{80}\] Substituting this into Eq.(50), \[{\bf Z}(N)^{2}=(2N-1)!!{\bf Z}(N), \tag{81a}\] \[{\bf Z}^{\prime}(N)^{2}=(2N-1)!!{\bf Z}^{\prime}(N), \tag{81b}\] are obtained. From Eq. (81a), we obtain \[\hat{Z}(N)^{2}=(2N-1)!!\hat{Z}(N), \tag{82}\] from which we find \[\hat{Z}(N)=(2N-1)!!\hat{T}_{B}(N)=(2\hat{N}_{B}-1)!!\hat{T}_{B}(N),\quad\hat{N }_{B}=\sum_{t}b_{t}^{\dagger}b_{t}, \tag{83a}\] \[\hat{Z}=(2\hat{N}_{B}-1)!!\hat{T}_{B}. \tag{83b}\] \(\{t\}\) and \(N_{max}\) are set so that no zero eigenvalue appears in \({\bf Z}(N)\). It is \({\bf Z}^{\prime}(N)\) that has zero eigenvalues. Therefore the eigenvalues of \({\bf Z}(N)\) are only \((2N-1)!!\). \(\hat{T}_{B}(N)=\hat{1}_{B}(N)\), and then \(\hat{T}_{B}=\hat{1}_{B}\) hold. Even so, \(\hat{Z}\approx\hat{1}_{B}\) does not hold as the zeroth order approximation, and the boson expansions can not be obtained as the small parameter expansion. \({\bf W}(2)\) must not be a zero matrix if the small parameter expansion holds. We investigate the case of N=2 to establish the small parameter expansion and an order evaluation of the terms in them. Substituting Eq. (79) into Eq. (50) and taking \(N=2\), we obtain \[{\bf Z}(2)^{2}+{\bf W}(2){\bf W}(2)^{T}=3{\bf Z}(2), \tag{84a}\] \[{\bf Z}(2){\bf W}(2)+{\bf W}(2){\bf Z}^{\prime}(2)=3{\bf W}(2),\] (84b) \[{\bf W}(2)^{T}{\bf W}(2)+{\bf Z}^{\prime}(2)^{2}=3{\bf Z}^{\prime}(2), \tag{84c}\] from which we derive \[\sum_{\mu\mu^{\prime}}Y(t^{\prime}_{1}\mu\mu^{\prime}t^{\prime}_{2})Y(t_{1}\mu \mu^{\prime}t_{2})=4((t^{\prime}_{1}t^{\prime}_{2}|t_{1}t_{2}))-2Y(t^{\prime}_ {1}t_{1}t_{2}t^{\prime}_{2}), \tag{85a}\] \[\sum_{\mu\mu^{\prime}}Y(t^{\prime}_{1}\mu\mu^{\prime}t^{\prime}_{2})Y(\bar{t}_ {1}\mu\mu^{\prime}\mu_{1})+2Y(t^{\prime}_{1}\bar{t}_{1}\mu_{1}t^{\prime}_{2}) =0,\] (85b) \[\sum_{\mu\mu^{\prime}}Y(\bar{t}^{\prime}_{1}\mu\mu^{\prime}\mu^{\prime}_{1})Y( \bar{t}_{1}\mu\mu^{\prime}\mu_{1})=4((\bar{t}^{\prime}_{1}\mu^{\prime}_{1}| \bar{t}_{1}\mu_{1}))-2Y(\bar{t}^{\prime}_{1}\bar{t}_{1}\mu_{1}\mu^{\prime}_{1}), \tag{85c}\] Since \({\bf Z}^{(A)}(2)\) has zero eigenvalues [13], these relations include some parts where the small parameter expansion breaks down. \(Y(\mu_{1}\mu_{2}\mu_{3}\mu_{4})\sim O(\Gamma^{2})\) should hold. Therefore if \(\mu\)-sums do not affect the evaluation of the order of magnitude, these equations have discrepancies in the order of magnitude of each term. The naive evaluation does not hold, and we must correctly evaluate the case where we take \(\mu\)-sum. We choose \(\{t\}\) so that the small parameter expansion holds in any situation. \(\sum_{tt^{\prime}}Y(t_{1}tt^{\prime}t_{2})Y(t^{\prime}_{1}tt^{\prime}t^{ \prime}_{2})\) should, then, be estimated as \(O(\Gamma^{4})\). To find out more about \(\bar{t}\)-sum, we take up \(\sum_{\mu}Y(t_{1}t_{2}t_{3}\mu)\Gamma^{\mu t_{4}}_{q}\). 
Because we choose \(\{t\}\) so that \(\sum_{t}Y(t_{1}t_{2}t_{3}t)\Gamma^{tt_{4}}_{q}\sim O(\Gamma^{3})\) hold, then we obtain \[\sum_{\mu}Y(t_{1}t_{2}t_{3}\mu)\Gamma^{\mu t_{4}}_{q}=\sum_{\bar{t}}Y(t_{1}t_ {2}t_{3}\bar{t})\Gamma^{\bar{t}t_{4}}_{q}+O(\Gamma^{3}). \tag{86}\] While \[\sum_{\mu}Y(t_{1}t_{2}t_{3}\mu)\Gamma^{\mu t_{4}}_{q}=\sum_{q^{\prime}q^{ \prime\prime}}\sum_{\alpha\beta\gamma}\varphi_{q}(\alpha\beta)\varphi_{q^{ \prime}}(\gamma\alpha)\varphi_{q^{\prime\prime}}(\gamma\alpha)(\Gamma^{t_{1}t _{2}}_{q^{\prime}}\Gamma^{t_{3}t_{4}}_{q^{\prime\prime}}+\Gamma^{t_{1}t_{3}} _{q^{\prime}}\Gamma^{t_{2}t_{4}}_{q^{\prime\prime}}), \tag{87}\] holds, which indicates that the order of the right-hand side is \(O(\Gamma^{2})\). Therefore the estimation of \(\bar{t}\)-sum should become as \[\sum_{\bar{t}}Y(t_{1}t_{2}t_{3}\bar{t})\Gamma^{\bar{t}t_{4}}_{q}\sim O(\Gamma^ {2}). \tag{88}\] This indicates that if we take a single \(\bar{t}\)-sum, we should estimate its magnitude by one order lower. Based on this evaluation, we evaluate \[\sum_{t\bar{t}}Y(t_{1}t\bar{t}t_{2})Y(t^{\prime}_{1}t\bar{t}t^{\prime}_{2}) \sim O(\Gamma^{3}), \tag{89}\] By applying these order evaluations to Eqs. (85), we obtain \[\sum_{\vec{t}\vec{t}^{\prime}}Y(\mu_{1}^{\prime}\vec{t}\vec{t}^{\prime}\mu_{2}^{ \prime})Y(\mu_{1}\vec{t}\vec{t}^{\prime}\mu_{2})=4((\mu_{1}^{\prime}\mu_{2}^{ \prime}|\mu_{1}\mu_{2}))-2Y(\mu_{1}^{\prime}\mu_{1}\mu_{2}\mu_{2}^{\prime})+O( \Gamma^{3}), \tag{90a}\] \[\sum_{\vec{t}\vec{t}^{\prime}}Y(t_{1}^{\prime}\vec{t}\vec{t}^{\prime}t_{2}^{ \prime})Y(\bar{t}_{1}\bar{t}\vec{t}^{\prime}\mu_{1})+2Y(t_{1}^{\prime}\bar{t}_ {1}\mu_{1}t_{2}^{\prime})=O(\Gamma^{3}), \tag{90b}\] \[\sum_{\vec{t}\vec{t}^{\prime}}Y(\vec{t}_{1}^{\prime}\vec{t}\vec{t}^{\prime}\mu _{1}^{\prime})Y(\bar{t}_{1}\bar{t}\vec{t}^{\prime}\mu_{1})=4((\vec{t}_{1}^{ \prime}\mu_{1}^{\prime}|\bar{t}_{1}\mu_{1}))-2Y(\vec{t}_{1}^{\prime}\bar{t}_{ 1}\mu_{1}\mu_{1}^{\prime})+O(\Gamma^{3}). \tag{90c}\] We can identify that the parts where the double \(\bar{t}\)-sums are performed across two coefficients are responsible for the failure of the small parameter expansion. Eqs. (90) become conditions for the small parameter expansion to hold. #### 4.3.2 Boson expansions of mapped operators as the small parameter expansion Here we perform the boson expansions of the mapped operators as the small parameter expansion. Eq. (43b) indicates that we can derive the boson expansions of \((O_{F})_{\xi}\) from those of the norm operator \(\hat{Z}\) and \(\widetilde{O_{F}}\). We give the terms of the boson expansions up to the order of magnitude \(O(\Gamma^{4})\). From Eq. (72d), its Hermitian conjugate, and Eq. (73), we find the recurrence formulae for obtaining the boson expansions of \(\hat{Z}(N)\), \(\widetilde{X_{\vec{t}^{\prime}}}(N)\), and \(\widetilde{X_{\vec{t}}^{\dagger}}(N)\) up to the desired order of magnitude. These recurrence formulae generate no parts where double \(\bar{t}\)-sums are performed across two coefficients in the expansions, which makes it possible to avoid convergence difficulty caused by them. 
The recurrence formulae of \(\hat{Z}(N)\) are as follows: \[\hat{Z}(N)=\sum_{k=1}^{4}\hat{Z}^{(k)}(N)+O(\Gamma^{5});\quad\hat{Z}^{(k)}(N )\sim O(\Gamma^{k}), \tag{91a}\] \[\hat{Z}^{(0)}(N)=\frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(0)}(N-1)b_{t}, \tag{91b}\] \[\hat{Z}^{(1)}(N)=0, \tag{91c}\] \[\hat{Z}^{(2)}(N) = \frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(2)}(N-1)b_{t}\] \[-\frac{1}{2N}\sum_{t_{1}t_{2}}\sum_{\begin{subarray}{c}t_{1}^{ \prime}t_{2}^{\prime}\\ \end{subarray}}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t _{2}}^{\dagger}b_{t_{1}^{\prime}}\hat{Z}^{(0)}(N-1)b_{t_{2}^{\prime}},\] \[\hat{Z}^{(3)}(N) = \frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(3)}(N-1)b_{t}\] \[+\frac{1}{4N}\sum_{t_{1}t_{2}t_{3}}\sum_{t_{1}^{\prime}t_{2}^{ \prime}t_{3}^{\prime}}\sum_{\bar{t}}Y(t_{3}^{\prime}t_{1}t_{2}\bar{t})Y(\bar{t} t_{1}^{\prime}t_{2}^{\prime}t_{3})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}\hat{Z}^{( 0)}(N-1)b_{t_{3}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}b_{t_{3}^{ \prime}},\] \[\hat{Z}^{(4)}(N)=\frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(4)}(N-1)b_{t}- \frac{1}{2N}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime}}Y(t_{2}^{\prime }t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}b_{t_{1}^{ \prime}}\hat{Z}^{(2)}(N-1)b_{t_{2}^{\prime}}\] \[-\frac{1}{8N}\sum_{t_{1}t_{2}t_{3}t_{4}}\sum_{t_{1}^{\prime}t_{2}^{\prime}t_{3 }^{\prime}t_{4}^{\prime}}\sum_{\bar{t}^{\prime}}Y(t_{4}^{\prime}t_{1}t_{2}\bar {t})Y(\bar{t}t_{2}^{\prime}t_{3}^{\prime}\bar{t}^{\prime})Y(\bar{t}^{\prime}t _{3}t_{4}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}b_{t_{3}}^{ \dagger}b_{t_{4}}^{\dagger}b_{t_{1}^{\prime}}\hat{Z}^{(0)}(N-1)b_{t_{2}^{ \prime}}b_{t_{3}^{\prime}}b_{t_{4}^{\prime}}. \tag{91f}\] The solution of Eq,(91b) is easily obtained as \[\hat{Z}^{(0)}(N)=\frac{1}{N!}\sum_{t_{1}t_{2}\cdots t_{N}}b_{t_{1}}^{\dagger} b_{t_{2}}^{\dagger}\cdots b_{t_{N}}^{\dagger}\hat{Z}(0)b_{t_{1}}b_{t_{2}} \cdots b_{t_{N}}=\hat{1}_{B}(N). \tag{92}\] Substituting it into Eq. (91d) and using Eq. (77), we obtain \[\hat{Z}^{(2)}(N) = \frac{1}{N}\sum_{t}b_{t}^{\dagger}\hat{Z}^{(2)}(N-1)b_{t}\] \[-\frac{1}{2N}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime} }Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}\hat{1}_{B}(N-2)b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}.\] For finding the solution, assuming it as \[\hat{Z}^{(2)}(N)=y^{(2)}(N)\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime }}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}\hat{1}_{B}(N-2)b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}, \tag{93b}\] substituting this into the recurrence formula, and using \[\sum_{t}b_{t}^{\dagger}\hat{1}_{B}(N)b_{t}=(N+1)\hat{1}_{B}(N+1), \tag{93c}\] we find \[y^{(2)}(N)=\frac{N-2}{N}y^{(2)}(N-1)-\frac{1}{2N}. \tag{93d}\] \(y^{(2)}(N)=-1/4\) is the solution, and we obtain \[\hat{Z}^{(2)}(N)=-\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{ \prime}}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}\hat{1}_{B}(N-2)b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}. \tag{93e}\] Following the same procedure in order, we can obtain the solution of the recurrence formulae for each order of magnitude. Organizing the solutions obtained in this way using Eq. 
(77), we finally obtain \[\hat{Z}(N)=\hat{\mathcal{Z}}\hat{1}_{B}(N)=\hat{1}_{B}(N)\hat{\mathcal{Z}}=\hat{1} _{B}(N)\hat{\mathcal{Z}}\hat{1}_{B}(N), \tag{94a}\] \[\hat{\mathcal{Z}}=\hat{\mathcal{Z}}^{(0)}+\hat{\mathcal{Z}}^{(2)}+ \hat{\mathcal{Z}}^{(3)}+\hat{\mathcal{Z}}^{(4)}+O(\Gamma^{5}),\] (94b) \[\hat{\mathcal{Z}}^{(0)}=\hat{1}_{B},\] (94c) \[\hat{\mathcal{Z}}^{(2)}=-\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2 }^{\prime}}Y(t_{2}^{\prime}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2 }}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}},\] (94d) \[\hat{\mathcal{Z}}^{(3)}=\frac{1}{12}\sum_{t_{1}t_{2}t_{3}}\sum_{t_{1}^{ \prime}t_{2}^{\prime}t_{3}^{\prime}}\sum_{\bar{t}}Y(t_{3}^{\prime}t_{1}t_{2} \bar{t})Y(\bar{t}t_{1}^{\prime}t_{2}^{\prime}t_{3})b_{t_{1}}^{\dagger}b_{t_{2 }}^{\dagger}b_{t_{3}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}b_{t_{3}^{ \prime}},\] (94e) \[\hat{\mathcal{Z}}^{(4)}= \hat{\mathcal{Z}}^{(4)}_{in}+\hat{\mathcal{Z}}^{(4)}_{out}\] (94f) \[\hat{\mathcal{Z}}^{(4)}_{out}=-\frac{1}{32}\sum_{t_{1}t_{2}t_{3}t_ {4}}\sum_{t_{1}^{\prime}t_{2}^{\prime}t_{3}^{\prime}}\sum_{\bar{t}\bar{t}^{ \prime}}Y(t_{4}^{\prime}t_{1}t_{2}\bar{t})Y(\bar{t}t_{2}^{\prime}t_{3}^{ \prime}\bar{t}^{\prime})Y(\bar{t}^{\prime}t_{3}t_{4}t_{1}^{\prime})b_{t_{1}}^{ \dagger}b_{t_{2}}^{\dagger}b_{t_{3}}^{\dagger}b_{t_{4}^{\prime}}b_{t_{1}^{ \prime}}b_{t_{2}^{\prime}}b_{t_{3}^{\prime}}b_{t_{4}^{\prime}}.\] From these results, we can easily find the norm operator as \[\hat{Z} = \hat{\mathcal{Z}}\hat{1}_{B}=\hat{1}_{B}\hat{\mathcal{Z}}=\hat{1 }_{B}\hat{\mathcal{Z}}\hat{1}_{B}, \tag{95}\] \[\hat{\mathcal{Z}} =\hat{1}_{B}+\hat{\mathcal{Y}},\] \[\hat{\mathcal{Y}}=\hat{\mathcal{Y}}_{in}+\hat{\mathcal{Y}}_{out},\] \[\hat{\mathcal{Y}}_{in}=\hat{\mathcal{Z}}^{(2)}+\hat{\mathcal{Z}}^{( 4)}_{in}+O(\Gamma^{5}),\] \[\hat{\mathcal{Y}}_{out}=\hat{\mathcal{Z}}^{(3)}+\hat{\mathcal{Z}} ^{(4)}_{out}+O(\Gamma^{5}).\] The \(\xi\)-th power of \(\hat{Z}\) becomes \[\hat{Z}^{\xi}=\hat{\mathcal{Z}}^{\xi}\hat{1}_{B}=\check{1}_{B} \hat{\mathcal{Z}}^{\xi}, \tag{96}\] \[\hat{\mathcal{Z}}^{\xi}=\hat{1}_{B}+\xi\hat{\mathcal{Y}}+\frac{1} {2}\xi(\xi-1)\hat{\mathcal{Y}}^{2}+O(\Gamma^{6}).\] Once \(\hat{Z}(N)\) is known, we can obtain \(\widetilde{X_{\bar{t}}}(N)\) from the following recurrence formula derived from Eqs. 
(72d), \[\widetilde{X_{\bar{t}^{\prime}}}(N) = -\frac{1}{2}\sum_{t_{1}}\sum_{t_{1}^{\prime}t_{2}^{\prime}}Y(\bar {t}^{\prime}t_{1}^{\prime}t_{2}^{\prime}t_{1})\hat{Z}(N)b_{t_{1}}^{\dagger}b_{ t_{1}^{\prime}}b_{t_{2}^{\prime}}\] \[+\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime}t_ {3}^{\prime}}\sum_{\bar{t}}Y(\bar{t}^{\prime}t_{2}^{\prime}t_{3}^{\prime}\bar{t })Y(\bar{t}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{\dagger}b_{t_{ 2}^{\prime}}\hat{Z}(N-1)b_{t_{2}^{\prime}}b_{t_{3}^{\prime}}\] \[+\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{t_{1}^{\prime}t_{2}^{\prime}} \sum_{\bar{t}}\sum_{\bar{t}_{1}^{\prime}}Y(\bar{t}^{\prime}t_{1}^{\prime}t_{2}^ {\prime}\bar{t})Y(\bar{t}t_{1}t_{2}\bar{t}_{1}^{\prime})b_{t_{1}}^{\dagger}b_{t _{2}}^{\dagger}\widetilde{X_{\bar{t}_{1}^{\prime}}}(N-2)b_{t_{1}^{\prime}}b_{t _{2}^{\prime}},\] and find the solutions as \[\widetilde{X_{\vec{t}^{\prime}}}(N)=(X_{\vec{t}^{\prime}})_{L}\hat{\cal Z} \hat{1}_{B}(N+1)=\hat{1}_{B}(N)(X_{\vec{t}^{\prime}})_{L}\hat{\cal Z}, \tag{98a}\] \[(X_{\vec{t}^{\prime}})_{L}=(X_{\vec{t}^{\prime}})_{L}^{(2)}+(X_{\vec{t}^{\prime}} )_{L}^{(3)}+(X_{\vec{t}^{\prime}})_{L}^{(4)}+O(\Gamma^{5}),\] (98b) \[(X_{\vec{t}^{\prime}})_{L}^{(2)} = -\frac{1}{2}\sum_{t_{1}}\sum_{\vec{t}_{1}^{\prime}\vec{t}_{2}^{ \prime}}Y(\vec{t}^{\prime}t_{1}^{\prime}t_{2}^{\prime}t_{1})b_{t_{1}}^{\dagger} b_{t_{1}^{\prime}}b_{t_{2}^{\prime}},\] \[(X_{\vec{t}^{\prime}})_{L}^{(3)} = \frac{1}{4}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}\vec{t}_{2}^ {\prime}\vec{t}_{3}^{\prime}}\sum_{\vec{t}}Y(\vec{t}^{\prime}t_{2}^{\prime}t_{ 3}^{\prime}\hat{\vec{t}})Y(\vec{t}t_{1}t_{2}t_{1}^{\prime})b_{t_{1}}^{\dagger} b_{t_{2}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}b_{t_{3}^{\prime}},\] (98c) \[(X_{\vec{t}^{\prime}})_{L}^{(4)} = [\hat{\cal Z}^{(2)},(X_{\vec{t}^{\prime}})_{L}^{(2)}]\] (98d) \[-\frac{1}{8}\sum_{t_{1}t_{2}t_{3}}\sum_{\vec{t}_{1}^{\prime}t_{2}^ {\prime}t_{3}^{\prime}}\sum_{\vec{t}\vec{t}^{\prime\prime}}Y(\vec{t}^{\prime}t _{1}^{\prime}t_{2}^{\prime}\hat{\vec{t}})Y(\vec{t}t_{1}t_{2}\vec{t}^{\prime \prime})Y(\vec{t}^{\prime\prime}t_{3}^{\prime}t_{4}^{\prime}t_{3})b_{t_{1}}^{ \dagger}b_{t_{2}}^{\dagger}b_{t_{3}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{ \prime}}b_{t_{3}^{\prime}}^{\prime}b_{t_{4}^{\prime}},\] \[[\hat{\cal Z}^{(2)},(X_{\vec{t}^{\prime}})_{L}^{(2)}]=-\frac{1}{4 }\sum_{t_{1}}\sum_{\vec{t}_{1}^{\prime}\vec{t}_{2}^{\prime}}Y(\vec{t}^{ \prime}tt^{\prime}t_{1})Y(t_{1}^{\prime}tt^{\prime}t_{2}^{\prime})b_{t_{1}}^{ \dagger}b_{t_{1}^{\prime}}b_{t_{2}}\] \[-\frac{1}{4}\sum_{t_{1}t_{2}}\sum_{\vec{t}_{1}^{\prime}\vec{t}_{2 }^{\prime}t_{3}^{\prime}}\sum_{\vec{t}}\left\{2Y(t_{1}t_{1}^{\prime}t_{2}^{ \prime}t)Y(t\vec{t}^{\prime}t_{2}t_{3}^{\prime})-Y(\vec{t}^{\prime}t_{1}^{ \prime}t_{2}^{\prime}t)Y(tt_{1}t_{2}t_{3}^{\prime})\right\}b_{t_{1}}^{\dagger} b_{t_{2}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}^{\prime}}b_{t_{3}^{\prime}}^{ \prime}.\] From these, we obtain \[\widetilde{X_{\vec{t}^{\prime}}}=\sum_{N=0}^{N_{max}-1}\widetilde{X_{\vec{t}^{ \prime}}}(N)=(X_{\vec{t}^{\prime}})_{L}\hat{\cal Z}\hat{1}_{B}=\hat{1}_{B}^{(-1) }(X_{\vec{t}^{\prime}})_{L}\hat{\cal Z}=\hat{1}_{B}(X_{\vec{t}^{\prime}})_{L} \hat{\cal Z}\hat{1}_{B}, \tag{99}\] where \[\hat{1}_{B}^{(\Delta N)}=\sum_{N=0}^{N_{max}+\Delta N}\hat{1}_{B}(N). \tag{100}\] \(\hat{1}_{B}^{(0)}=\hat{1}_{B}\) and \(\hat{1}_{B}\hat{1}_{B}^{(-1)}=\hat{1}_{B}^{(-1)}\hat{1}_{B}=\hat{1}_{B}^{(-1)}\) hold. Organizing the Hermitian conjugate of Eq. 
(98a), we find \[\begin{array}{rcl}\widetilde{X_{\vec{t}}^{\dagger}}(N)&=&((X_{\vec{t}})_{L} \hat{\cal Z}\hat{1}_{B}(N+1))^{\dagger}=\hat{1}_{B}(N+1)\hat{\cal Z}(X_{\vec{t }})_{L}^{\dagger}\\ &=&\hat{\cal Z}(X_{\vec{t}})_{L}^{\dagger}\hat{1}_{B}(N)=\left\{\hat{\cal Z}(X_ {\vec{t}})_{L}^{\dagger}\hat{\cal Z}^{-1}\right\}\hat{\cal Z}\hat{1}_{B}(N)\\ &=&(X_{\vec{t}}^{\dagger})_{L}\hat{\cal Z}\hat{1}_{B}(N),\end{array}\] (101a) where \[(X_{\vec{t}}^{\dagger})_{L}=\hat{\cal Z}(X_{\vec{t}})_{L}^{\dagger}\hat{\cal Z }^{-1}, \tag{101b}\] \[(X_{\vec{t}}^{\dagger})_{L}=(X_{\vec{t}}^{\dagger})_{L}^{(2)}+(X_{\vec{t }}^{\dagger})_{L}^{(3)}+(X_{\vec{t}}^{\dagger})_{L}^{(4)}+O(\Gamma^{5}),\] (101c) \[(X_{\vec{t}}^{\dagger})_{L}^{(2)} = ((X_{\vec{t}})_{L}^{(2)})^{\dagger},\quad(X_{\vec{t}}^{\dagger})_{L} ^{(3)}=((X_{\vec{t}})_{L}^{(3)})^{\dagger},\] \[(X_{\vec{t}}^{\dagger})_{L}^{(4)} = ((X_{\vec{t}})_{L}^{(4)})^{\dagger}+[\hat{\cal Z}^{(2)},((X_{\vec{t }})_{L}^{(2)})^{\dagger}]\] \[= -\frac{1}{8}\sum_{t_{1}t_{2}t_{3}t_{4}}\sum_{\vec{t}_{1}^{\prime}t _{2}^{\prime}t_{3}^{\prime}}\sum_{\vec{t}\vec{t}^{\prime\prime}}Y(\vec{t}t_{1}t_{2 }\vec{t}^{\prime})Y(\vec{t}^{\prime}t_{1}^{\prime}t_{2}^{\prime}\vec{t}^{\prime \prime})Y(\vec{t}^{\prime\prime}t_{3}t_{4}t_{3}^{\prime})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}b_{t_{3}}^{\dagger}b_{t_{4}}^{\dagger}b_{t_{1}^{\prime}}b_{t_{2}}^{ \prime}b_{t_{3}}^{\dagger}b_{t_{4}}^{\prime}b_{t_{1}^{\prime}}^{\prime}b_{t_{2}}^{ \prime}b_{t_{3}^{\prime}}^{\prime},\] and obtain \[\widetilde{X_{\tilde{t}}^{\dagger}}=\sum_{N=0}^{N_{max}-1}\widetilde{X_{\tilde{t} }^{\dagger}}(N)=(X_{\tilde{t}}^{\dagger})_{L}\hat{\mathcal{Z}}\dot{\mathrm{I}}_ {B}^{(-1)}=\breve{\mathrm{I}}_{B}(X_{\tilde{t}}^{\dagger})_{L}\hat{\mathcal{Z }}=\breve{\mathrm{I}}_{B}(X_{\tilde{t}}^{\dagger})_{L}\hat{\mathcal{Z}}\dot{ \mathrm{I}}_{B}. \tag{102}\] In this way, we can obtain \((X_{\tilde{t}^{\prime}})_{L}\) and \((X_{\tilde{t}}^{\dagger})_{L}\) as infinite expansions. Dealing with the terms up to \(O(\Gamma^{4})\), we have found that \(\hat{Z}(N)=\hat{\mathcal{Z}}\hat{1}_{B}(N)=\hat{1}_{B}(N)\hat{\mathcal{Z}}\), \(\widetilde{X_{\tilde{t}^{\prime}}}(N)=\widetilde{X_{\tilde{t}^{\prime}}}\hat{ 1}_{B}(N+1)=\hat{1}_{B}(N)\widetilde{X_{\tilde{t}^{\prime}}}\), and \(\widetilde{X_{\tilde{t}}^{\dagger}}(N)=\hat{1}_{B}(N+1)\widetilde{X_{\tilde{t} }^{\dagger}}=\widetilde{X_{\tilde{t}}^{\dagger}}\hat{1}_{B}(N)\). Oppositely, assuming that these hold for any \(N\), and substituting them into Eq. (72d) and Eq. (73), we can find the relational expressions for \(\hat{\mathcal{Z}}\), \(\widetilde{X_{\tilde{t}^{\prime}}}\), and \(\widetilde{X_{\tilde{t}}^{\dagger}}\), and solve these for each order of magnitude, then we obtain the same results. This result suggests that the \(N\) dependency of these operators found up to \(O(\Gamma^{4})\) generally holds. Applying the above results to Eqs. 
(72) and summing up \(N\), we obtain \[\begin{array}{lcl}\widetilde{X_{t^{\prime}}}&=&(X_{t^{\prime}})_{L}\hat{ \mathcal{Z}}\dot{\mathrm{I}}_{B}=\breve{\mathrm{I}}_{B}^{(-1)}(X_{t^{\prime}} )_{L}\hat{\mathcal{Z}}=\breve{\mathrm{I}}_{B}(X_{t^{\prime}})_{L}\hat{ \mathcal{Z}}\breve{\mathrm{I}}_{B},\\ (X_{t^{\prime}})_{L}&=&(X_{t^{\prime}}^{\dagger})_{D},\end{array} \tag{103a}\] \[\begin{array}{lcl}\widetilde{X_{t}^{\dagger}}&=&\breve{\mathrm{I}}_{B}(X_{t }^{\dagger})_{L}\hat{\mathcal{Z}}=(X_{t}^{\dagger})_{L}\hat{\mathcal{Z}} \breve{\mathrm{I}}_{B}^{(-1)}=\breve{\mathrm{I}}_{B}(X_{t}^{\dagger})_{L} \hat{\mathcal{Z}}\breve{\mathrm{I}}_{B},\\ (X_{t}^{\dagger})_{L}&=&(X_{t}^{\dagger})_{D}+(X_{t}^{\dagger})_{out}\\ (X_{t}^{\dagger})_{out}&=&-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{ \tilde{t}^{\prime}_{1}}Y(tt_{1}t_{2}\tilde{t}^{\prime}_{1})b_{t_{1}}^{\dagger }b_{t_{2}}^{\dagger}(X_{\tilde{t}^{\prime}_{1}})_{L},\end{array}\] (103b) \[\begin{array}{lcl}\widetilde{B_{q}}&=&(B_{q})_{L}\hat{\mathcal{Z}} \breve{\mathrm{I}}_{B}=\breve{\mathrm{I}}_{B}(B_{q})_{L}\hat{\mathcal{Z}}= \breve{\mathrm{I}}_{B}(B_{q})_{L}\hat{\mathcal{Z}}\breve{\mathrm{I}}_{B},\\ (B_{q})_{L}&=&(B_{q})_{D}+(B_{q})_{out},\\ (B_{q})_{out}&=&\sum_{t}\sum_{\tilde{t}^{\prime}}\Gamma_{q}^{\tilde{t}^{ \prime}t}b_{t}^{\dagger}(X_{\tilde{t}^{\prime}})_{L}.\end{array} \tag{103c}\] While, from \(B_{q}=B_{\tilde{q}}^{\dagger}\), \(\widetilde{B_{q}}=\widetilde{B_{q}}^{\dagger}\) and we find another expression for \(\widetilde{B_{q}}\) as \[\begin{array}{lcl}\widetilde{B_{q}}&=&\breve{\mathrm{I}}_{B}\hat{\mathcal{Z }}(B_{\tilde{q}})_{L}^{\dagger}=\hat{\mathcal{Z}}(B_{\tilde{q}})_{L}^{\dagger }\breve{\mathrm{I}}_{B}=\breve{\mathrm{I}}_{B}\hat{\mathcal{Z}}(B_{\tilde{q}} )_{L}^{\dagger}\breve{\mathrm{I}}_{B},\\ (B_{\tilde{q}})_{L}^{\dagger}&=&(B_{q})_{D}+(B_{\tilde{q}})_{out}^{\dagger},\\ (B_{\tilde{q}})_{out}^{\dagger}&=&\sum_{t}\sum_{\tilde{t}}\Gamma_{q}^{\prime \tilde{t}^{\prime}\tilde{t}}(X_{\tilde{t}})_{L}^{\dagger}b_{t^{\prime}},\end{array} \tag{104}\] where we use \((B_{q})_{D}=(B_{\tilde{q}})_{D}^{\dagger}\). Using two types of expressions for \(\widetilde{B_{q}}\), we obtain \[\breve{\mathrm{I}}_{B}[(B_{q})_{D},\hat{\mathcal{Z}}]\breve{\mathrm{I}}_{B}= \breve{\mathrm{I}}_{B}\{\hat{\mathcal{Z}}(B_{\tilde{q}})_{out}^{\dagger}-(B_{ q})_{out}\hat{\mathcal{Z}}\}\breve{\mathrm{I}}_{B}. \tag{105}\] From Eq. (24) and Eq. (96), we can express the mapping operator as \[U_{\xi}=\hat{\mathcal{Z}}^{\xi-\frac{1}{2}}\widetilde{U}, \tag{106}\] and Eq. (43) becomes as follows: \[|\psi^{\prime}\rangle_{\xi}=\hat{\cal Z}^{\xi-\frac{1}{2}}\widetilde{|\psi^{ \prime}\rangle},\qquad_{-\xi}(\psi|=\widetilde{(\psi|}\hat{\cal Z}^{-\xi-\frac{ 1}{2}}, \tag{107a}\] \[(O_{F})_{\xi}=\hat{\cal Z}^{\xi-\frac{1}{2}}\widetilde{O_{F}}\hat{\cal Z}^{-\xi- \frac{1}{2}}, \tag{107b}\] If \(O_{F}\) is a phonon creation, a phonon annihilation, or a scattering operator, \(\widetilde{O_{F}}\!=\!\breve{1}_{B}(O_{F})_{L}\hat{\cal Z}^{\breve{\imath}} \!_{B}\) holds. Therefore the mapped \(O_{F}\) can be expressed as \[(O_{F})_{\xi}=\breve{1}_{B}(O_{F})_{B(\xi)}\breve{1}_{B};\quad(O_{F})_{B(\xi) }=\hat{\cal Z}^{\xi-\frac{1}{2}}(O_{F})_{L}\hat{\cal Z}^{-\xi+\frac{1}{2}}, \tag{108}\] and \[_{-\xi}(\psi|(O_{F})_{\xi}|\psi^{\prime})_{\xi}=_{-\xi}(\psi|(O_{F})_{B(\xi) }|\psi^{\prime})_{\xi} \tag{109}\] holds. Therefore we can regard \((O_{F})_{\xi}\) as \((O_{F})_{B(\xi)}\) in the physical subspace. 
The boson expansions of \((O_{F})_{B(\xi)}\) become infinite expansions for an arbitrary \(\xi\) because those of \((O_{F})_{L}\) become infinite expansions. For \(\xi\neq 0\), the boson expansions become of the non-Hermitian type. In the case of \(\xi=\frac{1}{2}\), \((O_{F})_{\xi(\frac{1}{2})}=(O_{F})_{L}\) holds. For \(\xi=0\), the boson expansions become the Hermitian type and can be derived using \[\begin{array}{rcl}(O_{F})_{B(0)}&=&\hat{\cal Z}^{-\frac{1}{2}}(O_{F})_{L} \hat{\cal Z}^{\frac{1}{2}}\\ &=&(O_{F})_{L}+\frac{1}{2}[(O_{F})_{L},\hat{\cal Y}]-\frac{3}{8}\hat{\cal Y}[( O_{F})_{L},\hat{\cal Y}]-\frac{1}{8}[(O_{F})_{L},\hat{\cal Y}]\hat{\cal Y}+O( \Gamma^{6}).\end{array} \tag{110}\] The boson expansions of the phonon creation and annihilation operators and the scattering operators are as follows: \[(X_{t^{\prime}})_{B(0)}=(X_{t^{\prime}})_{B(0)in}+(X_{t^{\prime}})_{B(0)out}, \tag{111a}\] \[(X_{t^{\prime}})_{B(0)in}=b_{t^{\prime}}+(X_{t^{\prime}})_{B(0)in}^{(2)}+(X_{ t^{\prime}})_{B(0)in}^{(4)}+O(\Gamma^{5}),\] (111b) \[(X_{t^{\prime}})_{B(0)in}^{(2)}=-\frac{1}{4}\sum_{t_{1}}\sum_{t^{\prime}_{1}t ^{\prime}_{2}}Y(t^{\prime}t^{\prime}_{1}t^{\prime}_{2}t_{1})b^{\dagger}_{t_{ 1}}b_{t^{\prime}_{1}}b_{t^{\prime}_{2}},\] (111c) \[(X_{t^{\prime}})_{B(0)in}^{(4)}=-\frac{1}{32}\sum_{t_{1}}\sum_{t^{\prime}_{1} t^{\prime}_{2}}\sum_{tt^{\prime\prime}}Y(t^{\prime}tt^{\prime\prime}t^{\prime}_{1})Y(t^{ \prime}_{2}tt^{\prime\prime}t^{\prime}_{1})b^{\dagger}_{t_{1}}b_{t^{\prime}_ {1}}b_{t^{\prime}_{2}}\] \[+\frac{1}{96}\sum_{t_{1}t_{2}}\sum_{t^{\prime}_{1}t^{\prime}_{2} t^{\prime}_{3}}\sum_{t}\{2Y(t^{\prime}_{1}t_{1}t^{\prime}t)Y(tt^{\prime}_{2}t^{ \prime}_{3}t_{2})\] (111d) \[-5Y(t^{\prime}t^{\prime}_{1}t^{\prime}_{2}t)Y(tt_{1}t_{2}t^{ \prime}_{3})\}b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b_{t^{\prime}_{1}}b_{t^{ \prime}_{2}}b_{t^{\prime}_{3}},\] \[(X_{t^{\prime}})_{B(0)out}=(X_{t^{\prime}})_{B(0)out}^{(3)}+(X_{t^{ \prime}})_{B(0)out}^{(4)}+O(\Gamma^{5}) \tag{111e}\] \[\begin{split}(X_{t^{\prime}})^{(3)}_{B(0)out}=\frac{1}{24}\sum_{t_{1}t_{2 }}\sum_{t^{\prime}_{1}t^{\prime}_{2}t^{\prime}_{3}}\sum_{\bar{t}}\{2Y(t^{\prime }_{1}t_{1}t^{\prime}\bar{t})Y(\bar{t}t^{\prime}_{2}t^{\prime}_{3}t_{2})\\ +Y(t^{\prime}_{1}t_{1}t_{2}\bar{t})Y(\bar{t}t^{\prime}_{2}t^{ \prime}_{3}t^{\prime})\}b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b^{\prime}_{t_{ 1}}b^{\prime}_{t^{\prime}_{2}}b^{\prime}_{t^{\prime}_{3}},\end{split} \tag{111f}\] \[\begin{split}(X_{t^{\prime}})^{(4)}_{B(0)out}=-\frac{1}{16}\sum_{t_{ 1}t_{2}t_{3}}\sum_{t^{\prime}_{1}t^{\prime}_{2}t^{\prime}_{3}t^{\prime}_{4}} \sum_{\bar{t}\bar{t}^{\prime}}Y(t^{\prime}_{1}t^{\prime}t_{1}\bar{t})Y(\bar{t} t^{\prime}_{2}t^{\prime}_{3}\bar{t}^{\prime})Y(\bar{t}^{\prime}t_{2}t_{3}t^{ \prime}_{4})\\ b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b^{\dagger}_{t_{3}}b^{ \dagger}_{t^{\prime}_{1}}b^{\prime}_{t^{\prime}_{2}}b^{\prime}_{t^{\prime}_{3} }b^{\prime}_{t^{\prime}_{4}}.\end{split}\] (111g) \[\begin{split}(X_{\bar{t}^{\prime}})_{B(0)}=(X_{\bar{t}^{\prime}})_{B( 0)}^{(2)}+(X_{\bar{t}^{\prime}})_{B(0)}^{(3)}+(X_{\bar{t}^{\prime}})_{B(0)}^{( 4)}+O(\Gamma^{5}),\end{split}\] (112a) \[\begin{split}(X_{\bar{t}^{\prime}})_{B(0)}^{(2)}=(X_{\bar{t}^{ \prime}})_{L}^{(2)},\quad(X_{\bar{t}^{\prime}})_{B(0)}^{(3)}=(X_{\bar{t}^{ \prime}})_{L}^{(3)},\end{split}\] (112b) \[\begin{split}(X_{\bar{t}^{\prime}})_{B(0)}^{(4)}=(X_{\bar{t}^{ \prime}})_{L}^{(4)}-\frac{1}{2}[\hat{\mathcal{Z}}^{(2),}(X_{\bar{t}^{\prime}} )_{L}^{(2)}].\end{split}\] (112c) 
\[\begin{split}(B_{q})_{B(0)}&=(B_{q})_{L}+\frac{1}{2} \hat{\mathcal{Z}}\{(B_{\bar{q}})_{out}{}^{\dagger}-(B_{q})_{out}\}+O(\Gamma^{5 }),\end{split}\] (113a) \[\begin{split}(B_{q})_{B(0)in}^{(4)}=\frac{1}{2}\{(B_{q})_{out}^{(k )}+(B_{\bar{q}})_{out}^{(k)}{}^{\dagger}\}\quad(k=2,3),\end{split}\] (113b) \[\begin{split}(B_{q})_{B(0)out}^{(4)}=\frac{1}{2}\{(B_{q})_{out}^{(4) }+(B_{\bar{q}})_{out}^{(4)}{}^{\dagger}\}+\frac{1}{2}\hat{\mathcal{Z}}^{(2)} \{(B_{\bar{q}})_{out}^{(2)}{}^{\dagger}-(B_{q})_{out}^{(2)}\},\end{split}\] (113c) \[\begin{split}(B_{q})_{B(0)out}^{(k)}=\frac{1}{2}\{(B_{q})_{out}^{(k )}+(B_{\bar{q}})_{out}^{(k)}{}^{\dagger}\}\quad(k=2,3),\end{split}\] (113d) \[\begin{split}(B_{q})_{B(0)out}^{(4)}=\frac{1}{2}\{(B_{q})_{out}^{(4) }+(B_{\bar{q}})_{out}^{(4)}{}^{\dagger}\}+\frac{1}{2}\hat{\mathcal{Z}}^{(2)} \{(B_{\bar{q}})_{out}^{(2)}{}^{\dagger}-(B_{q})_{out}^{(2)}\},\end{split}\] (113e) \[\begin{split}\frac{1}{2}\hat{\mathcal{Z}}^{(2)}\{(B_{\bar{q}})_{ out}^{(2)}{}^{\dagger}-(B_{q})_{out}^{(2)}\}=\frac{1}{8}\sum_{t_{1}t_{2}}\sum_{t^{ \prime}_{1}t^{\prime}_{2}}\sum_{tt^{\prime}}\sum_{\bar{t}}\{\Gamma^{t^{\prime }_{1}\bar{t}}Y(\bar{t}\bar{t}t^{\prime}_{2})-\Gamma^{\bar{t}t}_{q}Y(\bar{t}t^{ \prime}_{1}t^{\prime}_{2}t^{\prime})\}Y(tt_{1}t_{2}t^{\prime})\\ b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b_{t^{\prime}_{1}}b_{t^{ \prime}_{2}}\\ +\frac{1}{8}\sum_{t_{1}t_{2}t_{3}}\sum_{t^{\prime}_{1}t^{\prime}_{ 2}t^{\prime}_{3}}\sum_{\bar{t}}\{\Gamma^{t^{\prime}_{1}\bar{t}}Y(\bar{t}t_{1}t ^{\prime}_{2})-\Gamma^{\bar{t}t}_{q}Y(\bar{t}t^{\prime}_{1}t^{\prime}_{2}t_{1})- \Gamma^{\bar{t}t}_{q}Y(\bar{t}t^{\prime}_{1}t^{\prime}_{2}t)\}\\ +\frac{1}{16}\sum_{t_{1}t_{2}t_{3}t_{4}}\sum_{t^{\prime}_{1}t^{ \prime}_{2}t^{\prime}_{3}t^{\prime}_{4}}\sum_{\bar{t}}\{\Gamma^{t^{\prime}_{1} \bar{t}}Y(\bar{t}t_{1}t_{2}t^{\prime}_{2})-\Gamma^{\bar{t}t}_{1}Y(\bar{t}t^{ \prime}_{1}t^{\prime}_{2}t_{2})\}Y(t^{\prime}_{4}t_{3}t_{4}t^{\prime}_{3})\\ b^{\dagger}_{t_{1}}b^{\dagger}_{t_{2}}b^{\dagger}_{t_{3}}b^{ \dagger}_{t_{4}}b^{\prime}_{t_{1}}b^{\prime}_{t^{\prime}_{2}}b^{\prime}_{t^{ \prime}_{3}}b^{\prime}_{t^{\prime}_{4}}.\end{split} \tag{113f}\] Here, we use Eq. (105) to find \((B_{q})_{B(0)}\). From Eq. (103c), \((B_{q})_{out}^{(k)}=\sum_{t}\sum_{\bar{t}^{\prime}}\Gamma^{\bar{t}^{\prime}}_{q}b^ {\dagger}_{t}(X_{\bar{t}^{\prime}})_{L}^{(k)}\). Finally, we deal with the product of operators. Let \(O_{F}\) and \(O^{\prime}_{F}\) be the phonon creation, annihilation operators, or scattering operators, respectively, we can derive the boson expansions of their product as \[(O_{F}O^{\prime}_{F})_{B(\xi)}=\hat{\cal Z}^{\xi-\frac{1}{2}}\widetilde{O_{F}O^{ \prime}_{F}}\hat{\cal Z}^{-\xi-\frac{1}{2}}. \tag{114}\] If \(\widetilde{O_{F}O^{\prime}_{F}}=\breve{1}_{B}(O_{F})_{L}\breve{1}_{B}(O^{ \prime}_{F})_{L}\hat{\cal Z}\breve{1}_{B}\) holds, we obtain \[(O_{F}O^{\prime}_{F})_{B(\xi)}=(O_{F})_{B(\xi)}(O^{\prime}_{F})_{B(\xi)}, \tag{115}\] and if Eq. (40) holds, \[(O_{F}O^{\prime}_{F})_{B(\xi)}\approx(O_{F})_{B(\xi)}(O^{\prime}_{F})_{B(\xi)}. \tag{116}\] In the case that Eq. (115) and Eq. (116) hold, it is sufficient to obtain only the boson expansions of the basic fermion pair operators. Conventional practical boson expansion methods have used, as a matter of course, the approximation of Eq. (116). Eq. (114) makes it possible to judge whether this approximation is good or bad. We present \(\widetilde{O_{F}O^{\prime}_{F}}\) in the appendix. 
We finally point out that \(\bar{t}\)-sum does not need to sum all \(\bar{t}\) in the following cases. \[[[X_{\tau_{1}},X^{\dagger}_{\tau_{2}}],X^{\dagger}_{\tau_{3}}]\approx-\sum_{ \tau^{\prime}}Y(\tau_{1},\tau_{2},\tau_{3},\tau^{\prime})X^{\dagger}_{\tau^{ \prime}}, \tag{117}\] are satisfied for the phonon excitation modes \(\{\tau\}\), and \(\{\tau\}\) contains \(\{t\}\) and is set up such that the small parameter expansion holds, then \(\bar{t}\) not contained in \(\{\tau\}\) can be neglected in \(\bar{t}\)-sum. An example is a case where \(\{\tau\}\) contains a sufficient variety of phonon excitation modes and \[\sum_{\tau}\psi_{\tau}(\alpha\beta)\psi_{\tau}(\alpha^{\prime}\beta^{\prime}) \approx\delta_{\alpha\alpha^{\prime}}\delta_{\beta\beta^{\prime}}-\delta_{ \alpha\beta^{\prime}}\delta_{\beta\alpha^{\prime}} \tag{118}\] are satisfied. In this case, \[a^{\dagger}_{\alpha}a^{\dagger}_{\beta}\approx\sum_{\tau}\psi_{\tau}(\alpha\beta) \tag{119}\] are satisfied, therefore, \[X^{\dagger}_{\tau}\approx 0 \tag{120}\] are satisfied, and Eq. (117) are derived. In this case, however, \(\{\tau\}\) cannot be regarded as \(\{t\}\) because \(\{\tau\}\) contains so sufficient variety of phonon excitation modes that \(\hat{Z}\) includes zero eigenvalues. ### Boson expansions in the case where the double commutators of the phonon operators are closed In this section, we treat the boson expansions in the case where the double commutators of Eq. (8) are closed in \(\{t\}\). If \(Y(t_{1}^{\prime}t\bar{t}t_{2}^{\prime})=0\), then the double commutators of Eq. (8) are closed in \(\{t\}\). For further analysis, we denote more concretely \({\bf W}(2)\) and \({\bf Z}^{\prime}(2)\) in Eq. (79) as follows, \[{\bf W}(2)=\left({\bf W}^{(1)}(2)\ {\bf W}^{(2)}(2)\right), \tag{121a}\] \[{\bf Z}^{\prime}(N)=\left(\begin{array}{cc}{\bf Z}^{(1)}(2)&{{\bf Z}^{\prime }}^{(3)}(2)\\ {{\bf Z}^{\prime}}^{(3)}(2)^{T}&{{\bf Z}^{\prime}}^{(2)}(2)\end{array}\right), \tag{121b}\] where \({\bf W}^{(1)}(2)\) is what becomes a zero matrix when \(Y(t_{1}^{\prime}t\bar{t}t_{2}^{\prime})=0\). Substituting this into Eq. (84b), then \({\bf W}^{(2)}(2){{\bf Z}^{\prime}}^{(3)}(2)^{T}\) becomes a zero matrix, and we obtain \[\sum_{\bar{t}\bar{t}^{\prime}}Y(t_{1}^{\prime}\bar{t}t_{2}^{\prime})Y(t_{1} \bar{t}t^{\prime}\bar{t}_{1})=0, \tag{122}\] which indicates that Eqs. (90) do not hold. It also indicates that if \(Y(t_{1}^{\prime}t\bar{t}t_{2}^{\prime})=0\), then \(Y(t_{1}^{\prime}\bar{t}_{1}\bar{t}_{2}t_{2}^{\prime})=0\) should be satisfied. \({\bf W}(2)={\bf 0}(2)\) holds, and \(|t_{1}t_{2}\rangle\) and \(|\bar{t}\mu\rangle\) become orthogonal. Therefore, if the double commutators of Eq. (8) are closed in \(\{t\}\), the boson expansions are not obtained as the small parameter expansion. Starting with Eqs. (72) and applying \(Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=0\) to them, we derive \(\widetilde{X}_{\bar{t}}^{\dagger}(N)=0\), from which we obtain \(\widetilde{X}_{\bar{t}}=0\) and \(\widetilde{X}_{\bar{t}}^{\dagger}=0\). Therefore, \((X_{\bar{t}^{\prime}})_{\xi}=0\) and \((X_{\bar{t}}^{\dagger})_{\xi}=0\) hold. 
Inversely, if \((X_{\bar{t}^{\prime}})_{\xi}=0\) and \((X_{\bar{t}}^{\dagger})_{\xi}=0\) hold, then \(Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=0\) should hold, that is \(|t_{1}^{\prime}t_{2}^{\prime}\rangle\) and \(|\bar{t}\mu\rangle\) should be orthogonal because \[Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=((t_{1}^{\prime}t_{2}^{\prime}| \bar{t}\mu))-\langle\langle t_{1}^{\prime}t_{2}^{\prime}|\bar{t}\mu\rangle\rangle, \tag{123}\] and \[\langle\langle t_{1}^{\prime}t_{2}^{\prime}|\bar{t}\mu\rangle\rangle=\langle 0 |X_{t_{1}^{\prime}}X_{t_{2}^{\prime}}X_{\bar{t}}^{\dagger}X_{\mu}|0\rangle=(0| (X_{t_{1}^{\prime}})_{\xi}(X_{t_{2}^{\prime}})_{\xi}(X_{\bar{t}}^{\dagger})_ {\xi}(X_{\mu})_{\xi}|0\rangle \tag{124}\] hold. It is a necessary and sufficient condition for \((X_{\bar{t}^{\prime}})_{\xi}=0\) and \((X_{\bar{t}}^{\dagger})_{\xi}=0\) that \(|t_{1}^{\prime}t_{2}^{\prime}\rangle\) and \(|\bar{t}\mu\rangle\) are orthogonal. We also obtain \(\widetilde{X}_{t^{\prime}}=(X_{t^{\prime}})_{D}\hat{Z}\), \(\widetilde{X}_{t}^{\dagger}=(X_{t}^{\dagger})_{D}\hat{Z}\), and \(\widetilde{B}_{q}=(B_{q})_{D}\hat{Z}\). Therefore, \[(O_{F})_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}(O_{F})_{D}\hat{Z}^{-\xi+\frac{1}{2}}, \tag{125}\] where \(O_{F}\) is \(X_{t^{\prime}}\), \(X_{t}^{\dagger}\), or \(B_{q}\). From Eq, (2b) and Eq. (72c), \([(B_{q})_{D},\hat{Z}(N)]=0\) holds, then \[[(B_{q})_{D},\hat{Z}]=0. \tag{126}\] Hence \[(B_{q})_{\xi}=(B_{q})_{D}\hat{T}_{B}=\hat{T}_{B}(B_{q})_{D}=\hat{T}_{B}(B_{q})_{ D}\hat{T}_{B} \tag{127}\] holds for any \(\xi\). For \(O_{F}\) and \(O^{\prime}_{F}\) being the phonon operators or the scattering operators, respectively, we obtain \[(O_{F}O^{\prime}_{F})_{\xi}=\hat{Z}^{\xi-\frac{1}{2}}(O_{F})_{D}(O^{\prime}_{F })_{D}\hat{Z}^{-\xi+\frac{1}{2}}, \tag{128}\] In addition, the following, \[(O_{F}O^{\prime}_{F})_{\xi}=(O_{F})_{\xi}(O^{\prime}_{F})_{\xi}, \tag{129}\] is satisfied if \(\hat{T}_{B}\) becomes \(\breve{1}_{B}\) and \(O_{F}O^{\prime}_{F}\) is normal ordered because \(\hat{Z}^{\xi-\frac{1}{2}}\hat{Z}^{-\xi+\frac{1}{2}}=\breve{1}_{B}\) and \(\breve{1}_{B}(X^{\dagger}_{t})_{D}=\breve{1}_{B}(X^{\dagger}_{t})_{D}\breve{1 }_{B}\), \((X_{t^{\prime}})_{D}\breve{1}_{B}=\breve{1}_{B}(X_{t^{\prime}})\breve{1}_{B}\), and Eq. (126) are satisfied. Eq. (73) becomes \[\hat{Z}(N)=\frac{1}{N}\sum_{t}(X^{\dagger}_{t})_{D}\hat{Z}(N-1)b_{t}\quad(N \geq 2). \tag{130}\] The solution of Eq. (130) should be given by Eq. (83a). While Eq. (130) can be solved directly in the case that \({\bf Z}(2)\) has no zero eigenvalues. In this case, \(Y(t^{\prime}_{1}t_{1}t_{2}t^{\prime}_{2})=-2((t^{\prime}_{1}t^{\prime}_{2}|t_{ 1}t_{2}))\) holds. Therefore, \[(X^{\dagger}_{t})_{D}=b^{\dagger}_{t}(2\hat{N}_{B}+1), \tag{131}\] from which we obtain \[\hat{Z}(N)=\frac{(2N-1)}{N}\sum_{t}b^{\dagger}_{t}\hat{Z}(N-1)b_{t}. \tag{132}\] \(\hat{Z}(2)=3\hat{1}_{B}(2)\), and if \(\hat{Z}(N-1)=(2N-3)!!\hat{1}_{B}(N-1)\), then \(\hat{Z}(N-1)=(2N-1)!!\hat{1}_{B}(N)\). These match Eq. (83a) in the case that \(\hat{T}_{B}(N)=\hat{1}_{B}(N)\) holds. Eq. (130) can be also solved formally as \[\hat{Z}(N)=\frac{1}{N!}\sum_{t_{1}\cdots t_{N}}(X_{t_{1}}^{\dagger})_{D}\cdots(X_ {t_{N}}^{\dagger})_{D}|0)(0|b_{t_{1}}\cdots b_{t_{N}}. \tag{133}\] From Eq. (133), we find the relation, \[\hat{Z}(N)b_{t}^{\dagger}=(X_{t}^{\dagger})_{D}\hat{Z}(N-1). \tag{134}\] While, from Eq. (83a) and Eq. (130), we obtain \[(2N-1)\hat{T}_{B}(N)b_{t}^{\dagger}=(X_{t}^{\dagger})_{D}\hat{T}_{B}(N-1). 
\tag{135}\] The mapped operators are given as follows, \[(O_{F})_{\xi}=\hat{T}_{B}(O_{F})_{B(\xi)}\hat{T}_{B}, \tag{136a}\] \[(O_{F})_{B(\xi)}=\left\{(2\hat{N}_{B}-1)!!\right\}^{\xi-\frac{1}{2}}(O_{F})_ {D}\left\{(2\hat{N}_{B}-1)!!\right\}^{-\xi+\frac{1}{2}} \tag{136b}\] From Eq. (128), we obtain \[(O_{F}O_{F}^{\prime})_{B(\xi)}=\left\{(2\hat{N}_{B}-1)!!\right\}^{\xi-\frac{1 }{2}}(O_{F})_{D}(O_{F}^{\prime})_{D}\left\{(2\hat{N}_{B}-1)!!\right\}^{-\xi+ \frac{1}{2}}. \tag{137}\] The difference due to \(\xi\) is renormalized to the boson excitation number, and the remaining boson expansions are the same as those of the DBET, which are finite. Therefore, we can substantially treat all types of boson expansions as finite expansions when the concerning states are the physical states that are eigenstates of the boson number operator. If \(\xi=0\), we obtain finite boson expansions of the Hermitian type. In the case that \(O_{F}\) preserves the number of quasi-particles, then \((O_{F})_{D}\) preserves the number of bosons, and the norm operator parts cancel out completely. As a result, we obtain finite boson expansions for any \(\xi\) such as \[(O_{F})_{B(\xi)}=(O_{F})_{D}, \tag{138}\] from which we also derive \[(O_{F})_{B(0)}=(O_{F})_{B(\xi)}. \tag{139}\] Even if \(O_{F}^{(1)}\) and \(O_{F}^{(2)}\) do not necessarily preserve the quasi-particle number, respectively, if \(O_{F}=O_{F}^{(1)}O_{F}^{(2)}\) preserves the quasi-particle number,Eq. (137) enables us to derive \[(O_{F}^{(1)}O_{F}^{(2)})_{B(0)}=(O_{F}^{(1)}O_{F}^{(2)})_{B(\xi)}=(O_{F}^{(1) })_{D}(O_{F}^{(2)})_{D}. \tag{140}\] Hence \((O_{F})_{D}\) and \((O_{F}^{(1)})_{D}(O_{F}^{(2)})_{D}\) are regarded as a finite boson expansion of the Hermitian type. For \(\xi=\frac{1}{2}\), the norm operator does not appear in the mapped fermion operators, and we can obtain the boson expansions as follows, \[\left(O_{F}\right)_{B(\frac{1}{2})}=(O_{F})_{D}, \tag{141}\] \[\left(O_{F}O_{F}^{\prime}\right)_{B(\frac{1}{2})}=\left(O_{F}\right)_{B(\frac{1 }{2})}(O_{F}^{\prime})_{B(\frac{1}{2})}=(O_{F})_{D}(O_{F}^{\prime})_{D}. \tag{142}\] We obtain the finite expansions of DBET. From Eqs. (44), it is straitforward to proof that Hermitian treatment[14] holds exactly for the eigenvector of \(\hat{Z}\), \(|N;a)\). Therefore, when \(\hat{T}_{B}=\hat{1}_{B}\) holds, it is applied precisely for the ideal boson state vector, \(|N:t)\). For \(\xi=0\), the boson mapping becomes of the Hermite type. We can obtain mapped operators as follows, \[\begin{array}{rcl}(X_{t}^{\dagger})_{B(0)}&=&\left\{(2\hat{N}_{B}-1)!! \right\}^{-\frac{1}{2}}(X_{t}^{\dagger})_{D}\left\{(2\hat{N}_{B}-1)!!\right\} ^{\frac{1}{2}},\\ &=&(X_{t}^{\dagger})_{D}\frac{1}{\sqrt{1+2\hat{N}_{B}}}=b_{t}^{ \dagger}\sqrt{1+2\hat{N}_{B}},\end{array} \tag{143a}\] \[\begin{array}{rcl}(B_{q})_{B(0)}=\sum_{tt^{\prime}}\Gamma_{q}^{t^{\prime}t} b_{t}^{\dagger}b_{t^{\prime}}.\end{array} \tag{143b}\] Here we use the relation, \[\hat{T}_{B}b_{t}^{\dagger}(2\hat{N}_{B}+1)=(X_{t}^{\dagger})_{D}\hat{T}_{B}, \tag{144}\] obtained from Eq. (135) for the derivation of \((X_{t}^{\dagger})_{B(0)}\). The scattering operators are expressed as finite expansions in the physical subspace. The phonon operators do not become of the small parameter expansion whose zeroth-order approximation becomes the boson approximation. The boson approximation holds only when the phonon excitation number does not exceed one. 
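As a purely illustrative numerical check (not part of the derivation above), the recursion of Eq. (132) can be verified for a single boson mode by representing \(b\), \(b^{\dagger}\), and the projectors \(\hat{1}_{B}(N)\) as matrices on a truncated Fock space; on each \(N\)-boson subspace it reproduces \(\hat{Z}(N)=(2N-1)!!\,\hat{1}_{B}(N)\), in agreement with Eq. (83a) for \(\hat{T}_{B}(N)=\hat{1}_{B}(N)\). The short Python sketch below assumes a hard truncation of the boson space and is included only as a sanity check.

```python
import numpy as np

def check_norm_recursion(n_max=6, dim=12):
    """Verify Z(N) = (2N-1)!! 1_B(N) for a single boson mode (Eq. (132))."""
    b = np.diag(np.sqrt(np.arange(1.0, dim)), k=1)   # annihilation operator, truncated
    bd = b.T                                         # creation operator

    def proj(N):                                     # projector 1_B(N) onto the N-boson subspace
        P = np.zeros((dim, dim))
        P[N, N] = 1.0
        return P

    Z_prev = proj(1)                                 # Z(1) = 1_B(1)
    for N in range(2, n_max + 1):
        Z = (2 * N - 1) / N * bd @ Z_prev @ b        # single-mode version of Eq. (132)
        double_factorial = np.prod(np.arange(1, 2 * N, 2))   # (2N-1)!!
        assert np.allclose(Z, double_factorial * proj(N))
        Z_prev = Z
    return True

check_norm_recursion()
```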
### On the role of the norm operator In this subsection, based on the results obtained so far, we summarize the role the norm operator plays in the boson expansion method. What is important is the relation between the norm operator consisting of all kinds of phonons and the norm operator constituting the boson mapping operator. What kinds of boson expansions are derived is determined by the structure of the norm operator when all modes are adopted, \(\hat{Z}^{(A)}\). The structure is determined by the introduced single-particle states, \(\{\alpha\}\), and the amplitudes of the Tam-Dancoff phonons, \(\psi_{\mu}(\alpha\beta)\). is composed of the norm operator \(\hat{Z}\), which is used for mapping, and other operators as \[\hat{Z}^{(A)}=\hat{Z}+\hat{W}+\hat{W}^{\dagger}+\hat{Z},^{\prime}\] (145a) where \[\hat{Z}=\breve{1}_{B}\hat{Z}^{(A)}\breve{1}_{B},\quad\hat{W}=\breve{1}_{B}\hat{Z }^{(A)}(\breve{1}_{B}^{(A)}-\breve{1}_{B}),\quad\hat{Z}^{\prime}=(\breve{1}_ {B}^{(A)}-\breve{1}_{B})\hat{Z}^{(A)}(\breve{1}_{B}^{(A)}-\breve{1}_{B}), \tag{145b}\] The condition Eq. (52) is imposed on \(\hat{Z}^{(A)}\), which is regardless of how to take \(\psi_{\mu}(\alpha\beta)\). \(\hat{Z}\), \(\hat{W}\), and \(\hat{Z}^{\prime}\) are determined so as to satisfy the condition Eq. (52). The double commutation relations of the phonon operators that constitute \(\hat{Z}\) are generally not closed among them. \(\hat{Z}\) must have eigenvalues nearly to 1 for the small parameter expansion, which also allows the use of ideal boson state vectors as physical. It is possible to directly check whether this condition is satisfied because \(\hat{Z}\) is specifically obtained by the boson expansion assuming the small parameter expansion. Only this condition is not, however, a sufficient condition for realizing the small parameter expansion. Including not only \(\hat{Z}\) but also \(\hat{W}\) and \(\hat{Z}^{\prime}\) gives the necessary and sufficient condition. In this case, it is not allowed to treat \(\hat{W}\) as a zero operator, that is, to assume that the double commutation relation of the phonon operators constituting \(\hat{Z}\) is closed because the small parameter expansion does not hold. In the case that \(\hat{W}\) can be regarded as zero, the boson expansions can be substantially treated as finite expansions. The realization of this type of practical boson expansion is more difficult because \(\{t\}\) should selected so that the ideal boson state vectors become physical and the dynamics are reflected enough, as with the small parameter expansion, under the above condition. ## 5 Comments on the conventional methods Conventional practical boson expansion methods, without exception, discard the phonon excitation modes that are not adopted for the boson excitation modes. We call this procedure the non-adopted modes discarding (NAMD). Since NAMD closes double commutators of phonon operators within the adopted modes for those of bosons, it is incompatible with a small expansion. The incompatibility between NAMD and the small parameter expansion has not been considered in formulating the conventional practical boson expansion methods. In the case that NADM is precisely applicable, DBET is formulated exactly, which does not mean that DBET necessarily has its exceptional superiority because we can substantially obtain finite expansions of the Hermite type by treating the boson number operator parts appropriately. 
In the case that the small parameter expansion is applicable, all the boson expansions become infinite ones and include the terms neglected by NAMD. Applying NADM to Eq. (111), Eq. (112), and Eq. (113), it is found that the remaining terms up to \(O(\Gamma^{2})\) coincide with those obtained by NOLCEXPT. The order of magnitude of the neglected terms is also \(O(\Gamma^{2})\), respectively. NOLCEXPT obtains, with NADM, the terms only up to \(O(\Gamma^{2})\). On the other hand, the finite boson expansions of DBET are obtained from\(\left(O_{F}\right)_{B(\frac{1}{2})}\) by applying NADM. The order of magnitude of the neglected terms by NADM is also \(O(\Gamma^{2})\) in the same order as the smallest one of the non-neglected terms by NADM. In both NOLCEXPT and DBET, NADM neglects terms of the order of magnitude that should be adopted, which indicates that NAMD can not be used as a proper approximation under the small parameter expansion. The investigation so far makes it clear that the comment of NOLCEXPT [13, 15] on NAMD is incorrect. NOLCEXPT claims that the scattering operators are expressed as finite expansions. It is realized only when NADM is precisely or well approximately applied and not when the small parameter expansion is realized. Eqs. (143) indicate that it is impossible to express the phonon operators as infinite normal-ordered small parameter expansions although the scattering operators become finite. NOLCEXPT has failed to refute Marshalek's claim [11, 12] that KT-1 [24] and KT-2 [8] are chimerical boson expansions. As already mentioned, Hermitian treatment becomes exact when NAMD becomes exact. On the other hand, in the case that the small parameter expansion is applicable, Hermitian treatment becomes an approximation, and it can be generally evaluated using the norm operator by following the method of [22]. It is concluded that Hermitian treatment holds as far as it is possible to neglect terms of \(O(\Gamma^{4})\). Next, we comment on the problems related to a modified Marumori boson expansion method [7, 16]. The modified Marumori boson expansion method concludes that NADM is good using a norm of a multi-phonon state vector despite the small parameter expansion being available [7]. The reason why the conclusion is derived incorrectly is as follows: The norm of the multi-phonon state vector is obtained from \(\hat{Z}(N)\). Since the neglected terms by NADM do not appear up to \(O(\Gamma^{3})\) in \(\hat{Z}(N)\), it is impossible to evaluate whether NADM is a good approximation by investigating \(\hat{Z}(N)\) only up to \(O(\Gamma^{2})\) with the small parameter expansion. For explanation, we adopt a case where \(\{t\}\) consists of only one type of excitation mode \(c\). We define \(|N\rangle\) and \(|N\rangle\) as \[|N\rangle=|\,\overbrace{c\cdots c}^{N},\quad|N\rangle=|\,\overbrace{c\ldots c }^{N}\rangle. \tag{146}\] They sattisy \(\langle N|N|N\rangle=(N|\hat{Z}(N)|N)\). Assuming the small parameter expansion, and setting \(\langle 2|2\rangle=1-\varepsilon\), then \(\varepsilon=\frac{1}{2}Y(cccc)\sim O(\Gamma^{2})\). Expressing \(\langle N|N\rangle\) derived by the small parameter expansion up to \(O(\Gamma^{2})\) as \(\mathcal{N}^{(2)}(N)\), we obtain \(\mathcal{N}^{(2)}(N)=1-N(N-1)\frac{\varepsilon}{2}+O(\Gamma^{3})\). On the other hand, Eq. (130) enables us to obtain \(\hat{Z}(N)\) with the sum of all terms without the neglected ones by NADM. 
Expressing \(\langle N|N\rangle\) thus obtained as \(\mathcal{N}^{(all)}(N)\), \[\mathcal{N}^{(all)}(N)=\mathcal{N}^{(all)}(N-1)\left(1+(\langle 2|2\rangle-1)(N- 1)\right), \tag{147}\] holds, and we obtain \(\mathcal{N}^{(all)}(N)=(1-(N-1)\varepsilon)(1-(N-2)\varepsilon)\cdots(1- \varepsilon)\). As \(N\) becomes large, the difference between both becomes prominent, which is, however, only \(2\varepsilon^{2}\sim O(\Gamma^{4})\) for \(N=3\). It indicates that the small parameter expansion is well applied for this case and that both coincide well up to \(O(\Gamma^{2})\) with the exact one. Therefore it is impossible to judge whether NADM is good or not by the comparison of \(\mathcal{N}^{(all)}(3)\) with the exact one. It is a wrong conclusion that NADM holds well for \(\varepsilon\approx 0.1\)[7], where the small parameter expansion becomes possible. \(\varepsilon\) should become approximately -2 for NADM to become good. In addition, it is not the strong but the weak effect of the Pauli exclusion principle that makes the small parameter expansion possible. The comment on the convergence of the modified Marumori boson expansions is mistaken. Conventional practical boson expansion methods restrict, for mapping, only the sort of phonon excitation modes and not the number of those. \(\hat{Z}(N)\) necessarily has zero eigenvalues for large phonon excitation numbers even if restricting the sorts of modes, which makes ideal boson state vectors unphysical and the small parameter expansion impossible. We should restrict the phonon excitation number beforehand. Nevertheless, NOLCEXPT does not restrict the phonon excitation number. Instead, it treats, without a clear basis, all the norm matrices of the multi-phonon state as having no zero eigenvalues. Nevertheless, it gives the correct expansion terms up to \(O(\Gamma^{2})\) under NADM. The restriction of the phonon excitation number beforehand gives a clear reason for it because we obtain \[\lim_{N_{max}\rightarrow\infty}(O_{F})_{\xi}=(O_{F})_{B(\xi)} \tag{148}\] from Eq. (108). It indicates that we can obtain the correct results of the small parameter expansion without limiting the phonon excitation number beforehand. Afterward, we should limit the boson excitation number. As for BREXP, by replacing the collective modes \(\{c\}\) and the non-collective modes \(\{n\}\) to \(\{t\}\) and \(\{\bar{t}\}\), respectively, and suppressing the fermion excitations, we can obtain the boson expansions from BREXP [17, 18]. The Hermitian-type boson expansions obtained thus agree with those obtained from BREXP by adopting the proper transformation [25]. Further comparison requires the derivation of higher-order terms in BREXP. Summary We have proposed a new boson expansion theory, the norm operator method, where the norm operator plays a crucial role. The different treatment of the norm operator determines the type of boson expansions as Hermitian or non-Hermitian. The mapping operator limits the number of phonon excitations in addition to the phonon excitation modes beforehand to use the ideal boson state vectors as physical and avoid the breakdown of the small parameter expansion whose zeroth-order approximation is the boson approximation. In the case that the closed algebraic approximation or phonon truncation approximation holds, that is, the double commutation relations between the phonons with excitation modes adopted as boson excitations are closed, the small parameter expansion is not available. 
The norm operator is expressed as a function of the boson number operator, which substantially makes all types of boson expansions be of finite expansion. The small parameter expansion is not compatible with the closed-algebra approximation or the phonon truncation approximation. The contribution of the phonon excitation modes neglected by the approximation makes the boson expansion become infinite expansion regardless of whether it is of the Hermitian type or not. We have obtained the higher-order terms of the boson expansion not expanded so far in addition to the neglected by the approximation. Conventional practical boson expansion methods have used the closed-algebra approximation or the phonon truncation approximation without recognizing its playing role mentioned above, and the claims derived from this approximation have no validity: The normal-ordered linked-cluster expansion theory has failed to refute Marshalek's claim that KT-1 and KT-2 are of the chimerical boson expansion. The Dyson boson expansion theory does not have exceptional superiority over the other types of boson expansions. The boson-fermion expansion theory derives the same boson expansions with the Hermitian-type boson expansions obtained here up to the next-to-leading order. The boson-fermion expansion theory should derive higher-order expansion terms for further comparison. ## References * [1] A. Klein and E. R. Marshalek, Rev. Mod. Phys. **63**, 375 (1991). * [2] S. T. Beliaev and V. G. Zelevinsky, Nucl. Phys. **39**, 582 (1962). * [3] T. Usui, Prog. Theor. Phys. **23**, 787 (1960). * [4] T. Marumori, M. Yamamura, and A. Tokunaga, Prog. Theor. Phys. **31**, 1009 (1964). * [5] D. Janssen, F. Donau, and S. Frauendorf, Nucl. Phys. **A172**, 145 (1971). * [6] S. G. Lie and G. Holzwarth, Phys. Rev. **C12**, 1035 (1975). * [7] G. Holtzwarth, D. Janssen, and R. V. Jolos, Nucl. Phys. **A261**, 1 (1976). * [8] T. Kishimoto and T. Tamura, Nucl. Phys. **A270**, 317 (1976). * [9] H. Tsukuma, H. Thorn, and K. Takada, Nucl. Phys. **A466**, 70 (1987). * [10] H. Sakamoto and T. Kishimoto, Nucl. Phys. **A528**, 73 (1991). * [11] E. R. Marshalek, Nucl. Phys. **A347**, 253 (1980). * [12] E. R. Marshalek, Phys. Lett. **95B**, 337(1980). * [13] T. Kishimoto and T. Tamura, Phys. Rev. **C27**, 341 (1983). * [14] K. Takada, Prog. Theor. Phys. Suppl. **141**, 179 (2001). * [15] H. Sakamoto and T. Kishimoto, Nucl. Phys. **A486**, 1 (1988). * [16] T. Marumori, K. Takada, and F. Sakata, Suppl. Prog. Theor. Phys. **71**, 1 (1981) * [17] K. Taniguchi and Y. Miyanishi, Prog. Theor. Phys. **84**, 568 (1990). * [18] K. Taniguchi and Y. Miyanishi, Prog. Theor. Phys. **86**, 151 (1991). * [19] T. Kishimoto, T Kammuri, and H. Sakamoto, Prog. Theor. Phys. **85** 1057 (1991). * [20] K. Takada, Phys. Rev. **C34**, 750 (1986). * [21] K. Takada, Phys. Rev. **C38**, 2450 (1988). * [22] A. Kajiyama, K. Taniguchi, and Y. Miyanishi, Prog. Theor. Phys. **101**, 579 (1999). * [23] M. Sato, Y. R. Shimizu, and K. Takada, Prog. Theor. Phys. **102**, 287 (1999). * [24] T. Kishimoto and T. Tamura, Nucl. Phys. **A192**, 246 (1972). * [25] K. Taniguchi, A. Kajiyama, and Y. Miyanishi, Prog. Theor, Phys. **92**, 975 (1994). ## Appendix A formulae of the product of the pair operators We denote \(B_{q}\), \(X_{t^{\prime\prime}}\), or \(X_{\bar{t}^{\prime}}\) as \(O_{F}\). 
the following equations hold: \[\widetilde{X_{t^{\prime}}O_{F}}=\breve{1}_{B}(X_{t^{\prime}})_{L}B(O_{F})_{ L}\hat{\mathcal{Z}}\breve{1}_{B}=\breve{1}_{B}(X_{t^{\prime}})_{L}B(O_{F})_{L} \hat{\mathcal{Z}}\breve{1}_{B},\] (A1a) \[\widetilde{O_{F}^{\dagger}X_{t}^{\dagger}}=\breve{1}_{B}(O_{F}^{\dagger})_{L}(X _{t}^{\dagger})_{L}\hat{\mathcal{Z}}\breve{1}_{B}.\] (A1b) \[\widetilde{X_{t}^{\dagger}X_{t^{\prime}}}=\breve{1}_{B}\left\{(X_{t}^{ \dagger})_{D}(X_{t^{\prime}})_{D}-\frac{1}{2}\sum_{t_{1}t_{2}}\sum_{\bar{t}^{ \prime}}Y(tt_{1}t_{2}\bar{t}^{\prime}_{1})b_{t_{1}}^{\dagger}b_{t_{2}}^{ \dagger}(X_{t^{\prime}})_{L}(X_{\bar{t}^{\prime}_{1}})_{L}\right\}\hat{ \mathcal{Z}}\breve{1}_{B}.\] (A2) \[\widetilde{X_{\bar{t}}^{\dagger}X_{\bar{t}^{\prime}}}=\breve{1}_{B}(X_{\bar{t }}^{\dagger})_{L}(X_{\bar{t}^{\prime}}^{\dagger})_{L}\hat{\mathcal{Z}}\breve {1}_{B}+O(\Gamma^{5}),\] (A3a) \[\widetilde{X_{\bar{t}}X_{\bar{t}^{\prime}}}=\breve{1}_{B}(X_{\bar{t }}^{\dagger})_{L}(X_{\bar{t}^{\prime}})_{L}\hat{\mathcal{Z}}\breve{1}_{B}+O( \Gamma^{5}).\] (A3b) \[\widetilde{X_{\bar{t}}^{\dagger}X_{\bar{t}^{\prime}}}=\breve{1}_{B}(X_{\bar{t }}^{\dagger})_{L}(X_{\bar{t}^{\prime}})_{L}\hat{\mathcal{Z}}\breve{1}_{B}+O( \Gamma^{5}).\] (A4) \[\widetilde{B_{q}X_{\mu^{\prime}}}=(B_{q})_{D}\widetilde{X_{\mu^{\prime}}}+\sum _{t}\sum_{\bar{t}^{\prime}}\Gamma_{q}^{\bar{t}^{\prime}t}b_{t}^{\dagger} \widetilde{X_{\bar{t}^{\prime}}X_{\mu^{\prime}}},\] (A5a) \[\widetilde{X_{\mu}^{\dagger}B_{q}}=\widetilde{X_{\mu}^{\dagger}}(B_{q})_{D}+ \sum_{t^{\prime}}\sum_{\bar{t}}\Gamma_{q}^{t^{\prime}\bar{t}}\widetilde{X_{\mu }^{\dagger}X_{\bar{t}}^{\dagger}}b_{t^{\prime}}.\] (A5b) Proof of \({\bf W}(N)={\bf 0}(N)\) for \(N\geq 3\) \(\hat{Z}(N)\) is related to \(\hat{Z}^{(A)}(N)\) as \[\hat{Z}(N)=\check{1}_{B}\hat{Z}^{(A)}(N)\check{1}_{B}=\hat{1}_{B}(N)\hat{Z}^{(A )}(N)\hat{1}_{B}(N).\] (B1) Introducing \[\begin{array}{rcl}\hat{Z}^{\prime}(N)&=&(\check{1}_{B}^{(A)}-\check{1}_{B}) \hat{Z}^{(A)}(N)(\check{1}_{B}^{(A)}-\check{1}_{B})\\ &=&(\hat{1}_{B}^{(A)}(N)-\hat{1}_{B}(N))\hat{Z}^{(A)}(N)(\hat{1}_{B}^{(A)}(N)- \hat{1}_{B}(N)),\end{array}\] (B2) and \[\hat{W}(N)=\check{1}_{B}\hat{Z}^{(A)}(\check{1}_{B}^{(A)}-\check{1}_{B})=\hat {1}_{B}(N)\hat{Z}^{(A)}(\hat{1}_{B}^{(A)}(N)-\hat{1}_{B}(N)),\] (B3) we obtain \[\hat{Z}(N)^{(A)}=\hat{Z}(N)+\hat{W}(N)+\hat{W}(N)^{\dagger}+\hat{Z}^{\prime}( N).\] (B4) Eq. (79) expresses this relation as those of the matrices where \({\bf Z}^{(A)}(N)\), \({\bf Z}(N)\), \({\bf Z}^{\prime}(N)\), and \({\bf W}(N)\) are matrices representing \(\hat{Z}^{(A)}(N)\), \(\hat{Z}(N)\), \(\hat{Z}^{\prime}(N)\), and \(\hat{W}(N)\), respectively. If \(Y(t_{1}^{\prime}\bar{t}\mu t_{2}^{\prime})=0\), then \(\hat{W}(2)=0\). On the other hand, from Eq. (73), we obtain \[\hat{Z}^{(A)}(N)=\frac{1}{N}\sum_{\mu}(X_{\mu}^{\dagger})_{D}\hat{Z}(N-1)^{(A )}b_{\mu}.\] (B5) If \(\hat{W}(N-1)=0\), then \[\hat{Z}^{(A)}(N)=\frac{1}{N}\sum_{\mu}(X_{\mu}^{\dagger})_{D}Z(N-1)b_{\mu}+ \frac{1}{N}\sum_{\mu}(X_{\mu}^{\dagger})_{D}\hat{Z}^{\prime}(N-1)b_{\mu}.\] (B6) On the ohrer hand, \(\hat{1}_{B}(N-1)b_{\mu}(\hat{1}_{B}^{(A)}(N)-\hat{1}_{B}(N))=0\) and \(\hat{1}_{B}(N)(X_{\mu}^{\dagger})_{D}(\hat{1}_{B}^{(A)}(N-1)-\hat{1}_{B}(N-1))=0\) hold. Therefore if \(\hat{W}(N-1)=0\), then \(\hat{W}(N)=0\). That is \(\hat{W}(N)=0\) for \(N\geq 3\), and then \({\bf W}(N)={\bf 0}(N)\) for \(N\geq 3\).
2309.03770
Neural lasso: a unifying approach of lasso and neural networks
In recent years, there has been growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. In this article, the statistical technique lasso for variable selection is represented through a neural network. It is observed that, although both the statistical approach and its neural version have the same objective function, they differ due to their optimization. In particular, the neural version is usually optimized in one step using a single validation set, while the statistical counterpart uses a two-step optimization based on cross-validation. The more elaborate optimization of the statistical method results in more accurate parameter estimation, especially when the training set is small. For this reason, a modification of the standard approach for training neural networks that mimics the statistical framework is proposed. During the development of the above modification, a new optimization algorithm for identifying the significant variables emerged. Experimental results, using synthetic and real data sets, show that this new optimization algorithm achieves better performance than any of the three previous optimization approaches.
David Delgado, Ernesto Curbelo, Danae Carreras
2023-09-07T15:17:10Z
http://arxiv.org/abs/2309.03770v1
# Neural lasso: a unifying approach of lasso and neural networks ###### Abstract In recent years, there has been growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. In this article, the statistical technique lasso for variable selection is represented through a neural network. It is observed that, although both the statistical approach and its neural version have the same objective function, they differ due to their optimization. In particular, the neural version is usually optimized in one step using a single validation set, while the statistical counterpart uses a two-step optimization based on cross-validation. The more elaborate optimization of the statistical method results in more accurate parameter estimation, especially when the training set is small. For this reason, a modification of the standard approach for training neural networks that mimics the statistical framework is proposed. During the development of the above modification, a new optimization algorithm for identifying the significant variables emerged. Experimental results, using synthetic and real data sets, show that this new optimization algorithm achieves better performance than any of the three previous optimization approaches. neural networks, lasso, cross-validation, feature selection ## 1 Introduction Nowadays, there is growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. An example of the above can be found in the area of statistical item response theory, and specifically in the development of computerized adaptive tests [1; 2]. Yan, Lewis, and Stocking and, later, Ueno and Songmuang proposed the use of decision trees as an alternative to the computerized adaptive tests [3; 4]. Later, Delgado-Gomez et al. mathematically established an equivalence between these two techniques that allows the administration of computerized adaptive tests in real time using item selection criteria that are computationally very intensive [5]. Recently, several works using neural networks have been published in this field [6; 7]. Regarding these last works, it is interesting to note the synergies that are being generated between the areas of Statistics and Neural Networks [8; 9]. Representing statistical models using neural networks provides them with the flexibility and optimization methods of the latter. In a previous pilot study, Laria et al. indicated how the least absolute shrinkage and selection operator (lasso) algorithm can be represented as a neural network [10]. Conversely, linking neural networks to statistical models makes it possible to improve the interpretability of the former [11]. These synergies have occurred in several domains of Statistics such as regression, dimensionality reduction, time series, or quality control [12]. In this article, the widely used lasso algorithm is developed from the perspective of neural networks. To this end, in Section 2, the most relevant features of the lasso algorithm are presented in order to understand the elaboration of its neural version. After that, in Section 3, the entire mathematical formulation proposed by Laria et al. is extended, and the optimization is redefined [10]. Both linear and logistic regressions are considered. In Section 4, several experiments are carried out to evaluate the performance of the neural version and compare it with its statistical counterpart.
These experiments are performed on both real and simulated data. Finally, the article concludes in Section 5 with a discussion of the obtained results and future research lines. ## 2 The lasso Following, the lasso algorithm is briefly presented highlighting the most relevant elements in relation to our proposal. Hereafter, the lasso algorithm will be referred to as _statistical lasso_ to differentiate it from its neural version throughout the article. ### Formulation Let \((\mathbf{x}_{i},y_{i})\), \(i=1,\ldots,N\), be a set containing \(N\) observations where \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) represents the predictors, and \(y_{i}\in\mathbb{R}\) are the associated responses. It is assumed that the predictors are standardized and the responses are centered, i.e., \[\sum_{i=1}^{N}x_{ij}=0,\hskip 28.452756pt\sum_{i=1}^{N}x_{ij}^{2}=1,\hskip 28.452756pt \sum_{i=1}^{N}y_{i}=0,\hskip 28.452756pt\text{for }j=1,2,\ldots,p \tag{1}\] The lasso technique was introduced for generalized linear models in the supervised context by Tibshirani [13]. It is formulated as the following optimization problem \[\underset{\mathbf{\beta}}{argmin}\,\mathcal{R}(\mathbf{y},\mathbf{X}\mathbf{\beta})+\lambda \lVert\mathbf{\beta}\rVert_{1} \tag{2}\] where \(\mathbf{X}\) is the (standardized) matrix that contains the observations as rows, \(\mathbf{y}\) is the vector with the corresponding labels, \(\mathbf{\beta}\) is the vector containing the weights of the regression, and \(\lambda\lVert\mathbf{\beta}\rVert_{1}\) is a penalization term. \(\mathcal{R}(\mathbf{y},\mathbf{X}\mathbf{\beta})\) represents the error term. In this work, we will focus on linear and logistic regression. For linear regression, the error term is given by \[\mathcal{R}_{Lin}(\mathbf{y},\mathbf{X}\mathbf{\beta})=\frac{1}{N}\sum_{i=1}^{N}(y_{i}- \mathbf{x}_{i}^{t}\mathbf{\beta})^{2} \tag{3}\] while the error term for the logistic regression is given by: \[\mathcal{R}_{Log}(\mathbf{y},\mathbf{X}\mathbf{\beta})=\frac{1}{N}\sum_{i=1}^{N}\left[ \log(1+e^{\mathbf{x}_{i}^{t}\mathbf{\beta}})-y_{i}\mathbf{x}_{i}^{t}\mathbf{\beta}\right] \tag{4}\] ### Optimization Given a fixed \(\lambda\), the values of \(\mathbf{\beta}\) are estimated using coordinate descent. As an example, the coordinate descent update for the \(j^{th}\) coefficient in the linear regression case is given by \[\hat{\beta}_{j}=\mathcal{S}_{\lambda}(\frac{1}{N}\langle\mathbf{X}_{j}, \mathbf{r}_{j}\rangle) \tag{5}\] where \(\mathbf{X}_{j}\) is the \(j^{th}\) column of matrix \(\mathbf{X}\), the \(i^{th}\) component of \(\mathbf{r}_{j}\) is obtained by \[\mathbf{r}_{j}(i)=y_{i}-\sum_{k\neq j}x_{ik}\hat{\beta}_{k} \tag{6}\] and \(\mathcal{S}_{\lambda}\) is the soft-thresholding operator defined by \[S_{\lambda}(x)=\text{sign}(x)(|x|-\lambda)_{+} \tag{7}\] The optimal value of \(\lambda\) is obtained through a k-fold crossvalidation. A more detailed discussion of the lasso optimization can be found in the book by Hastie, Tibshirani and Wainwright [14]. A schematic representation of the lasso optimization algorithm is shown in the upper panel of Figure 3. ## 3 The neural lasso Similarly to the previous section, the formulation and optimization of the neural lasso is presented. ### Formulation Following, the neural representation of the lasso is presented. It begins by presenting the mathematical formulation for linear regression and, afterward, it is extended to logistic regression. 
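For later reference, the coordinate-descent update of equations (5)-(7) can be sketched in a few lines of plain Python. This is only an illustration of the equations above (the function names are ours, and glmnet, which is used for the non-neural analyses in the experiments, implements a far more refined version with warm starts over a path of \(\lambda\) values):

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator S_lambda of equation (7)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for a fixed lambda, following equations (5)-(6).

    X is assumed standardized and y centered, as in equation (1).
    """
    N, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual r_j: response minus the fit of all predictors except the j-th
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r_j / N, lam)
    return beta
```

The optimal \(\lambda\) would then be chosen by running this update over a grid of values inside a k-fold cross-validation loop, as described above.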
#### Linear regression When the error term is given by the mean squared error (MSE), lasso can be characterized as the neural network shown in Figure 1. In this case, the loss function is given by \[\begin{split}\mathcal{L}(\mathbf{w})&=\frac{1}{N}\sum_{i=1}^{N}\Biggl{(}y_{i}-\gamma\sum_{j=1}^{p}x_{ij}w_{j}\Biggr{)}^{2}+\ell_{1}\sum_{j=1}^{p}|w_{j}|\\ &=\frac{1}{N}\|\mathbf{y}-\gamma\mathbf{X}\mathbf{w}\|_{2}^{2}+\ell_{1}\|\mathbf{w}\|_{1}\end{split} \tag{8}\] where \((\mathbf{w},\gamma)\) are the parameters of the network, and \(\ell_{1}\) is a regularization hyper-parameter. Notice that, by making \(\mathbf{\beta}=\gamma\mathbf{w}\) and \(\lambda=\frac{\ell_{1}}{\gamma}\), equation (8) is equivalent to equation (2) using the MSE as error term. Figure 1: Neural Representation of lasso for linear regression An important aspect to keep in mind is that, unlike the statistical lasso, the neural network optimization does not set the weights exactly to zero. Therefore, it is necessary to establish a condition that determines which weights are zero after each training epoch and sets them to this value. To do this, we calculate the derivative of the loss function defined in equation (8) with respect to \(w_{j}\) \[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\sum_{i=1}^{N}\Biggl{(}y_{i}-\gamma\sum_{k=1}^{p}x_{ik}w_{k}\Biggr{)}x_{ij}+\ell_{1}s_{j} \tag{9}\] where the term \(s_{j}\) is the subgradient defined by \[s_{j}=\left\{\begin{array}{cc}1&w_{j}>0\\ -1&w_{j}<0\\ {[-1,1]}&w_{j}=0\end{array}\right.. \tag{10}\] Equation (9) can be rewritten as \[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\Biggl{(}\sum_{i=1}^{N}y_{i}x_{ij}-\gamma\sum_{i=1}^{N}x_{ij}\sum_{k\neq j}x_{ik}w_{k}-\gamma w_{j}\sum_{i=1}^{N}x_{ij}^{2}\Biggr{)}+\ell_{1}s_{j} \tag{11}\] and, equivalently, in vector form \[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\Bigl{(}\mathbf{X}_{j}^{t}\mathbf{y}-\gamma\mathbf{X}_{j}^{t}\mathbf{X}\mathbf{w}_{j}^{*}-\gamma w_{j}\Bigr{)}+\ell_{1}s_{j} \tag{12}\] where \(\mathbf{X}_{j}^{t}\) is the transpose of the \(j^{th}\) column of matrix \(\mathbf{X}\) (containing observations as rows) and \(\mathbf{w}_{j}^{*}\) is the vector \(\mathbf{w}\) with the \(j^{th}\) component equal to 0. To obtain the above expression, it has been taken into account that \(\sum_{i=1}^{N}x_{ij}^{2}=1\) since the data are standardized. Equating the derivative to 0 leads to \[w_{j}=\frac{\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\Bigl{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigr{)}-\ell_{1}s_{j}}{\frac{2}{N}\gamma^{2}} \tag{13}\] From where it follows that \[w_{j}^{op}=\left\{\begin{array}{ll}\dfrac{\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\Bigl{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigr{)}-\ell_{1}}{\frac{2}{N}\gamma^{2}}&\text{if }\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\Bigl{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigr{)}>\ell_{1}\\ \dfrac{\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\Bigl{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigr{)}+\ell_{1}}{\frac{2}{N}\gamma^{2}}&\text{if }\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\Bigl{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigr{)}<-\ell_{1}\\ 0&\text{if }\left|\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\Bigl{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigr{)}\right|\leq\ell_{1}\end{array}\right. \tag{14}\] Note that, unlike the statistical lasso, which needs the three updates of equation (14), the neural lasso only uses the last condition to make weights zero.
This is because the update of the weights is performed implicitly during the training of the network. Concisely, after each training epoch, the network will determine if any of the weights can be replaced by 0 by checking if the last condition of the equation (14) is satisfied using the current estimates. This difference will be relevant later in the logistic regression. #### Logistic regression As shown below, the optimization problem for the logistic case is formulated by \[\underset{\mathbf{\beta}}{argmin}\frac{1}{N}\sum_{i=1}^{N}\Big{[}\log(1+e^{ \mathbf{x}_{i}^{t}\mathbf{\beta}+\beta_{0}})-y_{i}\left(\mathbf{x}_{i}^{t}\mathbf{ \beta}+\beta_{0}\right)\Big{]}+\lambda\norm{\mathbf{\beta}}_{1} \tag{15}\] This problem can be characterized by the neural network shown in Figure 2. Figure 2: Neural representation of lasso for logistic regression Note that the linear activation of the output layer has been replaced by a sigmoid. In addition, the MSE has been replaced by the binary cross-entropy function whose formula is given by \[-\frac{1}{N}\sum_{i=1}^{N}y_{i}\log\hat{y}_{i}+(1-y_{i})\log(1-\hat{y}_{i}) \tag{16}\] Therefore, the loss function of the network is given by \[\mathcal{L}(\mathbf{w})=-\frac{1}{N}\sum_{i=1}^{N}\Biggl{(}y_{i}\log \left(\frac{1}{1+e^{-\gamma x_{i}^{t}\mathbf{w}-b_{0}}}\right)+(1-y_{i})\log\left( 1-\frac{1}{1+e^{-\gamma x_{i}^{t}\mathbf{w}-b_{0}}}\right)\Biggr{)} \tag{17}\] \[+\ell_{1}\|\mathbf{w}\|_{1}\] It can be seen that equation (17) is equivalent to equation (15) as follows. Focusing on the error term of equation (17): \[\mathcal{R}(\mathbf{y},\mathbf{X}\mathbf{w}) = -\frac{1}{N}\sum_{i=1}^{N}\Biggl{(}y_{i}\log\left(\frac{1}{1+e^{ -\gamma x_{i}^{t}\mathbf{w}-b_{0}}}\right)+(1-y_{i})\log\left(\frac{1}{1+e^{ \gamma x_{i}^{t}\mathbf{w}+b_{0}}}\right)\Biggr{)}\] \[= -\frac{1}{N}\sum_{i=1}^{N}\left(-y_{i}\log(1+e^{-\gamma\mathbf{x}_{i} ^{t}\mathbf{w}-b_{0}})-(1-y_{i})\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\] \[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log(1+e^{-\gamma\mathbf{x}_{i}^ {t}\mathbf{w}-b_{0}})+\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})-y_{i}\log(1+e^{ \gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\] \[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log\left(\frac{1+e^{-\gamma \mathbf{x}_{i}^{t}\mathbf{w}-b_{0}}}{1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}\right)+ \log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\] \[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log\left(e^{-\gamma\mathbf{x}_{i }^{t}\mathbf{w}-b_{0}}\right)+\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\] \[= \frac{1}{N}\sum_{i=1}^{N}\left(\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{ w}+b_{0}})-y_{i}(\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0})\right)\] Therefore, (17) becomes \[\mathcal{L}(\mathbf{w})=\frac{1}{N}\sum_{i=1}^{N}\left(\log(1+e^{\gamma\mathbf{x}_{i}^ {t}\mathbf{w}+b_{0}})-y_{i}(\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0})\right)+\ell_{1}\| \mathbf{w}\|_{1} \tag{18}\] Defining, as above, \(\mathbf{\beta}=\gamma\mathbf{w}\), \(\lambda=\ell_{1}/\gamma\), formulation (17) is equivalent to formulation (15). Similar to the linear case, it is necessary to establish a mechanism that makes the weights associated with the non-significant variables equal to 0. 
Taking the derivative of the loss function in equation (18) \[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{1}{N}\sum_{i=1}^{N} \left(\frac{\gamma x_{ij}e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}{1+e^{\gamma\mathbf{x} _{i}^{t}\mathbf{w}+b_{0}}}-y_{i}\gamma x_{ij}\right)+\ell_{1}s_{j} \tag{19}\] Unfortunately, unlike the linear case, it is not possible to isolate the vector \(\mathbf{w}\). The problem is, therefore, approached from a different perspective. Rearranging and equating the above equation to zero \[\frac{\gamma}{N}\sum_{i=1}^{N}\left(\frac{e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}} }{1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}-y_{i}\right)x_{ij}+\ell_{1}s_{j}=0 \tag{20}\] which is equivalent to \[\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x} _{i}^{t}\mathbf{w}+b_{0}}}\right)x_{ij}=s_{j} \tag{21}\] Following Simon et al. [15], this is satisfied for \(w_{j}=0\) if \[\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x} _{i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}=s_{j} \tag{22}\] where \(\mathbf{w}_{j}^{*}\) is the vector \(\mathbf{w}\) with the \(j^{th}\) component equal to \(0\). Therefore, \[\left|\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma \mathbf{x}_{i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}\right|=|s_{j}|\leq 1 \tag{23}\] Rearranging gives \[\left|\frac{\gamma}{N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x}_{ i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}\right|\leq\ell_{1} \tag{24}\] which vectorially can be written as \[\left|\frac{\gamma}{N}\mathbf{X}_{j}^{t}\Bigg{(}\mathbf{y}-\sigma\left(\gamma \mathbf{X}\mathbf{w}_{j}^{*}+\mathbf{b}\right)\Bigg{)}\right|\leq\ell_{1} \tag{25}\] where \(\sigma(x)=1/(1+e^{-x})\) is the sigmoid activation function and \(\mathbf{b}\) is the N-dimensional vector whose all components are equal to \(b_{0}\). It is important to note that the way by which neural lasso obtains the condition that determines whether a weight is zero is different from that of the statistical lasso. The latter uses a quadratic approximation of the error term since it also needs to have an explicit expression of the update of the non-zero weights. Neural lasso only needs to know which weights are zero since the update of the non-zero weights is implicitly performed during the training of the network. ### Optimization An important aspect to discuss is how to estimate the neural lasso weights. In this section, three optimization algorithms are proposed which are shown schematically in the three lower panels of Figure 3. Normally, when working with neural networks, its layout is determined by cross-validation and the estimation of its weights by simple validation. Figure 3: Statiscal lasso and neural lasso algorithms. That is, once the network layout has been determined, the available data are divided into a training set and a validation set. The training set is used to estimate the network parameters, while the validation set is used to evaluate the performance of the network in an independent set. The resulting network is the one whose weights minimize the validation error. As the network layout is predefined in neural lasso, it is only necessary to estimate its weights using simple validation. This way of training the network will be called _standard neural lasso_. However, the standard neural lasso may present a disadvantage with respect to the statistical lasso because of how they estimate the weights. 
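To fix ideas before comparing the optimization strategies, the following is a minimal sketch of the standard neural lasso for the linear case: the network of Figure 1 with the loss of equation (8), the zero-weight check of the last condition of equation (14) applied after each epoch, and a single train/validation split optimized with Adam. It is only an illustration written in PyTorch, the library used for the neural versions in the experiments reported below; the class and function names are ours, the penalty \(\ell_{1}\) is kept fixed during training, and the reference implementation may differ in these details. The reliance on a single held-out split is precisely what is discussed next.

```python
import copy
import torch

class NeuralLassoLinear(torch.nn.Module):
    """Network of Figure 1: y_hat = gamma * (X w), no bias (data centered as in eq. (1))."""
    def __init__(self, p):
        super().__init__()
        self.w = torch.nn.Parameter(0.01 * torch.randn(p))
        self.gamma = torch.nn.Parameter(torch.ones(1))

    def forward(self, X):
        return self.gamma * (X @ self.w)

def neural_lasso_loss(model, X, y, l1):
    """Loss of equation (8): MSE plus l1 * ||w||_1."""
    return torch.mean((y - model(X)) ** 2) + l1 * model.w.abs().sum()

@torch.no_grad()
def apply_zero_condition(model, X, y, l1):
    """Set w_j = 0 whenever the last condition of equation (14) holds."""
    N = X.shape[0]
    for j in range(model.w.numel()):
        w_star = model.w.clone()
        w_star[j] = 0.0
        resid = y - model.gamma * (X @ w_star)
        if (2.0 / N) * torch.abs(model.gamma * (X[:, j] @ resid)) <= l1:
            model.w[j] = 0.0

def train_standard_neural_lasso(X, y, l1, epochs=500, lr=0.01, val_frac=0.2):
    """Standard neural lasso: simple validation on a single held-out split (X, y are tensors)."""
    N = X.shape[0]
    perm = torch.randperm(N)
    n_val = int(val_frac * N)
    val_idx, tr_idx = perm[:n_val], perm[n_val:]
    model = NeuralLassoLinear(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)          # Adam, as in the text
    best_val, best_state = float("inf"), copy.deepcopy(model.state_dict())
    for _ in range(epochs):
        opt.zero_grad()
        neural_lasso_loss(model, X[tr_idx], y[tr_idx], l1).backward()
        opt.step()
        apply_zero_condition(model, X[tr_idx], y[tr_idx], l1)  # check after each epoch
        with torch.no_grad():
            val_mse = torch.mean((y[val_idx] - model(X[val_idx])) ** 2).item()
        if val_mse < best_val:
            best_val, best_state = val_mse, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)   # keep the weights with the lowest validation error
    return model, best_val
```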
The fact that statistical lasso employs cross-validation allows it to use all available observations to obtain an estimate of the error, whereas the standard neural lasso obtains this estimate using only a subset of the observations because it relies on simple validation. For this reason, a second algorithm called _restricted neural lasso_ has been developed to train the neural network by mimicking statistical lasso. Restricted neural lasso sets the value of \(\gamma\) equal to 1 and establishes it as a non-trainable parameter. Once the \(\gamma\) value has been fixed, it also sets the value of the hyper-parameter \(\ell_{1}\) to one of the \(\lambda\) values that the statistical lasso considers during its optimization. Having fixed the value of these two parameters, it is possible to perform the cross-validation and the algorithm selects the value of \(\ell_{1}\) that minimizes the cross-validation error. In a second step, the algorithm estimates the weights using the optimal value of \(\ell_{1}\) and setting \(\gamma\) equal to 1. Assuming that the network layout is correct, the performance of this second optimization method should be practically identical to that obtained by the statistical lasso. Finally, during the development of this work, a third optimization approach emerged. This new optimization algorithm, called _voting neural lasso_, combines all the optimization approaches discussed above. Specifically, it uses the cross-validation design used by the restricted neural lasso and by the statistical lasso. However, it does not search for the value of the hyper-parameter \(\lambda\) that minimizes the average validation error in the K configurations. For each of the K settings, it selects the value of \(\lambda\) with which the smallest validation error is obtained in a similar way to the standard neural lasso. A variable is considered to be significant when it has been selected in most of the K settings. In a second phase, the weights of only these significant variables are estimated without taking into account the penalty term. It is important to note that this approach is not a relaxed lasso [16]. To summarize the above, three optimization algorithms with three different purposes will be considered. Standard neural lasso obtains the estimation of the weights using the usual procedure of training neural networks. Restricted neural lasso mimics the statistical lasso method. If these two methods obtain very similar results, a bridge between Statistics and Machine Learning would be built. Finally, voting neural lasso proposes a new way of estimating weights that can be used for both the statistical and the neural versions. For the standard neural lasso and for the voting neural lasso, the network is initialized with \(\gamma=1\) and \(\ell_{1}=\max_{j}\left|\frac{2}{N}\mathbf{X}_{j}^{t}\mathbf{y}\right|\) for the linear case and \(\ell_{1}=\max_{j}\left|\frac{1}{N}\mathbf{X}_{j}^{t}(\mathbf{y}-\sigma(0))\right|\) for the logistic case. In addition, in this article, the Adam optimization algorithm is used to adjust the weights [17]. ## 4 Experimental Results In order to evaluate the performance of the proposed method, three experiments were conducted. The first two focused on the linear case. Specifically, the first one is performed with simulated data and the second one uses several real data sets. The two previous experiments are complemented with a third one aiming to evaluate the proposed method in the logistic case using real data. 
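Before turning to the experiments, the voting scheme introduced above can be summarized schematically for the linear case. In the sketch below, the grid of candidate penalties, the majority rule (a variable is kept when it is selected in more than half of the K folds), and the use of an unpenalized least-squares refit on the selected variables are our reading of the description above; `train_fold_fn` stands for any routine that trains the network of Figure 1 with a given penalty and returns the validation error and the estimated support.

```python
import numpy as np

def voting_neural_lasso(X, y, lambda_grid, train_fold_fn, K=5):
    """Schematic voting neural lasso: per-fold selection, majority vote, unpenalized refit.

    train_fold_fn(X_tr, y_tr, X_val, y_val, lam) must return (val_error, support_mask),
    e.g. by training the network of Figure 1 with l1 = lam.
    """
    N, p = X.shape
    folds = np.array_split(np.random.permutation(N), K)
    votes = np.zeros(p)
    for k in range(K):
        val_idx = folds[k]
        tr_idx = np.concatenate([folds[i] for i in range(K) if i != k])
        best_err, best_support = np.inf, np.zeros(p, dtype=bool)
        for lam in lambda_grid:
            err, support = train_fold_fn(X[tr_idx], y[tr_idx], X[val_idx], y[val_idx], lam)
            if err < best_err:
                best_err, best_support = err, support
        votes += best_support                  # one vote per fold for each selected variable
    selected = votes > K / 2                   # "selected in most of the K settings"
    beta = np.zeros(p)
    if selected.any():
        # second phase: estimate only the significant weights, without the penalty term
        beta[selected] = np.linalg.lstsq(X[:, selected], y, rcond=None)[0]
    return selected, beta
```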
### Experiment 1: Linear case, Simulated data In the first study, the data were simulated according to the model \(y=\mathbf{X}\boldsymbol{\beta}+\epsilon\) where \(\mathbf{X}\) is the matrix containing the observations as rows, \(\epsilon_{i}\sim N(0,1)\) and \[\beta=[1\,2\,3\,4\,\underbrace{0\,\ldots\,0}_{p-4}]\] Moreover, the data were simulated from a centered normal distribution so that \(\rho_{ij}=0.5^{|i-j|}\) for \(1\leq i<j\leq p\). In addition, the columns with the predictors were randomly rearranged to avoid possible positional effects. In order to test the performance of the different algorithms, training sets for \(p\in\{20,100,200\}\) with sample size \(N\) equal to 50 were generated. For each of the three scenarios, a repeated validation was performed with 100 runs. In all the repetitions, a test set of 1000 observations was generated. As performance measures, we calculated the MSE on the test set, the precision (percentage of non-significant variables correctly identified), and the recall (percentage of significant variables correctly identified). The number of folds K was set to five for the statistical lasso, restricted neural lasso, and voting neural lasso algorithms. The standard neural lasso used 20% of the training data as the validation set. Note that the analyses using the non-neural versions were performed using the glmnet R package [18], while the neural versions were implemented in PyTorch [19]. The obtained results are shown in Table 1. This table shows that the standard neural lasso performs significantly worse than the non-neural version. As noted above, this is because the standard neural lasso only obtains knowledge of its performance during training on the small validation subset. It is also observed that the performance of the statistical lasso and the restricted neural lasso is almost identical. This confirms that the network design is correct. Finally, the best results were obtained by the voting neural lasso algorithm, which significantly improves on those obtained by the three previous approaches. ### Experiment 2: Linear case, Real data The proposed technique was further evaluated using five different real data sets. Specifically, three datasets were obtained from the University of California, Irvine (UCI) repository, and two of our own datasets were used. The datasets used are the following: * UCI White wine quality [20]. This database, containing 4898 observations, was built to predict the quality of Portuguese "Vinho Verde" wine from 11 predictors. In each of the repetitions, the training set consisted of 4000 training observations, and the test set was made up of 898 observations. * UCI Boston housing [21]. This dataset consists of 506 observations with 12 attributes each. These attributes correspond to the dependent variable, which indicates the median value of owner-occupied homes, and the 11 predictors used to estimate it. In each of the repetitions, the training set consisted of 400 training observations, and the test set was made up of 106. * UCI Abalone [22]. This dataset was collected to predict the age of the abalone from physical measurements. It contains 4177 observations with nine attributes each. In each of the repetitions, the training set consisted of 3342 training observations, and the test set was made up of 1935. * Suicide attempt severity. This database contains information on the severity of 349 suicide attempts as measured by the Beck suicide intent scale [23]. The predictors are 30 items of the Barratt impulsivity scale [24].
In each repetition, the training set consisted of 200 training observations, and the test set was made up of 149.
* Attention Deficit Hyperactivity Disorder (ADHD). It contains the responses provided by 59 mothers of children with ADHD to the Behavior Rating Inventory of Executive Function-2 containing 63 items [25]. This dataset has two possible dependent variables measuring the degree of inattention and the degree of hyperactivity of the children as measured by the ADHD rating scale [26]. The training set for each repetition consists of 47 observations and the test set consists of 12 observations.

As with the previous experiment, 100 repeated validations are performed, the number of folds K is set to five, and the validation set contains 20% of the training data. The results obtained, shown in Table 2, strengthen the conclusions reached with the synthetic data. In particular, it is observed that the voting neural lasso obtains an MSE similar to that of the statistical lasso, but with the advantage of using a significantly smaller number of predictors. It is also observed that the worst performance is obtained with the standard neural lasso. In addition, it can be seen that the statistical lasso and the restricted neural lasso obtain practically identical results.

\begin{table}
\begin{tabular}{c l c c c} \hline \hline
 & Method & MSE & Precision & Recall \\ \hline \hline
\multirow{3}{*}{p=20} & Statistical lasso & 1.294 (0.188) & 0.671 (0.207) & 1 (0) \\
 & Standard neural lasso & 1.465\({}^{**}\) (0.341) & 0.644 (0.249) & 1 (0) \\
 & Restricted neural lasso & 1.298 (0.188) & 0.668 (0.210) & 1 (0) \\
 & Voting neural lasso & 1.188\({}^{**}\) (0.144) & 0.934\({}^{**}\) (0.072) & 1 (0) \\ \hline
\multirow{3}{*}{p=100} & Statistical lasso & 1.680 (0.419) & 0.848 (0.087) & 0.998 (0.025) \\
 & Standard neural lasso & 2.129\({}^{**}\) (0.789) & 0.808\({}^{**}\) (0.136) & 0.998 (0.025) \\
 & Restricted neural lasso & 1.695 (0.447) & 0.853 (0.096) & 0.998 (0.025) \\
 & Voting neural lasso & 1.419\({}^{**}\) (0.360) & 0.976\({}^{**}\) (0.017) & 0.998 (0.025) \\ \hline
\multirow{3}{*}{p=200} & Statistical lasso & 1.806 (0.383) & 0.910 (0.053) & 1 (0) \\
 & Standard neural lasso & 2.338\({}^{**}\) (0.717) & 0.827\({}^{**}\) (0.166) & 0.995 (0.035) \\ \cline{1-1}
 & Restricted neural lasso & 1.821 (0.395) & 0.910 (0.065) & 1 (0) \\ \cline{1-1}
 & Voting neural lasso & 1.403\({}^{**}\) (0.425) & 0.992\({}^{**}\) (0.007) & 0.990 (0.049) \\ \hline \hline
\end{tabular}
\end{table} Table 1: Results obtained for the linear scenario with synthetic data. For each of the three statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively.

\begin{table} \begin{tabular}{l l c c} \hline \hline Dataset & Method & MSE & Selected Var.
(\%) \\ \hline \hline \multirow{3}{*}{White wine quality} & Statistical lasso & 0.567 (0.027) & 0.899 (0.087) \\ & Standard neural lasso & 0.566 (0.027) & 0.960\({}^{**}\) (0.073) \\ & Restricted neural lasso & 0.567 (0.027) & 0.898 (0.084) \\ & Voting neural lasso & 0.566 (0.028) & 0.905 (0.070) \\ \hline \multirow{3}{*}{Boston housing} & Statistical lasso & 25.530 (5.603) & 0.864 (0.093) \\ & Standard neural lasso & 25.865 (5.844) & 0.910\({}^{*}\) (0.082) \\ & Restricted neural lasso & 25.529 (5.600) & 0.865 (0.093) \\ & Voting neural lasso & 25.611 (5.625) & 0.764\({}^{*}\) (0.098) \\ \hline \multirow{3}{*}{Abalone} & Statistical lasso & 5.063 (0.420) & 0.981 (0.048) \\ & Standard neural lasso & 5.334\({}^{**}\) (0.458) & 0.571\({}^{**}\) (0) \\ & Restricted neural lasso & 5.061 (0.420) & 0.981 (0.048) \\ & Voting neural lasso & 5.060 (0.418) & 0.964\({}^{*}\) (0.062) \\ \hline \multirow{3}{*}{Suicide attempt} & Statistical lasso & 31.126 (2.380) & 0.095 (0.123) \\ & Standard neural lasso & 31.915\({}^{*}\) (2.276) & 0.683\({}^{**}\) (0.282) \\ & Restricted neural lasso & 31.127 (2.382) & 0.078 (0.133) \\ & Voting neural lasso & 31.025 (2.424) & 0.002\({}^{**}\) (0.008) \\ \hline \multirow{3}{*}{ADHD} & Statistical lasso & 3.616 (1.389) & 0.257 (0.065) \\ & Standard neural lasso & 3.680 (1.433) & 0.334\({}^{**}\) (0.229) \\ \cline{1-1} & Restricted neural lasso & 3.614 (1.388) & 0.252 (0.064) \\ \cline{1-1} & Voting neural lasso & 3.787 (1.230) & 0.145\({}^{**}\) (0.034) \\ \hline \multirow{3}{*}{ADHD} & Statistical lasso & 3.465 (1.251) & 0.312 (0.153) \\ \cline{1-1} & Standard neural lasso & 3.883\({}^{*}\) (1.686) & 0.346 (0.205) \\ \cline{1-1} & Restricted neural lasso & 3.465 (1.259) & 0.315 (0.159) \\ \cline{1-1} & Voting neural lasso & 3.637 (1.198) & 0.093\({}^{**}\) (0.029) \\ \hline \end{tabular} \end{table} Table 2: Results obtained for the linear scenario with real data. For each of the three statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively. ### Experiment 3: Logistic case, Real data This last experiment is intended to test the performance of the neural lasso in the logistic scenario. For this purpose, three databases obtained from the UCI repository and one own database are used. A brief description of these databases is given below. * UCI Wisconsin Breast cancer [27]. This dataset is composed of 569 observations. Each observation has 30 predictors and a dependent variable indicating whether the predictors were obtained from a malignant tumor. The training set was made up of 445 observations while the test set consisted of 124. * UCI Spam [28]. This dataset is made up of 4601 instances. Each of them contains 57 predictors and one dependent variable indicating whether the email was spam. The training set consisted of 3975 observations while the test set comprised 626. * UCI Ionosphere [29]. This database is composed of 351 instances with 34 predictors and a dependent variable indicating whether the radar signal passed through the ionosphere or not. The training set was made up of 299 observations while the test set consisted of 52. * Suicidal Behaviour [30]. This database consists of 700 observations. Each contains 106 predictors consisting of responses to items of various scales, and a dependent variable indicating whether the respondent had recently made an attempt. 
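The variable-selection step shared by these experiments can be summarized with the short sketch below. It is an illustrative reading rather than the authors' code: `fold_coefs` is a hypothetical list holding one estimated coefficient vector per fold (each obtained with the \(\lambda\) that minimizes that fold's validation error), and the two metrics follow the precision and recall definitions given in Experiment 1.

```python
import numpy as np

def vote_select(fold_coefs, eps=1e-8):
    # A variable is declared significant when it is non-zero in a majority of the K folds.
    votes = np.mean([np.abs(c) > eps for c in fold_coefs], axis=0)
    return votes > 0.5

def precision_recall(selected, true_support):
    # Precision: share of truly non-significant variables correctly discarded.
    # Recall: share of truly significant variables correctly kept.
    selected = np.asarray(selected, dtype=bool)
    true_support = np.asarray(true_support, dtype=bool)
    precision = np.mean(~selected[~true_support]) if (~true_support).any() else 1.0
    recall = np.mean(selected[true_support]) if true_support.any() else 1.0
    return precision, recall
```

After the vote, only the retained variables are re-estimated without the penalty term, as described earlier.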
The set-up used was similar to that of the two previous sections (K equal to five, 100 repetitions, and the validation set composed of 20% of the training data). The results obtained are shown in Table 3. The results for the logistic case are similar to those obtained in the linear scenario and presented in the previous two sections. It is observed that the best results are achieved by the voting neural lasso in three of the four settings. A significantly lower accuracy than that of the statistical lasso is obtained only in the spam data set. It is also observed that the restricted neural lasso and the statistical lasso obtain equivalent results, which again shows the convergence of the neural technique with the statistical one. A small difference with respect to the results achieved previously is that the standard neural lasso obtains better results than the statistical lasso in two settings (Cancer and Ionosphere).

## 5 Conclusions

In this work, the lasso algorithm has been implemented by means of neural networks. Specifically, the network layout has been defined and three possible optimization algorithms for estimating its weights have been compared. It has been observed that estimating the weights in the way a neural network is usually trained results in poor performance. It has also been shown that it is possible to mimic the optimization of the statistical lasso algorithm with a neural network, obtaining almost identical results. The only difference is that the former uses coordinate descent while the latter uses gradient descent. This result brings the fields of Statistics and Machine Learning closer. Finally, an algorithm using a majority vote has been proposed which takes into account in how many of the cross-validation scenarios the variable is considered significant. This third algorithm has shown considerably better performance than the widely used statistical lasso. In particular, it has been shown that the voting neural lasso either obtains a lower error or obtains a better variable selection in both the linear and logistic cases. Moreover, these results have been obtained using training sets that present great diversity: they contain a number of observations ranging from only 47 to 4000 and a number of predictors varying from 9 to 200. These results open up new lines of research, such as developing neural versions of other shrinkage techniques such as the elastic net, or extending these algorithms to non-linear versions using the flexibility of neural networks. It is also important to note that the development of the voting neural lasso has been limited to simple cross-validation, which is the information available to the other techniques. However, the use of repeated \begin{table} \begin{tabular}{l l c c} \hline \hline Dataset & Method & ACC & Selected Var.
(\%) \\ \hline \hline \multirow{4}{*}{Cancer} & Statistical lasso & 0.963 (0.016) & 0.359 (0.092) \\ & Standard neural lasso & 0.964 (0.018) & 0.160\({}^{**}\) (0.039) \\ & Restricted neural lasso & 0.964 (0.016) & 0.360 (0.096) \\ & Voting neural lasso & 0.969\({}^{**}\) (0.015) & 0.111\({}^{**}\) (0.018) \\ \hline \multirow{4}{*}{Spam} & Statistical lasso & 0.923 (0.011) & 0.926 (0.024) \\ & Standard neural lasso & 0.904\({}^{**}\) (0.014) & 0.528\({}^{**}\) (0.056) \\ & Restricted neural lasso & 0.924 (0.011) & 0.927 (0.024) \\ & Voting neural lasso & 0.915\({}^{**}\) (0.010) & 0.462\({}^{**}\) (0.025) \\ \hline \multirow{4}{*}{Ionosphere} & Statistical lasso & 0.828 (0.048) & 0.448 (0.079) \\ & Standard neural lasso & 0.823 (0.051) & 0.388\({}^{**}\) (0.071) \\ & Restricted neural lasso & 0.827 (0.047) & 0.447 (0.080) \\ & Voting neural lasso & 0.829 (0.048) & 0.245\({}^{**}\) (0.040) \\ \hline \multirow{4}{*}{Suicide} & Statistical lasso & 0.650 (0.030) & 0.093 (0.057) \\ & Standard neural lasso & 0.627\({}^{**}\) (0.048) & 0.166\({}^{**}\) (0.253) \\ \cline{1-1} & Restricted neural lasso & 0.651 (0.029) & 0.088 (0.061) \\ \cline{1-1} & Voting neural lasso & 0.652 (0.031) & 0.031\({}^{**}\) (0.010) \\ \hline \end{tabular} \end{table} Table 3: Results obtained for the logistic scenario with real data. For each of the two statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively. cross-validations, and obtaining confidence intervals, on them might result in a more robust algorithm. ## Funding This research was partially funded by: Ministerio de Ciencia e Innovacion, Proyectos de Transicion Ecologica y Transicion Digital TED2021-130980B-I00, and Instituto Salud Carlos III, grant number DTS21/00091. ## Data availability The real data used in this study for the linear regression problem can be obtained from the UCI repository ([https://archive.ics.uci.edu/datasets](https://archive.ics.uci.edu/datasets)). The real data used for the logistic regression experiment are available from the corresponding author upon request. ## Declarations **Conflict of interest.** The authors have no relevant financial or non-financial interests to disclose.
2309.11374
Cooperative Spin Amplification
Quantum amplification is recognized as a key resource for precision measurements. However, most conventional paradigms employ an ensemble of independent particles that usually limit the performance of quantum amplification in gain, spectral linewidth, etc. Here we demonstrate a new signal amplification using cooperative 129Xe nuclear spins embedded within a feedback circuit, where the noble-gas spin coherence time is enhanced by at least one order of magnitude. Using such a technique, magnetic field can be substantially pre-enhanced by more than three orders and is in situ readout with an embedded 87Rb magnetometer. We realize an ultrahigh magnetic sensitivity of 4.0 fT/Hz$^{1/2}$ that surpasses the photon-shot noise and even below the spin-projection noise of the embedded atomic magnetometer, allowing for exciting applications including searches for dark matter with sensitivity well beyond supernova constraints. Our findings extend the physics of quantum amplification to cooperative spin systems and can be generalized to a wide variety of existing sensors, enabling a new class of cooperative quantum sensors.
Minxiang Xu, Min Jiang, Yuanhong Wang, Haowen Su, Ying Huang, Xinhua Peng
2023-09-20T14:55:34Z
http://arxiv.org/abs/2309.11374v1
# Cooperative Spin Amplification

###### Abstract

Quantum amplification is recognized as a key resource for precision measurements. However, most conventional paradigms employ an ensemble of independent particles that usually limit the performance of quantum amplification in gain, spectral linewidth, etc. Here we demonstrate a new signal amplification using cooperative \({}^{129}\)Xe nuclear spins embedded within a feedback circuit, where the noble-gas spin coherence time is enhanced by at least one order of magnitude. Using such a technique, magnetic field can be substantially pre-enhanced by more than three orders and is in situ readout with an embedded \({}^{87}\)Rb magnetometer. We realize an ultrahigh magnetic sensitivity of \(4.0\,\mathrm{fT/Hz^{1/2}}\) that surpasses the photon-shot noise and even below the spin-projection noise of the embedded atomic magnetometer, allowing for exciting applications including searches for dark matter with sensitivity well beyond supernova constraints. Our findings extend the physics of quantum amplification to cooperative spin systems and can be generalized to a wide variety of existing sensors, enabling a new class of "cooperative quantum sensors".

Quantum amplification that offers the capability of enhancing weak signals is ubiquitous and essential to various frontiers of science [1], ranging from ultrasensitive magnetic and electric field sensing [2; 3; 4], mechanical oscillator motion measurements [5], and optical amplifiers [6; 7] to determination of fundamental constants [8], frequency standards [9], and searches for dark matter [10; 11; 12] and exotic forces beyond the standard model [13]. To date, the well-established paradigm of quantum amplification is mostly based on using independent quantum systems, including superconducting qubits [2], atomic and molecular spins [11; 12; 13], photons [6; 7], nitrogen-vacancy centers in diamonds [14; 4], trapped-ion qubits [3; 15], etc. The individuals in independent systems amplify the measured signal independently and the total response is the summation of the individuals, which in practice leads to limits on the performance of quantum amplifiers, including operation frequency, spectral linewidth, and gain.

Cooperative systems have recently attracted extensive attention and provided opportunities for novel applications [16; 17; 18; 19; 20; 21; 22; 23; 24]. In contrast to independent systems, the individuals in cooperative systems experience each other and their evolution depends on the state of the entirety. Various experimental systems have explored the rich phenomena of cooperative systems, for example, cooperative emission [16; 17; 18; 19; 25] and scattering [20; 21], one-axis-twisting dynamics [22], and spectral narrowing [23; 26]. Cooperative systems could be a promising platform to explore advanced quantum amplification beyond independent systems, partially because such systems provide an ideal way to engineer the coherence time of quantum systems and thus enhance the signal response. The combination of cooperative systems and quantum amplification may open up exciting opportunities for developing new quantum amplifiers with improved performance, especially in gain. Such amplifiers would find promising applications in precision measurements, for example, ultrasensitive magnetometers [27; 28], magnetoencephalography [29; 30], geomagnetic anomaly detection [31], and searches for new physics beyond the standard model [12; 13].
In this Article, we demonstrate a new magnetic-field signal amplification using cooperative noble-gas nuclear spins. In the experiment, we prepare cooperative \({}^{129}\)Xe spins by acquiring the \({}^{129}\)Xe signal with an embedded \({}^{87}\)Rb magnetometer and then feeding the signal back to the \({}^{129}\)Xe spins with a feedback circuit. Our investigation shows the dynamics under different feedback strengths. The nuclear-spin coherence time is significantly prolonged by more than one order of magnitude, and a 2400-fold improvement in signal amplification is realized using such cooperative spins. We refer to these collective phenomena as "cooperative amplification". As a first application, our approach constitutes a new technology for enhancing and measuring magnetic fields with a sensitivity of \(4.0\,\mathrm{fT/Hz^{1/2}}\), which surpasses the photon-shot noise and even the spin-projection noise of the embedded \({}^{87}\)Rb magnetometer. It is noteworthy that this quantum-enhanced measurement scheme does not rely on entanglement [32]. We discuss the promising applications of our amplification technique in searches for hypothetical particles with a sensitivity well beyond the stringent supernova constraints [33; 34]. The present amplification technique should be generic for a wide range of sensors and constitute a new class of cooperative sensors.

Our experiments are carried out in a setup similar to that of Refs. [19; 35], as depicted in Fig. 1(a). A \(0.5\,\mathrm{cm^{3}}\) cubic vapor cell contains 20 torr of enriched \({}^{129}\)Xe, N\({}_{2}\) buffer gas, and a droplet of enriched \({}^{87}\)Rb. The \({}^{129}\)Xe spins are polarized through spin-exchange collisions with optically pumped \({}^{87}\)Rb atoms, as there are no optical transitions available for \({}^{129}\)Xe spins from the ground levels. A bias field \(B_{0}\) is applied along the pumping direction (the \(z\) axis). Two steps, i.e., measurement and feedback, establish the indirect interaction among the spins. The \({}^{129}\)Xe nuclear magnetization generates an effective magnetic field \(\mathbf{B}_{\mathrm{eff}}=\lambda M_{0}\mathbf{P}\) on the \({}^{87}\)Rb atoms through Fermi-contact collisions [36; 37], where \(\lambda=8\pi\kappa_{0}/3\) is the Fermi-enhancement factor, \(\kappa_{0}\approx 540\) for the \({}^{87}\)Rb-\({}^{129}\)Xe system, \(M_{0}\) is the maximum magnetization of \({}^{129}\)Xe with unity polarization, and \(\mathbf{P}\) is the equilibrium polarization vector of the \({}^{129}\)Xe nucleus. The \({}^{87}\)Rb atoms in the vapor cell serve as a sensitive magnetometer to read out the \({}^{129}\)Xe magnetization in situ. The real-time output signal of the \({}^{87}\)Rb magnetometer is connected to a feedback coil and generates a corresponding feedback field \(B_{\mathrm{fb}}\), with a rheostat in series with the coils to adjust the feedback strength [Fig. 1(a), more details are presented in Supplementary Section I]. Because the \({}^{87}\)Rb magnetometer measures both the \(x\) and \(y\) components of the \({}^{129}\)Xe polarization (with responses \(C_{x}\) and \(C_{y}\), respectively), the feedback field can be expressed as \(B_{\mathrm{fb}}=\chi_{1}P_{x}-\chi_{2}P_{y}\). Here, \(\chi_{1}\) and \(\chi_{2}\) represent the feedback gains associated with "measuring \(P_{x}\) and providing feedback in \(y\)" and "measuring \(P_{y}\) and providing feedback in \(y\)", respectively.
The values of \(\chi_{1}\) and \(\chi_{2}\) depend on factors such as the magnetometer response, the rheostat, and the coil coefficient. The self-induced feedback field carries the information about the \({}^{129}\)Xe spins and then produces a torque on the spins. Equivalently, each single spin experiences the torque from the collective spins and its time evolution depends on the entirety. Notably, this torque does not come from the dipole-dipole interaction between the single spin and the collective spins, but is mediated by the feedback field.

We now consider the dynamics of cooperative \({}^{129}\)Xe spins under the self-induced feedback field. The polarization of \({}^{129}\)Xe in the \(x\), \(y\), and \(z\) directions is denoted as \(P_{x}\), \(P_{y}\), and \(P_{z}\), respectively. The dynamics of cooperative \({}^{129}\)Xe spins in the feedback circuit can be described by the Bloch equation: \[\frac{\mathrm{d}P_{x}}{\mathrm{d}t} =\gamma(P_{y}B_{0}-P_{z}B_{\mathrm{fb}})-\Gamma P_{x} \tag{1}\] \[=(\gamma B_{0}+\gamma\chi_{2}P_{z})P_{y}-(\Gamma+\gamma\chi_{1}P_{z})P_{x},\] where \(\gamma\) is the gyromagnetic ratio of \({}^{129}\)Xe, \(\Gamma=1/T_{2}\) corresponds to the spin decoherence rate, and \(T_{2}\) represents the intrinsic coherence time. In this equation, we adopt the small-angle approximation, treating \(P_{z}\) as a constant. To simplify the equation, we introduce two additional parameters, namely \(\xi=\gamma\chi_{1}P_{z}\) and \(\Delta_{\mathrm{fb}}=\gamma\chi_{2}P_{z}\). The parameter \(\xi\), associated with the process of "measuring \(P_{x}\) and providing feedback in \(y\)", represents the modification of decoherence induced by feedback (incoherent effect). On the other hand, the parameter \(\Delta_{\mathrm{fb}}\), linked to the process of "measuring \(P_{y}\) and providing feedback in \(y\)", describes a feedback-induced frequency shift (coherent effect). The rheostat controls the amplitude of both \(\xi\) and \(\Delta_{\mathrm{fb}}\), while the sign is determined by the connecting polarity of the feedback coil. The ratio \(\Delta_{\mathrm{fb}}/\xi\) remains constant and is determined by the \({}^{87}\)Rb magnetometer.

Figure 1: Setup and conceptual description of cooperative dynamics. (a) Sketch of the experimental setup. The polarization and probing of \({}^{129}\)Xe atoms are achieved through spin-exchange collisions with \({}^{87}\)Rb atoms. Real-time feedback is provided to the system via a feedback coil. The feedback field includes the \(P_{x}\) and \(P_{y}\) signals of \({}^{129}\)Xe. The amplitude of the feedback is controlled by an adjustable rheostat, and the sign is controlled by the connecting polarity. A bias field \(B_{0}\) is applied along the pumping direction. The diagram does not include the pump beam. (b) Refocusing effect in positive feedback mode. Some spins experience dephasing at certain points in time (highlighted with bright colors). The feedback field applies a torque on the dephased spins, causing them to reorient and refocus towards the collective spin. The right inset illustrates the spin dynamics. Each individual spin undergoes a torque (indicated by red arrows) parallel to the collective spin. As a result, the dephased spins tend to refocus, leading to an effective enhancement of the coherence time. Precession is omitted in the dynamical diagram. (c) Spreading effect in negative feedback mode. In this mode, the feedback-induced torque is anti-parallel to the collective spin, causing the dephased spins to align in the opposite direction. Consequently, the effective coherence time decreases as the dephased spins deviate from the collective spin.

We show that the cooperative spin coherence time can be significantly enhanced through manipulating the feedback strength. According to Eq. (1), the decoherence rate modified by the feedback \(\xi\) becomes \[\frac{1}{T_{\text{eff}}}=\Gamma+\xi, \tag{2}\] where \(T_{\text{eff}}\) is the effective coherence time. In order to clearly illustrate that the behavior of the spins is closely connected with the relation between \(\Gamma\) and \(\xi\), we define the parameter \(C=-\xi/\Gamma\). In our analysis, we focus solely on the \(\chi_{1}\) component, disregarding the contribution of \(\chi_{2}\), which primarily induces a frequency shift. For \(0<C<1\) (positive feedback), we demonstrate that spins, initially dephased from the collective spin due to random noise, exhibit a tendency to refocus towards the collective spin [Fig. 1(b)]. In the presence of the feedback field, each spin experiences a torque parallel to the collective spin, compelling them to rotate until they realign with the collective spin (Supplementary Section II). As a result, unlike in independent dephasing scenarios, the cooperative spins are able to correct their precession phase according to the entirety, leading to an extended coherence time. Conversely, when \(C<0\) (negative feedback), the feedback-induced torque is anti-parallel to the collective spin [Fig. 1(c)]. Under this torque, the dephased spins tend to spread out until they align in the opposite direction, effectively canceling the collective spin. As a consequence, the decoherence rate worsens in the presence of feedback. It is this modulation of the decoherence process that distinguishes cooperative systems from independent systems.

We demonstrate cooperative \({}^{129}\)Xe spin dynamics by adjusting the feedback parameter \(\xi\). When \(\xi\) is set in the \(C\leq 0\) and \(0<C<1\) regimes, the transverse magnetization decays exponentially with a modified rate. To track changes in the coherence time, we apply a transverse pulse to tilt the \({}^{129}\)Xe spins by a small angle of about \(5^{\circ}\) and record the resultant decay signal. The signals are fitted with an exponentially decaying sinusoidal function to determine the corresponding coherence time. In the \(C\leq 0\) regime, the coherence time decreases from \(31\,\text{s}\) to \(4\,\text{s}\) with increasing \(\xi\) [Fig. 2(a)]. In the \(0<C<1\) regime, the coherence signal decays more slowly for larger \(|\xi|\), and realizes \(T_{\text{eff}}>T_{2}\) [Fig. 2(b)]. In our experiment, the coherence time \(T_{\text{eff}}\) can be tuned to about \(545\,\text{s}\), which is more than one order of magnitude longer than that observed without feedback (\(\approx\)31 s). Furthermore, Figure 2(c) shows the effective \({}^{129}\)Xe coherence time for different values of \(\xi\), which can be well fitted with the theoretical inverse function. When \(\xi\) is set in the \(C>1\) regime, superradiance-shaped pulses and maser oscillation occur instead of an exponentially decaying signal, and \(T_{\text{eff}}\) can no longer be defined in this regime. Significant magnetic-field amplification is observed using cooperative \({}^{129}\)Xe spins.
A transverse oscillating magnetic field \(\mathbf{B}_{\text{ac}}\) is applied to the \({}^{129}\)Xe spins and generates transverse magnetization of \({}^{129}\)Xe; the magnetization induces an effective magnetic field \(\mathbf{B}_{\text{eff}}^{\perp}\) through Fermi-contact collisions with \({}^{87}\)Rb atoms. As reported in Refs. [12; 13; 35], the amplitude of \(\mathbf{B}_{\text{eff}}^{\perp}\) can be significantly larger than that of \(\mathbf{B}_{\text{ac}}\), with an amplification factor \(\eta_{0}=|\mathbf{B}_{\text{eff}}^{\perp}|/|\mathbf{B}_{\text{ac}}|\). The factor is determined by \(\eta_{0}=\frac{\lambda}{2}M_{0}P_{0}\gamma T_{2}\), where \(T_{2}\) is the intrinsic coherence time. Such amplifiers are based on independent \({}^{129}\)Xe spins, and their amplification ranges from 20 to 200 [12; 13; 35]. In contrast, our approach enhances the coherence time with the cooperative \({}^{129}\)Xe spins as demonstrated, leading to a modified cooperative amplification (Supplementary Section III) \[\eta=\frac{\lambda}{2}M_{0}P_{0}\gamma T_{\text{eff}}, \tag{3}\] where the coherence time is \(T_{\text{eff}}\) instead of the intrinsic \(T_{2}\). This provides new opportunities to realize improved spin amplification. We experimentally measure \(\eta\) and the bandwidth of the amplifier by sweeping the frequency around the \({}^{129}\)Xe resonance and recording the signal response [Fig. 3(a)]. The fitted Lorentzian profile is overlaid on the experimental data. We further investigate \(\eta\) under different \(T_{\text{eff}}\) by tuning \(\xi\) and show that the resonance peak becomes narrower and higher with longer \(T_{\text{eff}}\). For example, when \(T_{\text{eff}}\) is tuned to be about \(163\,\text{s}\), the amplification \(\eta\) reaches approximately 2500. We also find that the resonance frequency \(f\) deviates from the Larmor frequency \(f_{0}\) in the presence of the feedback field [see inset of Fig. 3(a)]. As derived in Supplementary Section II, the shift \((f-f_{0})\) linearly depends on \(\xi\) and its slope equals \(-C_{y}/C_{x}\). The fitted result is \(f-f_{0}\approx-0.46\xi\). The relative amplification \(\eta/\eta_{0}\) is shown in Fig. 3(b). The cooperative response leads to a 5-fold enhancement in the relative amplification \(\eta/\eta_{0}\). Further enhancement of \(\eta\) is realized when \(\xi\) approaches \(-\Gamma\). However, in practice, the fluctuation of the \({}^{87}\)Rb magnetometer response or of the feedback circuit resistance limits the precision of \(\xi\) and makes the \({}^{129}\)Xe spins leave the amplification regime \(0<C<1\). The inset of Fig. 3(b) shows \(\eta\) values under different bias fields \(B_{0}\) from \(0.08\,\mu\text{T}\) to \(3\,\mu\text{T}\) with \(\xi\approx 0.006\,\text{s}^{-1}\), where the amplification factor \(\eta\) is nearly independent of \(B_{0}\) and its average is about 820. In contrast to spin-exchange-relaxation-free magnetometers that require operation at near-zero fields below \(100\,\text{nT}\) [38], the present \({}^{129}\)Xe cooperative sensor can be operated in \(\mu\)T-level magnetic fields. As a first application, we use cooperative spin amplification to realize magnetic-field precision measurements with a fT/Hz\({}^{1/2}\)-level sensitivity. As an example, the bias field is set to \(B_{0}\approx 850\,\text{nT}\), corresponding to a \({}^{129}\)Xe Larmor frequency of \(f_{0}\approx 10.03\,\text{Hz}\). By tuning the feedback strength, the effective coherence time is set to \(T_{\text{eff}}\approx 300\,\text{s}\).
A resonant oscillating field \(B_{\text{ac}}\approx 13.8\,\text{pT}\) along the \(y\) direction is applied as a test field. Benefiting from the cooperative \({}^{129}\)Xe amplification, the applied test field is pre-amplified to \(65\,\text{nT}\). By taking the response of the cooperative spin amplifier into account, the magnetic sensitivity of the \({}^{87}\)Rb magnetometer is effectively enhanced to about \(4.0\,\text{fT/Hz}^{1/2}\) around the resonance frequency, as illustrated in Fig. 4(a). The sensitivity is over 1800 times better than the photon-shot-noise limit (\(\approx\)7.3 pT/Hz\({}^{1/2}\)) of the \({}^{87}\)Rb magnetometer. Moreover, it surpasses the spin-projection noise (\(\approx\)8.7 fT/Hz\({}^{1/2}\)) of the \({}^{87}\)Rb magnetometer by 2.2-fold (Supplementary Section IV). Figure 4(b) depicts the magnetic-field sensitivity for various feedback strengths that correspond to different enhanced coherence times \(T_{\text{eff}}\). The sensitivity data are fitted with the function \([(a/T_{\text{eff}})^{2}+b^{2}]^{1/2}\), where the coefficients are estimated to be \(a\approx 860.3\) and \(b\approx 3.2\). Here the first term originates from non-magnetic photon-shot noise, which is not amplified and can be suppressed by the amplifier. The second term denotes real magnetic noise of about 3.2 fT/Hz\({}^{1/2}\) that can be amplified by the cooperative amplifier, including magnetic-shield Johnson noise and unavoidable feedback-circuit magnetic noise. As one can see, the current sensitivity is dominantly limited by the magnetic noise, which can be suppressed by existing techniques. For example, magnetic-shield Johnson noise can be minimized by using ferrite shielding [39]. The theoretical sensitivity when the potential magnetic noise is removed is indicated by the dashed line; for example, the sensitivity can be improved to better than 1 fT/Hz\({}^{1/2}\) when \(T_{\text{eff}}\) is tuned to 900 s. Further improvement of the cooperative amplifier can be achieved with a smaller coefficient \(a\), which requires a high noble-gas number density, noble-gas spin polarization, and alkali-metal magnetometer response. Extrapolating the present results to devices with alkali-noble-gas pairs with a smaller spin-destruction cross section, such as K-\({}^{3}\)He, \(a\) should be reduced to about 80 with 3 atm of \({}^{3}\)He. \({}^{3}\)He spins also possess a longer intrinsic coherence time (\(\approx\)1000 s), which can become hours long after enhancement by the cooperative approach. These methods would extend the sensitivity below 0.1 fT/Hz\({}^{1/2}\).

**Discussions.** We would like to emphasize the main difference between this work and Fermi-contact enhancement. First, the Fermi-contact enhancement factor \(\lambda\) constitutes just a fraction of the amplification factor \(\eta\). It should be noted that many other parameters are also important to realize a significant amplification factor, such as \(P_{0}\) and \(T_{\text{eff}}\). In our experiment, the polarization of \({}^{129}\)Xe can reach \(P_{0}\approx 0.18\) and \(T_{\text{eff}}\) is tuned to more than 500 s, both of which are essential to realize an amplification factor of more than three orders of magnitude. Second, we introduce the cooperative amplifier to further increase the amplification. A 5-fold enhancement of \(\eta\) is achieved through tuning the feedback strength, while \(\lambda\) remains unchanged.
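As a quick numerical cross-check of the figures quoted above (a sketch only: the helper names are ours, all numbers are taken from the text, and \(\eta/\eta_{0}=T_{\text{eff}}/T_{2}\) follows directly from Eq. (3) and the expression for \(\eta_{0}\)):

```python
import numpy as np

T2 = 31.0          # intrinsic 129Xe coherence time, s
a, b = 860.3, 3.2  # fitted noise model sqrt((a/T_eff)^2 + b^2), in fT/Hz^(1/2)

def relative_gain(T_eff):
    # eta/eta0 = T_eff/T2, since eta and eta0 differ only in the coherence time
    return T_eff / T2

def sensitivity(T_eff, magnetic_noise=b):
    # Fitted model of Fig. 4(b); magnetic_noise=0 removes the shield/circuit noise term
    return np.hypot(a / T_eff, magnetic_noise)

print(relative_gain(163.0))     # ~5.3, consistent with the ~5-fold enhancement of eta
print(sensitivity(300.0))       # ~4.3 fT/Hz^(1/2) from the fit, close to the measured 4.0
print(sensitivity(900.0, 0.0))  # ~0.96 fT/Hz^(1/2): sub-fT once the magnetic noise is removed
```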
Our technique based on cooperative spins shows potential for application in other areas, such as comagnetometry, a means to measure the precession frequencies of two species of nuclei, including \({}^{129}\)Xe-\({}^{131}\)Xe and \({}^{129}\)Xe-\({}^{3}\)He [40; 41]. Its ability to resist noise and systematic effects associated with the magnetic field makes it useful for searches for violation of local Lorentz invariance [41] and for new spin-dependent forces [42; 40], inertial rotation sensing [43], etc. By allowing for long measurement times, the persistent coherence of cooperative spins allows for high accuracy in determining the precession frequency of nuclear spins, which is proportional to the measurement time to the power of -3/2 according to the Cramér-Rao lower bound [41]. Our cooperative approach is capable of reuniting dephased spins and resisting magnetic field gradients, making it possible to create a new class of cooperative spin comagnetometers. According to the experiment, where the coherence time is enhanced to about 20 times longer than that of the independent ensemble, the frequency accuracy could be improved by two orders of magnitude. It is also reported that in the \({}^{129}\)Xe-\({}^{131}\)Xe isotope comagnetometer, the electric quadrupole moment of \({}^{131}\)Xe can split the resonance into triplets due to the electric field gradient induced by the glass wall [44]. These triplets may be narrowed by the cooperative approach, thus allowing for high-precision measurements of the quadrupole splitting.

Figure 2: Demonstration of cooperative \({}^{129}\)Xe dynamics with different feedback strengths. (a) In the regime where \(C\leq 0\), the coherence decay rate becomes higher as \(\xi\) increases. (b) In the regime where \(0<C<1\), the coherence decay rate becomes smaller as \(|\xi|\) increases. All the curves have been normalized and offset along the y-axis for clarity. (c) The effective coherence time \(T_{\text{eff}}\) versus the feedback strength \(\xi\). The red line shows the fit with the inverse function. \(T_{\text{eff}}\) cannot be defined in the \(C>1\) regime. Instead of an exponentially decaying signal, superradiance-shaped pulses and maser oscillation occur in the \(C>1\) regime.

Our amplification technique has potential applications in the search for hypothetical particles theorized by various models beyond the standard model, such as axions and dark photons [12; 45]. These particles are expected to interact with standard-model particles (such as nuclear spins) and produce an oscillating pseudo-magnetic field that can be amplified using our technique. Consequently, the search sensitivity for axions and dark photons can be significantly enhanced, leading to new empirical constraints. With our current experimental parameters, a one-day measurement yields a search sensitivity for axion dark matter of \(|g_{\text{aNN}}|\leq 10^{-10}\,\text{GeV}^{-1}\), which surpasses the most stringent supernova constraints [33; 34] by about two orders of magnitude. The constant \(g_{\text{aNN}}\) characterizes the axion-neutron coupling. Our technique can also be applied to search for exotic spin-dependent interactions [13], where axions serve as force mediators that couple standard-model particles. Using our current experiments, the search sensitivity is approximately one order of magnitude better than that of previous searches [13; 46].

In conclusion, we have demonstrated a novel approach for enhancing quantum amplification through cooperative noble-gas spins, resulting in improved magnetic field sensitivity.
This approach should be generic to other noble gases, as well as to alkali atoms and nitrogen-vacancy centers. Notably, cooperative spin amplification can operate in the presence of finite bias fields, eliminating the need for strict \(\mu\)-metal magnetic shielding. This extended functionality facilitates applications such as exploring the Schumann resonances of the Earth [47] and detecting geomagnetic field anomalies [31]. In addition, the combination of cooperative spin amplification and Floquet engineering [35] may increase the bandwidth of the amplification.